id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2309.13544 | Related Rhythms: Recommendation System To Discover Music You May Like | Machine Learning models are being utilized extensively to drive recommender
systems, which is a widely explored topic today. This is especially true of the
music industry, where we are witnessing a surge in growth. Besides a large
chunk of active users, these systems are fueled by massive amounts of data.
These large-scale systems yield applications that aim to provide a better user
experience and to keep customers actively engaged. In this paper, a distributed
Machine Learning (ML) pipeline is delineated, which is capable of taking a
subset of songs as input and producing a new subset of songs identified as
being similar to the inputted subset. The publicly accessible Million Songs
Dataset (MSD) enables researchers to develop and explore reasonably efficient
systems for audio track analysis and recommendations, without having to access
a commercialized music platform. The objective of the proposed application is
to leverage an ML system trained to optimally recommend songs that a user might
like. | Rahul Singh, Pranav Kanuparthi | 2023-09-24T04:18:40Z | http://arxiv.org/abs/2309.13544v1 | # Related Rhythms: Recommendation System To Discover Music You May Like
###### Abstract
Machine Learning models are being utilized extensively to drive recommender systems, which is a widely explored topic today. This is especially true of the music industry, where we are witnessing a surge in growth. Besides a large chunk of active users, these systems are fueled by massive amounts of data. These large-scale systems yield applications that aim to provide a better user experience and to keep customers actively engaged. In this paper, a distributed Machine Learning (ML) pipeline is delineated, which is capable of taking a subset of songs as input and producing a new subset of songs identified as being similar to the inputted subset. The publicly accessible Million Songs Dataset (MSD) enables researchers to develop and explore reasonably efficient systems for audio track analysis and recommendations, without having to access a commercialized music platform. The objective of the proposed application is to leverage an ML system trained to optimally recommend songs that a user might like.
## 1 Introduction
The magnitude of data available in the MSD [1] not only opens up possibilities for ML, but also warrants applications which leverage distributed workflows with efficacious data-processing and analysis. Song-recommendation systems are one such application capable of suggesting tracks in real-time. The work presented in this paper aims to address the following question -
_"Given a set of songs that we know a user likes, can we predict a disjoint set of songs that is likely to match the user's song choice?"_
We briefly discuss the methodology, techniques and metrics used in building the ML pipeline. This is followed by an explanation of the software infrastructure deployed and the computational challenges faced. The final section expounds on the results, analysis and inferences drawn.
### Task
As mentioned in the previous section, we aim to recommend songs to a user given input song(s). We do this by selecting a subset of audio tracks from the MSD that are classified as being similar to the input. From this subset, we compute the count of similar artists and determine the top-n frequently occurring similar artists, where 'n' is a tuned hyperparameter. Finally, we present the songs retrieved by these chosen artists belonging to this subset of the original dataset as the recommended songs.
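As an illustration, the artist-frequency step described above can be sketched in a few lines of Python. This is only a sketch: the track representation and the field names (`similar_artists`, `artist_id`) are placeholders rather than the exact MSD schema or the code used in our pipeline.

```python
from collections import Counter

def recommend_songs(cluster_tracks, n_top_artists=10):
    """Sketch of the artist-frequency recommendation step.

    `cluster_tracks` is assumed to be a list of dicts, each holding a track's
    `artist_id` and its list of `similar_artists`; the field names are
    placeholders for the corresponding MSD metadata fields.
    """
    # Count how often each similar artist appears across the matched cluster subset.
    artist_counts = Counter(
        artist
        for track in cluster_tracks
        for artist in track["similar_artists"]
    )
    # Keep the top-n most frequently occurring similar artists ('n' is tuned).
    top_artists = {artist for artist, _ in artist_counts.most_common(n_top_artists)}
    # Recommend the tracks in the subset performed by those artists.
    return [t for t in cluster_tracks if t["artist_id"] in top_artists]
```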
### Dataset
In the ML pipeline, we relied on the Million Song Dataset, which comprises audio features and metadata extracted from a million music tracks. A complete description of the features, along with an example of the metadata schema, can be found on the MSD web-page _linked here_. The MSD was constructed as part of a collaborative effort between The Echo Nest and LabROSA under an NSF grant, and it aggregates several community-contributed datasets such as the SecondHandSongs and musiXmatch datasets. A few noteworthy statistics of the dataset are highlighted in Table 1.
The dataset can be downloaded as h5 files containing the metadata and audio features. As seen in Figure 1, the audio features are mel-frequency cepstral coefficients (MFCCs), which are essentially spectrograms of the recordings with twelve frequency coefficients at each time-step. There are a total of 54 features. Among other attractive attributes, the data includes features such as 'danceability', 'loudness', 'key confidence', 'bars confidence', 'artist familiarity' and 'artist location'.
## 2 Methodology
In this section, we describe the overall ML pipeline used from end-to-end and also explain the various stages involved in developing the proposed application.
### Data Retrieval and Analysis
We used Amazon Web Services (AWS) as the cloud infrastructure to host the system. Data acquisition and pre-processing involved retrieving all 280GB of data and converting the files from HDF5 to a suitable format for analysis, which took about three days. The dataset residing on an AMI (Amazon Machine Image) was first obtained on an S3 bucket, after which we set up an Amazon EC2 instance from where we processed the data and converted it into csv files. These files were uploaded back to S3, and finally we were able to interact with the data using an Elastic Map-Reduce (EMR) instance running PySpark. Once the data was loaded, we performed some analysis of the features and computed some statistics associated with the features. Analyzing the data helped in data engineering and feature selection.
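As an illustration of the per-file conversion step, the sketch below flattens single-valued datasets from one HDF5 file into a CSV row using h5py. It is a simplification: the real MSD files store most fields in compound tables (e.g. a 'songs' table under the metadata group), so the production conversion needs the official MSD accessors or equivalent logic, and the paths shown here are illustrative.

```python
import csv
import glob
import h5py

def flatten_scalars(h5_path):
    """Collect single-valued datasets from one HDF5 file into a flat dict."""
    row = {}
    with h5py.File(h5_path, "r") as f:
        def visit(name, obj):
            # Keep only datasets holding a single value; compound tables in the
            # real MSD layout would need dedicated per-field extraction.
            if isinstance(obj, h5py.Dataset) and obj.size == 1:
                row[name] = obj[()]
        f.visititems(visit)
    return row

def convert_to_csv(h5_glob, csv_path):
    rows = [flatten_scalars(p) for p in sorted(glob.glob(h5_glob))]
    if not rows:
        return
    fieldnames = sorted(set().union(*rows))  # union of all field names seen
    with open(csv_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)

# convert_to_csv("msd_subset/*.h5", "msd_subset.csv")  # illustrative paths
```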
\begin{table}
\begin{tabular}{l l}
Attribute & Magnitude \\ \hline
Size & 275 GB \\
Unique Artists & 44,745 \\
Dated Tracks & 515,576 \\
Songs & 1,000,000 \\ \hline
\end{tabular}
\end{table}
Table 1: MSD Statistics
Figure 1: High-level block diagram of the ML Pipeline
### Feature Selection
After assessing the various features of the dataset, we decided to condense the dataset by considering only the mean of some features rather than the entire sequence that was originally present and by dropping features with zero variance and those with sparse entries. Next, based on our experimental results, we decided to drop most features which solely consisted of strings from the condensed dataset and considered features only with numerical values for the machine learning algorithm. While these features were used to train the clustering model, we relied on textual features in the subsequent phase such as artist_terms - a list of strings describing the genre of the song - to generate recommendations and unique strings (track_ID and artist_ID) to identify individual audio tracks within clusters.
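A condensed sketch of this selection logic, written in pandas under the assumption that per-track mean values have already been computed, is shown below; the sparsity threshold is illustrative rather than the exact value used in our experiments.

```python
import pandas as pd

def condense_features(df, sparsity_threshold=0.5):
    """Drop zero-variance, sparse and non-numeric columns, keeping identifiers."""
    numeric = df.select_dtypes(include="number")

    # Drop columns with zero variance (constant features carry no signal).
    numeric = numeric.loc[:, numeric.var() > 0]

    # Drop sparse columns, i.e. those with mostly missing entries.
    numeric = numeric.loc[:, numeric.notna().mean() >= sparsity_threshold]

    # Retain identifiers and genre terms for the recommendation stage.
    keep_text = [c for c in ("track_ID", "artist_ID", "artist_terms") if c in df]
    return pd.concat([df[keep_text], numeric], axis=1)
```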
### Machine Learning
We used K-means to cluster our data such that it is capable of extracting the necessary information from a given subset of music tracks. Figure 2 depicts the process adopted for clustering.
We used a clustering algorithm, an unsupervised learning method, for this task, as we hypothesized that it would enable the model to group songs based on a diverse set of similarities rather than a single attribute such as genre.
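A minimal sketch of this clustering stage with pyspark.ml is given below. The S3 path and column names are placeholders, and k = 20 matches the final value reported in the Results section; this is an outline of the approach rather than the exact pipeline code.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("msd-clustering").getOrCreate()

# CSV files produced by the pre-processing stage (path is illustrative).
df = spark.read.csv("s3://msd-bucket/features/*.csv", header=True, inferSchema=True)

# Use only numeric columns as model features; identifiers stay as strings.
feature_cols = [c for c, t in df.dtypes if t in ("int", "bigint", "double")]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features",
                            handleInvalid="skip")
features_df = assembler.transform(df)

# Fit K-means and assign each track to a cluster.
kmeans = KMeans(k=20, seed=42, featuresCol="features", predictionCol="cluster")
model = kmeans.fit(features_df)
clustered = model.transform(features_df)
```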
### Model Optimization
We performed feature selection, training and hyperparameter tuning in a cyclic manner in order to optimize our model. For each value of k, we used silhouette scores to measure how similar an object is to the cluster it was assigned to compared to the other clusters. We also verified that the recommended songs are similar to the input song and not something absurd. After each phase of experiments, the results were analyzed and the performance was improved in a subsequent iteration by pruning ineffective features and appending new features to our dataset. To choose the right number of clusters for our K-means model, we first conducted grid search on 10% of the MSD and used the results of this experiment to select an initial batch of features. We then grew the dataset to 25% and repeated the process, before finally conducting random search on the entire dataset in order to choose the optimal number of clusters.
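The cluster-count search can be sketched with pyspark.ml's ClusteringEvaluator, which computes the silhouette score described above. The candidate values of k below are illustrative, and `features_df` is assumed to be the assembled feature DataFrame from the previous sketch.

```python
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

# Silhouette (squared Euclidean distance) evaluator; higher scores are better.
evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="cluster",
                                metricName="silhouette")

scores = {}
for k in (5, 10, 20, 50, 100):  # illustrative grid of candidate cluster counts
    model = KMeans(k=k, seed=42, featuresCol="features",
                   predictionCol="cluster").fit(features_df)
    scores[k] = evaluator.evaluate(model.transform(features_df))

best_k = max(scores, key=scores.get)
print(scores, "best k by silhouette:", best_k)
```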
## 3 Computation
The system was hosted on Amazon EMR which made it seamless to deploy clusters by providing a secure, reliable and flexible cloud based big data platform and built-in tools. Data pre-processing was done using Python while the rest of the code-base used PySpark. The main libraries used were pyspark.ml, pyspark.sql, pandas, numpy and s3fs. Initially, we planned to launch a 3-node EMR cluster with an m4.xlarge lead node and two r4.2xlarge core nodes and had estimated an overall expense of $50 with 5 hour daily usage for a period of one month. Most of the computational challenges were faced in the early stages of the pipeline. It took over two days to retrieve the entire dataset in the pre-processed form. First, we loaded the entire dataset and discovered that we could not interact with the HDF5 file format directly using PySpark. So we converted the files to npy format only to realize that there were major formatting issues and a large chunk of data was lost in the conversion. We stopped this data-load midway and converted the data to csv files instead, and made sure to parallelize the process and drop some unnecessary features at this stage - which made the loading much faster. At this point, we also had to update our cloud infrastructure by switching entirely to m5.xlarge nodes, which helped us stay within the $100 budget. Apart from the data retrieval stage,
the run-time estimates made were accurate, with the major time consumption occurring in the training and hyperparameter-search stages of the ML model. Using the entire dataset, model training took about an hour to complete on average. The inference and visualization code ran in approximately one and five minutes respectively. Running grid-search was not an option as it would take days to complete, so we relied on random search which we ran on a limited number of configurations. In retrospect, capacity planning was a great learning experience. Given the size of the dataset and the formatting of the data, performing analysis and making the right estimates - of storage and compute requirements as well as expenses - proved crucial.
Figure 2: Breakdown of the unsupervised learning process
## 4 Results and Analysis
This section briefly describes the experimental setup as well as some of the results obtained, analysis conducted and inferences made. Following the iterative development strategy characteristic of our ML-pipeline, we were able to make incremental updates based on inferences made after running experiments, and this helped boost our results.
We conducted experiments by varying the value of 'k' (the number of clusters) and obtained the highest silhouette score for k = 5. We found that the silhouette scores decreased almost monotonically as we increased the value of k, as seen in Figure 3.
During feature selection, we identified some features which were negatively impacting the model performance and introducing a skew in the predicted cluster centroids. Some of the pruned features included attributes from the song metadata, such as the segments, sections, loudness and bars confidence values. We also dropped the song_length feature, which was negatively influencing our results; we concluded that it was not useful for the task, which is also in line with the logical interpretation, since song similarity is unlikely to be influenced by the length of the tracks.
Although the silhouette score was best when k = 5, we found that the song recommendations were not satisfactory. After analysis, we found that the best trade off between song recommendations and silhouette score was at k = 20.
For better understanding, a few simplified instances of system output have been included in the Appendix at the end, where you can see the input song and the corresponding recommendations produced. These include a result from the final model as well as an initial system result which produced sub-par recommendations.
Figure 4: A truncated example of song recommendations for k = 20
Figure 3: Visualization of model performance based on silhouette score v/s number of clusters
## Acknowledgments
We would like to thank Virginia Smith and Ameet Talwalkar for their constant guidance, for providing us useful suggestions and also for giving us an opportunity to work on this task. We are grateful to Baljit Singh and Ignacio Maronna for their constructive feedback and support.
|
2309.15653 | Direct Sensing of Remote Nuclei: Expanding the Reach of Cross-Effect
Dynamic Nuclear Polarization | Dynamic Nuclear Polarization (DNP) has revolutionized the field of
solid-state NMR spectroscopy by significantly enhancing the sensitivity of
nuclear magnetic resonance experiments. Conventionally, cross effect DNP relies
on biradicals to transfer polarization from coupled electron spins to nearby
nuclear spins and subsequent relay to target nuclei via spin diffusion
mechanism. However, the direct transfer of polarization to distant nuclei
remains a significant challenge, limiting its applicability in various
contexts. In this work, we propose a novel biradical design concept that
involves a very strong electron-electron coupling, with a magnitude of hundreds
of MHz, which enables efficient direct polarization transfer from electron
spins to nuclear spins over much longer distances, exceeding 2.0 nm. We discuss
the potential of this tailored biradicals in scenarios where conventional spin
diffusion mechanisms are inefficient or when direct nuclear spin sensing
through electron spin interactions is desired. Our study presents a promising
avenue for expanding the scope of cross effect DNP in solid-state NMR
spectroscopy and opens new opportunities for investigating a wide range of
biological and material systems. Our research also provides insight into the
DNP buildup time of commercially available biradicals. | Amaria Javed, Asif Equbal | 2023-09-27T13:40:18Z | http://arxiv.org/abs/2309.15653v1 | # Direct Sensing of Remote Nuclei: Expanding the Reach of Cross-Effect Dynamic Nuclear Polarization
###### Abstract
Dynamic Nuclear Polarization (DNP) has revolutionized the field of solid-state NMR spectroscopy by significantly enhancing the sensitivity of nuclear magnetic resonance experiments. Conventionally, cross effect DNP relies on biradicals to transfer polarization from coupled electron spins to nearby nuclear spins, with subsequent relay to target nuclei via the spin diffusion mechanism. However, the direct transfer of polarization to distant nuclei remains a significant challenge, limiting its applicability in various contexts. In this work, we propose a novel biradical design concept that involves a very strong electron-electron coupling, with a magnitude of hundreds of MHz, which enables efficient direct polarization transfer from electron spins to nuclear spins over much longer distances, exceeding 2.0 nm. We discuss the potential of these tailored biradicals in scenarios where conventional spin diffusion mechanisms are inefficient or when direct nuclear spin sensing through electron spin interactions is desired. Our study presents a promising avenue for expanding the scope of cross effect DNP in solid-state NMR spectroscopy and opens new opportunities for investigating a wide range of biological and material systems. Our research also provides insight into the DNP buildup time of commercially available biradicals.
## 1 Introduction
Dynamic Nuclear Polarization (DNP) has emerged as a powerful technique for enhancing the sensitivity of solid-state Nuclear Magnetic Resonance (NMR) spectroscopy [1, 2]. Imagine studying a wide range of complex biological molecules, a material with unique properties, or a catalytic surface with high resolution and sensitivity at the atomic level. These are precisely the kinds of challenges where DNP has opened up a treasure trove of opportunities for NMR spectroscopists [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. DNP sensitivity enhancement is achieved through the transfer of polarization from electron spins to nuclear spins via a complex interplay between electron and nuclear spins, which is controlled by microwave radiation through different mechanisms [14]. Two primary mechanisms for achieving
DNP in solid-state systems are Solid Effect (SE) and Cross Effect (CE) DNP [15, 16]. The SE DNP transfer mechanism entails the simultaneous flipping of both electron and nuclear spins, achieved by using off-resonant microwave irradiation [17]. The intense microwave power requirement for this quantum mechanically forbidden transition poses a significant constraint, especially in the context of high-field DNP instruments [18].
In contrast to SE DNP, CE DNP is a more efficient process that takes advantage of electron-electron (\(e\)-\(e\)) couplings in addition to electron-nuclear (\(e\)-\(n\)) interactions [19, 20]. This is accomplished through a triple-flip transition of an electron-electron-nuclear (\(e\)-\(e\)-\(n\)) system, which is regulated by the strength of the \(e\)-\(e\) coupling and \(e\)-\(n\) hyperfine coupling. CE DNP is especially advantageous in high-magnet-field conditions, where microwave power is limited, as it requires much less microwave power than SE DNP due to the quantum mechanically allowed microwave-induced transition. The key to CE DNP's efficiency is meeting a resonance condition where the difference between the Electron Paramagnetic Resonance (EPR) frequencies (\(\Delta\omega_{e}\)) of the two coupled electron spins is equal to the nuclear Larmor frequency (\(\omega_{0n}\)).
The conventional CE DNP mechanism relies on the interplay of spin properties within the polarizing agent (shown in the circle). Commonly, nitroxide-based biradicals are used in contemporary DNP applications. These biradicals have unpaired electron spins with an anisotropic g-tensor whose frequency spread is larger than the nuclear Larmor frequency. At high magnetic fields, this g-anisotropy is necessary to reach the CE resonance condition, which requires the energy separation between the two coupled electrons to match the nuclear Larmor frequency. Therefore, the orientation of the g-tensors of the electron spins and the \(e-e\) and \(e-n\) couplings between the two electrons and the nuclear spins must be
carefully tuned. It is important to note that in conventional DNP, the polarization is transferred from the electrons to the nuclei close to the electron spins. The enhanced nuclear polarization of these proximate nuclear spins (encircled) is then relayed to target nuclear spins through spin-diffusion, which involves flip-flop transitions between nuclear spins. The DNP efficiency for the target sample is therefore dependent on the presence of an effective spin diffusion network connecting the polarizing agent to the target molecule.
Figure 1: Schematic illustrating the conventional DNP polarization transfer mechanism, utilizing the intricate spin diffusion network within the system. This network operates within the biradical, extends into the solvent, and further propagates to the target’s surface before penetrating its bulk.
Despite the fact that spin-diffusion based DNP transfer is useful in certain experimental setups, it has its own limitations. For example, when investigating systems with slow or negligible spin diffusion dynamics, the enhancement of a target nucleus is fundamentally dependent on the direct polarization transfer. Quantum-mechanically simulated enhancement of DNP for \(H\) nuclei positioned at varying distances (\(r_{en}\)) from the electron spins, thereby exploring different hyperfine coupling strengths, shows that as \(r_{en}\) increases, the direct CE DNP transfer exhibits a dramatic decline (Fig. 2). This is particularly evident in the range of small distances, with the enhancement dropping from 450 at a distance of 5 \(\AA\) to a mere 120 at 10 \(\AA\) at 7 T field conditions. Further separation to 20 \(\AA\) results in an enhancement of less than 10, indicating a negligible polarization transfer efficiency of conventional biradicals at longer distances. These findings demonstrate a critical limitation of conventional biradicals for applications such as spin sensing or the transfer of spin polarization to nuclei (e.g. \({}^{1}H\) or \({}^{19}F\)) situated outside the biradical molecule, to the solvent, or directly to the target molecule. Consequently, there is a need for innovative approaches that can extend the reach of CE DNP, allowing for efficient polarization transfer to nuclei at longer distances.
## 2 Results
To understand why direct DNP transfer is truncated at longer \(r_{en}\) distances, we need to understand the CE mechanism. In the high field approximation regime, the CE transfer Hamiltonian (\(\widetilde{H}_{CE}\)) for a model system of two electrons (\(S\)) and a nucleus (\(I\)) is dependent on the amplitude, \(\omega_{CE}\), and is
expressed by the equation:
\[\widetilde{H}_{CE}=\omega_{CE}S_{1}^{+}S_{2}^{-}I_{3}^{+}+c.c.,\]
\[\text{where }\omega_{CE}=\frac{\omega_{e_{1}e_{2}}(\omega_{e_{1}H}-\omega_{e_{2}H})}{\omega_{0H}} \tag{1}\]
Figure 2: Without a well-defined spin diffusion network, the enhancement of remote nuclei relies heavily on direct polarization transfer, indicated by the purple arrow. Quantum mechanical simulations of DNP reveal that with a conventional biradical (\(eeH\) spin system), the direct transfer to a distant nucleus (\(H\)) diminishes exponentially with increasing distance, as shown in the right panel.
Evidently, the triple flip or CE transition rate (\(\omega_{CE}\)) is intricately dependent on the strength of the electron-electron (\(e-e\)) and electron-nucleus (\(e-n\)) couplings. In the case of a biradical with constant \(e-e\) coupling (\(\omega_{ee}\)), when we extend \(r_{en}\), the \(e-n\) coupling (\(\omega_{en}\)) diminishes. Consequently, the efficiency of CE transfer also decreases. Taking this CE Hamiltonian into consideration, we explore the feasibility of tailoring the spatial arrangement of two electron spins, with the aim of facilitating CE DNP transfer for nuclear spins situated at varying distances.
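As a rough numerical check of Equation (1), the point-dipole estimate of the \(e-n\) coupling can be combined with the CE amplitude expression. The sketch below approximates the hyperfine difference \((\omega_{e_{1}H}-\omega_{e_{2}H})\) by a single \(\omega_{eH}\), an assumption made only for illustration; with that caveat it reproduces the orders of magnitude quoted later in the text (\(\omega_{eH}\approx 0.079\) MHz at 10 \(\AA\) and \(\omega_{CE}\approx 0.016\) MHz for a 60 MHz \(e-e\) coupling at 7 T).

```python
import math

MU0_OVER_4PI = 1e-7            # T^2 m^3 / J
HBAR = 1.054571817e-34         # J s
GAMMA_E = 1.76085963e11        # rad s^-1 T^-1 (free electron)
GAMMA_H = 2.6752218744e8       # rad s^-1 T^-1 (proton)

def hyperfine_mhz(r_angstrom):
    """Point-dipole e-H coupling prefactor in MHz (angular dependence omitted)."""
    r = r_angstrom * 1e-10
    omega = MU0_OVER_4PI * HBAR * GAMMA_E * GAMMA_H / r**3   # rad/s
    return omega / (2 * math.pi) / 1e6

def omega_ce_mhz(omega_ee_mhz, r_angstrom, b0_tesla=7.0):
    """CE amplitude from Eq. (1), approximating (w_e1H - w_e2H) by w_eH."""
    larmor_h_mhz = GAMMA_H * b0_tesla / (2 * math.pi) / 1e6   # ~298 MHz at 7 T
    return omega_ee_mhz * hyperfine_mhz(r_angstrom) / larmor_h_mhz

for r in (5, 10, 20):
    print(r, round(hyperfine_mhz(r), 3), round(omega_ce_mhz(60, r), 4))
# ~0.63, ~0.079 and ~0.010 MHz for the e-H coupling at 5, 10 and 20 Angstrom;
# with a 60 MHz e-e coupling, w_CE ~ 0.016 MHz at 10 Angstrom, as quoted below.
```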
In our study of CE DNP transfer, we explored three scenarios involving polarization transfer to a \(H\) at distances of 5 \(\AA\) (short), 10 \(\AA\) (intermediate), and 20 \(\AA\) (long). Polarization transfer over the shortest distance illustrates transfer within the biradical, the intermediate distance reflects transfer to solvent nuclei, and the longest range represents direct transfer to the target molecule. We maintained constant spin parameters, including relaxation rate constants, microwave power, and frequency. The only variable changed was the \(e-e\) coupling strength, examining how changes in \(r_{en}\) affected the optimal \(e-e\) coupling. Our findings reveal a compelling relationship between \(e-e\) coupling and DNP enhancement for varying \(e-H\) distances. At the shortest \(e-H\) distance (5 \(\AA\)), we observed a maximum DNP enhancement of 550 with an \(e-e\) coupling strength of \(\approx\) 50 MHz. Extending the e-H distance to 10 \(\AA\), we found a stronger \(e-e\) coupling of \(\approx\) 120 MHz was needed for a maximum DNP enhancement of 450. At the longest e-H distance (20 \(\AA\)), an even stronger \(e-e\) coupling exceeding 300 MHz was required for a maximum DNP enhancement of \(\approx\) 100. These simulations underscore a crucial trend: the \(e-e\) coupling strength needs to be tailored to the transfer scenario. Direct DNP transfer over longer e-H distances necessitates significantly stronger coupling than conventional DNP-mediated transfer to short distances followed by spin diffusion. Our analysis also indicates that in the current experimental setup (microwave power much weaker than the e-e coupling), SE DNP cannot achieve long-range transfer. This insight carries substantial implications for understanding and optimizing DNP processes, providing valuable guidance for enhancing DNP efficiency in spin systems with variable \(e-H\) distances.
Figure 3: Quantum mechanically simulated direct CE DNP transfer as a function of \(e-e\) coupling for three scenarios with e-H distances fixed to 5 \(\AA\), 10 \(\AA\), and 20 \(\AA\). Stronger \(e-e\) coupling is required for maximum DNP enhancement over longer e-H distances. The simulations were performed using an \(eeH\) spin system to mimic the experimental conditions of a 7 T field and 10 kHz spinning. Refer to the ‘Methods’ section for detailed spin system information.
### Time dependent effects from Magic Angle Spinning
DNP mechanisms vary significantly between static and MAS settings. In MAS DNP involving a biradical, three critical time-dependent interactions emerge: 1-spin flip (induced by microwaves, saturating electron spins), 2-spin \(e-e\) flip-flop (leading to electron polarization exchange through coupling), and 3-spin triple-flip (leading to CE DNP transfer) transitions. These interactions are periodic and temporally separated due to sample rotation, as explained in recent literature [21, 22, 23]. For a comprehensive understanding of CE DNP's microscopic intricacies, we employ the Landau Zener model [21, 24]. Notably, the g-anisotropy of nitroxide radicals at high magnetic fields induces time-dependent energy levels for the electron spins (Fig. 4, panel-i), resulting in dynamic resonance or _rotor events_ during MAS. One \(e-e\) and one \(CE\) rotor event are highlighted for visualization. During these rotor events, the Zeeman energies of the electrons converge to and diverge from one of the resonance conditions, leading to real-time rotation and mixing of eigenstates, and hence causing transfer of population between states. This phenomenon is known as a level anti-crossing (LAC) [25]. The probability of the population transfer depends on the interplay between the rate of energy change (\(\frac{dE_{0}}{dt}\)) and the magnitude of the perturbation at the LAC. Table 1 lists three distinct LAC types and their corresponding Landau Zener parameters.
To elucidate the numerically simulated findings from Fig. 3, we microscopically analyze the DNP of a single, optimally oriented biradical in a spinning rotor. This analysis explains the need for stronger \(e-e\) coupling in CE DNP transfers to more distant nuclei (long \(e-H\) distances). We present a microscopic energy and spin polarization trajectory analysis at an \(e-H\) distance of 10 \(\AA\) for two distinct \(e-e\) couplings: (1) an intermediate \(e-e\) coupling of 60 MHz, ideal for conventional DNP (short-range transfer), and (2) a stronger \(e-e\) coupling of 180 MHz, optimal for transfer at 10 \(\AA\). It is important to mention that the \(e-e\) coupling (\(\omega_{ee}\)) encompasses both exchange (J)
and dipolar (D) contributions and therefore modulates under MAS due to dipolar anisotropy. The modulation is within the range: \([\frac{-D}{2}+J,D+J]\), as depicted in Fig. 4, panel-ii. The Zeeman energies of the two electron spins, calculated in the \(\mu\)w rotating frame, remain the same in both cases 1 and 2, as demonstrated in panel-i.
Figure 4: The Zeeman energy trajectories of two electrons for a single orientation of bis-nitroxide radical under MAS are displayed in Panel (i). Vertical lines indicate \(e-e\) and CE rotor events. Electron-electron coupling trajectories are depicted in Panel (ii). Polarization of electron and nuclear spins under MAS-DNP condition for intermediate (solid lines) and strong (dashed lines) \(e-e\) couplings are shown in Panels (iii) and (iv), respectively. When the \(e-e\) coupling is too strong, it minimizes the polarization difference between electrons (Panel \(iii\)) but increases the adiabaticity of the CE events (Panel \(iv\)).
From Table 1, it is evident that the \(e-e\) coupling plays a major role in both the \(e-e\) and \(CE\) rotor events. An optimal \(e-e\) coupling facilitates adiabatic polarization exchange between the electron spins during e-e rotor events, generating a characteristic shape resembling two intersecting sigmoids. This exchange of polarization between electrons is crucial for maintaining a substantial polarization difference (\(\Delta P_{e}\)) needed for large polarization transfer at the CE rotor event. A 60 MHz \(e-e\) coupling is sufficient for achieving adiabatic polarization exchange between electrons, as seen in the electron polarization profile in Fig. 4, panel iii (solid lines).
Contrarily, the triple-flip transition of the CE rotor event has a weak transition moment integral, as it is a second-order perturbative effect that is scaled down by the nucleus's Larmor frequency (Equation 1). For \(\omega_{ee}\) = 60 MHz and \(r_{eH}\) = 10 \(\AA\) (i.e. \(\omega_{eH}\approx 0.079\) MHz), \(\omega_{CE}\) will be just 0.016 MHz at 7 T. This small \(\omega_{CE}\) leads to a small \({}^{1}H\) spin polarization enhancement during the CE rotor event (solid line, panel iv). As the \(e-n\) distance increases further, the adiabaticity of the triple-flip transition diminishes, rendering conventional biradicals ineffective for long-range transfer. However, the decline of \(\omega_{CE}\) at long \(e-n\) distances can be mitigated by enhancing the \(e-e\) coupling strength to 180 MHz, increasing \(\omega_{CE}\) to 0.048 MHz (panel iii, dashed line). This cumulative effect of multiple CE events effectively boosts the nuclear polarization.
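A back-of-the-envelope Landau-Zener estimate illustrates why each CE crossing transfers so little polarization and why a larger \(\omega_{CE}\) helps. In the sketch below, the coupling is taken as the off-diagonal element at the avoided crossing, and the sweep rate is purely an illustrative order of magnitude (it depends on the g-anisotropy, field and spinning frequency); the numbers are not taken from the simulations in this work.

```python
import math

def lz_adiabatic_probability(coupling_hz, sweep_rate_hz_per_s):
    """Standard Landau-Zener result P = 1 - exp(-2*pi*w12^2 / |d(dw)/dt|),
    with the off-diagonal coupling and sweep rate converted to rad/s units."""
    w12 = 2 * math.pi * coupling_hz            # rad/s
    sweep = 2 * math.pi * sweep_rate_hz_per_s  # rad/s per second
    return 1.0 - math.exp(-2 * math.pi * w12**2 / sweep)

SWEEP = 1e13   # Hz/s; illustrative order of magnitude for a rotor event

for label, nu in [("e-e exchange, 60 MHz coupling", 60e6),
                  ("CE event, w_CE = 0.016 MHz", 0.016e6),
                  ("CE event, w_CE = 0.048 MHz", 0.048e6)]:
    print(label, f"{lz_adiabatic_probability(nu, SWEEP):.3%}")
# With these illustrative numbers the e-e flip-flop is essentially fully
# adiabatic, while each CE crossing transfers well under 1% of the polarization
# (more for the larger w_CE), so many rotor events are needed for the buildup.
```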
Notably, a large \(e-e\) coupling enhances CE adiabaticity, promoting efficient and rapid CE transfer to coupled nuclear spins and facilitating polarization transfer to more distant nuclei. However, it is crucial to avoid excessively large \(\omega_{ee}\). If the \(e-e\) coupling becomes overly large, surpassing adiabaticity requirements, an undesirable phenomenon emerges in the electron polarization profiles.
Figure 5: The difference in polarization between electron spins (\(\Delta P_{e}\)) is dependent on the strength of the electron-electron coupling under MAS. When the coupling is too weak (red shade), the exchange of polarization is non-adiabatic, resulting in a small \(\Delta P_{e}\). Optimal electron-electron coupling (green shade) leads to adiabatic exchange and the highest \(\Delta P_{e}\). If the coupling is too strong (blue shade), \(\Delta P_{e}\) is minimized due to off-resonance exchange effects.
In such cases, the slope of the sigmoid-shaped polarization profile diminishes, resulting from polarization exchange even under off-resonant conditions due to significant perturbation. This minimizes the polarization difference between the electron spins, disrupting the high field approximation and increasing homogeneous coupling between them. This effect is further illustrated by mapping the polarization difference between electrons, \(\Delta P_{e}\), at the CE rotor event as a function of \(e-e\) coupling strength in Fig. 5. Clearly, both too weak and excessively strong coupling result in small \(\Delta P_{e}\) due to non-adiabatic exchange and off-resonant polarization exchange caused by increased homogeneous coupling. Hence, achieving a balanced and customized e-e coupling based on the e-H distance for polarization transfer becomes a crucial consideration.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline
Transition & Rotor-event & Perturbation (E\({}_{1}\)) & dE/dt & Resonance Condition \\ \hline
1-spin & \(\mu w\) & \(\omega_{1\mu w}\) & \(\propto g_{aniso}.B_{0}\), \(\omega_{r}\) & \(\omega_{0e_{i}}=\omega_{\mu w}\) \\
2-spin & \(e-e\) & \(\omega_{ee}\) & \(\propto g_{aniso}.B_{0}\), \(\omega_{r}\) & \(\omega_{0e_{i}}=\omega_{0e_{j}}\) \\
3-spin & \(CE\) & \(\frac{\omega_{e_{i}e_{j}}(\omega_{e_{i}n}-\omega_{e_{j}n})}{\omega_{0n}}\) & \(\propto g_{aniso}.B_{0}\), \(\omega_{r}\) & \(\omega_{0e_{i}}-\omega_{0e_{j}}=\pm\omega_{0n}\) \\ \hline
\end{tabular}
\end{table}
Table 1: Landau Zener parameters of the LACs: the corresponding perturbation E\({}_{1}\), sweep rate dE/dt and resonance conditions for two electron spins, \(e_{i}\) and \(e_{j}\).
Figure 6: Quantum-mechanically simulated direct CE DNP transfer as a function of \(r_{en}\) for three standard bis-nitroxide polarizing agents. Here \(n\) is a \(H\) spin. Each data point marked "\(\star\)" represents the performance of a tailored bis-nitroxide polarizing agent at the specific \(r_{en}\). The simulations were conducted to mimic experimental conditions at 7 T magnetic field strength and 10 kHz spinning frequency, using an \(eeH\) spin system. Refer to the ’Methods’ section for detailed information. The panel on the right displays a zoomed-in view of the long-distance regime.
## 3 Discussion and Outlook
Many biradicals have been documented in the literature; however, a substantial number of these radicals do not perform optimally. Only a select few have demonstrated efficient DNP performance, utilizing nuclear spin diffusion. Previous DNP studies have analyzed the combined effects of J-coupling and dipolar coupling, as well as their relative magnitudes [26, 27]. It would be intriguing to analyze these radicals' performance in terms of their effectiveness in directly transferring polarization to a distant nucleus. Several parameters, such as the relative g-tensors, relaxation rates, and electron-electron coupling strength, influence a radical's performance [28, 29, 30]. To simplify our analysis, we kept the relaxation rates constant, averaged over the relative g-tensor, and focused solely on the impact of inter-electron couplings on direct DNP transfer. In Fig. 6, we compare the CE DNP of three popular and commercially available polarizing agents: Totapol [31], Amupol [32], and Asympol [33], at a 7 T magnetic field.
The coupling in Totapol is the smallest, and consequently, its performance deteriorates rapidly as the \(e-H\) distance increases. Amupol offers better performance than Totapol but exhibits low transfer efficiency at longer distances, indicating that Totapol and Amupol are efficient when an effective spin diffusion network connects the biradicals to the solvent and subsequently to the target molecule. Direct polarization transfer in these two cases is inefficient. This inefficiency is exacerbated at higher magnetic fields and faster spinning rates than at the present conditions of 7 T and 10 kHz, respectively [23, 27]. Fig. 6 also shows that among all commercially available bis-nitroxides, Asympol provides the best performance for long-range transfers. We also compare the performance of these bis-nitroxides with tailored biradicals, where the electron-electron coupling is optimized for each \(e-H\) distance. Numerical simulations indicate significant potential for enhancing long-range DNP
capabilities. At an \(e-H\) distance of 15 \(\AA\), a tailored biradical can achieve an enhancement of 240, compared to 140 obtained with Asympol at 7 T. Moreover, at an \(e-H\) distance of 20 \(\AA\), an enhancement of 110 can be achieved, compared to 60 obtained with Asympol at 7 T.
Figure 7: The experimentally obtained buildup of \({}^{1}\)H DNP of glycine using Amupol (red) and Asympol (yellow) biradicals dissolved in a DNP juice matrix (d\({}_{8}\)-glycerol:D\({}_{2}\)O:H\({}_{2}\)O). The experiment was conducted at 14 T, 10 kHz spinning and 100 K.
Asympol can transfer polarization over longer distances more effectively due to its strong electron-electron coupling of 200 MHz [27, 33]. This feature facilitates direct transfer to the solvent molecules just outside the biradical, resulting in a faster DNP buildup. In Fig. 7, we compare the experimental buildup of Amupol and Asympol at 14 T, 100 K, and 10 kHz spinning, dissolved in a glycerol-water glassing matrix. At a concentration of 10 mM, Asympol exhibits a buildup time constant of 0.9 s, whereas Amupol yields 6.8 s under the same experimental conditions. However, the experimentally obtained buildup is a composite of contributions from different transfer pathways. A more controlled experiment is currently in progress and will be presented later.
In conventional spin diffusion-based mechanisms, the presence of nearby protons is indispensable. Proton-induced relaxation significantly impacts the efficiency of DNP transfer or sensing, underscoring its importance in developing effective DNP systems. In cases where DNP transfer relies on a polarizing agent via spin diffusion, the presence of strongly coupled protons becomes pivotal, as recently demonstrated by Venkatesh et al and Perras et al. in separate studies [34, 35]. Incorporating a protonated biradical that establishes a robust proton network to facilitate efficient spin diffusion becomes essential in such scenarios.
Conversely, when DNP transfer occurs independently of the polarizing agent, an alternative strategy can be more advantageous. Deuteration of the molecular structure can emerge as a favorable approach in this context, as it extends the relaxation time constant for electrons [36]. By mitigating the influence of proton-induced relaxation, deuteration significantly increases T\({}_{1e}\), presenting a valuable tactic to optimize DNP performance in systems where direct transfer outside the polarizing agent plays a predominant role. For efficient direct DNP transfer to distant target nuclei, it is also proposed to deuterate nearby hydrogen (\({}^{1}\)H) spins around the electron spins. However, this effect requires experimental validation.
## 4 Conclusion
In this paper, we explore the effect of electron-electron interactions on the distance dependence of direct DNP transfer using the Cross Effect mechanism under MAS. We introduce a novel biradical design principle that extends the reach of DNP to more distant nuclei. We suggest the use of tailored biradicals for efficient transfer to short, intermediate and long-range \(e-n\) distances. Notably, a bisnitroxide optimum for short range transfer is not optimum for long range transfer and vice versa. For long range transfer, we propose the use of biradicals with strong \(e-e\) coupling to expand the range of CE DNP. This could help to overcome the restrictions of conventional SE DNP and CE DNP methods and improve the applicability of DNP in solid-state NMR spectroscopy, particularly in cases where spin diffusion is inactive or direct sensing of nuclei using electron spin interactions is desired. By using a strongly coupled electron spin system, it is possible to unlock the full potential of CE DNP for efficient long-range polarization transfer, creating new possibilities for studying complex systems with higher sensitivity. For instance, a strongly coupled biradical could be used to
polarize \({}^{19}\)F spin labels in a biological sample even without the need for a fluorinated biradical or fluorinated solvent. Additionally, this could enable direct polarization transfer to the bulk/core of a material, as opposed to DNP-sens, which selectively polarizes the surface [37]. Many radicals have also been proposed, some of which exhibited low performance for conventional spin-diffusion-based DNP [38]. However, these radicals could be utilized for long-range transfer, which is under investigation in our lab and will be a subject of future discussion.
## 5 Methods
Cross Effect DNP is a complex phenomenon influenced by various underlying parameters. In experimental settings, the rational design of biradicals with diverse \(e-e\) coupling, while maintaining constant spin parameters, proves challenging. Hence, we employed advanced quantum mechanical simulations within the Liouville space framework to investigate remote nuclear sensing via cross-effect DNP under magic angle spinning, employing the SPINEVOLUTION simulation package. Our model system consisted of two electrons (\(e_{1}\) and \(e_{2}\)) and a proton (\({}^{1}\)H) with preset g-tensor values (gx=2.0097, gy=2.0065, gz=2.0024), resembling a nitroxide radical. In practical scenarios, the relative molecular orientation of spin labels exhibits flexibility, which we accounted for by varying and averaging the relative g-tensor of \(e_{2}\) with respect to \(e_{1}\) using Euler angles (\(\alpha\), \(\beta\), \(\gamma\)) in the (\(z-y-z\)) convention. The magnetic field, spinning frequency, microwave irradiation frequency, microwave power, and temperature were maintained at 7T, 10 kHz, the peak of ZQ DNP, 800 kHz, and 100K, respectively, unless otherwise specified. We consistently utilized a protracted DNP buildup time (xx seconds) to achieve steady-state nuclear spin polarization. To ensure distinct hyperfine couplings for \(e\) spins in all orientations, we selected the orientation of the \(e_{1}-H\) hyperfine couplings carefully, allowing variations in hyperfine coupling or e-n distance magnitude while preserving orientation. Nuclear spin-lattice relaxation time (T\({}_{1H}\)) remained at 2 seconds, while electron spin-lattice relaxation (T\({}_{1e}\)) and spin-spin relaxation (T\({}_{2e}\)) constants were set at 2 milliseconds and 10 microseconds, respectively, based on experimentally determined relaxation rates under similar conditions.
### Acknowledgments
The authors would like to thank New York University Abu Dhabi (NYUAD) for the financial support of this work, Core Technology Platforms and High Performance Computing facilities of New York University Abu Dhabi for facilitating experimental and theoretical DNP research. The authors would also like to thank Waqas Zia for the support with HPC. Contribution from AJ and AE was supported by Tamkeen under the NYU Abu Dhabi Research Institute grant CG008. We would like to thank Prof. Songi Han (UCSB) and Prof. Anne Lesage (CNRS) for fruitful discussions on DNP mechanisms. |
2309.05440 | Emissions and energy efficiency on large-scale high performance
computing facilities: ARCHER2 UK national supercomputing service case study | Large supercomputing facilities are critical to research in many areas that
impact on decisions such as how to address the current climate emergency. For
example, climate modelling, renewable energy facility design and new battery
technologies. However, these systems themselves are a source of large amounts
of emissions due to the embodied emissions associated with their construction,
transport, and decommissioning; and the power consumption associated with
running the facility. Recently, the UK National Supercomputing Service,
ARCHER2, has been analysing the impact of the facility in terms of energy and
emissions. Based on this work, we have made changes to the operation of the
service that give a cumulative saving of more than 20% in power draw of the
computational resources with all application benchmarks showing reduced power
to solution. In this paper, we describe our analysis and the changes made to
the operation of the service to improve its energy efficiency, and thereby
reduce its climate impacts. | Adrian Jackson, Alan Simpson, Andrew Turner | 2023-09-11T13:26:20Z | http://arxiv.org/abs/2309.05440v1 | Emissions and energy efficiency on large-scale high performance computing facilities: ARCHER2 UK national supercomputing service case study
###### Abstract
Large supercomputing facilities are critical to research in many areas that impact on decisions such as how to address the current climate emergency. For example, climate modelling, renewable energy facility design and new battery technologies. However, these systems themselves are a source of large amounts of emissions due to the embodied emissions associated with their construction, transport, and decommissioning; and the power consumption associated with running the facility. Recently, the UK National Supercomputing Service, ARCHER2, has been analysing the impact of the facility in terms of energy and emissions. Based on this work, we have made changes to the operation of the service that give a cumulative saving of more than 20% in power draw of the computational resources with all application benchmarks showing reduced power to solution. In this paper, we describe our analysis and the changes made to the operation of the service to improve its energy efficiency, and thereby reduce its climate impacts.
HPC; Net Zero; Energy Efficiency; Emissions; ARCHER2
## 1 Introduction
Large scale HPC systems have a key role to play in addressing the current climate emergency. They provide a digital laboratory where researchers can model and simulate areas of direct impact: climate modelling, renewable energy solutions, improved energy storage technologies, etc., while avoiding resource- and emissions-intensive physical experiments. However, these large systems are themselves large consumers of electricity and resources and, as such, are a source of significant emissions from their manufacture and installation as well as from their day-to-day operations [1]. Furthermore, the power draw of large HPC systems is significant and, particularly during times when there is competition for power on shared electricity grids, HPC systems must strive to be good "grid citizens". Finally, there are cost considerations. Historically, the cost of large scale HPC systems was dominated by the capital cost, with the operational electricity costs a small component. This is no longer true, with lifetime electricity costs now matching or even exceeding the capital costs for large scale HPC systems in many countries.
In this paper we discuss the origin of emissions from a large scale HPC system, ARCHER2; describe the characteristics of the power draw of ARCHER2 broken down by different system components, and then review specific activities we have taken to improve the energy efficiency of the ARCHER2 system and their impact on application performance. We finish with some conclusions from the work and a description of future directions of interest.
### ARCHER2
ARCHER2 [2] is the UK National Supercomputing Service funded by UK research councils (UKRI), managed by the Engineering and Physical Sciences Research Council (EPSRC) on behalf of UKRI with support provided by EPCC, hosting by the University of Edinburgh and the hardware provided by HPE.
The focus of this paper is the hardware component of the ARCHER2 service, which is a HPE Cray EX system with 750,080 compute cores. The system hardware is summarised in Table 1.
ARCHER2 supports over 3000 users working on a broad range of research in the physical and environmental science areas with the major research areas being materials science, climate/ocean modelling, biomolecular modelling, engineering, mineral physics, seismology and plasma physics. It supports hundreds of different software packages looking at research problems that cannot be treated on other, smaller HPC facilities in the UK.
## 2 Emissions
The hardware associated with the ARCHER2 service has two sources of climate emissions:
1. Operational emissions (scope 2 emissions) associated with the generation of electricity used to power the hardware and associated cooling/facility infrastructure.
2. Embodied emissions (scope 3 emissions) associated with the manufacture, shipping and decommissioning of the hardware.
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|} \hline
5,860 compute nodes (750,080 compute cores) & 2\(\times\) AMD EPYC\({}^{\text{TM}}\) 7842 2.25 GHz 64-core processors, 256/512 GB DDR4 RAM, 2 Slingshot 10 interconnect interfaces \\ \hline
Interconnect & 768 Slingshot switches, Dragonfly topology \\ \hline
Storage & 1 PB NetApp storage, 13.6 PB ClusterStor L300 (HDD-based), 1 PB ClusterStor E1000 (NVMe-based) \\ \hline
\end{tabular}
\end{table}
Table 1: ARCHER2 hardware summary
There are no scope 1 emissions associated with the ARCHER2 hardware as there is no energy generation associated with the service - this is true for most large-scale HPC systems. A detailed audit of the emissions from ARCHER2 and emissions scenario modelling are underway and will be the subject of a future paper.
As a brief summary, in scenarios where the carbon intensity of the scope 2 emissions is zero or very low (<30 gCO\({}_{2}\)/kWh), the emissions from ARCHER2 are dominated by the scope 3 (embodied) emissions. In these cases, the best emissions efficiency is obtained by extracting the most output from each node hour (nodeh) for as long as possible. Anything that reduces the performance of applications on ARCHER2 will reduce the overall emissions efficiency. When the carbon intensity of scope 2 emissions is moderate (30-100 gCO\({}_{2}\)/kWh), the scope 2 and scope 3 emissions contribute roughly equally to the overall lifetime emissions. In this scenario, emissions efficiency is a combination of achieving energy efficiency and maximising application performance. However, if the emissions are dominated by scope 2 emissions (i.e. the carbon intensity of the electricity used is high: >100 gCO\({}_{2}\)/kWh), the emissions efficiency becomes dependent on the energy efficiency of the applications. In these scenarios, improving the operational energy efficiency by, for example, sacrificing application output per nodeh to improve application output per kWh, will improve the emissions efficiency. Evidently, the approach to maximising the emissions efficiency of large-scale HPC systems over their lifetime is dependent on the balance between the scope 2 and scope 3 emissions for the particular system: when scope 3 emissions dominate, optimise for application performance irrespective of energy efficiency; when scope 2 emissions dominate, optimise for energy efficiency, even if this has a detrimental impact on application performance. Many large-scale HPC systems will need to find a balance between application performance and energy efficiency to find a practical route to reducing the emissions associated with operating such services.
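The balance described above can be made concrete with a simple lifetime-emissions estimate. The figures used below are placeholders chosen only to illustrate the three regimes; they are not ARCHER2 audit numbers, which will be reported in the future paper mentioned above.

```python
def lifetime_emissions_tco2(embodied_tco2, mean_power_kw, lifetime_years,
                            carbon_intensity_g_per_kwh):
    """Lifetime emissions (tCO2): embodied (scope 3) plus operational (scope 2)."""
    energy_kwh = mean_power_kw * 24 * 365 * lifetime_years
    scope2_tco2 = energy_kwh * carbon_intensity_g_per_kwh / 1e6
    return embodied_tco2 + scope2_tco2, scope2_tco2

# Placeholder figures purely for illustration (not ARCHER2 audit numbers):
EMBODIED_TCO2 = 8000.0   # embodied emissions: manufacture, shipping, decommissioning
MEAN_POWER_KW = 3000.0   # mean system power draw
LIFETIME_YEARS = 6.0     # assumed service lifetime

for ci in (0, 30, 100, 250):   # grid carbon intensity scenarios in gCO2/kWh
    total, scope2 = lifetime_emissions_tco2(EMBODIED_TCO2, MEAN_POWER_KW,
                                            LIFETIME_YEARS, ci)
    print(f"{ci:>3} gCO2/kWh: scope 2 = {scope2:6.0f} tCO2, "
          f"scope 3 share of total = {EMBODIED_TCO2 / total:.0%}")
```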
## 3 ARCHER2 Power Draw
Irrespective of emissions, energy (or power) efficiency is still an important practical consideration for most large-scale HPC systems for many reasons, including:
* Limits on the amount of power that can be provided by the local power grid and competing demands for power. Data centres must be good grid citizens and be able to respond flexibly to fluctuating power demands, particularly during times of power shortages, where reducing the power draw of HPC systems can free up resources for other, critical, infrastructure.
* Desires to improve the cost efficiency of large scale HPC systems. Operational energy costs are now a major component of the lifetime costs of operating an HPC system.
* Higher power draw by HPC systems lead to higher cooling requirements increasing the overheads of running an HPC system.
Given the emissions profile for ARCHER2, and the additional practical reasons for improving energy efficiency, we have undertaken several initiatives to improve the energy efficiency of ARCHER2. This work was performed specifically within the context of reducing the power draw of ARCHER2 during Winter 2022/2023 when there were concerns about power shortages on the UK power grid.
To assess the impact of different initiatives on the power draw of the ARCHER2 service we need an understanding of the baseline power draw. To do this, we produced two sets of data:
1. Information on power draw of individual components.
2. Measurements of the baseline power draw over a few months.
### Power draw of individual components
Table 2 shows the estimated idle and loaded per-component power draw. This data is a combination of measurements from the ARCHER2 system and estimates provided by the hardware vendor (HPE).
Based on this information, we expect the power draw of the compute nodes to dominate on ARCHER2. The power draw associated with the interconnect switches is also a substantial component, but other system components (particularly storage) do not have a significant impact on the overall power draw of the system so can be discounted, at least initially, when considering ways to improve energy efficiency.
### Baseline power draw measurements
We measured the baseline power draw of ARCHER2 compute cabinets (which includes all compute nodes and interconnect switches, approx. 90% of the total ARCHER2 power draw) from Dec 2021 to Apr 2022 - the timeline is shown in Figure 1. The mean power draw over this period was 3,220 kW. Compute node utilisation on ARCHER2 over all periods considered in this paper is consistently over 90% so the difference between idle node/switch power draw and loaded node/switch power draw does not need to be considered.
The mean value of 3,220 kW is lower than the sum of all loaded compute cabinet values from Table 2 (3,400 kW). This difference is partially due to the system not having 100% load (which is impossible to achieve due to scheduling overheads) and to differences in the power draw for different software running on the
system. Based and Turner [3] provides more details on the different power draw by different software applications.
\begin{table}
\begin{tabular}{|l|l|r|r|r|} \hline
**Component** & **Notes** & **Idle (kW) [each]** & **Loaded (kW) [each]** & **Approx. \%** \\ \hline
Compute nodes & 5,860 nodes & 1,350 [0.23] & 3,000 [0.51] & 86\% \\ \hline
Slingshot interconnect & 768 switches & 100-200 [0.10-0.25] & 200 [0.25] & 6\% \\ \hline
Other cabinet overheads & 23 cabinets & 100-200 [4-9] & 200 [9] & 6\% \\ \hline
Coolant Distribution Units & 6 CDUs & _96_ _[16]_ & _96_ _[16]_ & 3\% \\ \hline
File systems & 5 file systems & _40_ _[8]_ & _40_ _[8]_ & 1\% \\ \hline
Total & & 1,800 & 3,500 & \\ \hline
\end{tabular}
\end{table}
Table 2: Estimated/measured power draw for different ARCHER2 system components. Italics indicate estimates.
## 4 Improving Energy Efficiency
Over the past 18 months we have investigated ways to improve the power usage of the ARCHER2 system, and thereby the energy efficiency, without requiring users of the service to take any action themselves. Ideally, we would reduce the power draw of the system without any impact on the performance of software, but this is typically not possible. In this paper we cover two strategies for improving the energy efficiency:
1. Change the compute node BIOS settings to move from Power Determinism mode to Performance Determinism mode.
2. Reduce the default CPU clock frequency from 2.25 GHz (with turbo-boost enabled) to 2.0 GHz (with no turbo-boost).
Both changes are relatively simple to make on a system-wide basis with no action required from individual users of the service. We summarise the impact of these changes on ARCHER2 power draw and on application performance in the remainder of this section.
### Change BIOS to Performance Determinism mode
One setting available in the BIOS on compute nodes that use AMD CPUs is a choice between Power Determinism mode and Performance Determinism. A full description of the meaning and implication of these settings can be found in a technical report from AMD [4].
Figure 2 shows the impact of the change on the ARCHER2 compute cabinet power draw. The change was implemented across all compute nodes during May 2022 and led to a 7% reduction in the mean power draw of the ARCHER2 compute cabinet power draw, from 3,220 kW to 3,010 kW.
Table 3 reports the impact on performance and compute node energy consumption for several application benchmarks [5]. These show an impact of 1% or less on application performance and reductions in energy consumed on compute nodes for the applications between 6% and 10%.
### Reduce CPU Clock Frequency to 2 GHz
The AMD CPUs on ARCHER2 allow the selection of different CPU frequencies, specifically 1.5 GHz, 2.0 GHz and 2.25 GHz. The highest frequency setting also enables the ability to turbo boost to higher frequencies. Reducing the CPU frequency reduces the rate
at which the processor can execute instructions but, if application performance is limited by data transfer rates from memory to the processor rather than the rate of instruction execution, then this may not have a large detrimental effect on performance while reducing the power draw of the compute nodes. Many software applications that run on HPC systems such as ARCHER2 are memory bound in this way, rather than compute bound, so this could increase the energy efficiency of the system. We investigated the effect of reducing the CPU clock frequency to 2.0 GHz on performance and total energy use for a series of application benchmarks representing different research areas on ARCHER2; the results are summarised below.
All the application benchmarks are more energy efficient at 2.0 GHz compared to 2.25 GHz, with energy savings ranging from 7% to 20%. Performance is more strongly affected than for the BIOS change described above, with reductions in performance of 5% to 26% when using the lower clock frequency. Some of these energy and performance reductions are larger than might be expected based on a change from 2.25 to 2.0 GHz. Further investigation revealed that most applications typically boost the CPU frequency
Figure 1: Measured power draw of ARCHER2 compute cabinets for Dec 2021 – Apr 2022. Orange line indicates mean power draw (3,220 kW).
Figure 2: Measured power draw of ARCHER2 compute cabinets for Apr 2022 – May 2022. Orange line indicates mean power draw (3,220 kW before, 3,010 kW after).
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline
**Application benchmark** & **Nodes** & **Perf. ratio** & **Energy ratio** \\ \hline
CASTEP Al Slab & 16 & 0.99 & 0.94 \\ \hline
OpenSBLI TGV 1024\({}^{3}\) & 32 & 1.00 & 0.90 \\ \hline
VASP TiO2 & 32 & 0.99 & 0.93 \\ \hline
\end{tabular}
\end{table}
Table 3: Performance and energy use comparison for application benchmarks with power determinism mode vs. performance determinism mode.
to closer to 2.8 GHz in actual operation - explaining the larger range of the changes when limiting to 2.0 GHz. Based on this data, the decision was taken to improve the energy efficiency of the ARCHER2 service by setting the default CPU frequency to 2.0 GHz.
However, whilst the default was changed, users could revert these changes for their jobs. Furthermore, applications where the reduction in frequency is expected to have a large negative impact on performance (>10%) had their module setup altered to reset the CPU frequency to 2.25 GHz (with turbo boost enabled) automatically. Users were strongly encouraged to benchmark the effect of CPU frequency on their use of ARCHER2 and to choose an appropriate setting.
We also assessed the impact of this change in default CPU frequency on the power draw of the ARCHER2 compute cabinets. The change led to a reduction of the mean power draw of the ARCHER2 compute cabinets from 3,010 kW to 2,530 kW, a total reduction of 21% compared to the original baseline power draw of 3,220 kW.
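The quoted percentage reductions follow directly from these mean cabinet power draws; a short check of the arithmetic (values as reported above, results rounded):

```python
# Mean ARCHER2 compute cabinet power draw (kW) at each stage.
baseline_kw = 3220          # Dec 2021 - Apr 2022 (Figure 1)
after_bios_kw = 3010        # after switch to performance determinism (Figure 2)
after_freq_kw = 2530        # after default CPU frequency reduced to 2.0 GHz

def reduction(before_kw: float, after_kw: float, reference_kw: float) -> str:
    saved = before_kw - after_kw
    return f"{saved:.0f} kW ({100 * saved / reference_kw:.1f}% of baseline)"

print("BIOS change:     ", reduction(baseline_kw, after_bios_kw, baseline_kw))   # ~210 kW, ~6.5%
print("Frequency change:", reduction(after_bios_kw, after_freq_kw, baseline_kw)) # ~480 kW, ~15%
print("Combined:        ", reduction(baseline_kw, after_freq_kw, baseline_kw))   # ~690 kW, ~21%
```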
## 5 Conclusions
We have made two low-overhead, system-wide changes to improve the energy efficiency of the ARCHER2 system which do not generally require user intervention or changes in behaviour, and which have a modest impact on application performance. Combined, these two changes reduce the power draw of the ARCHER2 compute cabinets (which represent more than 90% of the power draw of the system) by an average of 690 kW, a reduction of 21% compared to the original average baseline power draw. This reduction in power draw freed up a substantial amount of grid power capacity during a period of significant uncertainty in energy supplies in the UK and has resulted in significant savings in both scope 2 emissions and energy costs for the service. The change to the BIOS settings had a modest impact on power draw (210 kW, 6.5% reduction) and a negligible impact on the performance of application benchmarks (1% performance reduction). The default CPU frequency change had a much larger impact on power draw (480 kW, 15% reduction) and on application performance, but can be reversed selectively by both the service operator and by users themselves on a per-application or per-job basis. All application benchmarks showed a reduction in total energy use at 2.0 GHz, of 7% to 20%, and the impact on performance varied from 5% to 26%.
To make correct choices about service operations in the areas discussed here, services must have a clear understanding of their priorities. For example, is the goal to maximise energy efficiency, to maximise emissions efficiency, to minimise running costs, to maximise application performance, or to achieve a balance between two or more different priorities? For ARCHER2, the primary goal was to maximise the energy efficiency due to potential power capacity shortages with a secondary goal of not having a large adverse impact on application performance.
During this work we noted that the power consumption of the most important components in terms of power draw is very high even when they are not being used for computational work. When compute nodes are not running user applications, they draw around 50% of the power of a fully loaded compute node. The power draw of interconnect switches is steady at 200-250 W irrespective of system load. This means that, to achieve good energy efficiency, the utilisation of a system must be as close to 100% as possible, and ideally over 90%.
Future papers will cover work we are undertaking to audit and model the emissions (scope 2 and scope 3) from ARCHER2 and large scale HPC systems more generally, looking at the impact on energy and emissions efficiency of replacing parts of modelling applications by AI-based approaches and investigating the impact of compiler and library choices on the energy efficiency of application benchmarks at different CPU frequencies.
## 6 Acknowledgments
Our thanks to Martin Lafferty at HPE and Kieran Leach at EPCC, University of Edinburgh for making power monitoring from ARCHER2 cabinets and switches available for this work. This work used the ARCHER2 UK National Supercomputing Service ([https://www.archer2.ac.uk](https://www.archer2.ac.uk)). This research was supported by the NetZero Scoping project, which was funded by the UKRI Digital Research Programme on grant NERC (NE/W007134/1). AJ was supported by UK Research and Innovation under the EPSRC grant EP/T028351/1.
|
2309.17044 | Rotating black hole solutions for $f(R)$ gravity and Newman Janis
Algorithm | We show that the $f(R)$-gravity theories with constant Ricci scalar in the
Jordan/Einstein frame can be described by Einstein or Einstein-Maxwell gravity
with a cosmological term and a modified gravitational constant. We also propose
a modified Newmann-Janis algorithm to obtain the rotating axisymmetric
solutions for the Einstein/Einstein-Maxwell gravity with a cosmological
constant. Using the duality between the two gravity theories we show that the
stationary or static solutions for the Einstein/Einstein-Maxwell gravity with a
cosmological constant will also be the solutions for the dual $f(R)$-gravity
with constant Ricci scalar. | Pankaj Chaturvedi, Utkarsh Kumar, Udaykrishna Thattarampilly, Vishnu Kakkat | 2023-09-29T08:11:04Z | http://arxiv.org/abs/2309.17044v2 | # Rotating black hole solutions for \(f(R)\) gravity and Newman Janis Algorithm
###### Abstract
We show that the \(f(R)\)-gravity theories with constant Ricci scalar in the Jordan/Einstein frame can be described by Einstein or Einstein-Maxwell gravity with a cosmological term and a modified gravitational constant. We also propose a modified Newmann-Janis algorithm to obtain the rotating axisymmetric solutions for the Einstein/Einstein-Maxwell gravity with a cosmological constant. Using the duality between the two gravity theories we show that the stationary or static solutions for the Einstein/Einstein-Maxwell gravity with a cosmological constant will also be the solutions for the dual \(f(R)\)-gravity with constant Ricci scalar.
## I Introduction
The general theory of relativity (GR) is widely accepted as the fundamental theory of spacetime and gravity. Despite successfully passing numerous observational tests at large distances and late time scales (the infrared regime), including solar-system measurements, GR has faced several challenges from both observational and theoretical viewpoints. Cosmological observations pertaining to the cosmic microwave background (CMB), Type Ia supernovae and several others indicate that the Universe has undergone two phases of cosmic acceleration, namely inflation and dark energy, which occurred at early and late times respectively [1; 2; 3; 4; 5]. GR, in its original form, is unable to explain these phases of cosmic acceleration. The cosmological constant used to parametrize the recent accelerated expansion of the Universe is plagued by hierarchy problems in particle physics [6]. Beyond its inconsistencies with cosmological and astrophysical data, GR also presents numerous theoretical weaknesses. Most prominently, GR struggles in the ultraviolet regime, especially when interpreting the physics of black holes and cosmological singularities at short distances and small time intervals. This is due to the fact that GR is a 2-derivative action, which poses problems of renormalizability at the quantum level. The non-renormalizability of GR has spurred interest in higher derivative gravity theories, often referred to as modified gravities.
A simple yet consequential model for modified gravity is the \(f(R)\) gravity theory in which the Lagrangian density is modified to be an arbitrary analytic function of the Ricci scalar [7; 8]. The significance of higher-order terms in the \(f(R)\) gravity Lagrangian can be attributed to the fact that they can descend from the low energy limit of String/M-theory [9]. The simplest \(f(R)\)-gravity with an \(R^{2}\) correction can give rise to a phase of inflation, which was first proposed and explored by Starobinsky [10]. There are numerous \(f(R)\) gravity models to explain cosmic inflation [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23] and the current accelerated expansion of the Universe [24; 25; 26; 27; 28; 29; 30; 31; 32]. In addition, \(f(R)\) gravity theories have also been investigated in explaining the singularity problem arising in the strong gravity regime [33; 34; 35; 36; 37; 38; 39; 40; 41; 42], galaxy rotation curves [43; 44; 45; 46; 47; 48; 49; 50; 51], the detection of gravitational waves [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63] and many more.
The current era of gravitational wave astronomy has presented us with the possibility of investigating physics of extremely compact objects, such as black holes and neutron stars [64]. This has opened new prospects for observing and testing theoretical models in the strong gravity regime. As a platform for theories that explain cosmic acceleration and inflation, it is of paramount importance to explore and test \(f(R)\) theories of modified gravity. The study of the properties of black hole solutions in these scenarios can provide strong gravity tests for these theories and may hint toward significant deviations from the GR. This knowledge of black hole spacetime can be obtained from solutions to the field equations, which, although easy to find analytically in General Relativity (GR), is a non-trivial task in modified gravity theories. Since most astrophysical objects are considered to be spinning, there is an interest in finding rotating black hole solutions for \(f(R)\) theories. Several black hole solutions in modified gravities with or without matter have been studied previously. Outside of \(f(R)\) theories [65; 66; 67; 68; 69; 70] such studies involve finding the black holes solutions for Gauss-Bonnet Gravity, Lovelock, non-local, and other modified theories studied in [71; 72; 73; 74; 75; 76; 77].
It is well known that \(f(R)\) gravity in Jordan frame can be recast as a non-minimally coupled scalar-tensor gravity theory in the Einstein frame by means of a conformal transformation [78; 79]. Thus for a problem formulated for \(f(R)\) gravity, the usual approach is to first solve the simpler field equations of motion in the Einstein frame and then use a conformal transformation to revert back to the Jordan frame. Spherically symmetric black hole solutions of \(f(R)\)-gravity theories with or without matter have been studied in the Einstein/Jordan frame in [80; 81]. Although this conformal transformation of higher order gravity to scalar-tensor gravity is in
general plausible, it does not shed enough light on the physical relevance of these two theories in different frames. This discrepancy is related to the fact that, when using a conformal transformation to go from one frame to the other, the stability of the solutions and their physical meaning can change completely. This leads to several physical constraints which have to be imposed on the form of the function \(f(R)\) in order to have stable solutions in both the Einstein and the Jordan frame [82]. Despite the ambiguities mentioned above, it has been well established that any \(f(R)\) gravity with constant Ricci scalar is dual to Einstein gravity with a cosmological constant and an effective gravitational constant [83; 84].
Motivated by the above duality of \(f(R)\)-gravity in the Jordan frame to scalar-tensor gravity in the Einstein frame, in this work we explore the description of constant curvature \(f(R)\) gravity in the Einstein frame. We show that the constant curvature \(f(R)\)-gravity in the Einstein frame is indeed described by Einstein gravity with a cosmological constant and an effective gravitational constant, at the level of both the action and the field equations. In this work, we also examine the possible static or stationary black hole solutions for the constant curvature \(f(R)\)-gravity with a Maxwell field in the Einstein frame.
In general it is not an easy task to obtain the rotating black hole solutions in modified or even simple Einstein gravity models with various horizon topologies because of the nonlinear nature of the field equations. It was observed by Newman and Janis in the case of the Einstein equations that a certain algorithm produces the Kerr solution from the corresponding non-rotating counterpart. This procedure is now known as the Newman-Janis algorithm (NJA) [85; 86]. This algorithm has been widely used as a technique to generate Kerr-like rotating metrics from the corresponding static metrics, see [87; 88; 89; 90; 91]. It provides a way to generate axisymmetric metrics from a spherically symmetric stationary seed metric through a particular type of complexification of the radial and time coordinates. Although this algorithm functions effectively within the framework of classical General Relativity (GR), the reason it yields the same result for the Kerr metric in classical GR remains somewhat elusive. Moreover, it has been illustrated in [92; 93; 94] that the NJ algorithm is not suitable for generating axisymmetric metrics in quadratic gravity models. Furthermore, the applicability of the NJ algorithm to other modified gravity models is still unknown. In this paper, we propose a modified version of the NJ algorithm in order to generate stationary axisymmetric black hole solutions for various constant curvature \(f(R)\) gravity models in the Einstein frame.
The article is organised as follows. In section II, we review the Jordan and Einstein frames for modified gravity and establish the duality between the constant curvature \(f(R)\) gravity theories in the Jordan frame and Einstein-Maxwell gravity with a cosmological constant (CC). Furthermore, we explicitly show this duality for well-known examples. Section III deals with the modification of the Newman-Janis algorithm to generate the rotating black hole solutions for Einstein gravity with a CC. Finally, we discuss our findings and draw conclusions in section IV.
## II \(f(R)\) gravity in Jordan and Einstein Frames
The standard form of the action for \(f(R)\) gravity in the so-called Jordan frame is given by
\[\mathcal{S}_{\mathcal{J}}=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}\;f(R)+ \mathcal{S}_{M}, \tag{1}\]
where \(\kappa=8\pi G\) in natural units with \(G\) being the four dimensional gravitational constant, \(f(R)\) is a generic function of the Ricci scalar \(R\), and \(\mathcal{S}_{M}\) is the usual matter contribution to the action. Varying the above action with respect to the metric results in the following Euler-Lagrange equations of motion
\[\left(g_{ab}\square-\nabla_{a}\nabla_{b}\right)F(R)+F(R)R_{ab}-\frac{1}{2}f(R )g_{ab}=\kappa T_{ab}, \tag{2}\]
where \(F(R)=f^{\prime}(R)\) and the matter stress-energy tensor \(T_{ab}\) is given by
\[T_{ab}=-\frac{2}{\sqrt{-g}}\frac{\delta S_{M}}{\delta g_{ab}} \tag{3}\]
The following action describes the Einstein-Hilbert gravity with a non-minimally coupled scalar field
\[\mathcal{S}_{\mathcal{J}}=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}\left(F(\phi)R -V_{\mathcal{J}}(\phi)\right)+\mathcal{S}_{M}, \tag{4}\]
where
\[V_{\mathcal{J}}(\phi)=\phi F(\phi)-f(\phi), \tag{5}\]
is dynamically equivalent to the \(f(R)\) gravity action in (1). This may be seen by considering the variation of the action in (4) with respect to the metric and to the scalar field-\(\phi(x_{\mu})\) which gives us the following Euler-Lagrange equations of motion
\[\left(g_{ab}\square-\nabla_{a}\nabla_{b}\right)F(\phi)+F(\phi)R_ {ab}-\frac{1}{2}f(\phi)g_{ab}=\kappa T_{ab},\] \[F^{\prime}(\phi)(R-\phi)-F(\phi)+f^{\prime}(\phi)=0. \tag{6}\]
Now provided \(F^{\prime}(\phi)\neq 0\) and \(F(\phi)=f^{\prime}(\phi)\) one gets the constraint \(R=\phi\) from the second equation of motion in (6). Plugging back the constraint \(R=\phi\) in the action (4) and the equations of motion (6) one recovers the \(f(R)\) gravity action and equations of motion. The action in (4) can now be recast in the Einstein frame by considering the following conformal transformation
\[\widetilde{g}_{ab}=F(\phi)\;g_{ab}, \tag{7}\]
where one must impose \(F(\phi)>0\) for the regularity of the given transformation. Given the new metric \(\widetilde{g}_{ab}\), the
action for the \(f(R)\) gravity can now be written in the Einstein frame as
\[\mathcal{S}_{E}=\int d^{4}x\sqrt{-\widetilde{g}}\left(\frac{\widetilde{R}}{2\kappa }-\frac{1}{2}\widetilde{\nabla}_{a}\widetilde{\phi}\;\widetilde{\nabla}_{b} \widetilde{\phi}-V_{E}(\widetilde{\phi})\right)+\widetilde{\mathcal{S}}_{M}, \tag{8}\]
\[\widetilde{\phi}=\sqrt{\frac{3}{2\kappa}}\ln\left[F(\phi)\right],\quad V_{E}( \widetilde{\phi})=\frac{V_{J}(\phi)}{F(\phi)^{2}}, \tag{9}\]
\[\widetilde{R}=\frac{1}{F(\phi)}\left(R-3\frac{\Box F(\phi)}{F(\phi)}+\frac{3}{ 2}\frac{\nabla_{a}F(\phi)\nabla^{a}F(\phi)}{F(\phi)^{2}}\right), \tag{10}\]
where the quantities with the subscript \((\widetilde{)}\) are defined with respect to the new \(\widetilde{g}_{ab}\)-metric. Moreover, under the transformation (7) the matter stress-energy tensor transforms as
\[\widetilde{T}_{ab}=\frac{T_{ab}}{F(\phi)^{2}} \tag{11}\]
The conformal equivalence of \(f(R)\) gravity in the two different frames as described above, must be accompanied by certain consistency conditions. In particular, for the case when the form of \(f(R)\) is described by a polynomial function of order greater than two, the correspondence between the two conformal frames becomes many-to-one [82]. This implies that there are multiple Einstein frame descriptions of a single higher-order \(f(R)\) theory. As described in the introduction, for a given higher-order \(f(R)\)-gravity in the Jordan frame one has to consider \(F^{\prime}(R)>0\) and \(F(R)>0\) as the necessary conditions for the existence of the corresponding Einstein frame. These conditions are also required to ensure the existence of a matter-dominated era in cosmological evolution in a high curvature classical regime, as elucidated in [82]. This motivates us to first consider the possible solutions of the \(f(R)\) gravity in the Einstein frame. We also interpret the equivalence of such solutions in the Einstein frame with those in the Jordan frame.
### Constant curvature black hole solutions for \(f(R)\) gravity in Einstein frame
The action in (8) describes the \(f(R)\) gravity in the Einstein frame. Considering the matter contribution coming solely from the Maxwell field (\(A_{a}\)) i.e.,
\[\widetilde{S}_{M}=-\frac{1}{8\kappa}\int d^{4}x\sqrt{-\widetilde{g}}\; \widetilde{\mathcal{F}}^{2}, \tag{12}\]
where \(\widetilde{\mathcal{F}}^{2}=\widetilde{\mathcal{F}}_{ab}\widetilde{\mathcal{F }}^{ab}\) and \(\widetilde{\mathcal{F}}_{ab}=\widetilde{\nabla}_{a}A_{b}-\widetilde{\nabla}_{b }A_{a}\) is the electromagnetic field strength tensor 1, one can see that the action in (8) describes an Einstein-Maxwell (EM) gravity non-minimally coupled to a scalar field. The corresponding Euler-Lagrange equations of the motion for the field \(\widetilde{\phi}\), \(\widetilde{g}_{ab}\), and \(A_{a}\) can now be given as
Footnote 1: The electromagnetic field strength tensor in the Einstein frame is related to the one in the Jordan frame by the transformation, \(\widetilde{\mathcal{F}}_{ab}=\mathcal{F}_{ab}/F(\phi)^{2}\).
\[\widetilde{R}_{ab}-\frac{1}{2}\widetilde{g}_{ab}\widetilde{R}- \left(\widetilde{\nabla}_{a}\widetilde{\phi}\;\widetilde{\nabla}_{b} \widetilde{\phi}-\widetilde{g}_{ab}V_{E}(\widetilde{\phi})\right.\] \[\left.+\widetilde{\mathcal{F}}_{ac}\widetilde{\mathcal{F}}^{cb}- \frac{1}{4}\widetilde{g}_{ab}\widetilde{\mathcal{F}}^{2}\right)=0, \tag{13}\] \[\widetilde{\nabla}_{a}\widetilde{\nabla}^{a}\widetilde{\phi}- \frac{\delta V_{E}(\widetilde{\phi})}{\delta\widetilde{\phi}}=0,\] (14) \[\widetilde{\nabla}_{a}\left(\widetilde{\mathcal{F}}^{ab}\right)=0. \tag{15}\]
Determining a general solution for the above equations of motion is in general difficult when the scalar field \(\widetilde{\phi}\) has a dynamical (coordinate dependent) solution [95; 96; 97; 98; 99; 100]. However, in the case when the scalar field \(\widetilde{\phi}\) has a constant profile then assuming,
\[\widetilde{\phi}=\mathcal{C},\quad F(\phi)=e^{\sqrt{\frac{2\kappa}{3}} \mathcal{C}},\quad V_{E}(\widetilde{\phi})=\frac{\Lambda}{\kappa}e^{-\sqrt{ \frac{2\kappa}{3}}\mathcal{C}}, \tag{16}\]
where \(\mathcal{C}\) and \(\Lambda\) are some constants, it may be observed that the action in (8) reduces to
\[\mathcal{S}_{E}= \frac{1}{2\kappa}\int d^{4}x\sqrt{-\widetilde{g}}\left(\widetilde{ R}-2\Lambda e^{-\sqrt{\frac{2\kappa}{3}}\mathcal{C}}-\frac{1}{4}\widetilde{ \mathcal{F}}^{2}\right),\] \[= \frac{1}{2\widetilde{\kappa}}\int d^{4}x\sqrt{-g}\left(R-2\Lambda -\frac{1}{4}\mathcal{F}^{2}\right), \tag{17}\]
which describes the Einstein-Maxwell (EM) gravity with a cosmological constant (\(\Lambda\)) and a modified effective gravitational constant (\(G_{eff}\)) given by
\[G_{eff}=\frac{\widetilde{\kappa}}{8\pi}=Ge^{-\sqrt{\frac{2\kappa}{3}}\mathcal{ C}}. \tag{18}\]
The cosmological constant is related to the AdS or dS length (\(L\)) as \(\Lambda=-\frac{3}{L^{2}}\) or \(\Lambda=\frac{3}{L^{2}}\) respectively. Depending on the sign of the cosmological constant, it is well known that the EM-gravity with a cosmological constant possesses several solutions namely, Reissner-Nordstrom AdS/dS black hole and Kerr-Newmann AdS/dS black hole. Moreover, for vanishing Maxwell field these solutions reduce to Scwarzschild AdS/dS black hole and Kerr AdS/dS black hole respectively. These black holes usually belong to the constant \(\widetilde{R}\) (\(\widetilde{R}=4\Lambda\)) solution space of the EM-gravity with a cosmological constant. Given the constraints (16) together with \(\widetilde{R}=4\Lambda\), (9) and (10), one can see that the curvature \(R\) in the Jordan frame can be fixed to a constant value given by
\[R=R_{0}=4\Lambda e^{-\sqrt{\frac{2\kappa}{3}}\mathcal{C}}, \tag{19}\]
From the above result, it is straightforward to see that the constant \(\widetilde{R}\) solution space of the EM-gravity with cosmological constant maps to the constant \(R\) solution space
of the \(f(R)\)-gravity in the Jordan frame. In the next subsection, we discuss the consistency conditions required to match the constant Ricci scalar solution space in both the EM-gravity with a cosmological constant and the \(f(R)\)-gravity in the Jordan frame. We will also discuss several viable forms of the \(f(R)\) function that satisfy the said conditions.
### Constant curvature black hole solutions for \(f(R)\) gravity in Jordan frame
The action describing the \(f(R)\) gravity in Jordan frame in (1) where the matter contribution comes from a Maxwell field (\(A_{a}\)) can be given as
\[\mathcal{S}_{\mathcal{J}}=\frac{1}{2\kappa}\int d^{4}x\sqrt{-g}\left(f(R)- \frac{1}{4}\mathcal{F}^{2}\right), \tag{20}\]
where \(\mathcal{F}^{2}=\mathcal{F}_{ab}\mathcal{F}^{ab}\) and \(\mathcal{F}_{ab}=\nabla_{a}A_{b}-\nabla_{b}A_{a}\) is the electromagnetic field strength tensor. The equations of motion for the above action are given by (2) and the following Maxwell equation
\[\nabla_{a}\left(\mathcal{F}^{ab}\right)=0, \tag{21}\]
where the traceless Maxwell stress-energy tensor is given as
\[T_{ab}=\frac{2}{\kappa}\left(\mathcal{F}_{ac}\mathcal{F}^{cb}-\frac{1}{4}g_{ ab}\mathcal{F}^{2}\right) \tag{22}\]
Considering the constant curvature scalar \(R=R_{0}\), the trace of (2) leads to
\[R_{0}=\frac{2f(R_{0})}{F(R_{0})}. \tag{23}\]
which determines the curvature scalar in terms of the function \(f(R)\) as long as \(F(R_{0})\neq 0\). The condition that the curvature scalar must assume constant real values restricts the possible form of the \(f(R)\) function. This also implies the possibility that some theories of \(f(R)\)-gravity can give multiple real values of \(R_{0}\) while for others one may not have a real constant value for the curvature scalar. Several models of \(f(R)\)-gravity, where one can have a real constant value for the curvature scalar, have been discussed in [101]. Thus restricting to such theories of \(f(R)\)-gravity with a real constant value for the curvature scalar, one can use (23) in (2) to obtain
\[R_{ab}-\frac{f(R_{0})}{2F(R_{0})}g_{ab}=\frac{\kappa}{F(R_{0})}T_{ab}. \tag{24}\]
The above equations of motion for the \(f(R)\)-gravity with constant curvature scalar are reminiscent of the ones obtained for the usual Einstein gravity with a cosmological constant
\[\Lambda=\frac{f(R_{0})}{2F(R_{0})}, \tag{25}\]
and an effective gravitational constant
\[G_{eff}=\frac{G}{F(R_{0})}, \tag{26}\]
which indicates a duality between the two different theories captured by the same action in (17). To ensure the positivity of the effective gravitational constant one has to impose the following conditions
\[F(R_{0})>0,\quad F^{\prime}(R_{0})>0, \tag{27}\]
where the second conditions \(F^{\prime}(R_{0})>0\) is required for a stable higher-order \(f(R)\)-gravity 2. We now discuss some known models of \(f(R)\) gravity with constant scalar curvature [101] and discuss which of them can be described by an Einstein gravity with a cosmological constant. We also study the stability conditions for their constant scalar curvature solutions described by the following four-dimensional line element
Footnote 2: Note that the condition \(F^{\prime}(R_{0})>0\) comes from the requirement of the positivity of, \(\frac{dd_{eff}}{dR}\big{|}_{R=R_{0}}\) which ensures the stability of the \(f(R)\)-gravity.
\[ds^{2}=-g(r)\,dt^{2}+\frac{dr^{2}}{g(r)}+r^{2}\,d\Omega_{k}^{2}, \tag{28}\]
where
\[d\Omega_{k}^{2}=\begin{cases}d\theta^{2}+\sin^{2}\theta\,d\phi^{2},&k=1\\ d\theta^{2}+d\phi^{2},&k=0\\ d\theta^{2}+\sinh^{2}\theta\,d\phi^{2},&k=-1\end{cases} \tag{29}\]
which represents the line element of a 2-sphere for \(k=1\), a 2-hyperboloid (\(H_{2}\)) for \(k=-1\), and flat 2-dimensional line element for \(k=0\) respectively.
#### Case (I): \(\mathbf{f(R)=R-\mu^{4}/R}\) model
This is one of the earliest models of \(f(R)\)-gravity proposed in [102] to explain the positive acceleration of the expanding Universe. Interestingly, this model reduces to the usual Einstein gravity with \(f(R)=R\) for very large values of the Ricci scalar. However, for small values of the Ricci scalar one can not neglect the \(1/R\) term implying a modified gravity in this regime. The field equation for the Maxwell field is given by (21) and for the metric it is given by
\[\left(1+\frac{\mu^{4}}{R^{2}}\right)R_{ab}-\frac{1}{2}\left(1- \frac{\mu^{4}}{R^{2}}\right)Rg_{ab}\] \[+\mu^{4}\left(g_{ab}\Box-\nabla_{a}\nabla_{b}\right)R^{-2}=2 \left(\mathcal{F}_{ac}\mathcal{F}^{cb}-\frac{1}{4}g_{ab}\mathcal{F}^{2} \right), \tag{30}\]
For the constant-curvature vacuum solutions (\(\nabla_{a}R=0\)), one has \(R=\pm\sqrt{3}\mu^{2}\). Now on using the metric ansatz (28) in (30), one can see that the only possible solution is the Schwarzschild-AdS/dS black hole solution with
\[g(r)=k-\frac{\Lambda}{3}r^{2}-\frac{M}{r},\quad\Lambda=\mp\frac{ \sqrt{3}}{4}\mu^{2} \tag{31}\]
which is also a solution to the Einstein gravity with the cosmological constant (\(\Lambda\)). Here, it should be noted that for this model of \(f(R)\)-gravity it is not possible to have Reissner-Nordstrom AdS/dS black hole solutions. Furthermore, the stability conditions (27) in this case become
\[\left(1+\frac{\mu^{4}}{R^{2}}\right)\bigg{|}_{R=\pm\sqrt{3}\mu^{2 }}>0,\] \[\left(-\frac{2\mu^{4}}{R^{3}}\right)\bigg{|}_{R=\pm\sqrt{3}\mu^{2 }}>0 \tag{32}\]
which shows that the condition \(F^{\prime}(\pm\sqrt{3}\mu^{2})>0\) is violated for both the Schwarzschild-AdS/dS solutions. In particular, for the Schwarzschild-dS solution, this violation implies that this model suffers from the Dolgov-Kawasaki instability [103]. To remove this instability from the Schwarzschild-dS solution it was proposed in [104] to add an additional \(R^{2}\) term to the given model.
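The constant-curvature values quoted above can be recovered directly from the trace condition (23), \(R_{0}F(R_{0})=2f(R_{0})\); a minimal symbolic sketch (illustrative only):

```python
import sympy as sp

# Case (I): f(R) = R - mu^4 / R.  Constant-curvature values solve R*F(R) = 2*f(R).
R = sp.Symbol('R', real=True, nonzero=True)
mu = sp.Symbol('mu', positive=True)

f = R - mu**4 / R
F = sp.diff(f, R)

roots = sp.solve(sp.Eq(R * F, 2 * f), R)
print(roots)                                          # the two roots +/- sqrt(3)*mu**2
print([sp.simplify(F.subs(R, r0)) for r0 in roots])   # F(R0) = 4/3 > 0 for both roots
```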
#### Case (II): \(\mathbf{f(R)=R+\alpha R^{n}}\) model
The model of \(f(R)\) discussed before suffers from instability problems in the strong gravity/small \(R\) regime but exhibits no problems in the weak gravity/large \(R\) regime. Several viable models of \(f(R)\)-gravity with no instability problems in the weak gravity regime have been discussed in [104]. To resolve such instability problems in the strong gravity regime, it was proposed in [105] to consider corrections proportional to higher orders of curvature such as \(R^{n}\) for \(n>1\). This is the motivation for considering the given \(f(R)\)-gravity model here. Solving the field equation (2) for the ansatz (28) gives
\[g(r)=k-\frac{\Lambda}{3}r^{2}-\frac{M}{r},\] \[k=\frac{2^{n}(8\Lambda)^{1-n}}{2n-4},\ n\neq 2,\] \[\Lambda=2^{\frac{2}{n-1}}\left(4^{n}(n-2)\,\alpha\right)^{\frac{ 1}{1-n}} \tag{33}\]
which describes the metric for a Schwarzschild black hole in the presence of a cosmological constant. Similar to the previous case, one cannot obtain the charged solution, see [101; 105]. The stability conditions (27) in this case now reduce to
\[\frac{n-1}{n-2}>0,\quad\frac{n(n-1)}{4\Lambda(n-2)}>0 \tag{34}\]
which shows that for \(n<0\) the Schwarzschild-AdS solution (\(\Lambda<0\)) is stable. However, for \(n>2\) the Schwarzschild-dS solution (\(\Lambda>0\)) solution is stable and free from the Dolgov-Kawasaki instability.
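As a consistency check of (33), the trace condition (23) can be solved for a particular choice of \(n\) and \(\alpha\) and the resulting \(R_{0}\) compared with \(4\Lambda\), using \(\Lambda=f(R_{0})/2F(R_{0})\) from (25). A small sketch with illustrative parameter values:

```python
import sympy as sp

# Case (II): f(R) = R + alpha*R^n, with illustrative values n = 4, alpha = 1.
n_val, alpha_val = 4, 1
R = sp.Symbol('R', positive=True)

f = R + alpha_val * R**n_val
F = sp.diff(f, R)

# Constant-curvature value from the trace condition (23), excluding the trivial root.
R0 = [s for s in sp.solve(sp.Eq(R * F, 2 * f), R) if s != 0][0]
Lam_from_25 = sp.simplify(f.subs(R, R0) / (2 * F.subs(R, R0)))

# Lambda quoted in (33): 2^(2/(n-1)) * (4^n * (n-2) * alpha)^(1/(1-n))
Lam_from_33 = 2**sp.Rational(2, n_val - 1) * (4**n_val * (n_val - 2) * alpha_val)**sp.Rational(1, 1 - n_val)

print(sp.simplify(R0 - 4 * Lam_from_25))       # expected: 0
print(sp.simplify(Lam_from_25 - Lam_from_33))  # expected: 0
```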
#### Case (III): \(\mathbf{f(R)=R+\lambda\exp{(-\xi\;R)}}\) model
An interesting and promising model of \(f(R)\)-gravity can be obtained by adding an exponential correction term of the form \(\lambda\exp{(-\xi\;R)},\xi\in\mathcal{R}\) to the usual Einstein gravity. Notably, this model was shown to agree with cosmological observations related to the solar system and that of gravitational lensing of galaxies and clusters [106; 107; 108]. Considering this form of \(f(R)\)-gravity in (2) one gets the corresponding field equations with \(T_{ab}\) as specified in (22). To determine the solutions, we once again substitute the ansatz (28) for the metric in the field equations (2) with the given form of the \(f(R)\) function. This gives us the following
\[g(r)=k-\frac{\Lambda}{3}r^{2}-\frac{M}{r}+\frac{Q}{r^{2}}, \tag{35}\]
indicating that the line element in (28) represents the Reissner-Nordstrom AdS/dS black hole solutions. Moreover, one also has the following constraint relations
\[\Lambda=\frac{\lambda e^{\frac{2}{Q-2}+2}}{2(Q-2)},\quad\xi=\quad \frac{e^{-\frac{2(Q-1)}{Q-2}}(1-Q)}{\lambda}, \tag{36}\]
which gives the cosmological constant (\(\Lambda\)) and the parameter \(\xi\) in terms of \(\lambda\) which we consider as the only free parameter in the theory. It is straightforward to see that on setting \(Q=0\) (i.e., the case of vanishing Maxwell field) one can also recover the usual Schwarzschild-AdS/dS solutions for the \(f(R)\)-gravity. Now to understand the stability of these solutions, we need to see in what regime the conditions (27) are satisfied. In this case, the stability conditions now become
\[Q>0,\quad\frac{(Q-1)^{2}}{2\Lambda(Q-2)}>0, \tag{37}\]
which shows that the Reissner-Nordstrom AdS or dS solutions are stable for \(Q<2\) or \(Q>2\) respectively, whereas for \(Q=0\) only the Schwarzschild-AdS solution is stable.
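For concreteness, the constraint relations (36) and the stability conditions (37) can be evaluated numerically; a minimal sketch in which the chosen values of \(\lambda\) and \(Q\) are purely illustrative:

```python
import math

# Case (III): f(R) = R + lambda*exp(-xi*R).  Evaluate the constraints (36)
# and the stability conditions (37) for illustrative values lam = 1, Q = 1.5.
lam, Q = 1.0, 1.5

Lambda = lam * math.exp(2 / (Q - 2) + 2) / (2 * (Q - 2))
xi = math.exp(-2 * (Q - 1) / (Q - 2)) * (1 - Q) / lam

stable = (Q > 0) and ((Q - 1)**2 / (2 * Lambda * (Q - 2)) > 0)
print(f"Lambda = {Lambda:.4f}, xi = {xi:.4f}, stable: {stable}")
# Here Lambda < 0, i.e. a Reissner-Nordstrom-AdS branch, consistent with Q < 2 being stable.
```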
#### Case (IV): \(\mathbf{f(R)=R+\eta\left(\log{R}\right)}\) model
In this case, we consider the \(f(R)\)-gravity model with logarithmic corrections. Such models involving the logarithm of the Ricci scalar have been studied in the past to explain the inflationary paradigm in cosmology (see [109] and reference therein). Similar to the previous examples, solving the field equations (2) with the choice of the given \(f(R)\) for the metric ansatz (28) one gets the following
\[g(r)=k-\frac{\Lambda}{3}r^{2}-\frac{M}{r}+\frac{Q}{r^{2}}, \tag{38}\]
which represents the Reissner-Nordstrom AdS/dS black hole solutions. These black hole solutions are accompanied by the following constraint relations
\[\Lambda=\frac{\eta}{2}\,W\left(\frac{\sqrt{e}}{2\eta}\right),\quad Q=1+\frac{1}{2 W\left(\frac{\sqrt{e}}{2\eta}\right)}, \tag{39}\]
where \(W\) stands for the Lambert-\(W\) or the productlog function. The above constraints give the cosmological constant and the charge in terms of \(\eta\) which is the only free parameter in the theory. For this model of \(f(R)\)-gravity one can also obtain the Schwarzschild-AdS/dS black hole solutions for vanishing Maxwell field. Moreover, in this case, the stability conditions (27) reduce to the following
\[\frac{2-\log\left(16\Lambda^{2}\right)}{1-\log\left(16\Lambda^{2}\right)}>0, \quad\frac{1}{\log\left(16\Lambda^{2}\right)-1}>0 \tag{40}\]
which shows that all of the black hole solutions are stable as long as the condition, \(2>\log\left(16\Lambda^{2}\right)>1\) is satisfied.
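The Lambert-\(W\) constraints (39) and the stability window (40) can likewise be evaluated numerically; a small sketch using an illustrative value of \(\eta\):

```python
import math
from scipy.special import lambertw

# Case (IV): f(R) = R + eta*log(R).  Evaluate the constraints (39) and check
# the stability window 1 < log(16*Lambda^2) < 2 from (40), for an illustrative eta.
eta = 1.0

w = lambertw(math.sqrt(math.e) / (2 * eta)).real   # principal branch, real for positive argument
Lambda = 0.5 * eta * w
Q = 1 + 1 / (2 * w)

x = math.log(16 * Lambda**2)
print(f"Lambda = {Lambda:.4f}, Q = {Q:.4f}, log(16*Lambda^2) = {x:.3f}, stable: {1 < x < 2}")
# For this eta the window (40) is not satisfied; other eta values can be scanned in the same way.
```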
In principle, one can consider a more general form of the \(f(R)\) function combining the models discussed here, resulting in more exotic forms of \(f(R)\)-gravity [101; 110; 111]. It may also be seen that for some models of constant curvature \(f(R)\)-gravity only Schwarzschild-AdS/dS black hole solutions are possible, whereas for others it is possible to obtain Reissner-Nordstrom AdS/dS black hole solutions as well. This observation clearly implies a duality between the constant curvature \(f(R)\)-gravity theories and the Einstein-Maxwell gravity with a cosmological constant. Having obtained the static spherically symmetric solutions to the cases of \(f(R)\)-gravity, one can now look for their rotating axisymmetric stationary solutions. In general, the way to obtain such rotating solutions is to use the Newman-Janis algorithm. However, as described in the introduction, using the NJ algorithm for modified gravity theories introduces pathologies in the resulting axially-symmetric metric [92]. In the next section, we propose a modified NJ algorithm for obtaining rotating solutions to Einstein-Maxwell gravity with a cosmological constant. Then, by exploiting the duality described here, one can show that these rotating solutions for Einstein-Maxwell gravity with a cosmological constant will also be the solutions for the constant curvature \(f(R)\)-gravity theories.
## III Newman-Janis algorithm: Einstein gravity with cosmological constant
The original Newman-Janis (NJ) algorithm was proposed as a five-step procedure for generating new rotating axisymmetric solutions from known static spherically symmetric solutions (also known as the seed metric) of the Einstein equations [85; 86; 112]. In this section, we describe a modified Newman-Janis algorithm that generates rotating solutions for Einstein gravity with a cosmological constant from its known static spherically symmetric solutions. Here the Schwarzschild or the Reissner-Nordstrom AdS/dS solutions can be considered as the seed metrics for the Einstein gravity with a cosmological constant. To describe the five steps of the said algorithm for the present case, we start with the following general spherically symmetric static seed metric
\[ds^{2}=-F(r)\,dt^{2}+G(r)^{-1}\,dr^{2}+H(r)\,\left(d\theta^{2}+\sin^{2}\theta \,d\phi^{2}\right). \tag{41}\]
which is used to generate the rotating solutions. Given the above seed metric, the first step of the NJ algorithm is to write it in terms of the Eddington-Finkelstein coordinates (\(x_{\mu}=\{u,r,\theta,\phi\}\)) using the following transformation
\[du=dt-\frac{dr}{\sqrt{F\,G}}. \tag{42}\]
The second step of the algorithm involves expressing the contravariant form of the seed metric in terms of a null tetrad, \(e^{\mu}_{a}=\{l^{\mu},n^{\mu},m^{\mu},\overline{m}^{\mu}\}\) as:
\[g^{\mu\nu}=l^{\mu}n^{\nu}+l^{\nu}n^{\mu}-m^{\mu}\overline{m}^{\nu}-m^{\nu} \overline{m}^{\mu}\,, \tag{43}\]
where
\[l_{\mu}l^{\mu} = m_{\mu}m^{\mu}=n_{\mu}n^{\mu}=l_{\mu}m^{\mu}=n_{\mu}m^{\mu}=0,\] \[l_{\mu}n^{\mu} = -m_{\mu}\overline{m}^{\mu}=1 \tag{44}\]
with \(\overline{m}_{\mu}\) being the complex conjugate of the \(m_{\mu}\) vector. For the seed metric 41, the form of the null tetrad can be obtained as
\[l^{\mu} = \delta^{\mu}_{r},\] \[n^{\mu} = \sqrt{F/G}\,\delta^{\mu}_{u}-(F/2)\,\,\delta^{\mu}_{r},\] \[m^{\mu} = \left(\delta^{\mu}_{\theta}+\frac{i}{\sin\theta}\,\delta^{\mu}_ {\phi}\right)/\sqrt{2H}. \tag{45}\]
Having obtained the null tetrad, the third step is to extend the Eddington-Finkelstein coordinates (\(x_{\mu}\)) to a new set of complex coordinates using the following transformation
\[d\widetilde{u} \rightarrow du+i\,a\,P(\theta),\quad d\widetilde{r}\to dr-i\,a \,\sin\theta,\] \[d\widetilde{\phi} \rightarrow d\phi+i\,a\,Q(\theta),\quad\theta\rightarrow\theta. \tag{46}\]
where \(a\) is some constant and the old tetrad and metric are recovered when one imposes the constraint, \(x_{\mu}=\overline{x}_{\mu}\) to the above coordinate transformation. Here it is to be noted that the usual NJ algorithm for the Einstein gravity in flat spacetime involves the complexification of only \(\{u,r\}\)-coordinates. However, in the present case of Einstein gravity with a cosmological constant, one requires an additional complexification of \(\phi\)-coordinate as well. Thus to summarize, the effect of this transformation is to create a new metric whose components are (real) functions of the complex coordinates. For the modified
NJ-algorithm being discussed here, we will follow the approach adopted in [113; 114; 115]. On using the transformation given in eq.(46), the components \(F(r)\), \(G(r)\) and \(H(r)\) of the metric (41) transform into the new functions \(A(\overline{r},a)\), \(B(\overline{r},a)\) and \(C(\overline{r},a)\) respectively. We now consider the following ansatz for the functions \(A\), \(B\) and \(C\)
\[A(\overline{r})=A(r,\theta) =\frac{\Delta_{r}(r)-a^{2}\,\sin^{2}\theta\,\Delta_{\theta}(\theta)}{r^{2}+a^{2}\,\cos^{2}\theta},\] \[B(\overline{r})=B(r,\theta) =\frac{1}{A(r,\theta)},\] \[C(\overline{r})=C(r,\theta) =\frac{\left(r^{2}+a^{2}\,\cos^{2}\theta\right)}{\Delta_{\theta}(\theta)} \tag{47}\]
which is inspired by the Kerr-AdS metric where we match its \(\{uu\}\) and \(\{\theta\theta\}\) components with the functions \(A(r,\theta)\) and \(C(r,\theta)\) respectively.
The fourth step in the algorithm is to write the transformed null tetrad using the complex coordinate transformation introduced in (46) as
\[l^{\mu} =\delta^{\mu}_{r},\] \[n^{\mu} =\delta^{\mu}_{u}-(A/2)\ \delta^{\mu}_{r},\] \[m^{\mu} =\frac{1}{\sqrt{2C}}\left(\delta^{\mu}_{\theta}+ia\left(\delta^{ \mu}_{u}P-\delta^{\mu}_{r}\sin\theta\right)+i\left(\csc\theta+Q\right)\delta^{ \mu}_{\phi}\right), \tag{48}\]
which on using the eq. (43) yields the contravariant form of the transformed seed metric with the following non vanishing elements
\[g^{uu} =\frac{2\,a^{2}\,P^{2}(\theta)\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)},\] \[g^{ur} =g^{ru}=-1-\frac{2\,a^{2}\,P(\theta)\,\sin\theta\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)},\] \[g^{u\phi} =g^{\phi u}=\frac{2\,a\,P(\theta)\,\left(\csc\theta+Q(\theta)\right)\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)},\] \[g^{rr} =\frac{2\,\Delta_{r}(r)}{\Sigma(r,\theta)},\quad g^{\theta\theta}=\frac{2\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)},\] \[g^{r\phi} =g^{\phi r}=-\frac{2\,a\,\left(1+Q(\theta)\,\sin\theta\right)\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)},\] \[g^{\phi\phi} =\frac{2\,\left(\csc\theta+Q(\theta)\right)^{2}\,\Delta_{\theta}(\theta)}{\Sigma(r,\theta)}\,, \tag{49}\]
where for brevity we have introduced the function \(\Sigma(r,\theta)=\left(a^{2}+2\,r^{2}+a^{2}\,\cos 2\theta\right)\), in the above expressions. The line element corresponding to the above-transformed metric can now be given as
\[ds^{2} =\frac{2a^{2}\Delta_{\theta}(\theta)\sin^{2}\theta-\Delta_{r}(r) }{\Sigma(r,\theta)}\,du^{2}-2\,du\,dr\] \[\qquad\qquad+\,\frac{\Sigma(r,\theta)}{2\,\Delta_{\theta}( \theta)}\,d\theta^{2}+2\,a\,\frac{P(\theta)}{\csc\theta+Q(\theta)}\,drd\phi\] \[+\,2\,a\,\frac{2P(\theta)\left(\Delta_{r}(r)-a^{2}\Delta_{\theta} (\theta)\sin^{2}\theta\right)-\sin\theta\Sigma(r,\theta)}{\Sigma(r,\theta)(Q( \theta)+\csc\theta)}\,dud\phi\] \[+\frac{4a^{2}\sin\theta P(\theta)+\frac{\Sigma(r,\theta)}{ \Delta_{\theta}(\theta)}+\frac{4P(\theta)^{2}\left(a^{4}\Delta_{\theta}(\theta) \sin^{2}\theta-a^{2}\Delta_{r}(r)\right)}{\Sigma(r,\theta)}\,d\phi^{2}}{2(Q( \theta)+\csc\theta)^{2}}\,d\phi^{2} \tag{50}\]
which is nothing but the line element of a rotating-AdS black hole solution in the Eddington-Finkelstein coordinates.
The fifth and final step of the algorithm is to go back to Boyer-Lindquist coordinates (BLC) using the following global coordinates transformations
\[du=dt-\frac{a^{2}+r^{2}}{\Delta_{r}(r)}\,dr,\quad d\phi=d\phi-\frac{a\,S}{\Delta_{r}(r)}\,dr\,. \tag{51}\]
where \(S\) is a constant to be determined later on. The line element of the rotating-AdS metric in BLC can now be given as follows
\[ds^{2} =\frac{2\left(a^{2}\Delta_{\theta}(\theta)\sin^{2}\theta-\Delta_{ r}(r)\right)}{\Sigma(r,\theta)}\,dt^{2}\] \[\qquad\qquad+\,\frac{\Sigma(r,\theta)}{2\Delta_{r}(r)}\,dr^{2}+ \frac{\Sigma(r,\theta,a)}{2\Delta_{\theta}(\theta)}\,d\theta^{2}\] \[\quad-4\,a\,\frac{\sin^{2}\theta\left(\Delta_{\theta}(\theta) \left(a^{2}+r^{2}\right)-\Delta_{r}(r)\right)}{S\,\Sigma(r,\theta)}\,dt\,d\phi\] \[+\,\frac{\sin^{2}\theta\left(2\Delta_{\theta}(\theta)\left(a^{2} +r^{2}\right)^{2}-2a^{2}\Delta_{r}(r)\sin^{2}\theta\right)}{S^{2}\,\Sigma(r, \theta)}\,d\phi^{2} \tag{52}\]
where in deriving the above line element, we have considered that all the off-diagonal elements of the corresponding metric should vanish except its \(\{t\phi\}\) component. This further gives us two constraint relations that determine the unknown functions \(P(\theta)\) and \(Q(\theta)\) in terms of the functions \(\Delta_{r}\) and \(\Delta_{\theta}\) as
\[P(\theta) = \frac{\sin\theta}{\Delta_{\theta}(\theta)} \tag{53}\] \[Q(\theta) = \csc\theta\,\left(-1+\frac{S}{\Delta_{\theta}(\theta)}\right), \tag{54}\]
To this end, it is to be noted that the line element in (52), derived from the modified Newman Janis algorithm discussed here, has two unknown functions \(\Delta_{r}\), \(\Delta_{\theta}\) and a constant \(S\). One can fix these unknowns using the equations of motion of Einstein-Maxwell gravity with a cosmological constant whose action can be given as
\[\mathcal{S}=\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}\left(R-2\Lambda-\frac{1}{4} \mathcal{F}^{2}\right), \tag{55}\]
where \(G\) is the four-dimensional gravitational constant, \(\Lambda\) is the cosmological constant. On solving the Einstein field equations for the rotating metric given in eq. (52), one can determine the unknown functions \(\Delta_{r}(r)\), \(\Delta_{\theta}(\theta)\), and the constant \(S\) as
\[\Delta_{r}(r) = \left(a^{2}+r^{2}\right)\left(1-\frac{\Lambda r^{2}}{3}\right)-2 Gmr+Q^{2}\] \[\Delta_{\theta}(\theta) = 1+\frac{\Lambda a^{2}}{3}\cos^{2}\theta,\quad S=1+\frac{\Lambda a ^{2}}{3}, \tag{56}\]
where \(Q\) is the charge of the black hole and the cosmological constant is related to the AdS or dS length (\(L\)) as \(\Lambda=-\frac{3}{L^{2}}\) or \(\Lambda=\frac{3}{L^{2}}\) respectively. Plugging the above form of the functions \(\Delta_{r}(r)\), \(\Delta_{\theta}(\theta)\), and the constant \(S\) in eq.(52), one obtains the line element corresponding to a Kerr-Newman-AdS/dS black hole. The mass \(M\) and angular momentum \(J\) of the Kerr-Newman-AdS/dS black hole are related to the parameters \(m\) and \(a\) through the relations \(M=m/\Sigma^{2},\ J=am/\Sigma^{2}\) respectively. One can also obtain the Kerr-AdS/dS black hole solution, described by (52) and (56) with \(Q=0\), for the Einstein gravity with a cosmological constant by using the same algorithm.
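As a quick sanity check of the modified algorithm, one can verify symbolically that the rotating line element (52) with the functions (56) collapses back to the static seed metric (28) with (35) in the non-rotating limit \(a\to 0\). A minimal sketch (symbol and variable names are our own):

```python
import sympy as sp

# Sanity check: the rotating line element (52) with (56) should reduce to the
# static seed metric (28)+(35) in the non-rotating limit a -> 0.
r, th, a, Lam, G, m, Q = sp.symbols('r theta a Lambda G m Q', real=True, positive=True)

Delta_r = (a**2 + r**2) * (1 - Lam * r**2 / 3) - 2 * G * m * r + Q**2
Delta_th = 1 + Lam * a**2 * sp.cos(th)**2 / 3
S = 1 + Lam * a**2 / 3
Sigma = a**2 + 2 * r**2 + a**2 * sp.cos(2 * th)

# Metric coefficients read off from (52); g_tphi is half of the dt*dphi coefficient.
g_tt = 2 * (a**2 * Delta_th * sp.sin(th)**2 - Delta_r) / Sigma
g_rr = Sigma / (2 * Delta_r)
g_thth = Sigma / (2 * Delta_th)
g_tphi = -2 * a * sp.sin(th)**2 * (Delta_th * (a**2 + r**2) - Delta_r) / (S * Sigma)
g_phiphi = sp.sin(th)**2 * (2 * Delta_th * (a**2 + r**2)**2
                            - 2 * a**2 * Delta_r * sp.sin(th)**2) / (S**2 * Sigma)

# Static seed: g(r) of (28)/(35) with k = 1, M = 2Gm, and the Q^2 of (56) in place of Q.
g_static = 1 - Lam * r**2 / 3 - 2 * G * m / r + Q**2 / r**2

checks = [g_tt.subs(a, 0) + g_static,
          g_rr.subs(a, 0) - 1 / g_static,
          g_thth.subs(a, 0) - r**2,
          g_tphi.subs(a, 0),
          g_phiphi.subs(a, 0) - r**2 * sp.sin(th)**2]
print([sp.simplify(c) for c in checks])   # expected: [0, 0, 0, 0, 0]
```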
Now going back to the cases of \(f(R)\)-gravity discussed before, for \(f(R)=R-\mu^{4}/R\) one can see that the Kerr-AdS/dS black hole solution, described by (52) and (56) with \(Q=0\), is also a solution to the field equations (30) for vanishing Maxwell field, with the identification of the cosmological constant (\(\Lambda\)) with the parameter \(\mu\) given in (31) as before. In Table 1 we summarize the dualities of the different \(f(R)\)-gravities with constant Ricci scalar, and of their solutions, with Einstein gravity with a cosmological constant in the presence or absence of a Maxwell field.
## IV Discussion and Conclusion
In this paper, we use the conformal transformation to express the \(f(R)\) gravity in the Jordan to Einstein frame. We find that constant curvature \(f(R)\) gravity theories in the Jordan frame are dual to Einstein-Maxwell gravity with a cosmological constant or modification in effective gravitational constant. We show the existence of the aforementioned duality by giving specific examples of well-known \(f(R)\) gravity theories. Table 1 shows the several \(f(R)\) gravity theories, their dual and the effective gravitational constant (\(G_{eff}\)). We further use this fact to derive the rotating blackhole solutions for generalized \(f(R)\) gravity with constant curvature. Previously, the Newman-Janis algorithm was used to generate the rotating black hole solution for Einstein and modified gravity. However, the NJ algorithm is only known to give accurate rotating spacetimes for the Einstein gravity. We then present a modified NJ algorithm to generate the axisymmetric rotating spacetimes for Einstein-Maxwell gravity with CC. Our modified NJ algorithm involves an additional complexification of \(\phi\)-coordinate to obtain the rotating spacetime for Einstein gravity with \(\Lambda\). This additional complexification gives the tractable form of transformed rotating metric with two unknown functions which are determined from the field equations of the gravity theory under consideration. The determination of these unknown functions in transformed metric ensures that the resulting rotating solution is indeed a solution of that particular theory.
We have presented our results for the modified gravity theories assuming the constant curvature solutions. However, we believe that one can also map the solutions of \(f(R)\)-gravity with dynamical Ricci scalar to those of the EM-gravity non-minimally coupled to a scalar field in the Einstein/Jordan frame. We also plan to explore the implications of such duality between two different gravitational theories in the context of gauge/gravity duality [116]. We leave these interesting avenues for future works.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
\(\mathbf{f(R)}\) **gravity** & **Dual** & **Black hole solutions (AdS/dS)** & **Dictionary with \(\mathbf{G_{eff}=G/F(4\Lambda)}\)** \\ \hline
\(\mathbf{R-\mu^{4}/R}\) & \(\mathbf{R-2\Lambda}\) & Schwarzschild and Kerr & \(\Lambda=\mp\frac{\sqrt{3}}{4}\mu^{2}\) \\ \hline
\(\mathbf{R+\alpha R^{n}}\) & \(\mathbf{R-2\Lambda}\) & Schwarzschild and Kerr & \(\Lambda=2^{\frac{2}{n-1}}\left(4^{n}(n-2)\,\alpha\right)^{\frac{1}{1-n}}\) \\ \hline
\(\mathbf{R+\lambda\exp{(-\xi\,R)}}\) & \(\mathbf{R-2\Lambda}\) & Schwarzschild and Kerr & \(\Lambda=-\frac{\lambda e}{4},\quad\xi=\frac{1}{e\lambda}\) \\ \cline{2-3}
 & \(\mathbf{R-2\Lambda-\frac{1}{4}\mathcal{F}^{2}}\) & Reissner-Nordstrom and Kerr-Newman & \(\Lambda=\frac{\lambda e^{\frac{2}{Q-2}+2}}{2(Q-2)},\quad\xi=\frac{e^{-\frac{2(Q-1)}{Q-2}}(1-Q)}{\lambda}\) \\ \hline
\(\mathbf{R+\eta\log{(R)}}\) & \(\mathbf{R-2\Lambda}\) & Schwarzschild and Kerr & \(\Lambda=\frac{\eta}{2}\,W\left(\frac{\sqrt{e}}{2\eta}\right)\) \\ \cline{2-3}
 & \(\mathbf{R-2\Lambda-\frac{1}{4}\mathcal{F}^{2}}\) & Reissner-Nordstrom and Kerr-Newman & \(\Lambda=\frac{\eta}{2}\,W\left(\frac{\sqrt{e}}{2\eta}\right),\quad Q=1+\frac{1}{2W\left(\frac{\sqrt{e}}{2\eta}\right)}\) \\ \hline
\end{tabular}
\end{table}
Table 1: A table showing the dictionary between the \(f(R)\)-gravities and their duals. The parameters \(G\) and \(G_{eff}\) denote the gravitational constant of \(f(R)\)-gravities and their duals respectively.
###### Acknowledgements.
We acknowledge Ido Ben-Dayan for offering suggestions and encouragement. PC is supported by the postdoctoral program at the Ariel University. VK acknowledges the postdoctoral grant of Unisa.
|
2301.13400 | The emergence of soft-glassy mechanics in simulated foams | Several seemingly different soft materials, including foams, cells, and many
complex fluids, exhibit remarkably similar rheological properties and
microscopic dynamics, termed soft glassy mechanics. Here, we show that such
behavior emerges from a simple model of a damped ripening foam, for
sufficiently weak damping. In particular, we observe intermittent avalanchey
dynamics, bubble super-diffusion, and power-law rheology that vary as the
damping factor is changed. In the limit of weak damping, the dynamics are
determined by the tortuous low-lying portions of the energy landscape, as
described in a recent study. For strong damping the viscous stresses cause the
system configuration to evolve along higher energy paths, washing out
small-scale tortuosity and producing motion with an increasingly ballistic
character. Using a microrheological approach, the linear viscoelastic response
of the model can be efficiently calculated. This resembles the power-law
rheology expected for soft glassy mechanics, but unexpectedly, is only weakly
sensitive to the damping parameter. Lastly, we study the reported memory effect
in foams after large perturbations and find that the timescale of the memory
goes to zero as the damping parameter vanishes, suggesting that the effect is
due to viscous stress relaxation rather than slow structural changes stabilized
by the energy landscape. | Amruthesh Thirumalaiswamy, Robert A. Riggleman, John C. Crocker | 2023-01-31T04:25:55Z | http://arxiv.org/abs/2301.13400v1 | # The emergence of soft-glassy mechanics in simulated foams
###### Abstract
Several seemingly different soft materials, including foams, cells, and many complex fluids, exhibit remarkably similar rheological properties and microscopic dynamics, termed soft glassy mechanics. Here, we show that such behavior emerges from a simple model of a damped ripening foam, for sufficiently weak damping. In particular, we observe intermittent avalanche dynamics, bubble super-diffusion, and power-law rheology that vary as the damping factor is changed. In the limit of weak damping, the dynamics are determined by the tortuous low-lying portions of the energy landscape, as described in a recent study. For strong damping the viscous stresses cause the system configuration to evolve along higher energy paths, washing out small-scale tortuosity and producing motion with an increasingly ballistic character. Using a microrheological approach, the linear viscoelastic response of the model can be efficiently calculated. This resembles the power-law rheology expected for soft glassy mechanics, but unexpectedly, is only weakly sensitive to the damping parameter. Lastly, we study the reported memory effect in foams after large perturbations and find that the timescale of the memory goes to zero as the damping parameter vanishes, suggesting that the effect is due to viscous stress relaxation rather than slow structural changes stabilized by the energy landscape.
## I Introduction
Soft glassy materials (SGMs) [1; 2; 3] such as foams and emulsions exhibit complex physical and rheological properties that continue to defy explanation. Moreover, the similarity of soft glassy mechanics to that of living cells [4; 5; 6] and glassy materials [7] has long been noted. Previous experimental and theoretical models have captured different aspects of such systems while falling short of a complete physical picture. For foams, rheological experiments have shown conflicting results -- showing weak [8; 9] or no [10] power-law frequency dependence of the dynamic shear modulus. Modeling efforts have largely focused on the now canonical 'bubble model' [11; 12], but the dynamic shear modulus of this model has not been reported. While a more recent study did report power-law rheology [13] it used a simplified system without damping. Further, experiments have shown memory effects [14; 15] in which a deformed foam shows perturbed mechanics which relaxes back to the unperturbed trend after a long time. The physical origin of this memory effect remains poorly understood.
Here, we study the soft glassy mechanics and rheology of foams, as well as their recovery from mechanical perturbation using a 3-D bubble model [11; 12; 16] with a simple damping law [7; 12], driven by simulated Ostwald ripening [17]. Previous stress-strain simulations [18; 16] of a 2-D bubble model without ripening have indicated a transition to avalanche dynamics with reduced applied strain rate. We look for a similar effect in our ripening foam model by changing the damping parameter \(\xi\), effectively changing the relative rates of ripening and viscous relaxation. This however requires the computationally expensive integration of the bubble model's equation of motion at low \(\xi\). We find that for sufficiently low damping (or equivalently slow ripening), the system dynamics are determined by the tortuous character of the energy landscape, as observed in a damping-free model [13], leading to avalanches in energy, super-diffusive bubble motion, and fractal configuration-space paths. For stronger damping, this behavior disappears, being replaced by a more continuous motion having a ballistic character. We use a microrheological approach to determine the dynamic shear modulus of our model from its intrinsic, non-thermal fluctuations [19; 20], and find that it generically has power-law rheology resembling recent experimental measurements [8; 9]. The rheology exponent is, unexpectedly, only a weak function of damping, providing new insights into the origin of power-law rheology in SGMs. Lastly, we study foam's recovery from mechanical perturbation by randomly scrambling the locations of bubbles in our model, finding that scrambling leads to perturbed mechanics that slowly return to the (average) unperturbed baseline, resembling experimental reports of mechanical memory in foams [14; 15]. The foam recovers to the baseline more quickly as the damping factor is reduced, and does so immediately when damping is removed, indicating that the memory effect is controlled by viscous stress relaxation, and not due to activation between energy minima.
## II Damped SGM model
### Coarsening bubble dynamics
We model a coarsening foam using the bubble model [11; 12] with a simplified damping rule and simulated Ostwald ripening. While the bubble model has been traditionally used to simulate foams [11; 18], it also serves as an effective model for many other SGMs [7; 13; 21]. The constituent bubbles of foam in this model are treated as soft-sphere particles that can overlap and interact via a pairwise repulsive potential when overlapping:
\[V(\mathbf{r}_{ij})=\begin{cases}\frac{\varepsilon}{2}\Big{(}1-\frac{\|\mathbf{ r}_{ij}\|}{a_{i}+a_{j}}\Big{)}^{2},&\text{if }\|\mathbf{r}_{ij}\|<a_{i}+a_{j}\\ 0,&\text{otherwise}.\end{cases} \tag{1}\]
with \(r_{ij}\) being the distance between two bubbles of radii \(a_{i}\) and \(a_{j}\), and \(V(\mathbf{r}_{ij})\) being the corresponding potential.
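For reference, the overlap potential of Eq. 1 and the corresponding pairwise force are straightforward to implement; a minimal sketch (function and variable names are illustrative, not taken from the original code):

```python
import numpy as np

def pair_energy(r_ij: float, a_i: float, a_j: float, eps: float = 1.0) -> float:
    """Repulsive overlap potential of Eq. 1 for two bubbles of radii a_i, a_j
    whose centres are a distance r_ij apart; zero when they do not overlap."""
    overlap = 1.0 - r_ij / (a_i + a_j)
    return 0.5 * eps * overlap**2 if r_ij < a_i + a_j else 0.0

def pair_force(r_vec: np.ndarray, a_i: float, a_j: float, eps: float = 1.0) -> np.ndarray:
    """Force on bubble i from bubble j, -dV/dr_i, with r_vec = r_i - r_j."""
    r = np.linalg.norm(r_vec)
    if r >= a_i + a_j or r == 0.0:
        return np.zeros_like(r_vec)
    # dV/dr = -(eps/(a_i+a_j)) * (1 - r/(a_i+a_j)); the force is repulsive along r_vec.
    magnitude = (eps / (a_i + a_j)) * (1.0 - r / (a_i + a_j))
    return magnitude * r_vec / r
```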
The positions of the bubbles interacting via the pairwise potential in Eq. 1 are evolved using an (over-damped) equation of motion [11]. Notably, we consider a simplified version of the viscous force, \(\mathbf{F}_{i}=\xi\mathbf{v}_{i}\)[7; 12], on each bubble to reduce computational overload while preserving relevant model physics.
\[\xi\frac{d\mathbf{r}_{i}}{dt}=-\sum_{j}^{nn}\frac{\partial V(\mathbf{r}_{ij})}{\partial\mathbf{r}_{i}} \tag{2}\]
with the right-hand side representing a summation over neighboring bubbles that contribute to the force on bubble \(i\). Meanwhile, the left side is the damping force with \(\xi\) being the effective viscous damping factor.
To simulate the mass exchange between bubbles due to Ostwald ripening [17], the bubble radii are allowed to evolve while keeping total bubble volume constant (preserving notional mass). Ripening causes larger bubbles to grow and smaller ones to shrink over time via a pairwise mass flux. We model this process with a flow rate that depends on the degree of overlap between neighboring bubbles (over the overlap cross-section), along with a mean-field flux that flows through the connected phase medium.
\[\begin{split} Q_{i}&=\underbrace{-\alpha_{1}\rho \sum_{j}^{nn}\left(\frac{1}{a_{i}}-\frac{1}{a_{j}}\right)A_{\text{overlap}}} _{\text{neighbor-neighbor}}\\ &\qquad\qquad\underbrace{-\alpha_{2}\rho\left(\frac{1}{a_{i}}- \frac{1}{\langle a\rangle}\right)a_{i}}_{\text{mean-field}}\end{split} \tag{3}\]
As indicated, the first term represents the pair-wise mass flux between neighboring bubbles over the cross-section of overlap, and the second represents the mean-field contribution. The values of \(\alpha_{1}(=0.05)\) and \(\alpha_{2}(=0.002)\) were chosen as in our previous study [13]. When a bubble's volume turns negative over the course of the simulation, we remove it from the simulation box, while ensuring that the mass of the deleted bubble and its neighbors is conserved.
To evolve the system, we apply the equation of motion (Eq. 2) and the ripening rule (Eq. 3) independently to all bubbles at every time step. We use a simple explicit Euler scheme with small \(dt\) values to numerically integrate the equations of motion (more details in Appendix A.2); other integrators, such as a second-order Runge-Kutta discretization, led to similar results. While we note that the equations of motion can become physically unstable at very high energies (when there is significant overlap between bubbles), we verify that such large overlaps are not present at the energy levels studied here. Further, while using small step sizes within the range of numerical stability (more details in Appendix A.2), we ensure the simulation has converged by cross-validating against smaller step sizes (\(dt\)). It may be noted that smaller step sizes (\(dt\)) are required for lower values of \(\xi\), making those simulations computationally more expensive.
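To make the update rule concrete, the following minimal Python sketch performs one explicit Euler step of Eqs. 2 and 3 for a small system. It is not the authors' code: the overlap cross-section `a_overlap` is a crude proxy for the true spherical-cap intersection area, the function names are ours, and a production code would use neighbor lists rather than the \(O(N^{2})\) loops shown here.

```python
import numpy as np

def euler_step(pos, radii, xi, dt, eps=1.0, rho=1.0, alpha1=0.05, alpha2=0.002):
    """One explicit Euler update of bubble positions (Eq. 2) and radii (Eq. 3).

    pos   : (N, 3) array of bubble centres
    radii : (N,) array of bubble radii
    """
    N = len(radii)
    forces = np.zeros_like(pos)
    dV = np.zeros(N)                      # volume change rate per bubble
    mean_inv_a = 1.0 / radii.mean()       # 1/<a> for the mean-field term
    for i in range(N):
        for j in range(i + 1, N):
            rij = pos[i] - pos[j]
            d = np.linalg.norm(rij)
            sij = radii[i] + radii[j]
            if d < sij:                   # overlapping neighbours only
                # repulsive force from V = (eps/2)(1 - d/sij)^2
                fmag = (eps / sij) * (1.0 - d / sij)
                forces[i] += fmag * rij / d
                forces[j] -= fmag * rij / d
                # pairwise ripening flux over the overlap cross-section
                a_overlap = np.pi * (sij - d) ** 2   # assumed proxy for the overlap area
                q = -alpha1 * rho * (1.0 / radii[i] - 1.0 / radii[j]) * a_overlap
                dV[i] += q
                dV[j] -= q
        # mean-field ripening term
        dV[i] += -alpha2 * rho * (1.0 / radii[i] - mean_inv_a) * radii[i]
    pos_new = pos + dt * forces / xi                       # overdamped motion
    vol_new = (4.0 / 3.0) * np.pi * radii ** 3 + dt * dV   # evolve volumes
    # bubbles whose volume turns negative would be removed in the full scheme;
    # here they are simply clipped to zero size for brevity
    radii_new = np.cbrt(np.clip(vol_new, 0.0, None) * 3.0 / (4.0 * np.pi))
    return pos_new, radii_new
```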
Bubble model simulations [13], when initialized randomly, eventually reach a dynamic steady state with a characteristic bubble size distribution and systematic trends in properties such as the total energy or the mean bubble size, as in experiments [22]. We initialize a system of \(N\sim 1000\) bubbles at a volume fraction of \(\phi=0.75\) (just above its jamming volume fraction [13; 23]) with a Weibull bubble radius distribution, \(\mathbf{P}(a)\sim(k/\lambda)(a/\lambda)^{k-1}e^{-(a/\lambda)^{k}}\), where \(k=1.75\), \(\lambda=0.73\), and let the system evolve as a function of time. This distribution is representative of the steady-state bubble size distribution that a Gaussian-initialized system reaches when evolved in the quasi-static limit (\(\xi\to 0\)) [13]. Using this as the starting point for all our simulations, we simulate over a range of damping factors (\(\xi\)) and calculate various physical quantities of interest. We end each simulation when bubbles grow large enough that a pair of bubbles develops multiple (\(>1\)) overlaps; this happens much earlier at larger \(\xi\), producing shorter simulation trajectories overall.
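A minimal sketch of how such an initial condition could be generated is shown below. The cubic periodic box, the uniform random placement of centres, and the convention of computing \(\phi\) from the bare bubble volumes (ignoring overlaps) are assumptions on our part, not details stated in the text.

```python
import numpy as np

def init_weibull_foam(N=1000, k=1.75, lam=0.73, phi=0.75, seed=0):
    """Draw N bubble radii from a Weibull distribution and size a cubic box
    so that the nominal bubble volume fraction equals phi."""
    rng = np.random.default_rng(seed)
    radii = lam * rng.weibull(k, size=N)            # Weibull(k) samples scaled by lambda
    v_bubbles = (4.0 / 3.0) * np.pi * np.sum(radii ** 3)
    box = (v_bubbles / phi) ** (1.0 / 3.0)          # cubic box edge length
    pos = rng.uniform(0.0, box, size=(N, 3))        # random initial centres
    return pos, radii, box
```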
Fluctuations in the system's total potential energy change significantly with the simulated viscosity. Two distinct limits are observed, as shown in Fig. 1. Low-viscosity simulations (\(\xi\leq 0.001\)) produce large fluctuations in \(\Delta U(\Delta t=1)/U(t)\) (see Fig. 1a), indicative of avalanchey, intermittent dynamics. These are suggestive of the system following the 'bumpy' lower levels of the energy landscape. Conversely, at higher \(\xi\) values the system no longer moves from minimum to minimum of the underlying energy landscape but evolves in a dynamic force balance between the larger interaction forces and viscous stresses. This allows the system to fly over the barriers and rugged features of the energy landscape, with a higher time-averaged potential energy (see Fig. 6a). This change in the fluctuations is shown more clearly in the distribution of energy drops, Fig. 1b, which becomes more heavy-tailed at lower \(\xi\). Further, a similar trend can be seen in Fig. 1c, where the average coordination number (over a system configuration) is higher at higher viscosities, indicating that viscous stresses are shifting the foam structure away from the minimum energy states, and farther from jamming, defined as coordination with \(\langle z\rangle\simeq z_{c}\) [21]. Thus foam configurations formed at low damping explore lower and more tortuous portions of the energy landscape, while those with higher damping cruise through higher and apparently smoother portions of the landscape.
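The energy-fluctuation analysis behind Fig. 1a-b can be read as a simple post-processing step on the saved energy trace; the sketch below shows one way to do it. The bin count and the restriction to energy drops are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

def energy_drop_statistics(U, dt_index=1):
    """Relative energy differences Delta U / U over a fixed lag, and the
    distribution of energy-drop magnitudes (cf. Fig. 1a-b).

    U : (T,) array of total potential energy sampled at unit time intervals.
    """
    dU = U[dt_index:] - U[:-dt_index]
    rel = dU / U[:-dt_index]                 # trace of Delta U / U(t)
    drops = -dU[dU < 0.0]                    # magnitudes of energy drops only
    # log-spaced histogram to expose a heavy tail
    bins = np.logspace(np.log10(drops.min()), np.log10(drops.max()), 30)
    hist, edges = np.histogram(drops, bins=bins, density=True)
    return rel, hist, edges
```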
To characterize the system's high-dimensional motion over the energy landscape, we look at the path traversed by the system through configuration space for the range of viscosities studied. The different time points on a simulation trajectory in configuration space are analyzed for end-to-end distances (\(\Delta R^{2}\)) and path contour lengths (\(\Delta s\)). This serves as a measure of the tortuosity of the \(3N\)-dimensional configurational trajectory taken by the system over time. As expected from our conclusions above, we observe that lower viscosities yield fractal, self-similar scaling at large lengthscales, with a fractal dimension \(D_{f}\sim 2/1.38\simeq 1.45\), where 1.38 is the log-log slope of \(\Delta R^{2}\) versus \(\Delta s\) at large distances (see Fig. 2a) - capturing the intrinsic fractal physics of the landscape [13]. Simulations with higher damping show almost no large-lengthscale fractal character, indicative of their ability to avoid lower-energy portions of the energy landscape. Alternatively, the slight bends in Fig. 2 may be interpreted as a shift in the lengthscale (as a function of \(\xi\)) over which a fractal slope would be observed. This, however, is further evidence for the self-similar fractal nature of the landscape and indicates that one would have to examine considerable lag times (or configuration
Figure 2: Analysis of bubble motion in \(3N\)-dimensional configuration space and real space shows a mixture of fractal and ballistic motion. (a) The different simulation points in high-dimensional (\(3N\)) space are analyzed for end-to-end distances \(\Delta R^{2}\) and contour lengths \(\Delta s\) to study fractal scaling over different length scales. Simulations with larger \(\xi\) values give almost ballistic scaling in hyperspace. However, lower \(\xi\) leads to a more tortuous trajectory characteristic of a fractal path, leading to super-diffusive scaling. The grey dashed line is a reference with ballistic scaling \(\Delta R^{2}\sim\Delta s\) throughout. All data points above represent values pooled over 4 simulations and log-bin-averaged over contour distances. (b) Time- and ensemble-averaged (over 4 runs) mean-squared displacement for an ensemble of bubbles that remain finite sized throughout our simulations, plotted for the different \(\xi\) values, shows ballistic motion rolling over to a super-diffusive form for lower \(\xi\) simulations. In comparison, more viscous simulations show more ballistic behavior over a larger range of \(\tau\).
Figure 1: (a) Traces of relative energy differences \(\Delta U/U(t)\) (for \(\Delta t=1\) or simulation points spaced by 1 time unit) are sensitive to intermittent dynamics. For lower damping, \(\xi\lesssim 0.01\), the relative change in energy shows abrupt peaks characteristic of intermittent motion. (b) Lower \(\xi\) simulations show a heavy-tailed probability distribution of energy fluctuations, typical of an avalanche system. As the system becomes more viscous, the energy fluctuations become more Gaussian. (c) The average system coordination number \(\langle z\rangle\) remains low for lower \(\xi\) simulations, characteristic of lower energy configurations close to potential energy minima on the landscape. \(z_{C}\) is the critical coordination for jamming, with \(z_{C}=6\) in \(3-D\)[21].
distances traveled by the system) to observe soft-glassy mechanics for systems with larger damping. Here, it must be noted that when particles shrink to zero size as a result of ripening, we fix their positions in space, thus conserving the number of dimensions (\(3N\)) used to calculate \(\Delta R^{2}\). In Fig. 2b, we compute the ensemble and time-averaged mean-squared displacement as a function of lag time (\(\tau\)). These curves show a functional form that is similar to that of the \(\Delta R^{2}\) above because the mean-squared displacement is a projection of those curves to 3-D space; the slight difference in the exponent is due to the calculation being done on a slightly different ensemble of bubbles that remain finite sized throughout the simulation.
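A sketch of how these two measures can be computed from saved trajectories follows. The binning, pooling over runs, and the selection of the finite-sized bubble ensemble are omitted here; the function names are ours.

```python
import numpy as np

def path_tortuosity(X):
    """Configuration-space path analysis (cf. Fig. 2a).

    X : (T, 3N) array, the flattened bubble positions at T saved times.
    Returns contour lengths Delta s and squared end-to-end distances
    Delta R^2 for all pairs of times (t0, t0 + lag).
    """
    steps = np.linalg.norm(np.diff(X, axis=0), axis=1)      # step lengths
    s = np.concatenate([[0.0], np.cumsum(steps)])           # contour length vs time
    ds, dR2 = [], []
    T = len(X)
    for lag in range(1, T):
        disp = X[lag:] - X[:-lag]
        dR2.append(np.sum(disp ** 2, axis=1))
        ds.append(s[lag:] - s[:-lag])
    return np.concatenate(ds), np.concatenate(dR2)

def time_averaged_msd(X3, lags):
    """Time-averaged MSD of individual bubbles (cf. Fig. 2b).
    X3 : (T, N, 3) positions of the bubbles that stay finite sized."""
    return np.array([np.mean(np.sum((X3[lag:] - X3[:-lag]) ** 2, axis=-1))
                     for lag in lags])
```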
Here, one may also identify a dimensionless group of interest called the _Deborah number_ \(De\), which can be expressed as the ratio of the time scales associated with relaxation and with the mode of driving, the two relevant dynamic processes for this system. Here, these are the damped relaxation time from Eq. 2 (\(\tau_{R}=\xi\langle a\rangle^{2}/\epsilon\)) and the timescale associated with changing bubble radii (\(a\)) imparted by the ripening process, Eq. 3 (\(\tau_{C}=\left\langle a\right\rangle^{2}/\alpha_{1}\) when \(\alpha_{1}>\alpha_{2}\)). This gives us a _ripening Deborah number_, \(De_{\alpha}=\xi\alpha_{1}/\epsilon\), which is the ratio of the relaxation (\(\tau_{R}\)) and coarsening (\(\tau_{C}\)) times, typically ranging between \(10^{-6}\) and \(10^{-3}\) for our simulations. This dimensionless group presumably depends on the system's volume fraction \(\phi\) and its proximity to the jamming volume fraction \(\phi_{J}\) [11; 18].
This dimensionless group formalism can be a useful way to explain many previous experimental and simulation results [12; 16; 18]. We begin by noting that the avalanchey dynamics and intermittent rearrangements observed in our simulations resemble previous studies of similar systems [11] driven by shear strain instead of coarsening. Various comprehensive studies [16; 18] using \(2D\) shear strain point out a similar transition to avalanchey rearrangement events below a certain shear strain rate. Thus, our results can be interpreted as a transition in landscape physics as a function of \(De_{\alpha}\) while shear simulation results [16] can be explained using a corresponding _shear Deborah number_, \(De_{\gamma}\).
For a foam experiment, we note that the energy scale and damping factor vary as the system evolves: \(\epsilon\simeq\sigma\langle a\rangle^{2}\)[11] and \(\xi\propto\langle a\rangle\), while \(\alpha_{1}\) is effectively independent of \(\langle a\rangle\). Thus experimentally, \(De_{\alpha}\propto\left\langle a\right\rangle^{3}\) changes for dynamically aging foam where \(\langle a\rangle\) increases as a function of time [13; 22](see Fig. 5). This keeps pushing the aging system away from the landscape-dominated regime, potentially explaining the issue associated with the shifting cut-off [1], and tending to produce behavior akin to high \(\xi\) simulations.
### Rheology of SGMs
The rheology of soft-glassy systems is typically found to be weakly frequency dependent (solid-like), often with a power-law form, while different experiments on foams [8; 10] yield apparently conflicting results. Computationally, capturing low-frequency responses to applied strains can be very expensive, making the determination of rheology difficult [13]. Here, we provide a numerical procedure that derives its essentials from a microrheological approach [19; 24] that computes the power spectra of the active, fluctuating shear strain and stress from the particle motions, and computes the dynamic shear modulus from their ratio.
We begin by noting that one can relate the stress (\(\sigma(t)\)) and strain (\(\gamma(t)\)) to the creep compliance (\(\mathbf{J}(t)\)) using the theory of linear response [25; 26] and the Boltzmann superposition principle, relating them through a convolution:
\[\begin{split}\mathbf{J}(t)\ast\dot{\sigma}(t)&=\gamma(t)\\ \int_{-\infty}^{t}\mathbf{J}(t-t^{\prime})\dot{\sigma}(t^{\prime})dt^{\prime}&=\gamma(t)\end{split} \tag{4}\]
While this basic constitutive equation represents the relation between the macroscopic stress and strain for a linear material, we extend this formalism to its microrheological version, wherein each bubble/particle can be treated as a tracer moving in a homogeneous viscoelastic continuum (formed by all the other bubbles) driven by active fluctuating stresses. Thus, one can use the bubbles' position vectors, describing their motion in the effective medium, to define the local, time-dependent strain [24]. Similarly, the local fluctuating active stress acting on each bubble in the system can be computed as follows [27; 28]:
\[\sigma(\mathbf{r}_{i})=-\left(\sum_{j}^{nn}\mathbf{r}_{ij}\otimes\mathbf{F}_ {ij}\right)\delta(\mathbf{r}-\mathbf{r}_{i}) \tag{5}\]
where \(r_{ij}\) and \(F_{ij}\) represent the inter-particle displacements and forces between particles \(i\) and \(j\).
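The per-bubble virial sum in Eq. 5, and the stress MSD built from its off-diagonal components described in the next paragraph, can be computed as in the sketch below. The pair force is re-derived from Eq. 1, and the function names and loop structure are ours rather than the authors' implementation.

```python
import numpy as np

def bubble_stresses(pos, radii, eps=1.0):
    """Per-bubble virial stress tensor (Eq. 5), built from pairwise
    displacements and repulsive forces of overlapping neighbours."""
    N = len(radii)
    sigma = np.zeros((N, 3, 3))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            rij = pos[i] - pos[j]
            d = np.linalg.norm(rij)
            sij = radii[i] + radii[j]
            if d < sij:
                fij = (eps / sij) * (1.0 - d / sij) * rij / d   # force on i from j
                sigma[i] -= np.outer(rij, fij)
    return sigma

def stress_msd(sigma_t, lags):
    """Ensemble- and time-averaged MSD of the off-diagonal stress components.
    sigma_t : (T, N, 3, 3) per-bubble stress at each saved time."""
    off = sigma_t[..., [0, 0, 1], [1, 2, 2]]     # sigma_xy, sigma_xz, sigma_yz
    return np.array([np.mean((off[lag:] - off[:-lag]) ** 2) for lag in lags])
```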
Applying the above equations directly to the data would be impractical because the \(\sigma(t)\) and \(\gamma(t)\) signals for each bubble are random functions of time. Instead, we transform the equation described in Appendix B.1 into a relation between the ensemble-averaged mean-squared differences (MSDs) of the stress and strain. The stress MSD is calculated by considering the squared difference between the three off-diagonal elements of the bubble-wise symmetric tensor (see Eq. 5). Further, we consider the ensemble average over all bubbles in our system and over similar lag times to get a statistically consistent MSD. Meanwhile, the strain MSD can be estimated using the positional MSD, or mean-squared displacement, introduced earlier (see Fig. 2b). These quantities can further be related using the modified Fourier-transformed (FT) version of the above equation [13; 19; 20]:
\[\left|G^{*}(\omega)\right|^{2}\simeq\frac{\widetilde{\Delta\sigma^{2}}(\omega)}{3 \pi\left\langle a\right\rangle\widetilde{\Delta\mathbf{r}}^{2}(\omega)} \tag{6}\]
To avoid assumptions and approximations related to computing Fourier transforms of these MSDs over a finite range of lag times [13; 20], we consider the exact convolutional relation described in Appendix B.1. This equation can be further modified using the Wiener-Khinchin theorem and the relationship between autocorrelation and MSD for the stress and strain, giving us the following equation:
\[2J^{2}(0)\langle\sigma^{2}\rangle+\int_{0}^{\tau_{i}}f(\tau_{i} -t^{\prime})(\langle\sigma^{2}\rangle-\langle\Delta\sigma^{2}\rangle(t^{ \prime})/2)dt^{\prime}\\ =(\langle\gamma^{2}\rangle-\langle\Delta\gamma^{2}\rangle(\tau_{ i})/2)\\ \simeq 3\pi\langle a\rangle(\langle\mathbf{r}^{2}\rangle-\langle \Delta\mathbf{r}^{2}\rangle(\tau_{i})/2)/{\langle a\rangle}^{3} \tag{7}\]
where \(f(\tau_{i})\) is defined as follows,
\[f(\tau_{i})=\left(\int_{0}^{\tau_{i}}\dot{J}(\tau_{i}-t^{\prime})\dot{J}(t^{ \prime})dt^{\prime}+2J(0)\dot{J}(\tau_{i})\right)\]
where \(\Delta\sigma^{2}(\tau)\) and \(\Delta\gamma^{2}(\tau)\) represent the time-averaged, mean-squared differences of the bubbles' stresses and strains in our analyses. We approximate the strain using the position vector, \(\mathbf{r}\) [19; 24], as discussed above. Further, we ensemble average our MSDs over 4 simulation runs. Finally, to represent the creep compliance, we use a modified version of the model suggested by Lavergne and co-authors in Ref. [8]: \(J(t)=1/G_{\infty}+k_{D}/G_{\infty}[(1+t/\tau_{0})^{\beta}-1]\) (more details in Appendix B.2). Using this as a model for the viscoelastic rheology of the foam, we undertake a simultaneous fit of the model parameters, i.e., \(G_{\infty}\), \(k_{D}\) and \(\beta\), at various lag times \(\tau_{i}\) in the convolutional integral equation shown above (Eq. 7). The optimal parameters from the fit give us the creep compliance and, subsequently, the complex modulus \(G^{*}(\omega)\) via the relation \(G^{*}(\omega)J(\omega)=1/i\omega\). Further details of the derivation and mathematics of the numerical procedure are provided in Appendix B.1. It may further be noted that attempts to model the rheology using a Maxwell model produced inferior solutions to Eq. 7, with the power-law model cited above providing significantly better fits.
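The simultaneous fit of Eq. 7 at several lag times can be set up as a least-squares problem over the model parameters; one possible discretization is sketched below. The grid spacing `dt`, the trapezoidal quadrature, and the interpolation of the stress MSD onto the fine grid are our assumptions, and the prefactor on the strain side simply follows the expression quoted in the text.

```python
import numpy as np
from scipy.optimize import least_squares

def creep_model(t, G_inf, k_D, beta, tau0=1.0):
    """J(t) = 1/G_inf + (k_D/G_inf)[(1 + t/tau0)^beta - 1] (Appendix B.2)."""
    return 1.0 / G_inf + (k_D / G_inf) * ((1.0 + t / tau0) ** beta - 1.0)

def eq7_residuals(params, taus, msd_sigma, msd_r, var_sigma, var_r, a_mean, dt=0.05):
    """Mismatch of the convolutional relation (Eq. 7) at each lag tau_i."""
    G_inf, k_D, beta = params
    res = []
    for tau_i, dr2_i in zip(taus, msd_r):
        t = np.arange(0.0, tau_i + dt, dt)
        J = creep_model(t, G_inf, k_D, beta)
        Jdot = np.gradient(J, t)
        # f(t) = int_0^t Jdot(t - t'') Jdot(t'') dt'' + 2 J(0) Jdot(t), on the grid
        f = np.array([np.trapz(Jdot[:k + 1] * Jdot[k::-1], t[:k + 1])
                      for k in range(len(t))]) + 2.0 * J[0] * Jdot
        # stress autocorrelation <sigma^2> - <Delta sigma^2>(t')/2, interpolated
        acf_sigma = var_sigma - np.interp(t, taus, msd_sigma, left=0.0) / 2.0
        lhs = 2.0 * J[0] ** 2 * var_sigma + np.trapz(f[::-1] * acf_sigma, t)
        # strain side approximated from bubble positions, as in the text
        rhs = 3.0 * np.pi * a_mean * (var_r - dr2_i / 2.0) / a_mean ** 3
        res.append(lhs - rhs)
    return np.array(res)

# Illustrative usage, given measured MSD arrays on the lag grid `taus`:
# fit = least_squares(eq7_residuals, x0=[1.0, 0.5, 0.2],
#                     args=(taus, msd_sigma, msd_r, var_sigma, var_r, a_mean))
```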
The results from the computed creep compliance and dynamic shear moduli are summarized in Fig. 3. \(G^{*}(\omega)\) exhibits a power-law regime over the \(\omega\) range of interest, characteristic of behavior predicted in theory [1] and simulations [13] and observed in experiments [8; 9; 10]. Recent experiments and our simulation results here (see Fig. 3), evaluated with a robust numerical approach, provide clear evidence in support of the existence of power-law rheology in SGMs. Fig. 3a shows the fits for the \(J(t)\) model described above, yielding a family of curves with similar power-law exponents. Taking the semi-analytical FT to obtain \(G^{*}(\omega)\) gives us the viscoelastic moduli, with a power-law regime defined by \(G^{*}(\omega)\sim\omega^{\beta}\) and only a weak dependence of the exponent \(\beta\) on damping, showing that this is a robust feature of these foams regardless of damping.
### Memory and recovery in perturbed SGMs
The SGM system shows a significant downhill descent in energy as the largest bubbles coarsen and grow. As this downward trend continues, the system reaches a dynamical scaling steady state [13]. While it is unclear whether configurations in this regime form an ergodic ensemble over some characteristic time, the bubbles show stable trends in various structural quantities like the average
Figure 3: We compute the viscoelastic moduli for the dynamic viscous simulations from the fluctuating stresses and displacements of bubbles in the simulation, as described in the text. \(\xi\) values with data over a significant \(\tau\) range were considered for the calculation. The dotted grey lines indicate the \(\tau\) range of the MSD data used for the above calculation. (a) Fitting the model explained in the text to simulation data gives suitable fits with a family of curves with power-law behavior. The creep compliance scales as \(J(t)\sim t^{\beta}\) in the lag-time range shown above. (b) \(G^{*}(\omega)\), obtained from \(J(t)\), gives us power-law rheology in \(\omega\), i.e., \(G^{*}(\omega)\sim\omega^{\beta}\). This behavior is observed at all \(\xi\) values calculated above. (inset) The predicted \(\beta\) values, indicative of the log-slope of the curves in (b), hover consistently in the range \(\sim 0.15-0.2\), similar to previously observed values in simulation [13] and experiments [8; 9].
coordination number, mean bubble radius, normalized radial distribution, etc. Bubbles initially move around to reach the steady state, defined by the dynamical scaling 'attractor' on the energy landscape, and then continue to evolve in this steady-state ensemble. Any perturbation away from the attractor would thus lead the system back to a 'new' steady state as defined by the structural and dynamical properties of the attractor and the system landscape. Experiments have observed [14; 15] that a strain-perturbed foam relaxed back to its unperturbed steady state after an unexpectedly long waiting time, and have described this as a memory phenomenon or a measure of history dependence. The consensus [14; 15] on the origin of this memory is that coarsening-mediated excitations are needed to enable the system to overcome the local minima that the perturbed system relaxes into. Thus, the long waiting time has been considered a result of slow coarsening.
To study this phenomenon's structural and dynamical significance computationally, we run a set of simulations using our modified damped model over various \(\xi\) values. We consider the theoretical extreme of a perturbation by introducing positional scrambles in our system. To do so, we begin with a typical steady-state system and randomly scramble the various \(3N\) positions of the bubbles. This scramble randomly assigns a point in hyperspace for the system of soft spheres, providing a random structural perturbation. We then continue with the relaxation-coarsening procedure described previously in Section II.1. It must be noted here that for the quasi-static case when \(\xi=0\), we relax the system to its first energy minimum (i.e. mechanical equilibrium) using FIRE [29] instead of using Eq. 2.
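The scramble itself is a very simple operation; one plausible reading of it is sketched below, where the existing centre positions are randomly reassigned among the bubbles so that the system lands at an essentially random point of its \(3N\)-dimensional configuration space while the radii are kept unchanged. This reading and the function name are our assumptions.

```python
import numpy as np

def scramble_positions(pos, rng):
    """Random positional perturbation of a steady-state configuration:
    randomly permute which bubble occupies which of the existing centres."""
    perm = rng.permutation(len(pos))
    return pos[perm].copy()

# rng = np.random.default_rng(1)
# pos_scrambled = scramble_positions(pos, rng)   # then quench with FIRE (xi = 0)
#                                                # or evolve with Eq. 2 (finite xi)
```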
For the quasi-static case, we see that the system, upon one (or even multiple) scrambles, returns to the earlier dynamical scaling steady trend (see Fig. 4a) immediately. Indicators like \(\langle z\rangle\) and \(\langle a\rangle^{2}\) show no significant change from steady-state behavior, as can be seen in Fig. 4 which plots the scrambled (at \(t=400\)) and unscrambled average coordination number as a function of time. Here the scrambled system experiences no barriers to reaching this 'new' steady state with FIRE traversing the large configurational distance on a relatively smooth portion of the energy landscape (at higher energies) to find the nearest (primary) minima. It may be noted that the scramble
Figure 4: Scrambling a quasi-static system shows an immediate return to trend. (a) We scramble the configurational positions of a system in steady state (at \(t=400\)) in a quasi-static (\(\xi=0\)) simulation. Surprisingly, the system always finds a ‘new’ steady state right away, as indicated by the coordination number (\(z-z_{C}\)) measured here, and continues to evolve with similar dynamic properties. The dark and light symbols represent the scrambled and unscrambled simulation, respectively. (b) Running multiple (\(\sim 100\)) such scrambles at \(t=400\), gives us a Gaussian distribution of \(\langle z\rangle\) as shown above. This overlaps well with \(\langle z\rangle\) values obtained at \(t=400\) for 10 different realizations of the same simulation as indicated by the mean \(\pm\) standard deviations. This tells us that the scrambled simulation returns to the newly found steady state instantaneously. (inset) Moreover, the temporal autocorrelations for these \(z\) ensembles - scrambled and unscrambled - provide similar decorrelation times. These findings indicate similar dynamic properties for the scrambled and unscrambled simulation.
Figure 5: Scrambling a quasi-static system shows no change to ripening evolution. Here, we look at the structure through the radial distribution formed at steady state for a scrambled and unscrambled system. As previously in Fig. 4, the system instantaneously continues in steady state. As can be noticed, the slope changes for the scrambled simulation at \(t=400\) (indicated by arrows), indicative of a new foam initiation time [13]; however, the trend remains linear, consistent with dynamic scaling state behavior \(\langle a\rangle^{2}\sim t_{\rm age}\).
moves the system to a random \(3N\)-dimensional configuration on the energy landscape. The system thus evolves in the particular meta-basin corresponding to the scrambled positions going forward. So while a single simulation might not ergodically explore all portions of configuration space, these different hyperspaces on the energy landscape have similar structural properties. Small changes in moving ensemble averages indicate the slight variations between different regions or metabasins of the energy landscape. Effectively, all these primary minima that the minimizer finds belong to the 'steady-state ensemble' of the particular foam radii distribution realization at \(t=400\).
To test whether the scrambled simulation actually returns to the 'steady-state ensemble' for a similar foam at the same age, we compare the average coordination numbers at \(t=400\) for 10 different quasi-static simulations (different positional initializations at \(t=0\)) in Fig. 4b with a pool of coordination numbers obtained by scrambling the same test-simulation configuration (from Fig. 4a) at \(t=400\) in 100 different ways and quenching them using FIRE. As seen in Fig. 4b, the distribution of average coordination numbers for the quenched minima from the test simulation lies in the range of expected steady-state \(\left\langle z\right\rangle\) values for a similar foam simulation at the same age. Lastly, looking at the inherent temporal correlations in the coordination number further tells us that similar correlations get rebuilt after the system evolves in the randomly chosen 'metabasin' on the energy landscape (Fig. 4a, inset). Thus we conclude that while the system doesn't return to the exact configurational hyperspace on the energy landscape, the foam exhibits a 'memory' effect in various physical and dynamical properties.
Interestingly, the scramble also does not affect the coarsening-mediated bubble size distribution reached at dynamic scaling, as seen in the average system radii measured over time in Fig. 5. While there is a noticeable change in the rate of radial growth (the slope), the trajectory remains in steady state, as indicated by the linear trend. Additionally, the scrambled simulation continues to evolve with similar moments of the radii distribution (see Fig. 4a, inset). Overall, this shows that while any steady-state structure built in by the dynamics and coarsening before the scramble is destroyed by the perturbation, it is restored immediately by quenching to the nearest minimum (using FIRE).
Repeating the same computational experiment at finite \(\xi\) provides insight into the mechanism of the memory phenomenon. In agreement with previous experiments [10; 15], we see that the system requires a surprisingly long time to recover its former steady-state trend (see Fig. 6). However, unlike previous suggestions that this time scale is coarsening-mediated, we observe a \(\xi\)-dependent phenomenon. This viscous time scale dictates the time the system takes to relax the energetic stress built in by the overlaps caused by the positional scramble: larger \(\xi\) leads to a longer time for the system to relax these unstable overlaps. Fig. 6 shows the progression of energy and average coordination number towards equilibrium after a positional scramble. Interestingly, the \(z\) values shoot below the steady-state line post scramble before trending back to steady state, much like previous experiments measuring rearrangement rates [14]. Finally, one may note that while the waiting times seem to be damping dependent, they are much larger than \(\tau_{R}=\xi\langle a\rangle^{2}/\epsilon\sim\mathcal{O}(\xi)\). This discrepancy might be due to the extreme nature of the positional perturbation introduced in our simulation, which introduces many large and small perturbations for all \(N\) bubbles away from the nearest steady-state-ensemble configuration. Thus, the waiting time is a compound sum of all the different distances that the bubbles must traverse to reach the 'new' steady state.
Figure 6: Scrambled foams with damping show very slow relaxation towards their prior trends in energy and coordination. Finite \(\xi\) simulations are scrambled at \(t=400\) and evolved using the dynamical equation Eq. 2. (a) The potential energies post scrambling show a progressive approach to the steady state. (b) The mean coordination number \(\left\langle z\right\rangle\) of the system shows similar \(\xi\)-dependent trends. Interestingly, the initial dynamics directs the system to lower \(\left\langle z\right\rangle\) configurations before relaxing to the appropriate steady-state value.
## III Conclusions
We have shown that the 'bubble model' with simple damping and simulated ripening recreates many of the exciting phenomena reported for soft-glassy materials. Specifically, this model exhibits avalanchey, intermittent dynamics at low viscosities with non-Brownian super-diffusive motion. Considered in high-dimensional configuration space, such motion occurs along a fractal configurational path that is constrained to the lowest energy portions of the potential energy landscape. We find that the energy minima of the bubble model are clustered together in configuration space, scattered along the configuration path followed by the model (at low viscosity). In the practical absence of viscous stresses, the system hops from each (ripening destabilized) energy minimum to a nearby, adjacent energy minimum. This landscape-dominated motion subsequently produces the observed super-diffusive motion and stress and strain fluctuations corresponding to power-law rheology. As the simulated viscosity is increased, the system shows progressively smoother dynamics and motion with a more ballistic character. In this case, viscous stresses cause the configuration never to explore the true potential energy minima but instead evolve along a path that is adjacent to the cluster of energy minima. The system effectively stays at higher potential energy, and the finer details of the fractal configuration path seen at lower viscosity are washed out, leading to a straighter path and more ballistic motion. Between these two limits, one can find a gradation of properties, where the system displays increasing characteristic length and time scales above which the low viscosity behavior may still be observed.
This model also successfully generates power-law rheology, which previously has not been reported for a damped bubble model. However, unlike other properties, power-law rheology appears to be a consistent feature of these SGMs over a wide range of viscosity values. This suggests an extended fractal nature for the energy landscape, consistently producing power-law rheology even when the configurations sit at energies somewhat above the energy minima. Further, our microrheology-based approach provides a robust and reliable way to compute viscoelastic moduli from measurements of the force and strain fluctuations of the constituent particles, and is free of the systematic truncation errors associated with earlier microrheological methods.
Lastly, we investigate the 'memory' of the ripening bubble model for mechanical perturbations by randomly scrambling the bubble positions. We find that scrambled configurations (effectively a random point in configuration space) must relax over a long configurational distance before reaching their first potential energy minimum. Moreover, those first energy minima are statistically indistinguishable from the ensemble of configurations explored by other ripening simulations of the same age. This is most clearly shown by the quasi-static simulation, which immediately recovers its earlier (ensemble-averaged) baseline properties when it reaches its first energy minimum. For finite viscosity, the system can take a long time to traverse the required configurational distance to return to the vicinity of the energy-minima cluster and recover its earlier mechanical properties. In viscous systems, this recovery time (and effective 'memory' time) is proportional to the viscous relaxation time of the model. This viscosity-mediated recovery process is contrary to previous experimental inferences and is a consequence of the barrier-free potential energy landscape at higher energies that the perturbed system must traverse.
Future work would include developing a model that captures other long-time characteristics of these SGMs. We hope that such a description will provide a more complete and practical model for SGMs. Modeling the physical properties of the many materials categorized under SGMs could be of potential use in fields ranging from material science (foams and complex fluids) to biology (living cells [4; 5]).
## Author contributions
A.T., R.A.R., and J.C.C. designed research; A.T. performed research and analyzed data; A.T., R.A.R., and J.C.C. wrote the paper. R.A.R. and J.C.C. contributed equally to this work.
## Acknowledgments
We are grateful for valuable conversations with Douglas Durian, Francois Lavergne, Andrea Liu, Talid Sinno, and Veronique Trappe. This work was supported by NSF-DMR 1609525 and 1720530 and computational resources provided by XSEDE through TG-DMR150034.
## Appendix A Damped SGM Model
### Underdamped limit of an overdamped equation
The equation used for the simulations, as described in the main text, is:
\[\begin{split}\xi\frac{d\mathbf{r}_{i}}{dt}&= \mathbf{F}_{i}\\ &=-\sum_{j}^{nn}\frac{\partial V(r_{ij})}{\partial r_{i}}\end{split} \tag{10}\]
It can be seen that this resembles an overdamped equation of motion. However, the overdamped character here is not due to a large viscosity but rather to the non-inertial nature of the constituent particles considered in the system. One may recall that the dynamics of a mass attached to a damped spring are governed by the damping factor \(\zeta=b/(2\sqrt{km})\), which for our system evaluates to \(\zeta\simeq\xi/\sqrt{\epsilon\rho\langle a\rangle}\). Since overdamped dynamics is achieved when \(\zeta\geq 1\), we see that the non-inertial particles (\(\rho\to 0\)) in our case give rise to the so-called overdamped equation of motion, while we continue to operate with a finite value of \(\xi\).
### Integration and stability
The simulation can be summarized as a numerical integration of the two governing equations, Eq. 2 and Eq. 3. Due to the stiff nature of Eq. 2 (especially at small \(\xi\) values), one needs to choose appropriate \(dt\) values to ensure that error perturbations don't diverge as the simulation proceeds and that the solution is converged. We use a simple explicit Euler scheme to perform our integration here. We note that other methods, like implicit Euler or a second-order Runge-Kutta scheme, provide larger stability regions for \(dt\) (and, in the latter case, higher accuracy) but carry a larger computational cost per step. Below, we perform a simple numerical stability test.
We start by considering Eq. 2 for all \(N\) particles, or \(3N\) degrees of freedom, i.e., \(i\in\{1,2,...,3N\}\), which can be expressed in terms of the Hessian using a Taylor expansion as follows:
\[\begin{split}\xi\frac{d\mathbf{r}}{dt}&=\mathbf{F} \\ &=\mathbf{F}_{0}-\mathbf{H}\mathbf{r}\end{split} \tag{2}\]
where \(\mathbf{r}\) and \(\mathbf{F}\) are \(3N\)-dimensional vectors and \(\mathbf{H}\) is a \(3N\times 3N\) matrix, the Hessian of the potential field. We may note here that for most \(\xi\) simulations the system configurations are close to mechanical equilibrium, so for our stability analysis we may approximate \(\mathbf{F}_{0}\simeq 0\). Further, one may note that any error \(\epsilon_{i}\) would propagate via an equation similar to Eq. 2:
\[\xi\frac{d\epsilon}{dt}=-\mathbf{H}\epsilon \tag{3}\]
Now, using the Explicit Euler formalism, for time steps \(n+1\) and \(n\), we get:
\[\begin{split}\xi\frac{\epsilon_{n+1}-\epsilon_{n}}{dt}&=-\mathbf{H}\epsilon_{n}\\ &=-\lambda\epsilon_{n}\\ \frac{||\epsilon_{n+1}||}{||\epsilon_{n}||}&=||\mathbf{I}-\lambda\,dt/\xi||\end{split} \tag{4}\]
where \(\lambda\) is the diagonal matrix of eigenvalues of \(\mathbf{H}\). Enforcing the stability criterion on the equation above, we have
\[\begin{split}\frac{||\epsilon_{n+1}||}{||\epsilon_{n}||}&\leq 1\\ ||\mathbf{I}-\lambda\,dt/\xi||&\leq 1\\ 0\leq||\lambda_{max}||dt/\xi&\leq 2\end{split} \tag{5}\]
where \(\lambda_{max}\) is the largest eigenvalue for \(\mathbf{H}\). Since all eigenvalues would be real for this physical system, we now have,
\[\begin{split} 0\leq||\lambda_{max}||dt/\xi&\leq 2\\ dt&\leq 2\xi/\lambda_{max}\end{split} \tag{6}\]
For most configurations explored in our simulations, \(\lambda_{max}\) varies around \(\sim 1-10\). This gives \(dt\leqslant\xi/5\) as the condition for stability. Here, we choose \(dt=\xi/10\) as the step size for all simulations reported in this study. Since Eq. 3 also contributes to the overall dynamics, this choice of time step was additionally validated for convergence. We have verified that our explicit Euler scheme was converged by checking smaller values of \(dt\); other schemes like RK-2 also produced similar results. Further, for \(\xi>0.01\), we used \(dt=0.001\), as the system moves further away from mechanically stable states and the above approximation underlying Eq. 3 (configurations close to mechanical equilibrium) no longer strictly holds.
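A sketch of how the bound \(dt\leq 2\xi/\lambda_{max}\) could be checked numerically is given below: \(\lambda_{max}\) is estimated by power iteration, with the Hessian-vector product approximated by a finite difference of the pair forces re-derived from Eq. 1. This is an illustrative check rather than the authors' procedure, and the \(O(N^{2})\) force loop is kept only for clarity.

```python
import numpy as np

def max_stable_dt(pos, radii, xi, eps=1.0, n_iter=50, h=1e-6, seed=0):
    """Estimate the explicit-Euler stability limit dt <= 2*xi/lambda_max."""
    rng = np.random.default_rng(seed)

    def forces(p):
        f = np.zeros_like(p)
        for i in range(len(radii)):
            for j in range(i + 1, len(radii)):
                rij = p[i] - p[j]
                d = np.linalg.norm(rij)
                sij = radii[i] + radii[j]
                if d < sij:
                    fij = (eps / sij) * (1.0 - d / sij) * rij / d
                    f[i] += fij
                    f[j] -= fij
        return f

    v = rng.standard_normal(pos.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        # Hessian-vector product: H v ~ -(F(r + h v) - F(r)) / h
        Hv = -(forces(pos + h * v) - forces(pos)) / h
        lam = np.linalg.norm(Hv)           # power-iteration eigenvalue estimate
        v = Hv / (lam + 1e-30)
    return 2.0 * xi / lam
```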
### Dimensionless group analysis: _Deborah number_
Apart from evaluating the _Deborah number_ \(De\) as the ratio of the damped relaxation time from Eq. 2 (\(\tau_{R}=\xi{\langle a\rangle}^{2}/\epsilon\)) and the probing time associated with changing bubble radii (\(a\)) imparted by the coarsening process, Eq. 3 (\(\tau_{C}={\langle a\rangle}^{2}/\alpha_{1}\)), we can do a simple Buckingham Pi analysis to determine the relevant \(\Pi\) group. Below, we present the analysis to derive the Deborah number as a \(\Pi\) group.
One can re-model the system through an experimental lens and pose the problem as measuring the average radius \({\langle a\rangle}\) as a function of time. Intuitively, this might be influenced by system properties like \(\epsilon\), \(\rho\), \(\alpha_{1}\), and \(\xi\). These 4 quantities, along with \({\langle a\rangle}\), involve the dimensions \(M\), \(L\) and \(T\). Thus 2 \(\Pi\) groups can be made from these variables for every choice of 3 repeating variables. Here, we choose \(\rho\), \(\alpha_{1}\), and \(\xi\) as the repeating variables.
\[\begin{split}\Pi_{1}&=f(\epsilon,\rho,\alpha_{1}, \xi)\\ &=\epsilon\rho^{x}\alpha_{1}^{y}\xi^{z}\end{split} \tag{7}\]
Solving for \(x\), \(y\), and \(z\) so that \(\Pi_{1}\) is dimensionless, we get \(\Pi_{1}=\epsilon/(\alpha_{1}\xi)\) or \(De=\xi\alpha_{1}/\epsilon\).
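As a quick check of this bookkeeping, the exponent equations can be solved numerically. The dimension assignments below follow from the timescale definitions quoted in the text (\(\tau_{C}=\langle a\rangle^{2}/\alpha_{1}\) and \(\mathbf{F}=\xi\mathbf{v}\)) and are otherwise our assumptions; the density \(\rho\) simply drops out of the solution.

```python
import numpy as np

# Dimension exponents (M, L, T) assumed from the definitions in the text:
#   eps    ~ energy            -> (1,  2, -2)
#   rho    ~ density           -> (1, -3,  0)
#   alpha1 ~ <a>^2 / tau_C     -> (0,  2, -1)
#   xi     ~ force / velocity  -> (1,  0, -1)
eps, rho, alpha1, xi = (np.array(v) for v in
                        [(1, 2, -2), (1, -3, 0), (0, 2, -1), (1, 0, -1)])

# Pi_1 = eps * rho^x * alpha1^y * xi^z must be dimensionless:
# solve  x*rho + y*alpha1 + z*xi = -eps  for (x, y, z)
A = np.column_stack([rho, alpha1, xi])
x, y, z = np.linalg.solve(A, -eps)
print(x, y, z)   # expect (0, -1, -1), i.e. Pi_1 = eps / (alpha1 * xi)
```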
## Appendix B Rheology
### Analytical Derivation
Here, we provide a derivation of the integral equation Eq. 7, which we used to compute the viscoelastic moduli for our simulations. We start by noting that the theory of viscoelasticity for linear materials [25, 26] shows that the creep compliance \(J\) can be related to the stress \(\sigma\) and strain \(\gamma\) as follows:
\[\int_{-\infty}^{t}J(t-t^{\prime})\dot{\sigma}(t^{\prime})dt^{\prime}=\gamma(t) \tag{10}\]
Here we may note that \(J\), \(\sigma\) and \(\gamma\) are \(=0\)\(\forall t\in(-\infty,0)\) and \(\geq 0\)\(\forall t\in[0,\infty)\). Thus, we can extend the integral limits by doing the following:
\[\begin{split}\int_{-\infty}^{\infty}J(t-t^{\prime})\dot{\sigma}(t^{\prime})dt^{\prime}&=\gamma(t)\\ \int_{-\infty}^{\infty}\dot{J}(t-t^{\prime})\sigma(t^{\prime})dt^{\prime}&=\gamma(t)\\ \text{using the product rule}\\ \int_{0}^{t}\dot{J}(t-t^{\prime})\sigma(t^{\prime})dt^{\prime}+J(0)\sigma(t)&=\gamma(t)\end{split} \tag{11}\]
Taking the Fourier Transform of the non-decomposed equation above and applying the convolution theorem gives us,
\[\widetilde{\dot{J}}\,\widetilde{\sigma}=\widetilde{\gamma} \tag{12}\]
While one could potentially work with Eq. 11 or Eq. 12, the numerical inaccuracies associated with an FT [13, 20] and the statistical noise in a trajectory function like \(\sigma(t)\) or \(\gamma(t)\), would make the procedure more difficult. Thus, we use the Wiener-Khinchin theorem and further transform the auto-correlation into its mean squared version as follows:
\[\begin{split}\widetilde{\dot{J}}\,\widetilde{\dot{J}}^{*}&=\frac{||\widetilde{\gamma}||^{2}}{||\widetilde{\sigma}||^{2}}\\ &=\frac{\widetilde{R_{\gamma\gamma}}}{\widetilde{R_{\sigma\sigma}}}\\ &=\frac{\langle\gamma^{2}\rangle-\widetilde{\langle\Delta\gamma^{2}\rangle}/2}{\langle\sigma^{2}\rangle-\widetilde{\langle\Delta\sigma^{2}\rangle}/2}\end{split} \tag{13}\]
Rearranging this equation and taking the inverse FT yields an integral equation. We decompose the limits to stay between \(0\) and \(t\), which adds a few boundary terms for the step-function jump in \(J\) and \(\sigma\) at \(t=0\). Further, we change our notation from \(t\) to \(\tau\), to be consistent with the MSDs, which are calculated as averages over lag times.
\[\begin{split} 2J^{2}(0)\langle\sigma^{2}\rangle+\int_{0}^{\tau}f(\tau-t^{\prime})(\langle\sigma^{2}\rangle-\langle\Delta\sigma^{2}\rangle(t^{\prime})/2)dt^{\prime}\\ =(\langle\gamma^{2}\rangle-\langle\Delta\gamma^{2}\rangle(\tau)/2)\\ \text{where }f(\tau)\text{ is defined as follows,}\\ f(\tau)=\left(\int_{0}^{\tau}\dot{J}(\tau-t^{\prime\prime})\dot{J}(t^{\prime\prime})dt^{\prime\prime}+2J(0)\dot{J}(\tau)\right)\end{split} \tag{14}\]
This equation can be simplified using approximations similar to those used earlier in Ref. [20]:
\[\begin{split}\int_{0}^{\tau}g(\tau-t^{\prime})\langle\Delta\sigma^{2}\rangle(t^{\prime})dt^{\prime}&=\langle\Delta\gamma^{2}\rangle(\tau)\\ \text{where }g(\tau)\text{ is defined as follows,}\\ g(\tau)=\left(\int_{0}^{\tau}\dot{J}(\tau-t^{\prime\prime})\dot{J}(t^{\prime\prime})dt^{\prime\prime}\right)\end{split} \tag{15}\]
We approximate the right-hand side of this equation using the bubble positions \(\mathbf{r}\) [19, 24], giving: \(\simeq 3\pi\langle a\rangle(\langle\mathbf{r}^{2}\rangle-\langle\Delta\mathbf{r}^{2}\rangle(\tau_{i})/2)/{\langle a\rangle}^{3}\). However, it may be noted that this equation is not well defined at \(\tau=0\); thus we evaluate it only for lag times greater than zero. To get an accurate solution, we consider the above equation at various finite lag times \(\tau_{i}\) and solve a set of simultaneous equations to find the appropriate creep compliance, \(J(t)\). Specifically, we choose \(\tau_{i}\in\{\tau_{1},\tau_{2},\tau_{3}...\tau_{max}\}\). Here, \(\tau_{1}\) can be as small as \(dt\); we report results for \(\tau_{1}=1\). This choice, however, brings in some numerical error due to the integrals going from \(0\rightarrow\tau\). It may be noted that this equation is mathematically exact for all \(\tau>0\) and that the upper limit of our observation, \(\tau_{max}\), does not affect the numerical procedure, effectively avoiding a source of truncation error present in many earlier approaches.
### Choice of Fitting Model
One may notice that solving Eq. 14 or Eq. 15 requires a model for \(J(t)\). Here we choose a modified version of the model suggested in Ref. [8]. The original model put forth in that study has a terminal mode of relaxation at long times, given by \(t/\eta_{R}\), which has been observed previously in experiments [10]. In our simulations, however, we do not observe any terminal relaxation and thus omit this additional term, considering the modified model given as follows.
\[J(t)=1/G_{\infty}+k_{D}/G_{\infty}[(1+t/\tau_{0})^{\beta}-1] \tag{16}\]
2301.13414 | Incentive Compatibility in the Auto-bidding World | Auto-bidding has recently become a popular feature in ad auctions. This
feature enables advertisers to simply provide high-level constraints and goals
to an automated agent, which optimizes their auction bids on their behalf. In
this paper, we examine the effect of different auctions on the incentives of
advertisers to report their constraints to the auto-bidder intermediaries. More
precisely, we study whether canonical auctions such as first price auction
(FPA) and second price auction (SPA) are auto-bidding incentive compatible
(AIC): whether an advertiser can gain by misreporting their constraints to the
autobidder.
We consider value-maximizing advertisers in two important settings: when they
have a budget constraint and when they have a target cost-per-acquisition
constraint. The main result of our work is that for both settings, FPA and SPA
are not AIC. This contrasts with FPA being AIC when auto-bidders are
constrained to bid using a (sub-optimal) uniform bidding policy. We further
extend our main result and show that any (possibly randomized) auction that is
truthful (in the classic profit-maximizing sense), scalar invariant and
symmetric is not AIC. Finally, to complement our findings, we provide
sufficient market conditions for FPA and SPA to become AIC for two advertisers.
These conditions require advertisers' valuations to be well-aligned. This
suggests that when the competition is intense for all queries, advertisers have
less incentive to misreport their constraints.
From a methodological standpoint, we develop a novel continuous model of
queries. This model provides tractability to study equilibrium with
auto-bidders, which contrasts with the standard discrete query model, which is
known to be hard. Through the analysis of this model, we uncover a surprising
result: in auto-bidding with two advertisers, FPA and SPA are auction
equivalent. | Yeganeh Alimohammadi, Aranyak Mehta, Andres Perlroth | 2023-01-31T05:08:37Z | http://arxiv.org/abs/2301.13414v2 | # Incentive Compatibility in the Auto-bidding World
###### Abstract
Auto-bidding has recently become a popular feature in ad auctions. This feature enables advertisers to simply provide high-level constraints and goals to an automated agent, which optimizes their auction bids on their behalf. These auto-bidding intermediaries interact in a decentralized manner in the underlying auctions, leading to new interesting practical and theoretical questions on auction design, for example, in understanding the bidding equilibrium properties between auto-bidder intermediaries for different auctions. In this paper, we examine the effect of different auctions on the incentives of advertisers to report their constraints to the auto-bidder intermediaries. More precisely, we study whether canonical auctions such as first price auction (FPA) and second price auction (SPA) are _auto-bidding incentive compatible (AIC)_: whether an advertiser can gain by misreporting their constraints to the autobidder.
We consider value-maximizing advertisers in two important settings: when they have a budget constraint and when they have a target cost-per-acquisition constraint. The main result of our work is that for both settings, FPA and SPA are not AIC. This contrasts with FPA being AIC when auto-bidders are constrained to bid using a (sub-optimal) uniform bidding policy. We further extend our main result and show that any (possibly randomized) auction that is truthful (in the classic profit-maximizing sense), scalar invariant and symmetric is not AIC. Finally, to complement our findings, we provide sufficient market conditions for FPA and SPA to become AIC for two advertisers. These conditions require advertisers' valuations to be well-aligned. This suggests that when the competition is intense for all queries, advertisers have less incentive to misreport their constraints.
From a methodological standpoint, we develop a novel continuous model of queries. This model provides tractability to study equilibrium with auto-bidders, which contrasts with the standard discrete query model, which is known to be hard. Through the analysis of this model, we uncover a surprising result: in auto-bidding with two advertisers, FPA and SPA are auction equivalent.
## 1 Introduction
Auto-bidding has become a popular tool in modern online ad auctions, allowing advertisers to set up automated bidding strategies to optimize their goals subject to a set of constraints. By using algorithms to adjust the bid for each query, auto-bidding offers a more efficient and effective alternative to the traditional fine-grained bidding approach, which requires manual monitoring and adjustment of the bids.
There are three main components in auto-bidding paradigm: 1) the advertisers who provide high-level constraints to the auto-bidders, 2) the auto-bidder agents who bid - in a decentralized manner - on behalf of each advertiser to maximize the advertiser's value subject to their constraints, and 3) the query-level auctions where queries are sold (see Figure 1).
Current research has made important progress in studying the interactions of the second and third components in the auto-bidding paradigm, particularly in understanding equilibrium properties (e.g., welfare and revenue) between the auto-bidders intermediaries for different auction rules (Aggarwal et al., 2019; Balseiro et al., 2021; Deng et al., 2021; Mehta, 2022; Liaw et al., 2022). There is also work on mechanism design for this setting in more generality, i.e., between the advertisers and the auctioneer directly abstracting out the second component (Balseiro et al., 2021, 2022; Golrezaei et al., 2021).
Our work, instead, examines the relation between value-maximizing advertisers, who maximize the value they obtain subject to a payment constraint, and the other two components of the auto-bidding paradigm. More precisely, we study the impact of different auction rules on the incentives of advertisers to report their constraints to the auto-bidder intermediaries. We specifically ask whether canonical auctions such as first price auction (FPA), second price auction (SPA) and general truthful auctions are _auto-bidding incentive compatible_ (AIC) - in other words, can advertisers gain by misreporting their constraints to the auto-bidder?
We consider value-maximizing advertisers in two important settings: when they have a budget constraint and when they have a target cost-per-acquisition (tCPA) constraint1. The main result of
Figure 1: The Auto-bidding Process: Advertisers submit constraints and receive query allocations with specified costs as output. Inside the auto-bidding feature, each advertiser has an agent that optimizes bidding profile within each advertiser’s constraints.
our work is that for both settings, FPA and SPA are not AIC. This contrasts with FPA being AIC when auto-bidders are constrained to bid using a (sub-optimal) uniform bidding policy. We further generalize this surprising result and show that any (possibly randomized) truthful auction having a scale invariance and symmetry property is also not AIC. We complement our result by providing sufficient market conditions for FPA and SPA to become AIC for two advertisers. These conditions require advertisers' valuations to be well-aligned. This suggests that when the competition is intense for all queries, advertisers have less incentive to misreport their constraints.
In our model, each advertiser strategically reports a constraint (either a tCPA or a budget) to an auto-bidder agent, which bids optimally on their behalf in each of the queries. A key feature of our model is that we consider a two-stage game: first, advertisers submit constraints to the auto-bidders and then, in the subgame, auto-bidders reach a bidding equilibrium across all query-auctions. Thus, when an advertiser deviates and reports a different constraint to its auto-bidder, the whole bidding subgame equilibrium can change.2 In this context, an auction rule is called auto-bidding incentive compatible (AIC) if, for all equilibria, it is optimal for the advertiser to report their true constraint to the auto-bidder.
Footnote 2: This two stage model captures the idea that auto-bidding systems rapidly react to any change in the auction. Hence, if there is any change in the bidding landscape, auto-bidders quickly converge to a new equilibrium.
### Main Results
We begin our results by presenting a stylized example in Section 2 that demonstrates how auto-bidding with SPA is not AIC (Theorem 2.1). Our example consists of a simple instance with three queries and two advertisers. This example highlights a scenario where an advertiser can benefit from lowering their reported budget or tCPA-constraint.
We then introduce a continuous query model that departs from the standard auto-bidding model by considering each query to be of infinitesimal size. This model provides tractability in solving equilibrium for general auction rules like FPA which is key to study the auto-bidding incentive compatibility properties of such auctions. Further, this continuous-query model succinctly captures real-world scenarios where the value of a single query is negligible compared to the pool of all queries that are sold.
Under the continuous-query model, we study the case where queries are sold using FPA and show that in the auto-bidding paradigm, FPA is not AIC (Section 4). We first characterize the optimal bidding strategy for each auto-bidder agent which, surprisingly, has a tractable form.3 We then leverage this tractable form to pin down an equilibrium for the case of two auto-bidders when both auto-bidders face either a budget or a tCPA constraint. In this equilibrium, queries are divided between the two advertisers based on the ratio of their values for each advertiser. Specifically, advertiser 1 receives the queries for which the ratio of its value to the other advertiser's value is higher than a certain threshold. From this point, determining the equilibrium reduces to finding a threshold that makes the advertisers' constraints tight (see Lemma 4.4 for more detail). We then show that for instances where the threshold is not monotone in the auto-bidders' constraints, advertisers have an incentive to misreport the constraint to the auto-bidder (Theorem 4.1). Conversely, when the thresholds are monotone, advertisers report constraints truthfully. We show conditions on the advertisers' valuations, for the two-advertiser setting, that guarantee this monotonicity (Theorem 4.10). This condition requires a strong positive correlation of the advertisers' valuations across the queries. As a
practical insight, our results suggest that in settings where the competition on all queries is intense, advertisers' incentives to misreport are weak.
We then explore the case where, in FPA, auto-bidders are constrained to bid using a _uniform bidding strategy:_ the bid on each query is a constant times the advertiser's value for the query.4 Uniform bidding is only an optimal strategy when auctions are truthful (Aggarwal et al., 2019). Even though for FPA these strategies are suboptimal, they have gained recent attention in the literature due to their tractability Conitzer et al. (2022a,b); Chen et al. (2021); Gaitonde et al. (2022). We show that in such a scenario, FPA with uniform bidding turns out to be AIC (Theorem 4.2). However, we note that while this proves AIC in our model, the suboptimality of uniform bidding for FPA can give rise to incentives to deviate in other ways outside our model, e.g., by splitting the advertising campaigns into multiple campaigns with different constraints. These considerations are important when implementing this rule in practice.
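To illustrate what uniform (pacing) bidding looks like operationally, the toy sketch below iterates multipliers for budget-constrained, value-maximizing auto-bidders in a first-price auction over a finite set of queries. This is not the paper's equilibrium characterization or the FPPE computation from the literature: the multiplicative update, step size, and round count are ad hoc choices, and the iteration is not guaranteed to converge to an exact pacing equilibrium.

```python
import numpy as np

def fpa_uniform_bidding(values, budgets, n_rounds=2000, eta=0.05):
    """Toy fixed-point iteration for uniform bidding in a first-price auction.

    values  : (K, Q) array, value of each of Q queries to each of K advertisers
    budgets : (K,) array of reported budgets
    Each auto-bidder bids mult[k] * values[k, q]; the highest bid wins the
    query and pays its own bid. Multipliers are nudged up while spend is
    below budget and down when it exceeds it.
    """
    K, Q = values.shape
    mult = np.ones(K)
    for _ in range(n_rounds):
        bids = mult[:, None] * values                 # (K, Q) uniform bids
        winner = np.argmax(bids, axis=0)              # FPA: highest bid wins
        pay = bids[winner, np.arange(Q)]              # winner pays own bid
        spend = np.array([pay[winner == k].sum() for k in range(K)])
        mult *= np.exp(eta * np.where(spend < budgets, 1.0, -1.0))
    return mult, winner, spend
```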
Footnote 4: Uniform bidding strategy is also known in the literature as pacing bidding Conitzer et al. (2022a); Chen et al. (2021); Conitzer et al. (2022b); Gaitonde et al. (2022).
The second part of the paper pivots to the case where auctions are truthful, that is, auctions in which it is optimal for a profit-maximizing agent to bid their value. We first study the canonical SPA and show that, in our continuous-query model, SPA and FPA are auction equivalent: the allocation and payments coincide among the set of _reasonable_ equilibria (Theorem 5.5).5 As a corollary, the results we obtain for FPA apply to SPA as well: SPA is not AIC, and we derive sufficient conditions on advertisers' valuations so that SPA is AIC for two advertisers. We then consider a general class of randomized truthful auctions. We show that if the allocation rule satisfies the following natural conditions:6 (i) it is scale invariant (if all bids are multiplied by the same factor then the allocation doesn't change), and (ii) it is symmetric (bidders are treated equally), then the auction rule is not AIC.
Footnote 5: We show the auction equivalence among uniform bidding equilibria for SPA and threshold-type equilibrium for FPA.
Footnote 6: These conditions have been widely studied in the literature due to their practical use (Mehta, 2022; Liaw et al., 2022; Allouah and Besbes, 2020).
The main results of the paper are summarized in Table 1.
### Related Work
The study of auto-bidding in ad auctions has gained significant attention in recent years. One of the first papers to study this topic is Aggarwal et al. (2019), which presents a mathematical formulation of the auto-bidders' problem given fixed constraints reported by advertisers. They show that uniform bidding is an optimal strategy if and only if auctions are truthful (in the profit-maximizing sense). They further started an important line of work measuring, using a Price of Anarchy (PoA) approach, the welfare implications when auto-bidders bid in equilibrium for different auctions.
\begin{table}
\begin{tabular}{|c|c|} \hline Per Query Auction & AIC \\ \hline \hline Second-Price Auction & Not AIC \\ Truthful Auctions & Not AIC \\ First-Price Auction & Not AIC \\ First-Price Auction with Uniform Bidding & AIC7 \\ \hline \end{tabular}
\end{table}
Table 1: Main Results
Current results state that the PoA is 2 for SPA Aggarwal et al. (2019) and also for FPA Liaw et al. (2022)8 and, interestingly, it can be improved if the auction uses a randomized allocation rule Mehta (2022); Liaw et al. (2022). In a similar vein, Deng et al. (2021); Balseiro et al. (2021) study models where the auction has access to extra information and show how reserves and boosts can be used to improve welfare and efficiency guarantees.
Footnote 8: The authors show that for a general class of deterministic auctions \(PoA\geq 2\).
A second line of work studies how to design revenue-maximizing auctions when bidders are value-maximizing agents and may have private information about their value or their constraints (Golrezaei et al., 2021; Balseiro et al., 2021; Balseiro et al., 2021). In all these settings, the mechanism designer is not constrained by the presence of the auto-bidding intermediaries (Component 2 in Figure 1). Our setting has added structure: advertisers submit their constraints first, followed by a decentralized subgame in which a bidding equilibrium is reached before allocations and payments are determined. Thus, a priori, their mechanism setting can achieve broader outcomes than our auto-bidding constraint paradigm. Interestingly, for the one-query case the authors show that FPA with a uniform bidding policy is optimal Balseiro et al. (2021). Our results complement theirs and show that such a mechanism is implementable in the auto-bidding constraint paradigm and is AIC.
Closer to our auto-bidding paradigm, a recent line of work has started to study the incentives of advertisers when bidding via an auto-bidder intermediary. Mehta and Perlroth (2023) show that a profit-maximizing agent may benefit from reporting a target-based bidding strategy to the auto-bidder when the agent is concerned that the auctioneer may change (ex-post) the auction rules. Also, in an empirical work, Li and Tang (2022) develop a new methodology to numerically approximate auto-bidding equilibria and show numerical examples where advertisers may benefit by misreporting their constraints in SPA. Our work complements their findings by showing, under a theoretical framework, that SPA is not AIC.
Our work also connects with the literature on auctions with budget-constrained bidders. In particular, our results are closely related to Conitzer et al. (2022), who study FPA with uniform bidding (a.k.a. pacing bidding). They introduce the concept of the first-price auction pacing equilibrium (FPPE) for budget-constrained advertisers, which is the same as the equilibrium in our auto-bidding subgame. They show that in FPPE the revenue and welfare are monotone increasing as functions of the advertisers' budgets. In our work, we show that in FPPE, advertisers' _values_ are monotone as a function of their reported budgets. In addition, they differentiate between first and second price by showing that FPPE is computable, unlike SPPE, for which maximizing revenue has previously been shown to be NP-hard Conitzer et al. (2022) and the general problem of approximating an SPPE is PPAD-complete Chen et al. (2021). In contrast, we show that in the continuous model both SPA and FPA are tractable. Interestingly, this dichotomy between FPA and SPA (both with uniform bidding) is reflected in our work as well: the former is AIC, while the latter is not.
Uniform bidding has been explored in a separate body of research on repeated auctions, without the presence of auto-bidding. Balseiro and Gur (2019) investigate strategies to minimize regret in simultaneous first-price auctions with learning. Gaitonde et al. (2022) take this concept further by extending the approach to a wider range of auction settings. Furthermore, Golrezaei et al. (2021) examines how to effectively price and bid for advertising campaigns when advertisers have both ROI and budget constraints.
## 2 Warm Up: Second Price Auction is not AIC!
To understand the implications of the auto-bidding model, we start with an example of auto-bidding with the second-price auction. Through this example, we demonstrate the process of determining the equilibrium in an auto-bidding scenario and highlight a case where an advertiser prefers to misreport its budget, leading to the following theorem.
**Theorem 2.1**.: _For the budget setting (when all advertisers are budgeted-constrained) and for the tCPA-setting (when all advertisers are tCPA-constrained), we have that SPA is not AIC. That is, there are some instances where an advertiser benefits by misreporting its constraint._
Proof.: Consider two budget-constrained advertisers and three queries \(Q=\{q_{1},q_{2},q_{3}\}\), where the expected value of winning query \(q\) for advertiser \(a\) is denoted by \(v_{a}(q)\), and it is publicly known (as in Table 2). At first, each advertiser reports their budget to the auto-bidder \(B_{1}=2\), and \(B_{2}=4\). Then the auto-bidder agents, one for each advertiser, submit the bidding profiles (to maximize their advertisers' value subject to the budget constraint). The next step is a second-price auction per query, where the queries are allocated to the highest bidder.
The value of each query for each advertiser is given in Table 2. Finding the equilibrium bidding strategies for the auto-bidder agents is challenging, as the auto-bidder agents have to find the best-response bids with respect to the other auto-bidder agents, and each auto-bidder agent's bidding profile changes the cost of queries for the rest of the agents. To calculate such an equilibrium between auto-bidder agents, we use the result of Aggarwal et al. (2019) to find best-response strategies. Their result states that the best-response strategy in any truthful auto-bidding auction is uniform bidding.9 In other words, each agent optimizes over one variable, a bidding multiplier \(\mu_{a}\), and then bids on query \(q\) with respect to the scaled value \(\mu_{a}v_{a}(q)\).
Footnote 9: They show uniform bidding is _almost_ optimal, but in Appendix A we show that in this example it is exactly optimal.
We show that with the given budgets \(B_{1}=2\) and \(B_{2}=4\), an equilibrium exists such that advertiser 1 only wins \(q_{1}\), and \(\mu_{1}=0.5\) and \(\mu_{2}=1\) result in such an equilibrium. To this end, we need to check: 1) Allocation: with bidding strategies \(\mathbf{b}_{1}=(\mu_{1}v_{1}(q_{1}),\mu_{1}v_{1}(q_{2}),\mu_{1}v_{1}(q_{3}))\) and \(\mathbf{b}_{2}=(\mu_{2}v_{2}(q_{1}),\mu_{2}v_{2}(q_{2}),\mu_{2}v_{2}(q_{3}))\), advertiser 1 wins \(q_{1}\) and advertiser 2 wins \(q_{2}\) and \(q_{3}\), 2) Budget constraints are satisfied, and 3) Bidding profiles are the best response: The auto-bidder agents do not have the incentive to increase their multiplier to get more queries. These three conditions are checked as follows:
1. _Allocation inequalities:_ For each query, the advertiser with the highest bid wins it. \[\frac{v_{1}(q_{1})}{v_{2}(q_{1})}\geq\frac{\mu_{2}}{\mu_{1}}=\frac{1}{0.5} \geq\frac{v_{1}(q_{2})}{v_{2}(q_{2})}\geq\frac{v_{1}(q_{3})}{v_{2}(q_{3})}.\]
2. _Budget constraints:_ Since the auction is second-price the cost of query \(q\) for advertiser 1 is \(\mu_{2}v_{2}(q)\) and for advertiser 2 is \(\mu_{1}v_{1}(q)\). So, we must have the following inequalities to hold so
\begin{table}
\begin{tabular}{l|c|c|c} & \(q_{1}\) & \(q_{2}\) & \(q_{3}\) \\ \hline Advertiser 1 & 4 & 3 & 2 \\ Advertiser 2 & 1 & 1.3 & 10 \\ \end{tabular}
\end{table}
Table 2: SPA with two budget-constrained advertisers is not AIC: the value of each query for each advertiser.
that the budget constraints are satisfied: \[2=B_{1}\geq\mu_{2}v_{2}(q_{1})=1\qquad\text{(Advertiser 1)},\] \[4=B_{2}\geq\mu_{1}(v_{1}(q_{3})+v_{1}(q_{2}))=2.5\qquad\text{(Advertiser 2)}.\]
3. _Best response:_ Could an advertiser's agent raise its multiplier to win more queries? For this not to be profitable, neither agent should be able to afford the next cheapest query: \[2<\mu_{2}(v_{2}(q_{1})+v_{2}(q_{2}))=2.3\qquad\text{(Advertiser 1)},\] \[4<\mu_{1}(v_{1}(q_{3})+v_{1}(q_{2})+v_{1}(q_{1}))=4.5\qquad\text{(Advertiser 2)}.\]
Since all three conditions are satisfied, this profile is an equilibrium of the auto-bidders' bidding game. In this equilibrium, advertiser 1 wins \(q_{1}\) and advertiser 2 wins \(q_{2}\) and \(q_{3}\).
Now, consider the scenario in which advertiser 1 strategically reports its budget \(B_{1}\) to the auto-bidder. Suppose the first advertiser decreases its budget. Intuitively, the budget constraint for the auto-bidder agent should be harder to satisfy, and hence the advertiser should not win more queries. But, contrary to this intuition, when advertiser 1 reports a lower budget \(B_{1}^{\prime}=1\), we show that, in the unique auto-bidding equilibrium, advertiser 1 wins \(q_{1}\) and \(q_{2}\) (more queries than in the case where advertiser 1 reports \(B_{1}=2\)). Similar to above, we can check that \(\mu_{1}^{\prime}=1\) and \(\mu_{2}^{\prime}=\frac{1}{2.3}\) result in an equilibrium (we prove the uniqueness in Appendix A):
1. Allocation: advertiser 1 wins \(q_{1}\) and \(q_{2}\) since it has a higher bid on them, \[\frac{v_{1}(q_{1})}{v_{2}(q_{1})}\geq\frac{v_{1}(q_{2})}{v_{2}(q_{2})}\geq \frac{\mu_{2}^{\prime}}{\mu_{1}^{\prime}}=\frac{1}{2.3}\geq\frac{v_{1}(q_{3})} {v_{2}(q_{3})}.\]
2. Budget constraints: \[4\geq v_{1}(q_{3}),\qquad\text{and}\] \[1=(1/2.3)(v_{2}(q_{1})+v_{2}(q_{2})).\]
3. Best response: \[4<1(v_{1}(q_{3})+v_{1}(q_{2})),\qquad\text{and}\] \[1<(1/2.3)(v_{2}(q_{1})+v_{2}(q_{2})+v_{2}(q_{3})).\]
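The three checks above are mechanical, so it can be convenient to automate them when experimenting with other value tables or candidate multipliers. The short script below is our own illustration (not part of the original argument): it takes a value table, reported budgets and a candidate pair of multipliers, and reports the induced allocation, the second-price spend of each advertiser, and whether the budget and "cannot afford the next cheapest query" conditions hold. The specific numbers from Table 2 can be plugged in as shown in the final comment; the function itself makes no claim about which multipliers pass.

```python
# Illustrative helper (ours, not part of the paper): check the three SPA
# equilibrium conditions for a candidate pair of uniform bid multipliers.
def spa_equilibrium_check(values, budgets, mu):
    """values[a][q]: value of query q for advertiser a; exactly two advertisers."""
    n = len(values[0])
    bids = [[mu[a] * values[a][q] for q in range(n)] for a in range(2)]
    # Ties are broken here in favour of advertiser 0; in the paper the
    # tie-breaking rule is endogenous to the equilibrium (cf. footnote 12).
    winner = [0 if bids[0][q] >= bids[1][q] else 1 for q in range(n)]
    # Second-price payments: the winner of a query pays the competing bid.
    spend = [sum(bids[1 - a][q] for q in range(n) if winner[q] == a)
             for a in range(2)]
    budget_ok = [spend[a] <= budgets[a] for a in range(2)]
    # Best response: the next cheapest lost query must be unaffordable.
    best_response_ok = []
    for a in range(2):
        lost = [bids[1 - a][q] for q in range(n) if winner[q] != a]
        best_response_ok.append(spend[a] + min(lost) > budgets[a] if lost else True)
    return winner, spend, budget_ok, best_response_ok

# Candidate check for the values of Table 2, e.g.:
# spa_equilibrium_check([[4, 3, 2], [1, 1.3, 10]], budgets=[2, 4], mu=[mu1, mu2])
```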
This surprising example leads to the first main result of the paper. In Appendix A, we will generalize the above example to the case of tCPA-constrained advertisers with the same set of queries as in Table 2.
Before studying other canonical auctions, in the next section we develop a tractable continuous-query model. Under this model, it turns out that the characterization of the auto-bidders' bidding equilibria is tractable even when the auction is not SPA. This tractability is key for studying auto-bidding incentive compatibility.
## 3 Model
The baseline model consists of a set of \(A\) advertisers competing for \(q\in Q\) single-slot queries owned by an auctioneer. We consider a continuous-query model where \(Q=[0,1]\). Let \(x_{a}(q)\) be the probability of winning query \(q\) for advertiser \(a\). Then the expected value and payment of winning query \(q\) at price \(p_{a}(q)\) are \(x_{a}(q)v_{a}(q)dq\) and \(p_{a}(q)dq\).10, 11 Intuitively, this continuous-query model is a first-order approximation for instances where the size of each query relative to the whole set is small.
Footnote 10: All functions \(v_{a},x_{a},p_{a}\) are integrable with respect to the Lebesgue measure \(dq\).
Footnote 11: The set \(Q=[0,1]\) is chosen to simplify the exposition. Our results apply to a general metric measurable space \((Q,\mathcal{A},\lambda)\) with atomless measure \(\lambda\).
The auctioneer sells each query \(q\) using a query-level auction which induces the allocation and payments \((x_{a}(q),p_{a}(q))_{a\in A}\) as a function of the bids \((b_{a})_{a\in A}\). In this paper, we focus on the First Price Auction (FPA), Second Price Auction (SPA) and more generally any Truthful Auction (see Section 5.2 for details).
#### Auto-bidder agent:
Advertisers do not participate directly in the auctions; rather, they report high-level goal constraints to an auto-bidder agent who bids on their behalf in each of the queries. Thus, Advertiser \(a\) reports a budget constraint \(B_{a}\) or a target cost-per-acquisition (tCPA) constraint \(T_{a}\) to the auto-bidder. Then the auto-bidder, taking the other advertisers' bids as fixed, submits bids \(b_{a}(q)\) to induce \(x_{a}(q),p_{a}(q)\) that solve
\[\max\int_{0}^{1}x_{a}(q)v_{a}(q)dq \tag{1}\] \[\text{s.t.}\int_{0}^{1}p_{a}(q)dq\leq B_{a}+T_{a}\int_{0}^{1}x_{a }(q)v_{a}(q)dq. \tag{2}\]
The optimal bidding policy does not have a simple characterization for a general auction. However, when the auction is truthful (like SPA), the optimal bid takes a simple form in the continuous model (Aggarwal et al., 2019).
**Remark 3.1** (Uniform Bidding).: _If the per-query auction is truthful, then uniform bidding is the optimal policy for the autobidder. Thus, \(b_{a}(q)=\mu\cdot v_{a}(q)\) for some \(\mu>0\). We formally prove this in Claim 5.4._
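For intuition, the remark can be illustrated with a small numerical sketch. The following is our own toy illustration under simplifying assumptions (not part of the model): the query space is discretized, the competing bids are held fixed and treated as second-price costs, and the value curve, price curve and budget are arbitrary placeholders. Because the spend induced by a uniform multiplier is non-decreasing in the multiplier, a bisection recovers the largest feasible multiplier for a budget-constrained auto-bidder.

```python
import numpy as np

# Sketch (ours) of Remark 3.1 in a discretized setting: with competing bids held
# fixed and treated as second-price costs, find by bisection the largest uniform
# multiplier whose induced spend stays within a budget.  All curves are placeholders.
def spend(mu, v, p, dq):
    won = mu * v >= p                       # queries won with multiplier mu
    return float(np.sum(p[won]) * dq)       # truthful/second-price style payment

def max_uniform_multiplier(v, p, dq, budget, hi=1e3, iters=60):
    lo = 0.0
    for _ in range(iters):                  # spend(mu) is non-decreasing in mu
        mid = 0.5 * (lo + hi)
        if spend(mid, v, p, dq) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

q = np.linspace(0.0, 1.0, 10_000)
dq = q[1] - q[0]
v = 1.0 + q                                 # placeholder value curve v_a(q)
p = 0.8 + 0.5 * q ** 2                      # placeholder competing bids
print(max_uniform_multiplier(v, p, dq, budget=0.4))
```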
#### Advertisers
Following the current paradigm in autobidding, we consider that advertisers are value-maximizers and can be one of two types: a budget-advertiser or a tCPA-advertiser. The payoffs for these advertisers are as follows.
* For a budget-advertiser with budget \(B_{a}\), the payoff is \[u_{a}=\begin{cases}\int_{0}^{1}x_{a}(q)v_{a}(q)dq&\text{ if }\int_{0}^{1}p_{a}(q)dq\leq B_{a}\\ -\infty&\text{ if not.}\end{cases}\]
* For a tCPA-advertiser with target \(T_{a}\), the payoff is \[u_{a}=\begin{cases}\int_{0}^{1}x_{a}(q)v_{a}(q)dq&\text{ if }\int_{0}^{1}p_{a}(q)dq \leq T_{a}\cdot\int_{0}^{1}x_{a}(q)v_{a}(q)dq\\ -\infty&\text{ if not}.\end{cases}\]
#### Game, Equilibrium and Auto-bidding Incentive Compatibility (AIC)
The timing of the game is as follows. First, each advertiser depending on their type submits a budget or target constraint to an auto-bidder agent. Then, each auto-bidder solves Problem 1 for the respective advertiser. Finally, the per-query auctions run and allocations and payments accrue.
We consider a complete information setting and use subgame perfect equilibrium (SPE) as the solution concept. Let \(V_{a}(B_{a}^{\prime};B_{a})\) denote the expected payoff in the subgame for a budget-advertiser with budget \(B_{a}\) that reports \(B_{a}^{\prime}\) to the auto-bidder (likewise we define \(V_{a}(T_{a}^{\prime};T_{a})\) for the tCPA-advertiser).
**Definition 3.2** (Auto-bidding Incentive Compatibility (AIC)).: _An auction rule is Auto-bidding Incentive Compatible (AIC) if for every SPE we have that \(V_{a}(B_{a};B_{a})\geq V_{a}(B_{a}^{\prime};B_{a})\) and \(V_{a}(T_{a};T_{a})\geq V_{a}(T_{a}^{\prime};T_{a})\) for every \(B_{a},B_{a}^{\prime},T_{a},T_{a}^{\prime}\)._
Similar to the classic notion of incentive compatibility, an auction rule satisfying AIC makes the advertisers' decision simple: they just need to report their constraint truthfully to the auto-bidder. However, notice that the auto-bidders play a subgame after the advertisers' reports. Thus, when Advertiser \(a\) deviates and submits a different constraint, the subgame outcome may change starkly, not only in the bids of Advertiser \(a\) but also in the bids of the other advertisers.
## 4 First Price Auctions
In this section, we demonstrate that the first price auction is not auto-bidding incentive compatible.
**Theorem 4.1**.: _Suppose that there are at least two budget-advertisers or two tCPA-advertisers, then FPA is not AIC._
Later, in Section 4.2, we show a complementary result by providing sufficient conditions on advertisers' value functions under which FPA is AIC for the case of two advertisers. We show that this sufficient condition holds in many natural settings, suggesting that in practice FPA tends to be AIC.
Then, in Section 4.3, we turn our attention to FPA where auto-bidders are restricted to using uniform bidding across the queries. In this case, we extend the result of Conitzer et al. (2022a) to our continuous-query model and show the following result.
**Theorem 4.2**.: _FPA restricted to uniform bidding is AIC._
### Proof of Theorem 4.1
We divide the proof of Theorem 4.1 into three main steps. Step 1 characterizes the best-response bidding profile for an auto-bidder in the subgame. As part of our analysis, we derive a close connection between first- and second-price auctions in the continuous-query model that reduces the task of finding the optimal bid for each query to finding a single multiplier for each advertiser.
In Step 2, we leverage the tractability of our continuous-query model and pin down the subgame bidding equilibrium when there are either two budget-advertisers or two tCPA-advertisers in the game (Lemma 4.4). We derive an equation that characterizes the ratio of the multipliers of the two advertisers as a function of the constraints they submit. This ratio defines the set of queries that each advertiser wins and, as we will see, the value accrued by each advertiser is monotone in this ratio. So, to find a non-AIC example, one has to find scenarios where the equilibrium ratio is not a monotone function of the input constraints, which leads to the next step.
To conclude, we show in Step 3 an instance where the implicit solution for the ratio is non-monotone, demonstrating that auto-bidding in first-price auctions is not AIC. As part of our proof, we show, interestingly, that AIC is harder to satisfy when advertisers face tCPA constraints rather than budget constraints (see Corollary 4.6).
### Step 1: Optimal Best Response
The following claim shows that, contrary to the discrete-query model, the best response of an auto-bidder in a first-price auction can be characterized as a function of a single multiplier.
**Claim 4.3**.: _Taking other auto-bidders as fixed, there exists a multiplier \(\mu_{a}\geq 0\) such that the following bidding strategy is optimal:_
\[b_{a}(q)=\begin{cases}\max_{a^{\prime}\neq a}(b_{a^{\prime}}(q))&\mu_{a}v_{a}(q)\geq\max_{a^{\prime}\neq a}(b_{a^{\prime}}(q))\\ 0&\mu_{a}v_{a}(q)<\max_{a^{\prime}\neq a}(b_{a^{\prime}}(q)).\end{cases}\]
_The result holds whether the advertiser is budget-constrained or tCPA-constrained12._
Footnote 12: In FPA ties are broken in a way that is consistent with the equilibrium. This is similar to the pacing equilibrium notion where the tie-breaking rule is endogenous to the equilibrium Conitzer et al. (2022a).
Proof.: We show that in a first-price auction, the optimal bidding strategy is to bid on queries with a value-to-price ratio above a certain threshold. To prove this, we take the bidding profiles of the other advertisers as given. Since the auction is first-price, advertiser \(a\) can win each query \(q\) by fixing a small enough \(\epsilon>0\) and paying \(\max_{a^{\prime}\neq a}(b_{a^{\prime}}(q))+\epsilon\). So, let \(p_{a}(q)=\max_{a^{\prime}\neq a}(b_{a^{\prime}}(q))\) be the price of query \(q\). Since we have assumed that the value functions of all advertisers are integrable (i.e., there are no measure-zero sets of queries with a high value), in the optimal strategy \(p_{a}\) is also integrable, since it is suboptimal for any advertiser to bid a positive amount (and hence incur a positive cost) on a measure-zero set of queries.
First, consider a budget-constrained advertiser. The main idea is that since the prices are integrable, the advertiser's problem is similar to a continuous knapsack problem. In a continuous knapsack problem, it is well known that the optimal strategy is to choose queries with the highest value-to-cost ratio Goodrich and Tamassia (2001). Therefore, there must exist a threshold, denoted as \(\mu\), such that the optimal strategy is to bid on queries with a value-to-price ratio of at least \(\mu\). So if we let \(\mu_{a}=\frac{1}{\mu}\), then advertiser \(a\) bids on any query with \(\mu_{a}v_{a}(q)\geq p_{a}(q)\).
We prove it formally by contradiction. Assume, to the contrary, that there exist non-zero-measure sets \(X,Y\subset Q\) such that for all \(x\in X\) and \(y\in Y\), the value-per-price of \(x\) is less than that of \(y\), i.e., \(\frac{v_{a}(x)}{p_{a}(x)}<\frac{v_{a}(y)}{p_{a}(y)}\), and in the optimal solution advertiser \(a\) gets all the queries in \(X\) and no query in \(Y\). However, we show that by swapping queries in \(X\) with queries in \(Y\) of the same total price, the advertiser can still satisfy its budget constraint while increasing its value.
To prove this, fix \(0<\alpha<\min(\int_{X}p_{a}(q)dq,\int_{Y}p_{a}(q)dq)\). Since the Lebesgue measure is atomless, there exist subsets \(X^{\prime}\subseteq X\) and \(Y^{\prime}\subseteq Y\) such that \(\alpha=\int_{X^{\prime}}p_{a}(q)dq=\int_{Y^{\prime}}p_{a}(q)dq\). Since the value
per cost of queries in \(Y\) is higher than that of queries in \(X\), by swapping the queries of \(X^{\prime}\) with those of \(Y^{\prime}\), the value of the new set increases while the cost does not change. Therefore, the initial solution cannot be optimal.
A similar argument holds for tCPA-constrained advertisers: swapping the queries in \(X^{\prime}\) with those in \(Y^{\prime}\) does not change the cost and increases the upper bound of the tCPA constraint, resulting in a feasible solution with a higher value. Therefore, the optimal bidding strategy under a tCPA constraint is also \(b_{a}(q)\) as defined in the statement of the claim.
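The threshold structure behind Claim 4.3 can also be illustrated numerically. The sketch below is our own toy discretization (not part of the proof): facing fixed competing bids, it buys queries in decreasing order of the value-to-price ratio until the budget is exhausted, which is exactly the continuous-knapsack rule used above. The value curve, price curve and budget are placeholders.

```python
import numpy as np

# Toy discretization (ours) of the threshold best response in Claim 4.3:
# facing fixed competing bids p(q), buy queries in decreasing order of the
# value-to-price ratio until the budget is exhausted (continuous-knapsack rule).
q = np.linspace(0.0, 1.0, 10_000)
dq = q[1] - q[0]
v = 2.0 - q                         # placeholder values v_a(q)
p = 0.5 + q ** 2                    # placeholder competing bids (price to pay)
budget = 0.3

order = np.argsort(-(v / p))        # highest value-to-price ratio first
cumulative_spend = np.cumsum(p[order]) * dq
won = order[cumulative_spend <= budget]
print(len(won) * dq, float(v[won].sum() * dq))   # share of queries won, value accrued
```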
### Step 2: Equilibrium Characterization
The previous step showed that the optimal bidding strategy is to bid on queries with a value-to-price ratio above a certain threshold. Thus, we need to track one variable per auto-bidder to find the subgame equilibrium.
In what follows, we focus on finding these variables when there are only two advertisers in the game. This characterization of the equilibrium gives an implicit equation for the equilibrium bidding strategy, which makes the problem tractable in our continuous-query model.13
Footnote 13: Notice that for the discrete-query model finding equilibrium is PPAD hard Filos-Ratsikas et al. (2021)
From Claim 4.3 we observe that the ratio of bidding multipliers is key to determine the set of queries that each advertiser wins. To map the space of queries to the bidding space, we introduce the function \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\). Hence, for high values of \(h\), the probability that advertiser 1 wins the query increases. Also, notice that without loss of generality, we can reorder the queries on \([0,1]\) so that \(h\) is non-decreasing.
In what follows, we further assume that \(h\) is increasing on \([0,1]\). This implies that \(h\) is invertible and also differentiable almost everywhere on \([0,1]\). With these assumptions in place, we can now state the following lemma to connect the subgame equilibrium to the ratio of advertisers' values.
**Lemma 4.4**.: _[Subgame Equilibrium in FPA] Given two budget-constrained auto-bidders with budgets \(B_{1}\) and \(B_{2}\), let \(\mu_{1}\) and \(\mu_{2}\) be as defined in Claim 4.3 for auto-bidding with FPA. Also assume that \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\) as defined above is strictly monotone. Then \(\mu_{1}=\frac{B_{2}}{E[z\mathbbm{1}(z\leq r)]}\) and \(\mu_{2}=\mu_{1}r\), where \(r\) is the solution of the following implicit equation,_
\[\frac{rE[\mathbbm{1}(z\geq r)]}{E[z\mathbbm{1}(z\leq r)]}=\frac{B_{1}}{B_{2}}. \tag{3}\]
_Here, \(E[\cdot]\) is defined as \(E[P(z)]=\int_{0}^{\infty}P(z)f(z)dz,\) where \(f(z)=\frac{v_{2}(h^{-1}(z))}{h^{\prime}(h^{-1}(z))}\) wherever \(h^{\prime}\) is defined, and it is zero otherwise._
_Also, for two tCPA auto-bidders with targets \(T_{1}\) and \(T_{2}\), we have \(\mu_{1}=\frac{T_{1}E[1(z\leq r)]}{E[1(z\geq r)]}\) and \(\mu_{2}=\mu_{1}r\), where \(r\) is the solution of the following implicit equation,_
\[\frac{rE[\mathbbm{1}(z\geq r)]}{E[z\mathbbm{1}(z\geq r)]}\frac{E[\mathbbm{1} (z\leq r)]}{E[z\mathbbm{1}(z\leq r)]}=\frac{T_{1}}{T_{2}}. \tag{4}\]
**Remark 4.5**.: _The function \(f\) intuitively represents the expected value of the queries that advertiser 2 can win as well as the density of the queries that advertiser 1 can win. Also, the variable \(r\) shows the cut-off on how the queries are divided between the two advertisers. In the proof, we will see that the advertisers' value at equilibrium is computed with respect to \(f\): Advertiser 1's overall value is \(\int_{r}^{\infty}zf(z)dz\) and advertiser 2's overall value is \(\int_{0}^{r}f(z)dz\)._
Proof.: First, consider budget-constrained auto-bidders. Given Claim 4.3, in equilibrium the price of query \(q\) is \(\min(\mu_{1}v_{1}(q),\mu_{2}v_{2}(q))\). Therefore, the budget constraints become:
\[B_{1}=\int_{0}^{1}\mu_{2}v_{2}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\leq\mu_{1}v_{1}(q) )dq,\]
\[B_{2}=\int_{0}^{1}\mu_{1}v_{1}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\geq\mu_{1}v_{1}(q) )dq.\]
With a change of variable from \(q\) to \(z=h(q)\) and letting \(r=\frac{\mu_{2}}{\mu_{1}}\), we have:
\[B_{1}=\int_{r}^{\infty}\mu_{2}v_{2}(h^{-1}(z))\frac{dh^{-1}(z)}{dz}dz\]
\[B_{2}=\int_{0}^{r}\mu_{1}v_{1}(h^{-1}(z))\frac{dh^{-1}(z)}{dz}dz.\]
Observe that \(v_{1}(h^{-1}(z))=zv_{2}(h^{-1}(z))\); then, if we let \(f(z)=v_{2}(h^{-1}(z))\,(h^{-1})^{\prime}(z)=\frac{v_{2}(h^{-1}(z))}{h^{\prime}(h^{-1}(z))}\), the constraints become
\[B_{1}=\mu_{2}\int_{r}^{\infty}f(z)dz, \tag{5}\]
\[B_{2}=\mu_{1}\int_{0}^{r}zf(z)dz. \tag{6}\]
We obtain Equation (3) by dividing both sides of Equation (5) by the respective sides of Equation (6).
Now, consider two tCPA constrained auto-bidders. Similar to the budget-constrained auto-bidders, we can write
\[T_{1}\int_{0}^{1}v_{1}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\leq\mu_{1}v_{1}(q))dq= \int_{0}^{1}\mu_{2}v_{2}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\leq\mu_{1}v_{1}(q))dq\]
\[T_{2}\int_{0}^{1}v_{2}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\geq\mu_{1}v_{1}(q))dq= \int_{0}^{1}\mu_{1}v_{1}(q)\mathbbm{1}(\mu_{2}v_{2}(q)\geq\mu_{1}v_{1}(q))dq\]
The same way of changing variables leads to the following:
\[\frac{T_{1}}{T_{2}}\frac{\int_{r}^{\infty}xf(x)dx}{\int_{0}^{r}f(x)}=\frac{r \int_{r}^{\infty}f(x)dx}{\int_{0}^{r}xf(x)dx}.\]
This finishes the proof of the lemma.
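Although equation (3) (and likewise (4)) only defines \(r\) implicitly, it is straightforward to solve numerically once \(f\) is specified. The sketch below is our own illustration with a placeholder density: it evaluates the truncated moments of equation (3) on a grid and searches for \(r\). For the exponential placeholder the left-hand side happens to be monotone in \(r\), so a bisection suffices; in general it can be non-monotone, which is exactly what the non-AIC construction below exploits, and one would then need to scan for all roots.

```python
import numpy as np

# Illustration (ours): solve r * E[1(z>=r)] / E[z 1(z<=r)] = B1/B2 from eq. (3)
# for a placeholder f; with f(z) = exp(-z) the left-hand side is decreasing in r.
z = np.linspace(1e-4, 20.0, 200_000)
dz = z[1] - z[0]
f = np.exp(-z)                                    # placeholder for f(z)

def lhs(r):
    upper = np.sum(f[z >= r]) * dz                # E[1(z >= r)]
    lower = np.sum((z * f)[z <= r]) * dz          # E[z 1(z <= r)]
    return r * upper / lower

def solve_r(budget_ratio, lo=1e-3, hi=19.0, iters=80):
    for _ in range(iters):                        # bisection; valid here because
        mid = 0.5 * (lo + hi)                     # lhs is monotone for this f
        if lhs(mid) > budget_ratio:
            lo = mid
        else:
            hi = mid
    return lo

print(solve_r(budget_ratio=2.0))                  # threshold r for B1/B2 = 2
```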
The previous lemma immediately implies that any instance of valuation functions that is non-AIC for budget-advertisers is non-AIC for tCPA-advertisers as well.
**Corollary 4.6**.: _If auto-bidding with the first-price and two budget-advertisers is not AIC, then auto-bidding with the same set of queries and two tCPA-advertisers is also not AIC._
Proof.: Recall that advertiser \(1\) wins all queries with \(h(q)\geq r\), so the value accrued by advertiser \(1\) is decreasing in \(r\). Hence, if an instance of auto-bidding with tCPA-constrained advertisers is not AIC for advertiser \(1\), then the corresponding function \(\frac{r\int_{r}^{\infty}f(x)dx}{\int_{0}^{r}xf(x)dx}\frac{\int_{0}^{r}f(x)dx}{\int_{r}^{\infty}xf(x)dx}\) (the left-hand side of (4)) must be increasing for some \(r^{\prime}\).
On the other hand, recall that \(\frac{r\int_{r}^{\infty}f(x)dx}{\int_{0}^{r}xf(x)dx}\) is the ratio for the budget-constrained bidders' equilibrium, as in (3). The additional multiplier in the equilibrium equation of the tCPA-constrained advertisers in (4) is \(\frac{\int_{0}^{r}f(x)dx}{\int_{r}^{\infty}xf(x)dx}\), which is increasing in \(r\). So, if auto-bidding for budget-constrained bidders is not AIC, and hence the corresponding ratio is increasing for some \(r^{\prime}\), then the corresponding ratio for the tCPA-constrained advertisers is increasing there as well, which proves the claim.
### Step 3: Designing a non-AIC instance
The characterization of the equilibrium from Step 2 leads us to construct an instance where advertisers have an incentive to misreport their constraints. The idea behind the proof is that the value accrued by advertiser \(1\) is decreasing in \(r\) (as found in Lemma 4.4). Then, to find a counterexample, it is enough to find an instance of valuation functions such that the equilibrium equation (3) is non-monotone in \(r\).
Proof of Theorem 4.1.: We construct an instance with two budget-constrained advertisers. By Corollary 4.6 the same instance would work for tCPA-constrained advertisers. To prove the theorem, we will find valuation functions \(v_{1}\) and \(v_{2}\) and budgets \(B_{1}\) and \(B_{2}\) such that the value accrued by advertiser \(1\) decreases when their budget increases.
Define \(g(r)=\frac{\int_{0}^{r}xf(x)dx}{r\int_{r}^{\infty}f(x)dx}\). By Lemma 4.4, one can find the equilibrium by solving the equation \(g(r)=\frac{B_{2}}{B_{1}}\). Recall that advertiser \(1\) wins all queries with \(\frac{v_{1}(q)}{v_{2}(q)}\geq r\), so the total value of the queries accrued by advertiser \(1\) is decreasing in \(r\). Hence, to construct a non-AIC example, it is enough to find a function \(f\) such that \(g\) is non-monotone in \(r\).
A possible such non-monotone function \(g\) is
\[g(r)=\frac{(r-1)^{3}+3}{cr}-1, \tag{7}\]
where \(c\) is chosen such that \(\min_{r\geq 0}g(r)=0\), i.e., \(c=\min_{r\geq 0}\frac{(r-1)^{3}+3}{r}\approx 1.95105\). To see why \(g\) is non-monotone, observe that \(g(r)\) is decreasing for \(r\leq 1.8\), because \(g^{\prime}(r)=\frac{2r^{3}-3r^{2}-2}{cr^{2}}\) is negative there, and then increasing for \(r\geq 1.81\).
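As a quick numerical sanity check (our own, not part of the argument), one can evaluate (7) on a few points and observe the dip around \(r\approx 1.81\):

```python
import numpy as np

# Evaluate g(r) = ((r-1)^3 + 3)/(c*r) - 1 from eq. (7); c normalizes min g to 0.
c = min(((r - 1) ** 3 + 3) / r for r in np.linspace(0.1, 10, 100_000))  # ~1.95105

def g(r):
    return ((r - 1) ** 3 + 3) / (c * r) - 1

for r in (1.0, 1.5, 1.81, 2.5):
    print(f"g({r}) = {g(r):.4f}")   # decreases towards ~0 near r = 1.81, then increases
```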
We claim the function \(f\) defined as in,
\[f(r)=3c(r-1)^{2}\frac{e^{\int_{0}^{r}\frac{c}{(x-1)^{3}+3}dx}}{((r-1)^{3}+3)^{2}}, \tag{8}\]
would result in the function \(g\) in (7). To see why this claim is enough to finish the proof, note that there are many ways to choose the advertisers' value functions so as to derive \(f\) as in (8). One possible way is to define \(v_{1},v_{2}:[0,1]\rightarrow\mathbb{R}\) as \(v_{2}(q)=f(\tan(q))/(\tan(q)^{2}+1)\) and \(v_{1}(q)=\tan(q)v_{2}(q)\) (see Fig. 2).
So it remains to prove that choosing \(f\) as in (8) results in \(g\) as defined in (7). To derive \(f\) from \(g\), we first simplify \(g\) using integration by parts,
\[g(r) =\frac{\int_{0}^{r}xf(x)dx}{r\int_{r}^{\infty}f(x)dx}\] \[=\frac{r\int_{0}^{r}f(x)dx-\int_{0}^{r}\int_{0}^{x}f(y)dydx}{r \int_{r}^{\infty}f(x)dx}\] \[=\frac{r\int_{0}^{\infty}f(x)dx-\int_{0}^{r}\int_{0}^{x}f(y)dydx} {r\int_{r}^{\infty}f(x)dx}-1,\]
Assuming that \(\int_{0}^{\infty}f(x)\) is finite, the above equations lead to the following
\[rg(r)+r=\frac{\int_{0}^{r}\int_{x}^{\infty}f(y)dydx}{\int_{r}^{\infty}f(x)dx}. \tag{9}\]
Therefore, by integrating the reciprocals of both sides,
\[\log(\int_{0}^{r}\int_{x}^{\infty}f(y)dydx)=C+\int_{0}^{r}\frac{1}{xg(x)+x}dx,\]
and by exponentiating,
\[\int_{0}^{r}\int_{x}^{\infty}f(y)dydx=Ke^{\int_{0}^{r}\frac{1}{xg(x)+x}dx}.\]
for some constants \(C\) and \(K>0\). Then, by differentiating both sides with respect to \(r\),
\[\int_{r}^{\infty}f(x)dx=\frac{K}{rg(r)+r}e^{\int_{0}^{r}\frac{1}{xg(x)+x}dx}.\]
Note that for any choice of \(K>0\), dividing the last two equations will result in (9). So, without loss of generality, we can assume \(K=1\). By differentiating again, we can derive \(f\) as a function of \(g\):
\[f(r)=\frac{(g^{\prime}(r)r+g(r))}{(rg(r)+r)^{2}}e^{\int_{0}^{r}\frac{1}{xg(x)+ x}dx}.\]
Figure 2: An example of two advertisers such that FPA is not AIC (proof of Theorem 4.1). When \(\frac{B_{1}}{B_{2}}=1200\), there are three values for \(r\) (see the right panel) that lead to equilibrium, and one (orange) leads to non-AIC equilibrium.
We need \(g^{\prime}(r)r+g(r)\geq 0\) to ensure that \(f(r)\geq 0\) for all \(r\). This holds for \(g\) as in (7). Finally, by substituting \(g\) as in (7), we will derive \(f\) as in (8).
**Remark 4.7**.: _Note that the above proof shows that whenever there is a value of \(r\) at which there exists an equilibrium that is not AIC, there always exists a second, monotone equilibrium. This follows from the fact that the function \(g(r)\) tends to infinity as \(r\to\infty\), so \(g\) must be increasing for all large enough \(r\)._
Before moving on to finding conditions for incentive compatibility, we also note that the above characterization implies the existence of an equilibrium for auto-bidding with any pair of advertisers.
**Proposition 4.8**.: _Given an auto-bidding instance satisfying the conditions of Lemma 4.4, an equilibrium always exists, both for any pair of budget-constrained advertisers and for any pair of tCPA-constrained advertisers._
Proof.: Recall that the equilibrium exists if the equation
\[\frac{B_{2}}{B_{1}}=\frac{\int_{0}^{r}xf(x)dx}{r\int_{r}^{\infty}f(x)dx}\]
has a solution \(r\) for the given value of \(\frac{B_{2}}{B_{1}}\). Note that the right-hand side \((\frac{\int_{0}^{r}xf(x)dx}{r\int_{r}^{\infty}f(x)dx})\) is positive for any \(r>0\), and it continuously grows to infinity as \(r\to\infty\). So, to make sure that every value of \(B_{2}/B_{1}\) is covered, we need to check whether the ratio tends to zero as \(r\to 0\). By L'Hôpital's rule, \(\lim_{z\to 0}\frac{zf(z)}{\int_{z}^{\infty}f(x)dx-zf(z)}=0\), which is as desired.
For tCPA constrained advertiser, the second ratio \(\frac{r\int_{0}^{r}f(x)dx}{\int_{r}^{\infty}xf(x)dx}\) always converges to \(0\), so the equilibrium in this case always exists.
### Sufficient Conditions for Incentive Compatibility
In this section we show that the lack of AIC arises in cases where advertisers' valuations have unusual properties. More precisely, the main result of the section characterizes sufficient conditions on the advertisers' valuations so that FPA is AIC when there are two advertisers in the auction.
For this goal, we recall the function \(f(z)=\frac{v_{2}(h^{-1}(z))}{h^{\prime}(h^{-1}(z))}\), where \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\), defined in Section 4.1. As shown in Lemma 4.4, the function \(f\) behaves as the value density of the queries that advertiser \(2\) wins and as the density of the queries that advertiser \(1\) wins.
**Lemma 4.9**.: _Consider that there are two advertisers, who are either both budget-advertisers or both tCPA-advertisers. Also, suppose that the auto-bidders with FPA use the optimal bidding strategy of Claim 4.3. Then a sufficient condition for FPA to be AIC is that \(f\) has a monotone hazard rate, i.e., \(\frac{f(r)}{\int_{r}^{\infty}f(x)dx}\) is non-decreasing in \(r\)._
Proof.: Following the proof of Theorem 4.1, if \(g(r)=\frac{\int_{0}^{r}\int_{x}^{\infty}f(y)dydx}{r\int_{r}^{\infty}f(x)dx}\) is non-decreasing in \(r\), then the equilibrium is AIC. The equivalent sufficient condition, obtained by imposing the inequality \(g^{\prime}(r)\geq 0\), is that for all \(r\geq 0\),
\[r\big{(}\int_{r}^{\infty}f(x)dx\big{)}^{2}\geq\big{(}\int_{0}^{r}\int_{x}^{ \infty}f(y)dydx\big{)}\big{(}\int_{r}^{\infty}f(x)dx-rf(r)\big{)}. \tag{10}\]
If \(\int_{r}^{\infty}f(x)dx\leq rf(r)\), then the above inequality obviously holds. So, we can assume that for some \(r>0\), \(\int_{r}^{\infty}f(x)dx>rf(r)\). Since \(\frac{f(z)}{\int_{z}^{\infty}f(x)dx}\) is non-decreasing in \(z\), we must have that for all \(r^{\prime}\leq r\), \(\frac{f(r^{\prime})}{\int_{r^{\prime}}^{\infty}f(x)dx}\leq\frac{f(r)}{\int_{r}^{\infty}f(x)dx}\leq\frac{1}{r}\leq\frac{1}{r^{\prime}}\). On the other hand, by taking the derivative of \(\frac{f(z)}{\int_{z}^{\infty}f(x)dx}\), we must have that \(f^{\prime}(z)\int_{z}^{\infty}f(x)dx+f(z)^{2}\geq 0\). By considering the two cases for the sign of \(f^{\prime}\), for \(z\leq r\) we must have \(f(z)\big{(}zf^{\prime}(z)+f(z)\big{)}\geq 0\), and hence \((zf(z))^{\prime}\geq 0\) for all \(z\leq r\). Therefore, \(zf(z)\) is non-decreasing for \(z\leq r\).
On the other hand,
\[\int_{0}^{r}\int_{x}^{\infty}f(y)dydx =\int_{0}^{r}\int_{x}^{r}f(y)dydx+\int_{0}^{r}\int_{r}^{\infty}f (y)dydx\] \[=r\int_{0}^{r}f(x)dx-\int_{0}^{r}\int_{0}^{x}f(y)dydx+r\int_{r}^{ \infty}f(x)dx\] \[=\int_{0}^{r}xf(x)dx+r\int_{r}^{\infty}f(x)dx,\]
where the second equality is by integration by parts. Then, by applying the monotonicity of \(zf(z)\) for \(z\leq r\), we have \(\int_{0}^{r}\int_{x}^{\infty}f(y)dydx\leq r^{2}f(r)+r\int_{r}^{\infty}f(x)dx\). So, to prove (10) it is enough to show that
\[\left(\int_{r}^{\infty}f(x)dx\right)^{2}\geq\left(rf(r)+\int_{r}^{\infty}f(x) dx\right)\left(\int_{r}^{\infty}f(x)dx-rf(r)\right),\]
which holds, since the right-hand side equals \(\big{(}\int_{r}^{\infty}f(x)dx\big{)}^{2}-(rf(r))^{2}\), which is at most the left-hand side.
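The hazard-rate condition of Lemma 4.9 is straightforward to test numerically for any candidate \(f\). The snippet below is our own illustration with a placeholder \(f\) (a Gamma-shaped curve, which does have a monotone hazard rate); it is not part of the proof.

```python
import numpy as np

# Numerical check (ours) of the monotone-hazard-rate condition of Lemma 4.9:
# r -> f(r) / \int_r^\infty f(x) dx should be non-decreasing on the grid.
z = np.linspace(0.0, 30.0, 300_000)
dz = z[1] - z[0]
f = z * np.exp(-z)                             # placeholder f with increasing hazard

tail = np.cumsum(f[::-1])[::-1] * dz           # \int_r^\infty f(x) dx on the grid
hazard = f / np.maximum(tail, 1e-300)
print(bool(np.all(np.diff(hazard) >= -1e-8)))  # True: this f passes the MHR test
```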
While the condition on \(f\) has intuitive properties when \(f\) is seen as a density, it has the unappealing property of being rather abstract in terms of conditions on the advertisers' valuations. The following result provides sufficient conditions on the value functions \(v_{1}\) and \(v_{2}\) that make \(f\) have a monotone hazard rate and, hence, make FPA AIC.
**Theorem 4.10**.: _Consider two advertisers that are either both budget-advertisers or both tCPA-advertisers. Assume that \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\) is an increasing concave function and that \(v_{2}\) is non-decreasing. Then, the equilibrium in FPA auto-bidding with the bidding strategy of Claim 4.3 is AIC._
Proof.: Note that when \(f\) is non-decreasing, it also has a monotone hazard rate. Now, when \(h\) is concave, \(\frac{1}{h^{\prime}}\) is a non-decreasing function, and since \(v_{2}\) is also non-decreasing, then \(f\) is also non-decreasing.
### FPA with uniform bidding
The previous section shows that when auto-bidders have full flexibility over the bidding strategy, FPA is not AIC. However, non-uniform bidding is not simple to implement, and auto-bidders may be constrained to use simpler uniform bidding policies (a.k.a. pacing bidding). In this context, the main result of the section is Theorem 4.2, which shows that when restricted to uniform bidding policies FPA is AIC. Note that here we assume a simple model where advertisers do not split campaigns; FPA with uniform bidding is AIC in this model, but it could bring up other incentives for advertisers when implemented in practice.
**Definition 4.11** (Uniform bidding equilibrium).: _A uniform bidding equilibrium for the auto-bidders' subgame corresponds to bid multipliers \(\mu_{1},\ldots,\mu_{N}\) such that every auto-bidder \(a\) chooses the uniform bidding policy \(\mu_{a}\) that maximizes Problem (1) when restricted to uniform bidding policies, with the requirement that if advertiser \(a\)'s constraint (2) is not tight then \(\mu_{a}\) takes its maximum possible value.14_
Footnote 14: When valuations are strictly positive for all queries \(q\in[0,1]\), we can easily show that bid multipliers have to be bounded in equilibrium. When this is not the case, we set a cap sufficiently high to avoid bid multipliers going to infinity.
The proof of Theorem 4.2 is based on the main results of Conitzer et al. (2022a). The authors prove that the uniform-bidding equilibrium is unique and that, in equilibrium, the multiplier of each advertiser is the maximum multiplier over all feasible uniform bidding strategies. Their result is for budget-constrained advertisers, and we extend it to include tCPA-constrained advertisers. The proof is deferred to Appendix B.
**Lemma 4.12** (Extension of Theorem 1 in Conitzer et al. (2022a)).: _Given an instance of auto-bidding with general constraints as in (2), there is a unique uniform bidding equilibrium, and the bid multipliers of all advertisers are maximal among all feasible uniform bidding profiles._
Now, we are ready to prove Theorem 4.2.
Proof of Theorem 4.2.: Assume that advertiser 1 increases its budget or its target CPA. Then the original uniform bidding profile is still feasible for all advertisers. Further, by Lemma 4.12, the equilibrium pacing of every advertiser is maximal among all feasible pacings, so the pacing of every advertiser either increases or remains the same. But the constraints of all advertisers other than advertiser 1 are either binding or their multipliers have already attained their maximum value, by the definition of the pacing equilibrium. Therefore, the set of queries they end up with is a subset of their original ones, since the price of every query either increases or remains the same. So, only advertiser 1 can win more queries.
**Remark 4.13**.: _Conitzer et al. (2022a) show monotonicity properties of budgets in FPA with uniform bidding equilibrium for the revenue and welfare. Instead, in our work we focus on monotonicity for each advertiser._
## 5 Truthful Auctions
This section studies auto-bidding incentive compatibility for the case where the per-query auction is a truthful auction.
A truthful auction is an auction where the optimal bidding strategy for a profit-maximizing agent is to bid its value. An important example of a truthful auction is the Second-Price Auction. As we showed in the three-query example above, SPA is not AIC. In this section, we show that the previous example generalizes, in our continuous-query model, to any (randomized) truthful auction, so long as the auction is scalar invariant and symmetric (see Assumption 5.1 below for details). As part of our proof technique, we obtain an auction-equivalence result which is interesting on its own: in the continuous-query model, SPA and FPA have the same outcome.15
Footnote 15: It is well-known that in the discrete-query model, FPA and SPA are not auction equivalent in the presence of auto-bidders.
For the remainder of the section, we assume that all truthful auctions satisfy the following properties.
**Assumption 5.1**.: _Let \((x_{a}(\mathbf{b}))_{a\in A}\) be the allocation rule in a truthful auction given bids \(\mathbf{b}=(b_{a})_{a\in A}\). We assume that the allocation rule satisfies the following properties._
1. _The auction always allocates:_ \(\sum_{a\in A}x_{a}(\mathbf{b})=1\)__
2. _Scalar invariance: For any constant_ \(c>0\) _and any advertiser_ \(a\in A\)_,_ \(x_{a}(\mathbf{b})=x_{a}(c\mathbf{b})\)_._
3. _Symmetry: For any pair of advertisers_ \(a,a^{\prime}\in A\)_, any bids_ \(b,b^{\prime}\)_, and any profile of the remaining bids_ \(\mathbf{b}_{-\{a,a^{\prime}\}}=(b_{a^{\prime\prime}})_{a^{\prime\prime}\in A\setminus\{a,a^{\prime}\}}\)_, we have that_ \[x_{a}(b_{a}=b,b_{a^{\prime}}=b^{\prime},\mathbf{b}_{-\{a,a^{\prime}\}})=x_{a^{\prime}}(b_{a}=b^{\prime},b_{a^{\prime}}=b,\mathbf{b}_{-\{a,a^{\prime}\}}).\]
**Remark 5.2**.: _Observe that SPA satisfies Assumption 5.1._
From the seminal result of Myerson (1981) we obtain a tractable characterization of truthful auctions which we use in our proof.
**Lemma 5.3** (Truthful auctions (Myerson, 1981)).: _Let \((x_{a}(\mathbf{b}),p_{a}(\mathbf{b}))_{a\in A}\) denote the allocation and pricing rule for an auction given bids \(\mathbf{b}=(b_{a})_{a\in A}\). The auction rule is truthful if and only if_
1. _Allocation rule is non-decreasing on the bid: For each bidder_ \(a\in A\) _and any_ \(b_{a}^{\prime}\geq b_{a}\)_, we have that_ \[x_{a}(b_{a}^{\prime},\mathbf{b}_{-a})\geq x_{a}(b_{a},\mathbf{b}_{-a}).\]
2. _Pricing follows Myerson's formulae:_ \[p_{a}(\mathbf{b})=b_{a}\cdot x_{a}(\mathbf{b})-\int_{0}^{b_{a}}x_{a}(z, \mathbf{b}_{-a})dz.\]
A second appealing property of truthful auctions is that the optimal bidding strategy for auto-bidders is simpler: in the discrete-query model, a uniform bidding strategy is almost optimal and can differ from the optimum by at most the value of two queries (Aggarwal et al., 2019). We revisit this result in our continuous-query model and show that a uniform bidding policy is optimal for truthful auctions.
**Claim 5.4**.: _In the continuous-query model, if the per-query auction is truthful then using a uniform bidding is an optimal strategy for each auto-bidder._
Proof.: We use Theorem 1 of Aggarwal et al. (2019). Pick some small \(\delta>0\) and divide the interval \([0,1]\) into subintervals of length \(\delta\). Let each subinterval \(I\) be a discrete query with value \(v_{j}(I)=\int_{I}v_{j}(q)dq\). Then Theorem 1 of Aggarwal et al. (2019) implies that uniform bidding differs from the optimum by at most the value of two queries. So, the difference from the optimum is bounded by \(2\max_{j}\max_{|I|\leq\delta}v_{j}(I)\). Now, since the valuation functions are atomless (i.e., the value of a single query is of order \(dq\)), by letting \(\delta\) go to \(0\), the error of uniform bidding in the continuous case also goes to zero.
### SPA in the Continuous-Query Model
We generalize the discrete example of the second-price auction in Theorem 2.1 to the continuous-query model, showing that SPA is not AIC. The key step consists of showing that in the continuous-query model there is an auction-equivalence result between first- and second-price auctions.
**Theorem 5.5**.: _[Auction Equivalence Result] Suppose that the auto-bidders use a uniform bidding strategy for SPA and, similarly, use the bidding strategy defined in Claim 4.3 for FPA. Then, in any subgame equilibrium, the outcome of the auction (allocation and pricing) under SPA is the same as under FPA._
This result immediately implies that all the results for FPA in Section 4 hold for SPA as well.
**Theorem 5.6**.: _Suppose that there are at least two budget-advertisers or two tCPA-advertisers, then even for the continuous-query model SPA is not AIC._
Similarly to the FPA case, we can characterize the equilibrium for the two-advertiser case and derive sufficient conditions on the advertisers' valuation functions so that SPA is AIC.
**Theorem 5.7**.: _Given two advertisers, let \(\mu_{1}\) and \(\mu_{2}\) be the bidding multipliers in equilibrium for the subgame of the auto-bidders. Also assume that \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\) is increasing. Then_
1. _If the advertisers are budget-constrained with budgets_ \(B_{1}\) _and_ \(B_{2}\)_, then_ \(\mu_{1}=\frac{B_{2}}{E[z\mathbbm{1}(z\leq r)]}\) _and_ \(\mu_{2}=\mu_{1}r\)_, where_ \(r\) _is the solution of the following implicit equation,_ \[\frac{rE[\mathbbm{1}(z\geq r)]}{E[z\mathbbm{1}(z\leq r)]}=\frac{B_{1}}{B_{2}}.\] _Here,_ \(E[\cdot]\) _is defined as_ \(E[P(z)]=\int_{0}^{\infty}P(z)f(z)dz,\) _where_ \(f(z)=\frac{v_{2}(h^{-1}(z))}{h^{\prime}(h^{-1}(z))}\) _wherever_ \(h^{\prime}\) _is defined, and it is zero otherwise._
2. _If the advertisers are tCPA-constrained with targets_ \(T_{1}\) _and_ \(T_{2}\)_, we have_ \(\mu_{1}=\frac{T_{1}E[1(z\leq r)]}{E[1(z\geq r)]}\) _and_ \(\mu_{2}=\mu_{1}r\)_, where_ \(r\) _is the answer of the following implicit function,_ \[\frac{rE[\mathbbm{1}(z\geq r)]}{E[z\mathbbm{1}(z\geq r)]}\frac{E[\mathbbm{1} (z\leq r)]}{E[z\mathbbm{1}(z\leq r)]}=\frac{T_{1}}{T_{2}}.\]
3. _If, further,_ \(v_{2}\) _is non-decreasing in_ \(q\)_,_ \(h\) _is concave, and the advertisers are either both budget-constrained or both tCPA-constrained, then SPA is AIC._
We now demonstrate the auction equivalence between FPA and SPA.
Proof of Theorem 5.5.: Note that, by Claim 5.4, the optimal strategy in a second-price auction is uniform bidding with respect to the true value of each query. Also, Claim 4.3 implies that in the continuous model the cost incurred by each advertiser in a first-price auction likewise depends on the pacing multiplier of the other advertiser: the winner pays the competing bid. This immediately suggests the equivalence between the optimal bidding strategies under first- and second-price auctions. So, the optimal strategy for both auctions is the same, and therefore the resulting allocation and pricing are also the same. Hence, the same allocation and pricing form a pure equilibrium under both auctions.
### Truthful Auctions Beyond Second-Price
We now present the main result of the section. We show that a general truthful auction (with possibly random allocation) is not AIC.
**Theorem 5.8**.: _Consider a truthful auction \((\mathbf{x},\mathbf{p})\) satisfying Assumption 5.1. If there are at least two budget-advertisers or two tCPA-advertisers, then the truthful auction is not AIC._
The remainder of the section gives an overview of the proof of this theorem. Similar to the FPA and SPA cases, we start by characterizing the equilibrium in the continuous model when there are two advertisers in the game. The proof relies on the observation that, for auctions satisfying Assumption 5.1, the allocation probability is a function of the ratio of the bids. So, again, as with FPA and SPA, finding the equilibrium reduces to finding the ratio of the bidding multipliers. Then, to finish the proof of Theorem 5.8, instead of providing an explicit example where auto-bidding is non-AIC, we show that the conditions the auction's allocation probability would need to satisfy are impossible to meet.
The following theorem gives an implicit equation for the best response. We omit the proofs of the intermediate steps and defer them to Appendix C.
**Theorem 5.9**.: _Consider a truthful auction \((\mathbf{x},\mathbf{p})\) satisfying Assumption 5.1 and assume that there are either two budget-advertisers or two tCPA-advertisers. Let \(\mu_{1}\) and \(\mu_{2}\) be the bidding multipliers used by the auto-bidders in the subgame equilibrium. Further, assume that \(h(q)=\frac{v_{1}(q)}{v_{2}(q)}\) is increasing. Then_
1. _If the advertisers are budget-constrained with budget_ \(B_{1}\) _and_ \(B_{2}\)_, then_ \(\mu_{1}=\frac{B_{1}}{E[p_{1}(rz,1)]}\) _and_ \(\mu_{2}=r\mu_{1}\)_, where_ \(r\) _is the answer of the following implicit function,_ \[\frac{E[rp_{1}(\frac{z}{r},1)]}{E[zp_{1}(\frac{r}{z},1)]}=\frac{B_{1}}{B_{2}}.\] _Here,_ \(E[.]\) _is defined as_ \(E[P(z)]=\int_{0}^{\infty}P(z)f(z)dz,\) _where_ \(f(z)=\frac{v_{2}(h^{-1}(z))}{h^{\prime}(h^{-1}(z))}\) _wherever_ \(h^{\prime}\) _is defined, and it is zero otherwise._
2. _If the advertisers are tCPA-constrained with targets_ \(T_{1}\) _and_ \(T_{2}\)_, we have_ \(\mu_{1}=\frac{T_{1}E[zg(z/r)]}{E[rp_{1}(z/r)]}\) _and_ \(\mu_{2}=\mu_{1}r\)_, where_ \(r\) _is the answer of the following implicit function,_ \[\frac{E[x_{1}(\frac{r}{z},1)]}{E[zx_{1}(\frac{z}{r},1)]}\frac{E[rp_{1}(\frac{ z}{r},1)]}{E[zp_{1}(\frac{r}{z},1)]}=\frac{T_{1}}{T_{2}}.\]
Because the allocation probability \(x_{1}\) is a non-decreasing function, we can derive a result similar to the FPA case and show that if an instance is not AIC for budget-advertisers, then it is also not AIC for tCPA-advertisers.
**Proposition 5.10**.: _If for the two budget-constrained advertisers case the truthful auction is not AIC, then for the tCPA-constrained advertisers case the same auction is also not AIC._
Using the previous results, we are in a position to tackle the main theorem.
Proof of Theorem 5.8.: We prove Theorem 5.8 for budget-constrained advertisers, since Proposition 5.10 then yields it for tCPA-constrained advertisers. We use the implicit function theorem to find conditions on \(p_{1}\) and \(f\) that imply monotonicity in \(r\). Let
\[H(x,r)=\frac{\int_{0}^{\infty}rf(z)p_{1}(z/r,1)dz}{\int_{0}^{\infty}f(z)zp_{1} (r/z,1)dz}-x.\]
Then, when advertiser 1 increases its budget, the corresponding variable \(x\) increases. So, if we want to check whether \(r\) is a non-decreasing function of \(x\), we need \(\frac{dr}{dx}\) to be non-negative. By the implicit function theorem,
\[\frac{dr}{dx}=-\frac{\frac{\partial H}{\partial x}}{\frac{\partial H}{\partial r}}=\frac{1}{\frac{\partial H}{\partial r}}.\]
So, assume to the contrary that \(r\) is always non-decreasing in \(x\); then \(\frac{\partial H(x,r)}{\partial r}\geq 0\). Define \(p(x)=p_{1}(x,1)\). Then we have the following
\[E[\frac{d}{dr}rp(z/r)]E[zp(r/z)]\geq E[rp(z/r)]E[\frac{d}{dr}\Big{(}zp(r/z) \Big{)}].\]
Then
\[\frac{\frac{d}{dr}E[rp(z/r)]}{E[rp(z/r)]}\geq\frac{\frac{d}{dr}E[zp(z/r)]}{E[ zp(z/r)]}\]
By integrating both sides, we have that for any choice of \(f\),
\[rE[p(z/r)]\geq E[zp(r/z)].\]
Since the above inequality holds for any choice of \(v_{1}\) and \(v_{2}\), we claim that the following must hold almost everywhere
\[p(b)\geq bp(1/b). \tag{11}\]
To see this, assume to the contrary that there exists a measurable set \(B\) on which (11) does not hold. Let \(qv_{2}(q)=v_{1}(q)\); then \(f(z)=v_{2}(z)\) can be any measurable function. So, we can define \(f\) to have zero value everywhere except on \(B\), and weight 1 over \(B\), to get a contradiction.
By substituting \(y=1/b\) in (11), we get \(p(1/b)\geq p(b)/b\) almost everywhere. Therefore, almost everywhere \(p(b)=bp(1/b)\). By differentiating, we have \(p^{\prime}(b)=p(1/b)-p^{\prime}(1/b)/b\). On the other hand, as we show in Appendix C, for any truthful auction satisfying Assumption 5.1, \(p^{\prime}(b)=p^{\prime}(1/b)\). Therefore, \(p(b)=p^{\prime}(b)(b+1)\). Solving for \(p\), we get that the only possible AIC pricing must be of the form \(p(b)=\alpha(b+1)\) for some \(\alpha>0\).
Next, we show that there is no proper allocation probability satisfying Assumption 5.1 that would result in such a pricing function \(p\). It is not hard to see that, by Myerson's pricing formula, \(\frac{dx_{1}(b,1)}{db}=\frac{p^{\prime}(b)}{b}.\) Therefore, we must have \(x_{1}^{\prime}(b,1)=\alpha/b\), so \(x_{1}(b,1)=c\log(b)+d\) for some constants \(c>0\) and \(d\). But \(x_{1}\) cannot be a valid allocation rule, since it takes negative values for small enough \(b\).
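The last two steps of the argument are easy to verify symbolically. The following snippet is our own illustration (not part of the paper): it checks that \(p(b)=\alpha(b+1)\) satisfies \(p(b)=p^{\prime}(b)(b+1)\) and that Myerson's identity \(x_{1}^{\prime}(b,1)=p^{\prime}(b)/b\) then forces a logarithmic allocation, which diverges to \(-\infty\) as \(b\to 0^{+}\) and therefore cannot be a valid allocation probability.

```python
import sympy as sp

# Symbolic check (ours) of the final steps of the proof of Theorem 5.8.
b, alpha, d = sp.symbols('b alpha d', positive=True)
p = alpha * (b + 1)

print(sp.simplify(p - sp.diff(p, b) * (b + 1)))   # 0: p solves p = p'(b)*(b+1)
x1 = sp.integrate(sp.diff(p, b) / b, b) + d       # Myerson: x1'(b,1) = p'(b)/b
print(x1, sp.limit(x1, b, 0, '+'))                # alpha*log(b) + d  ->  -oo
```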
|
2307.00080 | Inter-case Predictive Process Monitoring: A candidate for Quantum
Machine Learning? | Regardless of the domain, forecasting the future behaviour of a running
process instance is a question of interest for decision makers, especially when
multiple instances interact. Fostered by the recent advances in machine
learning research, several methods have been proposed to predict the next
activity, outcome or remaining time of a process automatically. Still, building
a model with high predictive power requires both - intrinsic knowledge of how
to extract meaningful features from the event log data and a model that
captures complex patterns in data. This work builds upon the recent progress in
inter-case Predictive Process Monitoring (PPM) and comprehensively benchmarks
the impact of inter-case features on prediction accuracy. Moreover, it includes
quantum machine learning models, which are expected to provide an advantage
over classical models with a scaling amount of feature dimensions. The
evaluation on real-world training data from the BPI challenge shows that the
inter-case features provide a significant boost by more than four percent in
accuracy and quantum algorithms are indeed competitive in a handful of feature
configurations. Yet, as quantum hardware is still in its early stages of
development, this paper critically discusses these findings in the light of
runtime, noise and the risk to overfit on the training data. Finally, the
implementation of an open-source plugin demonstrates the technical feasibility
to connect a state-of-the-art workflow engine such as Camunda to an IBM quantum
computing cloud service. | Stefan Hill, David Fitzek, Patrick Delfmann, Carl Corea | 2023-06-30T18:33:45Z | http://arxiv.org/abs/2307.00080v1 | # Inter-case Predictive Process Monitoring: A candidate for Quantum Machine Learning?
###### Abstract
Regardless of the domain, forecasting the future behaviour of a running process instance is a question of interest for decision makers, especially when multiple instances interact. Fostered by the recent advances in machine learning research, several methods have been proposed to predict the next activity, outcome or remaining time of a process automatically. Still, building a model with high predictive power requires both - intrinsic knowledge of how to extract meaningful features from the event log data and a model that captures complex patterns in data. This work builds upon the recent progress in inter-case Predictive Process Monitoring (PPM) and comprehensively benchmarks the impact of inter-case features on prediction accuracy. Moreover, it includes quantum machine learning models, which are expected to provide an advantage over classical models with a scaling amount of feature dimensions. The evaluation on real-world training data from the BPI challenge shows that the inter-case features provide a significant boost by more than 4% in accuracy and quantum algorithms are indeed competitive in a handful of feature configurations. Yet, as quantum hardware is still in its early stages of development, this paper critically discusses these findings in the light of runtime, noise and the risk to overfit on the training data. Finally, the implementation of an open-source plugin demonstrates the technical feasibility to connect a state-of-the-art workflow engine such as Camunda to an IBM quantum computing cloud service.
Keywords: Predictive Process Monitoring · Quantum Machine Learning · Inter-case · Design Science Research
## 1 Introduction
Today, enterprises, universities or public institutions track large amounts of time-stamped data stored in event logs. Propelled by the ongoing advances in machine learning (ML) research and based on the domain knowledge of business process management (BPM), Predictive Process Monitoring (PPM) is a collection of techniques that aim to predict the future behaviour of business processes using historical data as features [22]. As business processes in applications such
as logistics, airport operation or hospital management, are based on complex interactions, the outcome of a single process instance also depends on other concurrently executing instances and their states [6]. Therefore, several attempts have been made to incorporate the inter-case dependencies [5, 13, 25, 33].
Based on theoretical computer science and quantum physics, the idea of using quantum systems for computation has been developed since 1982, when Feynman published the idea of 'simulating quantum physics with quantum physics' [10]. What Feynman means is to use real quantum systems such as electrons or photons to perform calculations. Since then, it has been shown that quantum computers can solve combinatorial problems in complexity classes beyond the classical ones. The most prominent example is Shor's algorithm, which efficiently derives the prime factorisation for large composite integers exponentially faster than classical implementations [35]. Apart from this algorithm, which has the potential to revolutionise cryptography, applications of quantum algorithms range from chemistry [19, 42] to finance [7, 8, 24] to industrial optimisation [36]. In short, the advantage of using a quantum computer lies in its higher dimensional vector space to perform calculations.
The novelty presented in this paper is to solve the inter-case PPM problem with a variational quantum algorithm and the implicit quantum kernel estimation on a support vector machine [14, 30], which, to the best of our knowledge, has not been tested before. In our evaluation, we can empirically show that the quantum kernels were able to inherently capture the PPM-specific patterns between cases, thus significantly outperforming classical kernel-based approaches in terms of accuracy. Since training time is an important decision factor in production scenarios, we also tested stratification sampling and were able to show that training time can be reduced by 85% while maintaining the same accuracy by considering only 50% of the features. All prototypical implementations can be found in an open-source library4 and follow standard industrial library interfaces, so cross-validation and model selection methods are supported and the classifiers work with all common open-source Python-based frameworks.
Footnote 4: [https://gitlab.com/stefanhill/qppm](https://gitlab.com/stefanhill/qppm)
Even though recent publications might draw an overly optimistic picture of the ongoing hype around quantum technology and the European Union equipped the initiative Quantum Flagship with an enormous budget of 1 billion euros 5, we would like to keep expectations at a realistic level. Quantum systems in their current dimensions are called noisy intermediate-scale quantum (NISQ) devices and are still in their early stages of development [26]. To give an example, [1] claim to be able to solve a problem in 200 seconds on Google's 53-qubit processor Sycamore that would take at least 10,000 years to solve on a classical computer, while a Chinese research group at the University in Hefei reports first promising results on a photon-based QPU with 76 qubits [44]. Yet, the most advanced systems like the IBM Eagle QPU consist of 127 noisy qubits, and a 433-qubit processor was released in October 2022 6, which is
more than enough to run the algorithms that we are using in our experiments. Thus, to demonstrate the technical feasibility of running the algorithms on IBM hardware, we additionally implemented an open-source plugin that connects the workflow engine Camunda to the IBMQ cloud service. By establishing this link between emergent scientific forefront prototypes and solution-oriented mature business tools in an industrial-grade environment, we argue that the combination of quantum computing and BPM serves as an ideal draught horse for fundamental research in both areas.
The paper is structured as follows. First, we introduce the inter-case PPM problem and give a brief introduction to quantum computing. Second, we explain the pipeline for our experiments, which includes both the feature extraction from the event log and the quantum kernel classifier. Next, we present the improvements in accuracy and runtime for our selection of classifiers. In light of our findings, we discuss how they contribute to the current landscape of research in quantum kernel methods. We conclude by showing directions for research at the intersection of quantum computing and BPM and present a demonstration of the prototype.
## 2 Background
In this section, we will briefly explain the terminology of the inter-case PPM problem followed by an introduction to quantum computing. Furthermore, we will sketch how quantum circuits are plugged into common ML models, to build the so-called hybrid quantum-classical ML models for our experiments.
Figure 1: Example process with running instance (orange dot). An exemplary predictive process monitoring problem is the prediction of the next task (blue) based on historical events (orange arrows) and case attributes (orange box).
### Predictive Process Monitoring
Processes can be extracted and stored in a so-called _event log_. Such an event log consists of a set of _traces_ that refer to a business _case_ which consists of an arbitrary number of distinct _events_. In practice, machine-readable formats such as the eXtensible Event Stream (XES) XML-based standard are used7.
Footnote 7: [http://www.xes-standard.org/](http://www.xes-standard.org/)
Consider for example the process model of a hospital as in Figure 1. In that scenario, the patient would be assigned a case and takes part in several activities which will be logged as _trace_\(\sigma_{x}\) of events \(e\in\mathcal{E}\), where \(\mathcal{E}\) is the universe of all events. The first event of a new patient could be lunch. This event would carry some information like the _case id_, the _activity_ with its corresponding _activity name_ (standard concept:name) and the _timestamp_. These three main properties of the event are usually called the _control flow_. An exhaustive formal description of the event log can be found in the appendix and in [37]. This allows us to formulate the problem of predictive process monitoring [33].
Definition 1 (Predictive Process Monitoring (PPM)): Given a (possibly running) case \(\sigma_{x}\), the predictive process monitoring problem (PPM) is to find a function \(f:\mathcal{E}^{*}\rightarrow\mathcal{Y}\) that accurately maps \(\sigma_{x}\) to the corresponding label \(y(\sigma_{x})\).
The label refers to anything that is called a prediction type, e.g. the next activity or the remaining time of a business case until completion [6]. In that manner, the PPM problem becomes a _supervised learning task_ (SLT) [33].
Casting the PPM problem into a SLT requires a function \(g\) which transforms sub-traces (also called prefixes) from an event log to features for the ML algorithm. Moreover, a prerequisite to solve the SLT is that observations are independent identically distributed (i.i.d.) feature-outcome pairs \((x_{i},y_{i})\). However, the event data in a prefix log \(L^{*}\) is highly correlated for prefixes of the same case (_intra_-case dependency) and for every two prefixes that run at the same time or share limited resources in the same environment (_inter_-case dependency). [33] define the Sequence-To-feature Encoding Problem (STEP) as follows.
Definition 2 (Sequence-To-feature Encoding Problem (STEP)): Let \(\mathcal{L}^{*}\) be an extended event log that contains all prefixes of the sequences in \(L\). Solving the STEP problem is to find a function \(g:\mathcal{E}^{*}\times\mathcal{Y}\times 2^{L}\rightarrow\mathcal{X}\times\mathcal{Y}\) such that the result of its operation, \(\{g(\sigma_{i},y(\sigma_{i}),L^{*})\}\subset\mathcal{X}\times\mathcal{Y}\), is an i.i.d. sample of feature-outcome pairs from \(\mathcal{X}\times\mathcal{Y}\).
Recently, a number of STEP encodings for the function \(g\) have prevailed [38]. The simplest ones are the _static_ and the _last state_ encoding, which only take the attributes of an instance and the last state into account. These two encodings do not include historical information about the business case. _Aggregation-based_ encodings count the number of occurrences of an activity or indicate by a boolean whether the activity was included in the past execution of an instance [21]. While the aggregation-based encodings include the number of past events, they still do not contain the development of variables over the history of the case. This is
where the _index-based_ encoding comes into play. It appends all states into a feature vector and is thus lossless by design. To ensure that the feature vectors are of the same length, missing values are padded with a default value.
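As a rough illustration of the index-based encoding with padding (our own sketch; the activity identifiers and the four-step window are illustrative, loosely matching the index_bsd_4 baseline used later):

```python
def index_based_encoding(prefix, max_len=4, pad_value=0):
    """Keep the activity ids of the last `max_len` events in order and
    left-pad shorter prefixes with a default value so that all feature
    vectors have the same length."""
    window = list(prefix)[-max_len:]
    return [pad_value] * (max_len - len(window)) + window

# A running case with three observed activities (ids are illustrative).
print(index_based_encoding([3, 7, 2]))        # -> [0, 3, 7, 2]
print(index_based_encoding([5, 1, 4, 9, 6]))  # -> [1, 4, 9, 6]
```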
The challenge in PPM is to combine sequential and contextual case information. Hence, the encoding must reflect the order of events, as well as dynamic and static case attributes assigned to the case. As cases that are executed at the same time or share resources are highly inter-related, it is important to encode this information as well. Some approaches incorporate the inter-case dependencies into the encoding by appending aggregated features [13, 33, 34], or capture the temporal dynamics by grouping instances in batches [20, 25].
We build our features based on the central concept introduced by [13], which is to span a time window that moves alongside the current event to be encoded. All events inside the window are called _peer events_ and the respective cases are defined to be _peer cases_ of the current running instance. Using this notion of peer cases, we delineate the set of inter-case features in Table 1 based on suggestions by [13] and [34]. In contrast to the intra-case features such as the aggregation or the index-based encoding, these inter-case features do not encode information of a single sub-trace. They rather aggregate metrics that depend on the concurrently executed process instances. Now, an inter-case feature is defined as a STEP solution where the feature space \(\mathcal{X}\) is replaced by a bi-dimensional space \(\mathcal{X}_{1}\times\mathcal{X}_{2}\). Here, \(\mathcal{X}_{1}\) represents the intra-case part of the solution, i.e. the event sequence, and \(\mathcal{X}_{2}\) consists of the inter-case features, which are aggregated relative to the last event. An example of such a feature vector could be the last four activities of the patient in the hospital plus the total number of doctors (resources) and the total number of other patients in the hospital during the last two hours. Whether the patient proceeds with an operation or has to be scheduled for lunch depends on their medical record as well as on the staffing of the station and the load. A more detailed introduction to how inter-case features are assembled can be found in the supplementary materials 8.
| Feature | Description |
| --- | --- |
| **peer_cases** | Counts the number of concurrently executed cases in the time window. |
| **peer_act** | Counts the total number of triggered activities of all cases during the time window. |
| **res_count** | Counts the number of resources working in the time window. |
| **avg_delay** | Calculates an average delay from the activity transitions in the time window relative to the average transition times of the training log. |
| **freq_act** | Returns the most frequent activity in the time window. |
| **top_res** | Returns the most frequently used resource in the time window. |
| **batch** | Returns an indicator of how prone the upcoming possible events are to end up in a batch. A batch forms when there is a task, such as taxes, that is only performed once in a cycle, so all instances have to wait until a specific date irrespective of their arrival time at their current activity. |

Table 1: Inter-case features of our experiments.
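To make the peer-case window behind these features concrete, the sketch below (our own illustration, assuming a pandas DataFrame with the standard XES columns case:concept:name, org:resource and time:timestamp) computes peer_cases, peer_act and res_count for a single event:

```python
import pandas as pd

def peer_window_features(log: pd.DataFrame, event_time: pd.Timestamp,
                         window: pd.Timedelta) -> dict:
    """Count concurrently running cases, triggered activities and active
    resources inside the time window that ends at the current event."""
    peers = log[(log["time:timestamp"] >= event_time - window) &
                (log["time:timestamp"] <= event_time)]
    return {
        "peer_cases": peers["case:concept:name"].nunique(),
        "peer_act": len(peers),
        "res_count": peers["org:resource"].nunique(),
    }

# Example: features for an event at 12:00 with a 2-hour window.
log = pd.DataFrame({
    "case:concept:name": ["c1", "c2", "c2", "c3"],
    "org:resource": ["r1", "r1", "r2", "r3"],
    "time:timestamp": pd.to_datetime(
        ["2016-02-15 10:30", "2016-02-15 11:00",
         "2016-02-15 11:45", "2016-02-15 09:00"]),
})
print(peer_window_features(log, pd.Timestamp("2016-02-15 12:00"),
                           pd.Timedelta(hours=2)))
```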
### Quantum Computing
Quantum computing is a fundamentally different approach of computation, which takes advantage of manipulating properties of quantum mechanical systems. The constitutional difference between classical and quantum computation is that the quantum bit is represented as a two-dimensional tensor whereas a classical bit lives in one dimension only. This circumstance implies an exponential increase of the state space. Instead of using high and low voltage on a transistor to represent 0 and 1 in a classical bit, quantum bits are represented by utilizing quantum properties like the spin states of an electron or the polarization states of a photon [15]. Mathematically the state of these two-level quantum systems with a ground state \(|0\rangle\) and an excited state \(|1\rangle\) is denoted by the state vector:
\[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle=\binom{\alpha}{\beta} \tag{1}\]
where \(\alpha\) and \(\beta\) are complex numbers satisfying \(|\alpha|^{2}+|\beta|^{2}=1\). In this matter, unlike a classical bit, the qubit has infinitely many possible states - all superpositions of \(|0\rangle\) and \(|1\rangle\). A measurement on the qubit state using basis \(|0\rangle\) and \(|1\rangle\) will return 0 with probability \(|\alpha|^{2}\) and 1 with probability \(|\beta|^{2}\)[9]. Here, the Bloch sphere is used as a tool to represent the state of a qubit. An arbitrary superposition of a qubit \(|\psi\rangle\) can be written as \(|\psi\rangle=\cos\frac{\phi}{2}|0\rangle+e^{i\theta}\sin\frac{\phi}{2}|1\rangle\).
If there are \(N\) qubits in a system, the total state of that system can be a superposition of \(2^{N}\) different states: \(|000...00\rangle,|100...00\rangle,...|111...11\rangle\). A classical system can be in one of these \(2^{N}\) states but not in a superposition of several of them. The superposition of a qubit can be enforced by applying a Hadamard gate. Like classical gates such as an AND or an OR gate, quantum gates manipulate the state of one or multiple qubits. A brief introduction to quantum gates can be found in the supplementary materials 1.
Footnote 1: Supplementary material B: [https://bit.ly/qppm-supplementary](https://bit.ly/qppm-supplementary)
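As a small numerical illustration of the state vector in Eq. (1) and the superposition induced by a Hadamard gate (a plain NumPy sketch, independent of any quantum framework):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                     # ground state |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

psi = H @ ket0                                  # (|0> + |1>) / sqrt(2)
alpha, beta = psi
probs = np.abs(psi) ** 2                        # measurement probabilities
print(probs)                                    # -> [0.5 0.5]
assert np.isclose(np.abs(alpha) ** 2 + np.abs(beta) ** 2, 1.0)
```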
### Quantum Kernel Methods
The idea behind quantum kernel methods is to take advantage of the higher dimensional Hilbert space when calculating the inner products for the kernel trick [28, 29]. It can be shown that the embedding of a data vector \(\mathbf{x}\) into a quantum state \(|\phi(\mathbf{x})\rangle\) fulfils the definition of a feature map \(\phi:\mathcal{X}\rightarrow\mathcal{F}\) with \(\mathcal{F}\) being a quantum feature space [31]. Instead of using the notion of an inner product, a linear model in the quantum reproducing kernel Hilbert space (RKHS) is defined in Dirac notation:
\[f(x;w)=\langle w|\phi(x)\rangle \tag{2}\]
with \(|w\rangle\in\mathcal{F}\) being a weight vector living in the feature space. The concept of interpreting \(x\rightarrow|\phi(x)\rangle\) as feature map opens up all possibilities of classical kernel methods for the quantum world. The backbone is a _feature-embedding_ circuit \(U_{\phi}(x)\) that acts on a ground state \(|0...0\rangle\) of a Hilbert space \(\mathcal{F}\) as \(U_{\phi}(x)|0...0\rangle\)[31].
The real benefit of a quantum kernel is hidden in non-Gaussian feature maps that are not easy to simulate classically. [14] propose such an \(n\)-qubit feature map based on Hadamard and z-gates, called the _zz feature map_:
\[\mathcal{U}_{\phi}(\mathbf{x})=U_{\phi}(\mathbf{x})H^{\otimes n}U_{\phi}( \mathbf{x})H^{\otimes n} \tag{3}\]
with
\[U_{\phi}(\mathbf{x})=exp\left(i\underset{S\subseteq[n]}{\sum}\phi_{S}( \mathbf{x})\underset{i\in S}{\prod}Z_{i}\right) \tag{4}\]
To gain a better understanding of what the feature map looks like, Figure 3 visualizes the underlying circuit. Qubits are prepared by the Hadamard and rotational z-gates and entangled in the section with zz-gates. The circuit is also implemented in the evaluation framework. Additionally, we use an adapted variant of the circuit that encodes on the y-axis instead of the z-axis.
Figure 2: Bloch sphere to visualise the states of a qubit. States \(\hat{z}=|0\rangle\) and \(-\hat{z}=|1\rangle\) are the most common measurement bases where \(\hat{z}\) is the ground state with zero energy [12].

Figure 3: Feature map based on rotations along the z-axis and Hadamard gates as proposed by [14] on an exemplary circuit with 3 qubits.

While implementing the feature map is only the first step to embed a data vector \(\mathbf{x}\) into the circuit, the second one is to integrate the kernel into a classification model. Here, the two most prominent approaches were developed independently by [31] and [14]. Both articles show that it is feasible to either implement the kernel explicitly as _variational quantum classifier_ (VQC) or to build a hybrid model with a circuit called _quantum kernel estimator_ (QKE) which is plugged into a classical SVM and calculates the kernel matrix implicitly [39].
In the implicit approach, the quantum computer is used to estimate the inner products \(\kappa(x,x^{\prime})=\langle\phi(x),\phi(x^{\prime})\rangle\) for a kernel-dependent model. Thus, the quantum computer is required to implement the state preparation routine \(\mathcal{U}_{\phi}(x)\) for any \(x\in\mathcal{X}\)[31]. The decision boundary is determined classically using a SVM. Formally, the kernel circuit is described by:
\[\kappa(x,x^{\prime})=\langle 0...0|V_{\phi}(\mathbf{x})V_{\phi}^{\dagger}( \mathbf{x}^{\prime})|0...0\rangle \tag{5}\]
where the function \(V_{\phi}\) is implemented twice, once to encode the training sample \(\mathbf{x}\) and once in its adjoint form \(V_{\phi}^{\dagger}\) to encode the sample \(\mathbf{x}^{\prime}\) to which we want to measure the distance. The idea behind the variational classifier is to perform the classification explicitly in the quantum Hilbert space. Similar to a perceptron, the algorithm relies on a parametrized weight matrix, which is updated by an optimizer after determining the classification error. Formally, there exists a weight matrix \(W=W(\theta)\) which learns a model \(|w(\theta)\rangle=W(\theta)|0...0\rangle\) that is evaluated by the linear model function \(f(x;w)=\langle w|\phi(x)\rangle\). The simplest circuit with one weight layer is given by \(W(\theta)U_{\phi}|0...0\rangle\)[31]. [32] also introduce circuits with multiple layers and find that a higher number of parameterized weight matrices increases the model's capability to approximate an arbitrary function.
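A minimal sketch of the implicit QKE approach (Eq. 5) with PennyLane and scikit-learn is given below. It is our own illustration: a simple angle embedding stands in for the zz feature map of Eq. (3), the number of qubits and the data are hypothetical, and the kernel value is estimated as the probability of measuring the all-zero state.

```python
import numpy as np
import pennylane as qml
from sklearn.svm import SVC

n_qubits = 4  # one qubit per feature dimension (illustrative)
dev = qml.device("default.qubit", wires=n_qubits, shots=None)

@qml.qnode(dev)
def kernel_circuit(x1, x2):
    # Encode x1 with the feature map and x2 with its adjoint (Eq. 5); the
    # probability of the all-zero outcome estimates |<phi(x1)|phi(x2)>|^2.
    qml.AngleEmbedding(x1, wires=range(n_qubits))
    qml.adjoint(qml.AngleEmbedding)(x2, wires=range(n_qubits))
    return qml.probs(wires=range(n_qubits))

def quantum_kernel(x1, x2):
    return kernel_circuit(x1, x2)[0]

def gram_matrix(A, B):
    return np.array([[quantum_kernel(a, b) for b in B] for a in A])

# Hypothetical encoded prefixes and binary next-activity labels.
X_train = np.random.rand(20, n_qubits)
y_train = np.array([0, 1] * 10)
X_test = np.random.rand(5, n_qubits)

svm = SVC(kernel="precomputed")
svm.fit(gram_matrix(X_train, X_train), y_train)
predictions = svm.predict(gram_matrix(X_test, X_train))
```

On real hardware, the same circuit would be executed with a finite number of shots, which is where the runtime and noise caveats discussed later come into play.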
## 3 Experimental setup
The goal of our experiments is threefold. First, we aim to compare quantum and classical methods to solve the inter-case PPM problem. Second, we want to investigate properties of the quantum kernels. Third, we want to build our test framework to be as reusable as possible, which allows us to integrate it into existing software. Thus, the pipeline for our experiment is straightforward: Intra-case feature vectors are built from the event log and augmented by the inter-case features as in Table 1. Next, a selection of ML models is trained. Features are encoded on the quantum circuit, if applicable to the algorithm. In the following, we briefly justify the choice of our training data and state the configuration of our simulator.
| **Dataset** | **no. cases** | **no. events** | **no. activities** | **no. variants** | **median case time** |
| --- | --- | --- | --- | --- | --- |
| BPIC17 | 31,509 | 1,160,405 | 26 | 15484 | 19.1d |
| BPIC17-P | 18,112 | 507,161 | 25 | 2087 | 20.4d |
| RTFM | 150,370 | 561,470 | 11 | 231 | 28.8w |
| RTFM-P | 150,270 | 560,551 | 11 | 131 | 28.3w |

Table 2: Statistics about the raw event logs and the version preprocessed by variants.
A well-known source for evaluation datasets in process mining is 4TU Center for Research Data10 which provides event logs for the Business Process Intelligence (BPI) Challenge. Therefore, the following two logs will be used as raw data for the evaluation (see Table 2 for some general statistics about the dataset).
Footnote 10: [https://data.4tu.nl/](https://data.4tu.nl/)
**Loan Application (BPIC17)** This log comprises loan applications of a Dutch financial institute. It is selected for the evaluation as it contains information about resources working on the tasks.
**Road Traffic Fine Management (RTFM)** The log is extracted from an Italian police office and contains all events related to the fine collection process. It is included in the evaluation because there are inherent inter-case dependencies. For example, unpaid fines are collected once a year, which means that all cases of the same year depend on this event.
Because the quantum algorithms will be simulated on a classical computer, the datasets had to be reduced to conduct tests in reasonable time. Hence, the number of cases will be reduced by first removing all variants that occur only once. The RTFM log does not seem to be affected much by this filtering as most of its cases depict mainstream behaviour. Also, the BPIC17 dataset still contains more than 500,000 events, which would be too large for the quantum kernel simulation. Thus, secondly, all datasets will be shortened by picking a random timeframe.
This procedure is similar to what one would expect in a real-world setting: Often classifiers are trained on historical data of a certain timeframe and used for predictions in the following period. To ensure there are enough cases inside the time window, a period of around two to three times the preprocessed median case time will be chosen. The resulting global statistics about the remaining datasets are shown in Table 3 in the appendix. Furthermore, all experiments will be conducted on cross-validation with three folds which is a typical value used in other benchmarks for PPM [27, 38].
The experiments are conducted on a classical computer that simulates the quantum circuits using the default_qubit simulator11 from pennylane with 1000 measurement shots12. Circuits are accelerated using jaxlib which requires the operating system to be Linux-based. Experiments run on a server with Ubuntu 18.04.6 LTS and Python 3.8.13 installed13. The CPU is an Intel i9-9820X with 10 cores and 20 threads at 4.20 GHz and there is 64 GB of RAM.

| **Dataset** | **no. cases** | **no. events** | **no. activities** | **no. variants** | **date range** |
| --- | --- | --- | --- | --- | --- |
| BPIC17-S | 1,940 | 57,107 | 24 | 731 | 20160215-20160415 |
| RTFM-S | 3,321 | 12,406 | 11 | 27 | 20030501-20040430 |

Table 3: Statistics about the datasets with reduced sample sizes.
Footnote 13: A list of libraries and dependencies can be found in the requirements.txt file
## 4 Evaluation
In our experiments, we focus on three aspects. First, we measure the influence of adding inter-case features. Next, we investigate to which extent quantum kernel methods improve the prediction's accuracy compared to classical kernels. And third, we present ways to accelerate the training for quantum kernels against the backdrop that our experiments are still simulated and real hardware currently runs on a limited number of qubits. For reference, we also include common classification models from PPM and the VQC into our pipeline.
First, we ran a grid search to find the length of the intra-case feature, which saturated for index-based encodings with more than four steps. Consequently, a four-step index-based encoding forms the baseline for inter-case scenarios (see left graph in Figure 4). In the following, all intra-case features are augmented with exactly one inter-case feature. As all seven inter-case encoders share the hyperparameter of the peer cases window, we try three configurations (0.15, 0.30, and 0.5) multiplied by 20.4 weeks, which is the median case duration on the RTFM dataset. [13] choose the window to be 20% or sometimes even less of the average case length on other datasets, but do not explain in detail which criteria to take into account when determining the window size. The parameter has an influence on the information gain of the features. If set too small, no dependencies are captured. If set too large, dependencies are taken into account that would not exist in the real world.
Figure 4: Increase in accuracy when adding inter-case features. The quantum kernels (brown and pink) lead to a more extreme increase than the classical kernel (red) and compete with XGBoost (orange).

As there are plenty of encoding combinations to be simulated, we decided to conduct the tests on the RTFM-S dataset, which is the optimal trade-off for the number of training instances. On the BPIC17-S dataset, it would not be possible to conduct tests in bearable time. The tree-based algorithms are random forests with tree depths 3 and 4 as well as the standard XGBoost algorithm. We chose these hyperparameters because we found them to be ideal in previous experiments. Moreover, the linear and radial basis function (RBF) kernels of the classical SVM are evaluated against the angle, zz and angle_zz feature maps with one and two kernel layers. Also here, we chose the zz feature map because it delivered competitive results in a quick intra-case evaluation and is an upcoming standard kernel in the quantum community [14]. We include variational quantum circuits and try several numbers of layers as this hyperparameter is expected to be the most influential [30].
The results of the benchmark can be found in Table 4 and Figure 4. Notably, the quantum kernel with a zz feature map delivers consistently higher accuracies than the SVM with an RBF kernel. Still, one can see that XGBoost achieves high accuracies for all features. Regarding the importance of the inter-case features compared to intra-case features only: for XGBoost, the increase is about 3.6%, and for the quantum kernel, accuracies are about 1.7% higher than on the _index_bsd_4_ baseline encoding. The VQC delivers comparably low accuracies; the optimizer may have found a local, but not a global, minimum of the error function.
For further details about overfitting and the exact values for the accuracies, we refer to the regression analysis and the tables in the respective supplementary materials 14.
Footnote 14: Supplementary material D+E: [https://bit.ly/qppm-supplementary](https://bit.ly/qppm-supplementary)
While XGBoost was still far ahead with only one inter-case feature, this is no longer the case with two inter-case features. On the one hand, XGBoost delivers the highest overall score (74.55%) on the feature _peer_cases+freq_act_ while 72.90% is the highest score for the quantum kernel with zz feature map on the same encoding. On the other hand, quantum algorithms achieve a high accuracy on the encoding _peer_act+avg_delay_ (74.51%) where the maximum score for XGBoost is 73.51% and 70.33% for the RBF kernel. As five out of ten results are higher for a quantum kernel than XGBoost and all results of a zz feature map outperform RBF, we conclude that quantum kernels are a competitive choice for the inter-case PPM problem.
Figure 5: Accuracies and fit times for a selection of algorithms on the BPIC17-S dataset with 57,107 samples. Quantum algorithms in the lower part of the figure, classical algorithms in the upper part.

Because the bottleneck of quantum simulations is the training time, it is interesting to find out how to reduce the number of kernel evaluations. The simplest approach is to resample the training dataset and use only a certain percentage of the training samples. Figure 5 shows accuracies and fit times for quantum and classical methods on the BPIC17-S dataset when we applied stratification sampling. There is no decrease in accuracy for any of the selected algorithms when only 50% of the training samples are taken into account. However, runtime decreases from 40,683 seconds to 4,886 seconds for the _qke_angle_2_, which is less than 15% of the original training time.
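The stratified subsampling used above can be reproduced with a few lines of scikit-learn (a sketch; the feature matrix and labels below are placeholders for the encoded prefixes and prediction targets):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical encoded prefixes (features) and next-activity labels.
X = np.random.rand(1000, 12)
y = np.random.randint(0, 5, size=1000)

# Keep 50% of the training samples while preserving the label distribution.
# Because the Gram matrix grows quadratically with the number of samples,
# halving them cuts the number of kernel evaluations roughly by four.
X_half, _, y_half, _ = train_test_split(
    X, y, train_size=0.5, stratify=y, random_state=42)
print(X_half.shape)  # (500, 12)
```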
## 5 Directions
When taking a closer look at research in QML, there are mainly two approaches to show a quantum advantage of specific algorithms. First, by proving that certain advantages exist theoretically due to the structure of a problem. Second, by investigating the quantum advantage in an exploratory way and simply applying quantum algorithms to any kind of classical data where their classical counterparts are already established as the common way to solve the problem. An example of this approach is the application of a quantum neural network on an image recognition task [3]. Our work definitely belongs to the latter group of articles and is, to the best of our knowledge, the first one in the domain of PPM.
During our experiments, the applications of QML developed rapidly. A number of similar approaches explore QML algorithms in domains where data follow complex patterns. These include quantum kernel methods for high energy physics [40] or in aircraft design [43] as well as in image recognition for manufacturing defects [3]. Classically, convolutional neural networks achieve high accuracy on image recognition tasks, which is why [23] investigate the advantage of quantum neural networks. On a more theoretical side, the work by [11] shows experiments on synthetic data while [17] and [18] work with the MNIST-fashion dataset [41]. Since the number of papers in QML has exploded over the last three years, there is a need for a more comprehensive review of recent applications.
The biggest issue with quantum computing is that today's quantum hardware is in its very early phase of development [26]. There is still a need for more high-quality research on the theoretical side of the algorithms, such as in [16]. Also, one has to see the full pipeline from hardware to end user and build applications that provide added value as a whole. To show how a practical implementation of a QML pipeline on real hardware could look, we further implemented an interface to the workflow engine Camunda in the form of an extension to the PPM plugin presented by [2]. The inter-case classifiers are implemented as a wrapper that connects to the Python evaluation framework and the quantum algorithms. It is then possible to connect a real IBM quantum computer hosted in the cloud. We believe that the possible advantages and grievous limitations of current NISQ systems will become more apparent when quantum technology finds its way out of academia and establishes itself in real-world applications. We hope that this work could show how quantum technology can be used in the process sciences, thereby serving as a draught horse for such applications [4]. Those applications are not restricted to the machine learning domain, but can also contain SAT solvers to find inconsistencies in business processes or apply the quantum approximate optimization algorithm (QAOA) to optimization problems such as assigning resource priorities to tasks.
## 6 Conclusion
In this paper, we proposed implicit quantum kernel methods to solve the inter-case PPM problem. Overall, the experiments showed that quantum kernels achieve a practical improvement in accuracy compared to similar classical methods. While robust quantum hardware still needs to scale, a first prototypical implementation proves that integration into existing workflow engines is feasible. For the interim, we have shown empirically that undersampling to a certain amount leads to faster simulations of the quantum kernels without loss of accuracy. Conclusively, supported by our findings and a working demonstrator, we confidently call for the courage to conduct further exploratory investigations of quantum algorithms in the process sciences. Because BPM is an evolving but already mature field of research, we claim it is the ideal candidate to take a leading role in paving the way towards a new family of reliable quantum business applications.
|
2309.07243 | LInKs "Lifting Independent Keypoints" -- Partial Pose Lifting for
Occlusion Handling with Improved Accuracy in 2D-3D Human Pose Estimation | We present LInKs, a novel unsupervised learning method to recover 3D human
poses from 2D kinematic skeletons obtained from a single image, even when
occlusions are present. Our approach follows a unique two-step process, which
involves first lifting the occluded 2D pose to the 3D domain, followed by
filling in the occluded parts using the partially reconstructed 3D coordinates.
This lift-then-fill approach leads to significantly more accurate results
compared to models that complete the pose in 2D space alone. Additionally, we
improve the stability and likelihood estimation of normalising flows through a
custom sampling function replacing PCA dimensionality reduction previously used
in prior work. Furthermore, we are the first to investigate if different parts
of the 2D kinematic skeleton can be lifted independently which we find by
itself reduces the error of current lifting approaches. We attribute this to
the reduction of long-range keypoint correlations. In our detailed evaluation,
we quantify the error under various realistic occlusion scenarios, showcasing
the versatility and applicability of our model. Our results consistently
demonstrate the superiority of handling all types of occlusions in 3D space
when compared to others that complete the pose in 2D space. Our approach also
exhibits consistent accuracy in scenarios without occlusion, as evidenced by a
7.9% reduction in reconstruction error compared to prior works on the Human3.6M
dataset. Furthermore, our method excels in accurately retrieving complete 3D
poses even in the presence of occlusions, making it highly applicable in
situations where complete 2D pose information is unavailable. | Peter Hardy, Hansung Kim | 2023-09-13T18:28:04Z | http://arxiv.org/abs/2309.07243v1 | LInKs "Lifting Independent Keypoints" - Partial Pose Lifting for Occlusion Handling with Improved Accuracy in 2D-3D Human Pose Estimation
###### Abstract
We present LInKs, a novel unsupervised learning method to recover 3D human poses from 2D kinematic skeletons obtained from a single image, even when occlusions are present. Our approach follows a unique two-step process, which involves first lifting the occluded 2D pose to the 3D domain, followed by filling in the occluded parts using the partially reconstructed 3D coordinates. This lift-then-fill approach leads to significantly more accurate results compared to models that complete the pose in 2D space alone. Additionally, we improve the stability and likelihood estimation of normalising flows through a custom sampling function replacing PCA dimensionality reduction previously used in prior work. Furthermore, we are the first to investigate if different parts of the 2D kinematic skeleton can be lifted independently which we find by itself reduces the error of current lifting approaches. We attribute this to the reduction of long-range keypoint correlations. In our detailed evaluation, we quantify the error under various realistic occlusion scenarios, showcasing the versatility and applicability of our model. Our results consistently demonstrate the superiority of handling all types of occlusions in 3D space when compared to others that complete the pose in 2D space. Our approach also exhibits consistent accuracy in scenarios without occlusion, as evidenced by a 7.9% reduction in reconstruction error compared to prior works on the Human3.6M dataset. Furthermore, our method excels in accurately retrieving complete 3D poses even in the presence of occlusions, making it highly applicable in situations where complete 2D pose information is unavailable.
## 1 Introduction
Human pose estimation (HPE) is an important task in computer vision with applications in various fields, such as human-computer interaction, augmented reality, and healthcare [4, 10, 21]. However, recovering 3D human poses from a single image is known to be an ill-posed inverse problem, as multiple different 3D poses can correspond to the same 2D pose. Traditional approaches in 2D-3D HPE have therefore required either multiple views of the subject or a depth sensor, which limits their applicability in real-world scenarios where multiple views or depth sensors may be unobtainable. In recent years, unsupervised learning methods have shown promising results in 3D HPE from single images, where the model learns to extract 3D pose information from 2D poses without any 3D annotations [3, 6, 9, 29, 25]. However, these approaches operate by lifting the entire 2D kinematic skeleton during training, which has several limitations. Firstly, they do not account for occlusions or 2D keypoint detection errors, making them unable to handle incomplete information or to adjust to bad detections. As a result, the omission of just a single joint will lead to these models being unable to work. Secondly, lifting the entire 2D kinematic skeleton may result in long-range correlations between anatomically unassociated keypoints being learned during training. As a result, the 2D coordinate of one keypoint may inappropriately influence the 3D estimate of another that is far away. This can lead to inaccurate pose estimations, especially in relation to complex human poses with multiple degrees of freedom. Thus, there is a need for more robust methods that can effectively handle occlusions and account for the complex dependencies between keypoints in the estimation process. Our paper, therefore, makes the following important contributions:
### Lift then Fill Two-Stage Approach
Prior 2D-3D lifting research [3, 6, 29, 25, 26, 14] has mainly assumed that 2D pose detection models can accurately capture complete 2D poses from single images. We argue that this assumption may be flawed. By using OpenPose on videos from the Human3.6M dataset, we found that complete poses were detected in only 54.8% and 35.5% of frames from the front and rear cameras respectively, with an average full-pose retrieval rate of 45.1%. Put plainly,
the previously mentioned approaches would only function in 45.1% of all available frames. This can be attributed to self-occlusion or 2D detection error, which results in missing limbs or keypoints in various scenes. To address this, for the first stage of our approach, we lifted different segments of the 2D human pose independently of one another to obtain a partial 3D pose in cases of occlusion. In the next stage, this partial 3D pose was then seen by an occlusion handling network which filled in the missing coordinates to retrieve a complete 3D pose. An overview of these stages can be seen in Figure 1. Our lift-then-fill approach has multiple benefits over handling the occlusion at the 2D stage namely:
* **Consistency with Human Anatomy:** Human joints have specific ranges of motion and dependencies on neighbouring joints, which are more accurately captured in 3D. By choosing to fill in the occluded joints solely in 2D space we may obtain poses that violate anatomical constraints and look unnatural once lifted into 3D.
* **Reducing Model Complexity:** As mentioned earlier, a single 3D pose can have various corresponding 2D representations. Consequently, a 2D occlusion model must learn the multiple potential 2D pose interpretations stemming from a single 3D pose. However, addressing this challenge becomes more manageable when working with partial 3D poses due to the inherent consistency of 3D poses during rotations.
* **Reducing Error Propagation:** By tackling occlusion in 3D space, we remove any errors or inaccuracies that would have otherwise been propagated to the lifting network. Even if a 2D occlusion handling network managed to infer the 2D location of the occluded joint relatively well, the lifting network may still struggle to accurately convert them to the correct 3D coordinates. This is especially true if the 2D occlusion handling network introduces subtle errors or noise during its process.
Further results of our OpenPose analysis investigating the percentage of full poses detected in the Human3.6M dataset can be found in the supplementary material.
### Normalising Flow Sampling
Since unsupervised learning approaches can benefit more from increased data than their supervised counterparts, it is beneficial to exploit multiple data sources [6]. To address the challenges of limited data sources within 3DHPE, we leverage the power of normalising flows both in terms of their generative and likelihood capabilities. Put simply, normalising flows are invertible functions that learn to map between simple and complicated distributions. By sampling from the estimated distribution during training, we can therefore generate additional data, allowing us to effectively exploit the limited data we have available. However, we found that random sampling led to the formation of impossible and unnatural 2D poses. To solve this, we introduced a new sampling approach that allowed the flow to generate and learn more meaningful representations of the data.
## 2 Related Work
There currently exist two main avenues of deep-learning research for 3D HPE. The first learns the mapping of 3D joints directly from a 2D image [12, 13, 15, 18, 22]. The second builds upon an accurate intermediate 2D pose estimate,
Figure 1: Overview of the lifting and occlusion handling of the LInKs approach. A partially detected 2D skeleton is obtained from an occluded image. The 2D skeleton is broken into its corresponding torso, legs, and left and right-hand side keypoints which are provided to their respective lifting networks whose outputs are combined to obtain a partial 3D pose. This partial 3D pose is provided to an occlusion handling network which predicts the missing keypoints giving us our full 3D pose. In the above scenario the right arm is occluded therefore the right-hand side and torso lifting networks are not used due to incomplete 2D keypoint information.
with the 2D pose obtained from an image using techniques such as Stacked-Hourglass Architectures [16] or Part Affinity Fields [2] and lifting this 2D pose to 3D. This paper focuses on the latter 2D to 3D lifting avenue which can be organized into the following categories:
### Full and Weak Supervision
Fully supervised approaches seek to learn mappings from paired 2D-3D data, which contain ground truth 2D locations of keypoints and their corresponding 3D coordinates. In comparison, weakly-supervised approaches do not use corresponding 2D-3D data, instead using either augmented 3D data during training or unpaired 2D-3D data. Martinez [14] introduced the first baseline fully connected regression model that learned 3D coordinates from their relative 2D locations in images. This fully supervised work also introduced the residual block architecture which has been adopted as the standard in the field of 2D-3D pose estimation. Yang [28] used an adversarial approach, with a critic network that compared their lifted "in-the-wild" 3D poses against ground-truth 3D poses obtained from a controlled setting. Wandt and Rosenhahn [26] also followed this line of research but transformed their predicted and ground truth 3D poses into a kinematic chain [24] before being seen by a Wasserstein critic network [7] which reduced the overfitting present in direct 3D correspondence. In contrast, Drover [6] investigated if 3D poses can be learned through 2D self-consistency alone, rotating a predicted 3D pose and reprojecting it back into 2D before passing it through the model for comparison. This led to the discovery that an adversarial 2D critic network was needed as self-consistency alone was not sufficient. Conversely, Mehta [15] and Kundu [11] took a transfer learning approach to 3D HPE. They used mixed 2D-3D labels and images in a unified network to predict the 3D pose in scenarios where no 3D data was available. Compared to fully-supervised approaches, weakly supervised methods generalise better to unseen pose scenarios. However, they still struggle with poses that are very different from those within the training data.
### Unsupervised
Unsupervised approaches do not utilise any 3D data during training, unpaired or otherwise. Kudo [9] introduced one of the first unsupervised adversarial networks, utilising random reprojections and a 2D critic network under the assumption that any predicted 3D pose, once rotated and reprojected, would still produce a believable 2D pose. Chen [3] expanded this work and introduced an unsupervised adversarial approach with the rotational consistency cycle presented by Drover [6]. Yu [29] built upon this work further. They highlighted that the temporal constraints introduced in Chen [3] may hinder a model's performance due to balancing multiple training objectives simultaneously, and proposed splitting the problem into both a lifting and temporal scale estimation module. More recently, Wandt [25] dispensed with the critic network and relied instead on the pose likelihood of a pre-trained normalising flow, allowing for a more interpretable loss during training. However, to stabilise the normalising flow during training they performed PCA on the distribution of 2D poses, reducing the dimensionality as well as the information present. Importantly, PCA may have hindered one of the primary advantages of normalising flows, their ability to learn exact bijective mappings between two different data distributions. Therefore, in our approach, we removed dimensionality reduction and replaced it with generative sampling from the latent distribution during training. Additionally, all previously mentioned approaches lift the entire 2D pose to 3D during training making them insufficient in occlusion scenarios. To our knowledge, our study is the first to utilise completely unsupervised networks to explore the feasibility of lifting different sections of a 2D pose to 3D independently for the purpose of obtaining partial 3D poses in occlusion scenarios.
### Unsupervised 2D-3D Occlusion Handling
In our systematically undertaken review of published literature, only one prior work was identified that investigated unsupervised 2D-3D lifting from a single image, OCR-Pose by Wang [27]. OCR-Pose incorporates two modules: a topology invariant contrastive learning (TiCLR) module and a view equivariant contrastive learning (VeCLR) module. The TiCLR module aims to bridge the gap between an occluded and unoccluded 2D pose while the VeCLR module encourages the lifting network to produce consistent 3D skeletons from multiple viewpoints. However, by completing the occluded pose solely in 2D space, the accuracy of their approach is limited, as previously discussed. We suggest a different strategy. We attempted to generate a partial 3D pose using the available 2D keypoints and subsequently fill in the occluded parts in 3D space. We find that our alternative methodology reduces the error, and we also simulate more realistic occlusion scenarios than OCR-Pose during our evaluation.
## 3 Method
In this section, we present our unsupervised learning approach for lifting 2D poses to 3D. Our 2D poses consisted of \(N\) keypoints, \((x_{i},y_{i})\), \(i=1...N\), where the root keypoint, located at the origin \((0,0)\), was the midpoint between the left and right hip joint. As monocular images are subject to fundamental perspective ambiguity, deriving absolute depth from a single view is not possible [3, 17]. To address this, we adopted the practice of fixing the distance of the pose from the camera at a constant \(c\) units and normalising such
that the average distance from the head to the root keypoint was \(\frac{1}{c}\) units in 2D, with \(c\) being set to 10 as is consistent with previous research [29, 3, 25].
### Independent Lifting Networks
Our lifting networks, inspired by [14], were trained to predict the 3D depth offset (\(\hat{d}\)) from the pose's root keypoint for each 2D keypoint \((x,y)\). The final 3D location of a specific keypoint, \(\mathbf{x}_{i}\), was then obtained via perspective projection:
\[\mathbf{x}_{i}=(x_{i}\hat{z}_{i},\;y_{i}\hat{z}_{i},\;\hat{z}_{i}),\quad\text{where}\quad\hat{z}_{i}=\max(1,\hat{d}_{i}+c). \tag{1}\]
where \(\hat{d}_{i}\) is our model's depth-offset prediction for keypoint \(i\). We used independent lifting networks for the legs, torso and left and right side keypoints and adopted this arrangement because, during our analysis with OpenPose, we noticed that single-limb occlusions or same-side upper and lower limb occlusions were the most common.
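A short PyTorch sketch of Eq. (1) is given below (our own illustration; the depth offsets would come from one of the independent lifting networks and the keypoint values are dummies):

```python
import torch

C = 10.0  # fixed distance of the pose from the camera

def lift_keypoints(kp2d: torch.Tensor, d_hat: torch.Tensor) -> torch.Tensor:
    """Apply Eq. (1): turn 2D keypoints (N, 2) and predicted depth offsets
    (N,) into 3D coordinates via perspective projection."""
    z_hat = torch.clamp(d_hat + C, min=1.0)    # \hat{z}_i = max(1, \hat{d}_i + c)
    x = kp2d[:, 0] * z_hat
    y = kp2d[:, 1] * z_hat
    return torch.stack([x, y, z_hat], dim=-1)  # (N, 3)

# Example with 6 leg keypoints and dummy depth-offset predictions.
kp2d = torch.randn(6, 2) * 0.1
d_hat = torch.randn(6) * 0.5
pose3d = lift_keypoints(kp2d, d_hat)
print(pose3d.shape)  # torch.Size([6, 3])
```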
### Normalising Flows with Generative Sampling
Normalising flows are widely used generative models for estimating complex probability distributions, including those of 2D poses. In our scenario, \(x\) is our 2D pose data and \(z\) is the latent variable (Gaussian in our experiment) such that \(x=f(z)\), where \(f\) is the flow function, the probability density function of the input data can be computed by using the change of variables formula:
\[p_{X}(x)=p_{Z}(z)\left|\det\left(\frac{\partial f(z)}{\partial z}\right) \right|^{-1} \tag{2}\]
where \(p_{Z}(z)\) is the probability density function of the latent variable \(z\) and \(p_{X}(x)\) is the probability density function of the input data \(x\) (the 2D poses). The determinant of the Jacobian matrix of the flow function, denoted by \(\det\left(\frac{\partial f(z)}{\partial z}\right)\), serves as a normalisation term that accounts for the change in volume induced by the transformation \(f\). Previously PCA was used to reduce the dimensionality of 2D poses prior to training the normalising flow [25]. The reason for this was the high dimensionality of input data which prevented flow optimisation. Using our approach, we found that this step was not needed as we could incorporate pose sampling during the training of our flows to increase stability. Specifically, while training the flow to maximise the likelihood of the training data, we also maximised the likelihood of sampled 2D poses drawn from the estimated distribution. However, our analysis revealed that simple random sampling can cause the flow to collapse due to impossible 2D poses being generated. We posit that this is due to the interconnected nature of 2D keypoints which the flow may have trouble learning. To address this issue, we introduced a modified sampling strategy that first obtained the estimated distribution samples of the true 2D poses. We then augmented this sample by adding Gaussian noise, scaled by a factor of \(\sigma\), and the Gaussian features of the current sample, before passing it through the inverse of the normalising flow to generate new 2D poses. This modification can be expressed as follows:
\[\mathbf{x}_{i}^{\prime}=f_{\theta}(\mathbf{z}_{i}+\sigma\mathbf{z}_{i}\mathcal{ N}(0,1)) \tag{3}\]
where \(\mathbf{x}_{i}^{\prime}\) is the sampled 2D pose, \(f_{\theta}\) is the normalising flow with parameters \(\theta\), \(\mathbf{z}_{i}\) is the estimated distribution sample of a true 2D pose, \(\sigma\) is a scaling factor and \(\mathcal{N}(0,1)\) is our random Gaussian noise. Examples of poses drawn from the latent distribution of our flow via random sampling and our own sampling approach can be seen in Figure 2.
During training, we updated the parameters of our normalising flow to minimise the negative log-likelihood of both the sampled and ground-truth 2D poses, defined as:
\[\mathcal{L}_{\theta}=-\frac{1}{N}\sum_{i=1}^{N}\left[\log p_{\theta}(\mathbf{ x}_{i}^{\prime})+\log p_{\theta}(\mathbf{x}_{i})\right] \tag{4}\]
where \(p_{\theta}\) is the probability density function estimated by the normalising flow, and \(\mathbf{x}_{i}\) is the ground-truth 2D pose. We trained a total of five normalising flows for our model. Four flows were trained on the leg, torso, and left and right side keypoints and were used for likelihood estimation during the training of each corresponding lifting network. The final normalising flow was trained on the entire pose and was specifically used for the generative sampling of new poses during the training of each lifting network and their respective flows.
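The sampling of Eq. (3) and the objective of Eq. (4) can be sketched as follows (a PyTorch-style sketch; the flow object and its inverse, forward and log_prob methods are assumed interfaces rather than a specific library's API):

```python
import torch

SIGMA = 0.2  # noise scale used for generative sampling

def sample_poses(flow, poses_2d: torch.Tensor) -> torch.Tensor:
    """Eq. (3): perturb the latent codes of real poses and decode them,
    instead of sampling the latent space uniformly at random."""
    with torch.no_grad():
        z = flow.inverse(poses_2d)                    # assumed: 2D pose -> latent
        z_perturbed = z + SIGMA * z * torch.randn_like(z)
        return flow.forward(z_perturbed)              # assumed: latent -> 2D pose

def flow_nll_loss(flow, poses_2d: torch.Tensor) -> torch.Tensor:
    """Eq. (4): negative log-likelihood of real and sampled poses."""
    sampled = sample_poses(flow, poses_2d)
    return -(flow.log_prob(poses_2d).mean() + flow.log_prob(sampled).mean())
```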
### Independent Lifting, Rotational Consistency and Re-projection Likelihood
Figure 2: Showing 2D poses drawn from the learned latent space of our normalising flows via random sampling (left) and our improved sampling (right). Note how random sampling leads to abnormal poses such as limbs being too long and unnaturally bent.

Motivated by multi-view camera setups, where depth can be inferred through re-projection to another view, unsupervised learning utilises a virtual second view during training to mimic this property [29, 3, 6]. Given a 2D input pose \(\mathbf{Y}_{2D}\in\mathbb{R}^{N\times 2}\), a corresponding 3D pose \(\hat{\mathbf{Y}}_{3D}\) is obtained via perspective projection. \(\hat{\mathbf{Y}}_{3D}\) is then rotated by a rotation matrix \(\mathbf{R}\) to produce a new 3D pose. The azimuth angle of \(\mathbf{R}\) is obtained by random sampling from a uniform distribution between \([-\pi,\pi]\), with the elevation angle learned during training as detailed in work by [25]. Subsequently, the rotated 3D pose is re-projected back to 2D using the projection matrix \(\mathbf{P}\). This yields a new synthetic viewpoint of the pose, represented by the matrix \(\tilde{\mathbf{Y}}_{2D}\in\mathbb{R}^{N\times 2}\). The new 2D pose is then split into the leg, torso, and left and right side keypoints before being given to our pre-trained normalising flows for density estimation. As a result, each normalising flow provides a likelihood value representing the probability of each 2D segment occurring within the learned distribution from the training dataset \((\mathcal{L}_{NF})\). To promote self-consistency during training, we passed \(\tilde{\mathbf{Y}}_{2D}\) through the same lifting networks and obtain a new 3D pose from this virtual viewpoint \(\tilde{\mathbf{Y}}_{3D}\). We performed the inverse rotation \(\mathbf{R}^{-1}\) on \(\tilde{\mathbf{Y}}_{3D}\), and reprojected it back into 2D. Adopting this process enabled the original matrix of 2D keypoints \(\mathbf{Y}_{2D}\) to be recreated, thereby facilitating our model to learn consistency. Specifically, our lifting networks sought to minimize the following component:
\[\mathcal{L}_{2D}=\left|\mathbf{Y}_{2D}-\mathbf{P}\mathbf{R}^{-1}\tilde{ \mathbf{Y}}_{3D}\right| \tag{5}\]
As we lifted different 2D keypoints independently, each lifter received its own \(\mathcal{L}_{2D}\) loss based on different keypoints. For instance, the right side lifter incurred a loss due to the discrepancy between the original 2D keypoint coordinates of the right wrist and its 2D coordinates once lifted into 3D, inversely rotated, and re-projected. The \(\mathcal{L}_{2D}\) loss for the left side lifter excluded this error, as it does not predict the 3D ordinate for this keypoint. In addition to the 2D loss, a 3D consistency loss is also included to improve the self-consistency within our model. This loss compares the original 3D pose predictions \(\hat{\mathbf{Y}}_{3D}\) with the 3D pose obtained when the predictions from the virtual viewpoint are inversely rotated. The 3D consistency loss is given by:
\[\mathcal{L}_{3D}=\left|\hat{\mathbf{Y}}_{3D}-\mathbf{R}^{-1}\tilde{\mathbf{Y} }_{3D}\right| \tag{6}\]
During training, as we were using multiple lifting networks, we produced three 3D poses for each pose of input. The first pose was created by combining the results of the leg and torso network. The second was produced by combining the results of the left and right side lifting networks, with the right side network predictions used for the spine, neck, head and head-top keypoints. The third was produced identically to the second but with the left side lifter predictions used for the spine, neck, head and head-top keypoints. For the first pose the elevation angle used within \(\mathbf{R}\) was the average value from the leg and torso lifting networks. For the second and third pose the elevation angle used in \(\mathbf{R}\) was the average from the left and right lifting networks.
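A compact sketch of the rotation-and-reprojection cycle behind Eqs. (5) and (6) is given below (our own PyTorch illustration; translating by the fixed camera distance c before and after the rotation and the pinhole reprojection are assumptions consistent with Eq. (1)):

```python
import math
import random
import torch

C = 10.0  # fixed camera distance, as in Eq. (1)

def project(pose3d: torch.Tensor) -> torch.Tensor:
    """Pinhole reprojection consistent with Eq. (1): (X/Z, Y/Z)."""
    return pose3d[..., :2] / pose3d[..., 2:3]

def rotate_azimuth(pose3d: torch.Tensor, angle: float) -> torch.Tensor:
    """Rotate the pose about the vertical axis around its root, keeping it
    at distance C from the camera."""
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    R = torch.tensor([[cos_a, 0.0, sin_a],
                      [0.0, 1.0, 0.0],
                      [-sin_a, 0.0, cos_a]])
    offset = torch.tensor([0.0, 0.0, C])
    return (pose3d - offset) @ R.T + offset

# One consistency step for a predicted pose with N = 16 keypoints.
Y3d_hat = torch.rand(16, 3) + torch.tensor([0.0, 0.0, C])
angle = random.uniform(-math.pi, math.pi)          # random azimuth
Y2d_virtual = project(rotate_azimuth(Y3d_hat, angle))
# Lifting Y2d_virtual again, applying the inverse rotation and reprojecting
# yields the terms compared in Eq. (5) and Eq. (6).
```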
### Occlusion Handling Network
The final stage of our proposed method involved the transfer of knowledge from the independent lifting networks to our occlusion network \(O_{3D}\). Specifically, we simulated various occlusion scenarios by masking a 2D pose and then rearranged the lifting networks to obtain a partial 3D pose in this occlusion scenario. Our occlusion network was then trained to predict the 3D coordinates for the occluded part when given this partial 3D pose as input. We trained our occlusion models using knowledge distillation: each model learnt to match its own predictions of the missing coordinates with those of the lifting networks when no occlusions were present. This loss is given by:
\[\mathcal{L}_{3Docc}=\|(\mathbf{x}_{m},\mathbf{y}_{m},\hat{\mathbf{z}}_{m})-(\hat{\mathbf{x}}_{o},\hat{\mathbf{y}}_{o},\hat{\mathbf{z}}_{o})\|^{2},\quad\text{where}\quad(\hat{\mathbf{x}}_{o},\hat{\mathbf{y}}_{o},\hat{\mathbf{z}}_{o})=O_{3D}(\mathbf{x}_{p},\mathbf{y}_{p},\hat{\mathbf{z}}_{p}) \tag{7}\]
where \((\mathbf{x}_{m},\mathbf{y}_{m},\hat{\mathbf{z}}_{m})\) represents the 3D coordinate predictions of the missing part by our lifting networks if the occluded part were visible, \((\hat{\mathbf{x}}_{o},\hat{\mathbf{y}}_{o},\hat{\mathbf{z}}_{o})\) represents the occlusion model's predictions of this missing part, and \((\mathbf{x}_{p},\mathbf{y}_{p},\hat{\mathbf{z}}_{p})\) represents the partial 3D pose given as input to the occlusion model.
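A minimal sketch of this distillation objective is given below; the network interface and tensor shapes are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch.nn.functional as F

def occlusion_distillation_loss(o3d_net, partial_pose_3d, teacher_missing_3d):
    """Sketch of Eq. 7: the occlusion network O_3D predicts the 3D coordinates of a
    masked-out part from the partial 3D pose, and is trained to match the lifting
    networks' predictions for that part obtained when nothing is occluded.

    partial_pose_3d:    (B, J_visible * 3) flattened partial 3D pose
    teacher_missing_3d: (B, J_missing * 3) lifting-network prediction of the missing part
    """
    pred_missing_3d = o3d_net(partial_pose_3d)   # (x_o, y_o, z_o) predictions
    return F.mse_loss(pred_missing_3d, teacher_missing_3d)
```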
### Additional Losses
We included two additional losses within our study which have been shown to improve results in prior work. [25] demonstrated that although many properties of the human body are unknown in an unsupervised setting, relative bone lengths are nearly constant among people [20]. Using this assumption, we calculated the error between the predicted poses' relative bone lengths and a pre-calculated mean (given in [25]):
\[\mathcal{L}_{b}=\frac{1}{K}\Sigma_{i}^{K}\|b_{i}-\hat{b}_{i}\|^{2} \tag{8}\]
where \(b_{i}\) is the pre-calculated relative bone length of bone \(i\), \(\hat{b}_{i}\) is our model's predicted bone length for bone \(i\) and \(K\) is the total number of bones in our 3D pose. Our second loss is that of temporal deformation introduced by [29], where they showed that it was beneficial to consider the movement between two poses at different time steps. As we were not dealing with temporal data, we instead defined this loss between two different samples from the training batch:
\[\mathcal{L}_{def}=\|(\hat{\mathbf{Y}}_{3D}^{a}-\hat{\mathbf{Y}}_{3D}^{b})-( \tilde{\mathbf{Y}}_{3D}^{a}-\tilde{\mathbf{Y}}_{3D}^{b})\|^{2} \tag{9}\]
where \(\hat{\mathbf{Y}}_{3D}\) and \(\tilde{\mathbf{Y}}_{3D}\) are the predicted 3D poses from the real and virtual viewpoint and \(a\) and \(b\) represent their position in the training batch.
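For reference, both additional losses can be written as the following short PyTorch-style sketch; the normalisation used for relative bone lengths (here, by the sum of all bone lengths) and the joint indexing are illustrative assumptions and may differ from the exact convention of [25].

```python
import torch

def bone_length_loss(pred_pose_3d, mean_rel_lengths, bone_pairs):
    """Sketch of Eq. 8: penalise deviation of relative bone lengths from a
    pre-calculated population mean. `bone_pairs` lists (parent, child) joint indices."""
    bones = torch.stack([pred_pose_3d[:, c] - pred_pose_3d[:, p] for p, c in bone_pairs], dim=1)
    lengths = bones.norm(dim=-1)                               # (B, K) bone lengths
    rel_lengths = lengths / lengths.sum(dim=1, keepdim=True)   # one choice of "relative" length
    return ((rel_lengths - mean_rel_lengths) ** 2).mean()

def deformation_loss(y3d_a, y3d_b, y3d_virt_a, y3d_virt_b):
    """Sketch of Eq. 9: the difference between two samples in a batch should be
    preserved between the real and virtual viewpoints."""
    return (((y3d_a - y3d_b) - (y3d_virt_a - y3d_virt_b)) ** 2).mean()
```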
### Training and Architecture
As previously stated, we trained four lifting networks that each predicted the 3D coordinates for the leg, torso, and left and right side respectively. The likelihood estimation of each 2D segment obtained from the rotated and re-projected 3D pose comes from a respective pre-trained normalising flow. For the normalising flows we adopted the neural network architecture proposed in [5], which includes 8 coupling blocks. Each sub-network responsible for predicting the affine transformation was composed of 2 fully connected layers with 1024 neurons and utilised the ReLU activation function. As for the lifting networks, we drew inspiration from the works of [14] and [25]. These networks were designed with two paths: one for predicting depth and the other for estimating the elevation angle. Each path consisted of 3 residual blocks, with a shared residual block before each path. We trained our lifting networks and flows for 100 epochs with a batch size of 256 using the Adam optimiser with an initial learning rate of \(2\times 10^{-4}\) which decayed exponentially by 0.95 every epoch. Generative sampling was included within the training of our flows and lifting networks with a \(\sigma\) of 0.2. The final objective function for our lifting networks was:
\[\mathcal{L}_{lift}=\mathcal{L}_{NF}+\mathcal{L}_{2D}+\mathcal{L}_{3D}+ \mathcal{L}_{def}+50\mathcal{L}_{b} \tag{10}\]
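The two-path lifter design described above can be sketched as follows; the hidden width of 1024 and the exact output heads are assumptions for illustration (the text only fixes the number of residual blocks per path), not a definitive reproduction of our architecture.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """Fully connected residual block used in this lifting-network sketch."""
    def __init__(self, dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim), nn.ReLU())

    def forward(self, x):
        return x + self.net(x)

class Lifter(nn.Module):
    """Two-path lifter: a shared residual block followed by a depth path and an
    elevation-angle path, each containing 3 residual blocks."""
    def __init__(self, n_joints, dim=1024):
        super().__init__()
        self.inp = nn.Linear(2 * n_joints, dim)
        self.shared = ResBlock(dim)
        self.depth_path = nn.Sequential(ResBlock(dim), ResBlock(dim), ResBlock(dim),
                                        nn.Linear(dim, n_joints))
        self.angle_path = nn.Sequential(ResBlock(dim), ResBlock(dim), ResBlock(dim),
                                        nn.Linear(dim, 1))

    def forward(self, keypoints_2d):
        h = self.shared(self.inp(keypoints_2d))
        return self.depth_path(h), self.angle_path(h)
```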
Our occlusion networks were trained for 10 epochs with the same optimiser and learning rate as the lifting networks. To exploit the rotational coherence within 3D poses and enhance the occlusion handling network's adaptability, we also randomly rotated the partial 3D poses fed into the occlusion handling network. These rotations were executed solely along the azimuth axis and ranged from \(-\pi\) to \(\pi\).
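The random azimuth augmentation applied to the partial 3D poses can be implemented as below; treating the y-axis as the vertical (azimuth) axis is a convention choice for this sketch rather than something fixed by the text.

```python
import torch

def random_azimuth_rotation(batch_size):
    """Random rotation about the vertical axis with angles drawn from [-pi, pi],
    used to augment the partial 3D poses fed to the occlusion handling network."""
    theta = (torch.rand(batch_size) * 2 - 1) * torch.pi
    cos, sin = torch.cos(theta), torch.sin(theta)
    R = torch.zeros(batch_size, 3, 3)
    R[:, 0, 0], R[:, 0, 2] = cos, sin
    R[:, 1, 1] = 1.0
    R[:, 2, 0], R[:, 2, 2] = -sin, cos
    return R  # apply as pose @ R.transpose(-1, -2) for row-vector joints
```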
## 4 Results and Evaluation
Here we compare the performance of both our lifting networks and occlusion networks on two widely used 3D datasets: Human3.6M [8] and MPI-INF-3DHP [15]. Our findings indicate that in scenarios without occlusion, our lifting model achieves superior performance on both the Human3.6M and MPI-INF-3DHP datasets for all metrics when compared to prior research. Moreover, we assessed the effectiveness of our approach in handling 2D occlusion on the Human3.6M dataset. Qualitative results of our approach in non-occlusion scenarios can be seen in Figure 3.
### Human3.6M Results.
Human3.6M [8] is one of the largest and most widely used pose datasets consisting of motion capture data and videos captured from four viewpoints of eleven actors performing diverse actions. The dataset is evaluated using Mean Per Joint Position Error (MPJPE), which is the Euclidean distance in millimetres between the predicted and ground-truth 3D coordinates. Two common protocols are employed when evaluating Human3.6M. The first is N-MPJPE which employs scaling to the 3D predicted pose prior to evaluation. The second is PA-MPJPE where the 3D predicted pose undergoes Procrustes alignment to the GT 3D pose prior to evaluation. Comparing our approach against other approaches with differing levels of supervision (Table 1), it can be seen that our approach demonstrates a 7.9% improvement over the current reference standard in unsupervised pose estimation [25] in PA-MPJPE and a 5% improvement in N-MPJPE. Additionally, our approach outperforms multiple adversarial unsupervised methods [29, 3, 27].
### MPI-INF-3DHP Results.
MPI-INF-3DHP [15] is a markerless MoCap dataset containing the 3D human poses of 8 actors performing 8 different activities. We report the PA-MPJPE result as well as the Percentage of Correct keypoints (N-PCK) and the corresponding area under the curve (AUC). N-PCK is the percentage of predicted coordinates that are within a fixed threshold of \(150mm\) to the ground-truth and AUC reports the N-PCK at a range of thresholds between 0-150\(mm\). The results of our analyses using this dataset (Table 2) once again demonstrated that our approach outperforms previous unsupervised and weakly-supervised approaches that are unable to handle occlusions.
### Results in Occlusion Scenarios.
To show quantitatively that it is better to first lift an occluded 2D pose to 3D space rather than complete the pose in 2D space, we conducted an additional study on simulated occlusion scenarios on Human3.6M.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Supervision & Method & PA-MPJPE \(\downarrow\) & N-MPJPE \(\downarrow\) \\ \hline Full & Martinez _et al_. [14] & 37.1 & 45.5 \\ & Cai _et al_. [1] & 40.2 & 48.8 \\ & Pavllo _et al_. [19] (T) & 27.2 & 37.2 \\ \hline Weak & Wandt and Rosenhahn [26] & 38.2 & 50.9 \\ & Drover _et al_. [6] & 38.2 & - \\ & Yang _et al_. [28] & 37.7 & 58.6 \\ & Tung _et al_. [23] & 79.0 & - \\ \hline Unsupervised & Chen _et al_. [3] & 58.0 & - \\ & Yu _et al_. [29] (T) & 42.0 & 85.3* \\ & Wandt _et al_. [25] & 36.7 & 64.0 \\ & Wang _et al_. [27] & 44.7 & - \\ \hline \multicolumn{3}{c}{LInKs (Ours)} & **33.8** & **61.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation results for the Human3.6M dataset in mm where the input to the model are the GT 2D image keypoints. The bottom section labelled unsupervised shows comparable methods. [27] is the only comparable model that can also handle occlusions. Best results in bold. Numbers are taken from their respective papers. * indicates the use of a scale prior from the dataset. T indicates the use of additional temporal information.
First, we trained a near-identical occlusion network \(O_{2D}\) on the Human3.6M dataset. The only difference between \(O_{2D}\) and \(O_{3D}\) was that \(O_{2D}\) would learn to predict the missing coordinates first in 2D space before the complete pose was then lifted into 3D. \(O_{2D}\) was trained for 10 epochs with the exact same optimiser and learning rate as \(O_{3D}\). We trained one network for each type of occlusion evaluated. The results of this experiment can be seen in Table 3 along with the occlusion results of OCR-Pose presented in [27]. Qualitative results of our model in simulated occlusion scenarios can be seen in Figure 4. As shown, when we compare the two nearly identical occlusion models, one completing the pose in 2D space and the other in 3D space, and the same lifting network, the 3D space occlusion model outperforms the 2D space occlusion model when handling all types of occlusion. It is important to note that the simulated occlusion in OCR-Pose uses random uniform sampling between 0 and 3 when deciding the number of keypoints to occlude, meaning that some poses in their evaluation may actually be unoccluded. In addition, simulating occlusion this way is not realistic, as typically specific limbs or segments are missing, not just 0-3 random points. Our results show that legs are reconstructed with greater accuracy than arms when using a partial 3D pose, which is consistent with the intuitive understanding that arms possess a higher degree of freedom compared to legs in a 3D context. A surprising finding was that our model more accurately located the right arm when it was occluded than the left arm. We hypothesised that the opposite would occur, as we assumed people in the Human3.6M dataset were right-handed since the majority of their actions within the dataset were performed with their right arm. Consequently, we expected the right arm to be harder to discern than the left.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Supervision & Method & PA-MPJPE \(\downarrow\) & N-PCK \(\uparrow\) & AUC \(\uparrow\) \\ \hline Weak & Kundu _et al._[11] & 93.9 & 84.6 & 60.8 \\ & Wandt and Rosenhahn [26] & - & 81.8 & 54.8 \\ \hline Unsupervised & Chen _et al._[3] & - & 71.1 & 36.3 \\ & Yu _et al._[29] & - & 86.2 & 51.7 \\ & Wandt _et al._[25] & 54.0 & 86.0 & 50.1 \\ \hline & LInKs (Ours) & **49.7** & **86.3** & **54.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation results for the MPI-INF-3DHP dataset in scenarios without occlusion. The bottom unsupervised section shows comparable models.
Figure 3: Qualitative results obtained from our LInKs model when there is no occlusion present in the scene. The top 3 rows show results from the Human3.6M dataset and the bottom 3 rows show results from the MPI-INF-3DHP dataset.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Model & Occlusion & PA-MPJPE \(\downarrow\) & N-MPJPE \(\downarrow\) \\ \hline Wang _et al._[27] & _L_(0,3) Random Keypoints & 54.8 & - \\ \hline \(O_{2D}\) (Ours) & Left Arm & 61.4 & 85.7 \\ & Left Leg & 49.4 & 76.0 \\ & Right Arm & 59.8 & 84.5 \\ & Right Leg & 51.2 & 75.4 \\ & Left Arm \& Leg & 70.7 & 94.9 \\ & Right Arm \& Leg & 71.9 & 95.0 \\ & Both Legs & 81.5 & 117.0 \\ & Full Torso & 96.3 & 136.0 \\ \hline \(O_{3D}\) (Ours) & Left Arm & **52.1** & **78.1** \\ & Left Leg & **46.0** & **73.2** \\ & Right Arm & **49.8** & **75.7** \\ & Right Leg & **44.5** & **71.6** \\ & Left Arm \& Leg & **62.0** & **86.0** \\ & Right Arm \& Leg & **60.2** & **83.7** \\ & Both Legs & **69.3** & **99.8** \\ & Full Torso & **88.4** & **122.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results for the Human3.6M dataset in occlusion scenarios. Our \(O_{2D}\) model denotes a model with identical parameters trained to fill in the occluded coordinates in 2D space. When a single limb is occluded in our scenarios, all keypoints belonging to that limb are occluded, e.g. for the arm this is the shoulder, elbow and wrist.
We report the predictions of the combined leg and torso lifter under non-occlusion scenarios as our final results due to their improved performance on N-MPJPE when compared to the left and right lifter. We hypothesise this is due to the left and right network predictions being at slightly different scales. This occurs in scenarios where one of the subject's sides is facing the camera, making this side appear to have a larger scale in 2D and thus a larger scale when predicted in 3D. The leg and torso network is able to mitigate this as it sees both the left and right segments of the pose.
### Limitations
Our proposed method enables 3D poses to be accurately retrieved even with incomplete 2D information thanks to our lift-then-fill approach. The main limitation of this approach, however, is that it is not currently capable of accurately dealing with all types of occlusion. For instance, in cases where one keypoint is missing, e.g. the left wrist, our current approach would discard the potentially valuable information of the left shoulder and left elbow 2D coordinates. This is because our partial 3D estimate would be produced using the right side and leg networks, both of which do not use these 2D coordinates. Furthermore, if across-body occlusions such as the left wrist and right ankle are present then our approach in its current form would not work. Additionally, we found our leg occlusion networks made some consistent errors, such as predicting someone as crouching or lunging during a sitting action. However, we must appreciate the fact that trying to predict the 3D coordinates of the legs from just the 3D torso is a highly difficult task. In future work, we plan to address these challenges by looking at a more robust lift-then-fill approach which is able to handle all types of occlusions.
## 5 Conclusion
In conclusion, our LInKs approach of 2D-3D lifting with partial 3D pose retrieval in occlusion scenarios can reduce the average error on popular 3D human pose datasets. Moreover, our work extends the applications of normalising flows in pose estimation by incorporating generative sampling, which enables the flow to learn a better-defined prior distribution of 2D poses. This, in turn, leads to a stronger likelihood of reconstructed 2D poses during training while also providing additional data to the model. Our approach differs from all prior approaches in that we lift individual parts of the 2D pose independently, specifically for the purpose of occlusion handling, where prior approaches would not work. Furthermore, we show that dealing with occlusion in 2D space is inferior to our occlusion handling process in 3D space. In the future, we plan to address the limitations of our approach and investigate adaptive network structures that can handle inputs with different dimensions to acquire a partial 3D pose in more scenarios. We hope our work can inspire more research investigating the difficult task of occlusion handling in unsupervised 2D-3D lifting.
|
2309.04992 | Mitigating Word Bias in Zero-shot Prompt-based Classifiers | Prompt-based classifiers are an attractive approach for zero-shot
classification. However, the precise choice of the prompt template and label
words can largely influence performance, with semantically equivalent settings
often showing notable performance difference. This discrepancy can be partly
attributed to word biases, where the classifier may be biased towards classes.
To address this problem, it is possible to optimise classification thresholds
on a labelled data set, however, this mitigates some of the advantages of
prompt-based classifiers. This paper instead approaches this problem by
examining the expected marginal probabilities of the classes. Here,
probabilities are reweighted to have a uniform prior over classes, in an
unsupervised fashion. Further, we draw a theoretical connection between the
class priors and the language models' word prior, and offer the ability to set
a threshold in a zero-resource fashion. We show that matching class priors
correlates strongly with the oracle upper bound performance and demonstrate
large consistent performance gains for prompt settings over a range of NLP
tasks. | Adian Liusie, Potsawee Manakul, Mark J. F. Gales | 2023-09-10T10:57:41Z | http://arxiv.org/abs/2309.04992v1 | # Mitigating Word Bias in Zero-shot Prompt-based Classifiers
###### Abstract
Prompt-based classifiers are an attractive approach for zero-shot classification. However, the precise choice of the prompt template and label words can largely influence performance, with semantically equivalent settings often showing notable performance difference. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards classes. To address this problem, it is possible to optimise classification thresholds on a labelled data set, however, this mitigates some of the advantages of prompt-based classifiers. This paper instead approaches this problem by examining the expected marginal probabilities of the classes. Here, probabilities are reweighted to have a uniform prior over classes, in an unsupervised fashion. Further, we draw a theoretical connection between the class priors and the language models' word prior, and offer the ability to set a threshold in a zero-resource fashion. We show that matching class priors correlates strongly with the oracle upper bound performance and demonstrate large consistent performance gains for prompt settings over a range of NLP tasks.1
Footnote 1: code available on github at [https://github.com/adianliusie/robust-prompt-classifier](https://github.com/adianliusie/robust-prompt-classifier)
## 1 Introduction
Large language models (LLM) have shown impressive general ability for natural language processing (NLP) tasks. LLMs can effectively handle a range of NLP tasks through 'prompting', where a natural language instruction is added to the input, conditioning the model to the task at hand. Prompting can either be an emergent ability learned through scaling up model size Brown et al. (2020); Wei et al. (2022) or an ability learned through instruction tuning Wei et al. (2021); Chung et al. (2022); Ouyang et al. (2022). Despite the recent popularity of prompting, there is a known sensitivity of prompt-based LLMs to elements such as prompt template and label words Gao et al. (2021); Schick and Schutze (2021). Previous works have demonstrated that prompt templates can significantly impact task performance Shin et al. (2020); Zhou et al. (2019) and that factors such as chosen label words can influence system performance for classification tasks Zhao et al. (2021); Holtzman et al. (2021).
This work focuses on the influence of 'word biases' on prompt-based classifiers, i.e. the bias that prompts may have towards certain classes, independent of the input text. To account for this bias, one could use a labelled dataset to find optimal class decision thresholds. This, however, requires labelled task data, which may limit the zero-shot benefits of prompt-based classifiers. We propose a simple unsupervised solution of re-weighting probabilities, where we use unlabelled data to search for weight parameters that ensure a uniform prior over classes. We show that this prior matching leads to greater
Figure 1: Instead of using the raw LM output probabilities of the label words, we consider mitigating bias by finding weights that make the classifier unbiased over classes. This is connected to normalising by word priors, which we use as a zero-resource de-biasing approach.
robustness for diverse prompt settings and that the unsupervised weights which debias the classifier are highly correlated with the oracle weights that maximise accuracy. Further, we provide theoretical analysis that draws a connection between word priors and inherent class bias, which we use to motivate a zero-resource normalisation approach that is competitive with prior matching. Overall, we demonstrate that our unsupervised approach greatly reduces sensitivity to the chosen prompt and label words, and that settings which initially fail can often be made effective through a simple probability re-weighting.
Our contributions are 1) We propose a simple unsupervised probability re-weighting method, and empirically demonstrate greater robustness to prompt and label word choice, with large accuracy gains across prompt settings for a range of standard NLP tasks. 2) We theoretically connect the weight parameters to word priors and use this to motivate a zero-resource re-weighting approach. 3) We show that the weights of prior matching are highly correlated with the optimal oracle weights that maximize accuracy, illustrating that our approach is a near-optimal use of a system's output probabilities.
## 2 Mitigating Bias by Re-weighting
**Prompt-based classifiers** Given an input sequence \(\mathbf{x}\in\mathcal{X}\), large language models (LLMs) model \(P_{\theta}(\mathbf{w}|\mathbf{x})\), the output probability distribution over all possible sequences \(\mathbf{w}\in\mathcal{X}\). For a classification task \(\mathcal{T}\), a prompt-based classifier 1) reformats the input text \(x\) to prompt \(\mathbf{p}\in\mathcal{X}\) by including the task instruction, and 2) selects class words \(\{w_{i}\}_{1:K}\) which are associated to each output class \(\{y_{i}\}_{1:K}\). For example in sentiment classification, one can use prompt _'what is the sentiment of the following review? <x>'_, (where <x> is the current input \(x\), e.g. _'Inception was absolutely brilliant'_), and class words \(w_{0}\)=_bad_ and \(w_{1}\)=_good_ for the negative and positive classes respectively. For a prompt classifier, \(Q=\{\mathbf{p},\{w_{i}\}_{1:K}\}\), class probabilities can be set to be proportional to the probability of the associated class word, where the final decision \(\hat{y}\) is the class with the highest probability (Zhao et al., 2021; Jiang et al., 2020).
\[\tilde{P}_{\theta}(y_{k}|\mathbf{x},Q) =\frac{P_{\theta}(w_{k}|\mathbf{p}(\mathbf{x}))}{\sum_{w_{i}}P_{ \theta}(w_{i}|\mathbf{p}(\mathbf{x}))} \tag{1}\] \[\hat{y} =\underset{k}{\text{argmax}}\ \tilde{P}_{\theta}(y_{k}|\mathbf{x},Q) \tag{2}\]
However, as a large language model, the prompt-based classifier may return probabilities that are influenced by distributional statistics of words (Gardner et al., 2021; Liusie et al., 2022). This may lead to inherent class bias, where label words may have high probability not because they better answer the prompt, but because they have a high LM prior.
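As a concrete illustration, a prompt-based classifier of this form can be sketched in a few lines of Python; the template, label words, and the assumption that each label word begins with a distinct single token are illustrative choices rather than the exact implementation used in this paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Class probabilities (Eqs. 1-2): take the model's probability of each label word
# at the first decoding step and renormalise over the K classes.
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def class_probabilities(text, template, label_words):
    prompt = template.format(x=text)
    inputs = tokenizer(prompt, return_tensors="pt")
    decoder_start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():
        logits = model(**inputs, decoder_input_ids=decoder_start).logits[0, -1]
    word_ids = [tokenizer(w, add_special_tokens=False).input_ids[0] for w in label_words]
    return logits[word_ids].softmax(dim=-1)   # renormalised over the label words

probs = class_probabilities(
    "Inception was absolutely brilliant",
    "what is the sentiment of the following review? {x}",
    ["bad", "good"],
)
```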
**Optimal Weights** To account for this, one can define weight parameters \(\boldsymbol{\alpha}=\{\alpha_{i}\}_{1:K}\), where each \(\alpha_{i}\in\mathbb{R}^{+}\) scales the probabilities of the classifier,
\[\hat{P}_{\theta}(y_{k}|\mathbf{x},Q,\boldsymbol{\alpha})=\frac{\alpha_{k} \tilde{P}_{\theta}(y_{k}|\mathbf{x},Q)}{\sum_{i}\alpha_{i}\tilde{P}_{\theta}(y _{i}|\mathbf{x},Q)} \tag{3}\]
Given labelled task dataset \(\mathcal{D}=\{(\mathbf{x}^{(j)},y^{(j)})\}_{j=1}^{N}\), one can then find the optimal weights \(\boldsymbol{\alpha}^{*}\) that maximises the accuracy of the prompt classifier \(\tilde{P}_{\theta}(y_{k}|\mathbf{x},Q,\boldsymbol{\alpha})\) over the dataset,
\[\boldsymbol{\alpha}^{*}=\underset{\boldsymbol{\alpha}}{\text{argmax}}\ \text{Accuracy}(Q,\boldsymbol{\alpha},\mathcal{D}) \tag{4}\]
**Prior-Matching** The previous approach requires labelled data, which may limit the benefit of using prompt-based classifiers. As an alternative, one can find the values \(\bar{\boldsymbol{\alpha}}\) that ensure that the classifier is unbiased, such that the class prior \(\hat{P}(y_{k}|Q,\boldsymbol{\alpha})\) matches the true prior \(P(y_{k})\)
\[\hat{P}_{\theta}(y_{k}|Q,\boldsymbol{\alpha}) =\mathbb{E}_{\mathbf{x}}\{\hat{P}_{\theta}(y_{k}|\mathbf{x},Q, \boldsymbol{\alpha})\} \tag{5}\] \[\approx\frac{1}{N}\sum_{j=1}^{N}\hat{P}_{\theta}(y_{k}^{(j)}| \mathbf{x}^{(j)},Q,\boldsymbol{\alpha}) \tag{6}\]
\[\bar{\boldsymbol{\alpha}}=\underset{\boldsymbol{\alpha}}{\text{argmin}}\ \sum_{\forall y_{k}}|\hat{P}_{\theta}(y_{k}|Q,\boldsymbol{\alpha})-P(y_{k})| \tag{7}\]
A deterministic solution that exactly matches the distributions exists, which can be found with a search with 1 degree of freedom (that can be accounted for by setting \(\alpha_{1}=1\)). If there is no expected class bias, one can assume equal probabilities over all classes, \(P(y_{k})=\mathcal{U}(y_{k})=\frac{1}{K}\). This approach is therefore unsupervised and only requires text inputs \(\mathcal{D}_{x}=\{\mathbf{x}^{(j)}\}_{j=1}^{M}\), and can therefore be applied at inference to any test set.
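One simple way to realise this search is sketched below; the multiplicative update is an illustrative choice, and any procedure that drives the estimated class prior towards the target prior would serve the same purpose.

```python
import numpy as np

def prior_match_weights(probs, n_steps=2000, step_size=0.1):
    """Minimal sketch of prior matching (Eq. 7): find weights alpha such that the
    reweighted classifier's average class distribution becomes uniform.

    probs: (N, K) array of unweighted class probabilities over unlabelled inputs.
    """
    n, k = probs.shape
    log_alpha = np.zeros(k)                         # alpha_1 = 1 removes the free scale
    for _ in range(n_steps):
        weighted = probs * np.exp(log_alpha)
        weighted /= weighted.sum(axis=1, keepdims=True)
        marginal = weighted.mean(axis=0)            # estimated class prior (Eq. 6)
        log_alpha -= step_size * (np.log(marginal) - np.log(1.0 / k))
        log_alpha -= log_alpha[0]                   # re-anchor the first weight
    return np.exp(log_alpha)
```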
**Null-Input Approximation** The dependence of prior-matching on unlabelled dataset \(\mathcal{D}_{x}\) is a drawback. In Appendix A, we show that one can make the analytical approximation
\[\bar{\alpha}_{k}\approx\frac{1}{\mathbb{E}_{x}\{P_{\theta}(w_{k}|\mathbf{x},Q)\}}=\frac{1}{P_{\theta}(w_{k}|Q)} \tag{8}\]
Inspired by Zhao et al. (2021), we consider a resource-free approximation of the word prior (equation 8) by considering the output word probabilities of the null input \(\emptyset\) (i.e. an empty string).
\[P_{\theta}(w_{k}|Q)\approx P_{\theta}(w_{k}|\mathbf{p}(\emptyset)) \tag{9}\]
This enables a zero-resource approximation of weight parameters \(\bar{\boldsymbol{\alpha}}\).
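Putting the pieces together, the zero-resource weights and the re-weighted decision rule of equation 3 can be sketched as follows, assuming the label-word prior has already been estimated (for instance with the classifier helper above applied to an empty input string).

```python
import numpy as np

def null_input_weights(word_prior):
    """Sketch of the zero-resource weights (Eqs. 8-9): word_prior[k] is the model's
    probability of label word k given the prompt built from an empty (null) input."""
    weights = 1.0 / np.asarray(word_prior)
    return weights / weights[0]          # fix alpha_1 = 1, since only ratios matter

def reweighted_prediction(class_probs, weights):
    """Apply Eq. 3 and return the predicted class index."""
    scores = np.asarray(class_probs) * weights
    return int(np.argmax(scores / scores.sum()))
```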
## 3 Experiments
### Experimental Setup
**Data** Experimental results are run on standard NLP benchmarks, including sentiment classification (_IMDB_ (Maas et al., 2011), _Rotten Tomatoes_ (Pang and Lee, 2005) and _Amazon_), natural language inference (_SNLI_ (Bowman et al., 2015) and _MNLI_ (Williams et al., 2018)) and paraphrase detection (_QQP_ (Wang et al., 2018)). Evaluation is reported on standard test sets, except for Amazon polarity, where 5000 test examples were randomly sampled.
**Models** We use FlanT5 large2 (Chung et al., 2022), a T5 model with a further instruction-tuning stage in which the system was trained in a multi-task fashion over 1,836 tasks, each prepended with a natural instruction prompt. This work evaluates FlanT5 on different NLP tasks with arbitrary prompting setups. For each task, we select 6 prompt templates and, for binary classification tasks, consider 25 possible class-word pairs, while for NLI we have 64 class-word triplets (where all permutations of valid class words are considered). All prompts and label words used are given in Appendix B. Further experiments for FlanT5 base and Llama-2-chat can be found in Appendix D.
Footnote 2: [https://huggingface.co/google/flan-t5-large](https://huggingface.co/google/flan-t5-large)
**Methods** We consider 4 different methods to leverage LLM probabilities for classification. Class word probability via equation 1 (**baseline**). Normalised probabilities calculated using null-input priors via equation 9 (**null-input**). Optimising \(\alpha_{k}\) with a search to obtain an unbiased class prior via equation 7 (**prior-match**). The oracle upper-bound performance, found by searching for the accuracy-maximising thresholds via equation 4 (**optimal**).
### Experimental Results
**Classification Robustness** Table 1 shows the mean and standard deviation of accuracies among all prompt and class word settings for a given task. We observe large consistent gains from both re-weighting approaches, with prior-matching increasing baseline accuracy by between 6.7% to 12.1% for sentiment classification, 13.7% for qqp, and over 25% for natural language inference. Prior-matching also demonstrates performance very similar to the oracle upper-bound, often within 1%, showing that the unsupervised prior-match approach is competitive with the supervised threshold search. Prior-matching also performs better than null-input by a small margin in all tasks, where this small gap confirms that the word-prior normalisation is a very reasonable zero-shot approximation.
**Prompt Robustness** Figure 2 illustrates a boxplot of rotten tomatoes performance over all class-words for each considered method, over all 6 prompts. As observed in Table 1, naively using raw label word probabilities (dark blue) leads to considerable fluctuations in accuracy; some prompt and label word settings lead to reasonable accuracy (92%+ accuracy), however there is observable brittleness to label word choice, with many settings demonstrating poor performance. Prior matching (green) leads to significant robustness, with nearly all sensible settings above 85% accuracy. We further find that, as shown in Table 1, the unsupervised approach has accuracies very comparable to those
\begin{table}
\begin{tabular}{c c c|c c c c c c} \hline \hline method & inputs & labels & imdb & rt & amazon & snli & mnli & qqp \\ \hline baseline & ✗ & ✗ & 85.4\(\pm\)12.7 & 78.8\(\pm\)14.0 & 86.0\(\pm\)13.8 & 45.2\(\pm\)13.7 & 43.5\(\pm\)11.3 & 65.4\(\pm\)14.0 \\ null-input & ✗ & ✗ & 92.1\(\pm\)3.2 & 89.1\(\pm\)3.8 & 95.0\(\pm\)1.8 & 75.2\(\pm\)10.4 & 66.1\(\pm\)9.7 & 77.4\(\pm\)6.6 \\ prior-match & ✓ & ✗ & 93.1\(\pm\)3.3 & 90.9\(\pm\)1.6 & 96.0\(\pm\)0.8 & 78.5\(\pm\)9.3 & 69.8\(\pm\)9.7 & 79.1\(\pm\)2.4 \\ optimal & ✓ & ✓ & 93.5\(\pm\)2.7 & 91.2\(\pm\)1.5 & 96.1\(\pm\)0.7 & 79.4\(\pm\)8.2 & 70.8\(\pm\)8.6 & 82.3\(\pm\)2.8 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average dataset accuracy and standard deviations, over all prompts and label words. **baseline** and **null-input** are zero-resource classification methods, **prior matching** uses the text inputs but not labels, while **optimal** is an oracle approach that uses the labels to search for the best thresholds. Results for FlanT5 large
when using optimal thresholds.
In Figure 3, we consider similar boxplots for SNLI and observe larger gains through reweighting. This is because higher probabilities are often assigned to the entailment and contradiction label words, leading to under-classification of the neutral class. We observe greater sensitivity to prompt choice and label words for SNLI than for Rotten Tomatoes, even with reweighting.
**Weight Alignment** Figure 4 shows a scatter plot of the weights found by the optimal threshold search \(\mathbf{\alpha}^{*}\) (equation 4), with those found from the unsupervised prior matching method \(\bar{\mathbf{\alpha}}\) (equation 7) and the zero-resource word prior approximation (equation 9). We see a clear linear relationship between optimal and prior-match, illustrating that accounting for the marginal bias is almost equivalent with maximising accuracy, however, achieved in an unsupervised fashion. Null-input is also well correlated with the optimal thresholds, but there is a less direct relationship. Similar linear relationships are observed also for other binary-classification tasks and prompts, as shown in Appendix C.
## 4 Conclusions
This paper analyzes prompt-based classifiers and demonstrates that inherent class bias is a significant factor that influences the sensitivity of the system to prompt and label words. We propose an unsupervised approach of prior matching, which we demonstrate performs competitively to the supervised alternative of searching for optimal thresholds, while avoiding the need for labelled data. We
Figure 4: Scatter plot of the optimal weights \(\mathbf{\alpha}^{*}\) (equation 4) with the prior match weights \(\bar{\mathbf{\alpha}}\) (equation 7) and the approximation via null-input (equation 9), for all settings of prompt 1 on **amazon**
Figure 3: boxplots of the accuracy of all label-word sets for **snli**, for the first 3 prompts
Figure 2: boxplots of the accuracy of all label-word pairs for **rotten tomatoes**, over all the considered prompts
relate prior matching with word biases, and motivate a zero-resource approach of debiasing model probabilities. We show that our methods lead to practical approaches that reduce the sensitivity to design choices such as prompts and label words.
## Limitations
This work considered sentiment classification, natural language inference, and paraphrase detection, and could have been extended over a greater suite of tasks to guarantee its effectiveness. Further, this paper ran experiments on FlanT5 and Llama2, and this work has not yet explored a larger range of prompted language models. FlanT5 has also been instruction-tuned on similar tasks, so the findings may be limited in scenarios where known capabilities have to be elicited from models robustly.
## Ethical Considerations
Though this work suggests methods to improve the robustness of prompt-based classifiers to prompts and label words, this does not imply that all design choices will work. In some setups, the system may be ineffective and have poor generalisation over the task. Deploying machine learning classifiers in real-world classification settings has many associated risks, and careful analysis should be carried out before deploying such systems.
## Acknowledgements
This work is supported by Cambridge University Press & Assessment (CUP&A), a department of The Chancellor, Masters, and Scholars of the University of Cambridge, and the Cambridge Commonwealth, European & International Trust.
|
2309.05454 | Flesch or Fumble? Evaluating Readability Standard Alignment of
Instruction-Tuned Language Models | Readability metrics and standards such as Flesch Kincaid Grade Level (FKGL)
and the Common European Framework of Reference for Languages (CEFR) exist to
guide teachers and educators to properly assess the complexity of educational
materials before administering them for classroom use. In this study, we select
a diverse set of open and closed-source instruction-tuned language models and
investigate their performances in writing story completions and simplifying
narratives--tasks that teachers perform--using standard-guided prompts
controlling text readability. Our extensive findings provide empirical proof of
how globally recognized models like ChatGPT may be considered less effective
and may require more refined prompts for these generative tasks compared to
other open-sourced models such as BLOOMZ and FlanT5--which have shown promising
results. | Joseph Marvin Imperial, Harish Tayyar Madabushi | 2023-09-11T13:50:38Z | http://arxiv.org/abs/2309.05454v2 | # _Flesch or Fumble?_ Evaluating Readability Standard Alignment of
###### Abstract
Readability metrics and standards such as Flesch Kincaid Grade Level (FKGL) and the Common European Framework of Reference for Languages (CEFR) exist to guide teachers and educators to properly assess the complexity of educational materials before administering them for classroom use. In this study, we select a diverse set of open and closed-source instruction-tuned language models and investigate their performances in writing story completions and simplifying narratives--tasks that teachers perform--using standard-guided prompts controlling text readability. Our extensive findings provide empirical proof of how globally recognized models like ChatGPT may be considered less effective and may require more refined prompts for these generative tasks compared to other open-sourced models such as BLOOMZ and FlanT5--which have shown promising results1.
Footnote 1: Code and data: [https://github.com/imperialite/readability-standard-alignment/](https://github.com/imperialite/readability-standard-alignment/)
## 1 Introduction
The introduction of public-facing text generative models with easy-to-use interfaces, such as ChatGPT by OpenAI, Perplexity Ask by Perplexity AI, and Bard by Google, has catalyzed the research progress of large language models (LLMs) that can follow and execute complex instructions in human language. This particular advantage over regular language models has seen a rapid growth of appreciation and utilization across a number of disciplines and sectors, such as medicine and healthcare Thirunavukarasu et al. (2023); Singhal et al. (2023), teaching and assessment in education Tack and Piech (2022); Kasneci et al. (2023); Wang and Demszky (2023), business and e-commerce Paul et al. (2023), and software development Chen et al. (2021); Roziere et al. (2023); Muennighoff et al. (2023) to name a few.
One of the primary drivers of this advancement in LLMs is _instruction tuning_. This process involves finetuning an LLM on a diverse collection of multi-task corpora transformed in an instruction-answer pair format, which in turn allows the model to learn and improve upon tasks it was not trained on Wei et al. (2021); Wang et al. (2023). In the same vein, other advancements explored the involvement of human raters where a reward-driven language model learns from the aggregated preferences and is incentivized through reinforcement learning if its generated content from a series of executed instructions is acceptable Ziegler et al. (2019); Ouyang et al. (2022). These training methodologies, in essence, allow LLMs to have some form of knowledge in relation to what aligns with humans and bridge the gap between the LLM-oriented goal of next token prediction and a user-oriented objective. Likewise, specifications from various instruction-answer corpora act as signals of constraint to control a model's output Zhang et al. (2023).
However, one of the main research gaps that these powerful instruction-following models may need to be rigorously tested with is the _ability to capture human standards_. Standards or domain-specific frameworks are expert-defined sets of rules that humans follow in various interdisciplinary fields. For example, a teacher must be properly knowledgeable of assessment standards such as the Common European Framework of Reference for Languages (CEFR) for evaluating the quality of text-based educational content before they can use it in a classroom setting Jones and Saville (2009). Therefore, if LLMs such as ChatGPT are to be utilized to generate educational content for the teacher, then it would be ideal for these models to be evaluated or trained based on how they accept inputs, such as prompting or finetuning, to acquire some form of knowledge of how CEFR works and how it is used to assess the quality of texts.
In this work, we tackle the main research question: **To what extent can instruction-tuned large language models capture readability level specifications from prompts and reflect it to the generated content?** Towards this end, our major contributions are as follows:
1. To the best of our knowledge, our work is the first to explore the readability-alignment capabilities anchored on realistic standards such as the Flesch-Kincaid Grade Level and the Common European Framework of Reference for Languages (CEFR) of a diverse set of open and close-sourced instruction-tuned large language models.
2. Our findings provide empirical and quantitative evidence of the true performances of models such as ChatGPT, FlanT5, and Llama for the tasks of story completion and simplification often performed by non-technical users such as teachers to produce classroom-ready content.
## 2 Readability Standard Alignment of Large Language Models
### Background
Instruction-tuned language models are developed to be used by the wider non-technical and interdisciplinary audiences of the general public. As such, users may impose or desire to have current domain-specific and expert-outlined standards in their respective fields integrated into these models for seamless use. For example, simple text prompts with grade-level specifications such as _"Write a story for second-grade readers."_ are often used and suggested by academic groups for teachers and educators who want to produce classroom-ready materials using commercial generative tools such as ChatGPT [11, 12]. This notion, however, assumes that these models already have some knowledge of how text readability assessment metrics, such as Flesch Kincaid Grade Level, work and also assumes that they can generate any text conforming to any readability level specification on the fly. In this study, we put this assumption to stringent tests and formally frame the task as evaluating for _readability standard alignment_. We discuss our experimental procedures in this section concerning the choice of instruction-tuned models to be investigated, metrics for evaluation, and corpora for prompting generations from models.
### Selected Models
We explore a diverse set of open and closed-source instruction-tuned large language models to assess their capability to follow readability specifications from the prompts and reflect it to their generated content. We consider a model's _standard_ size with respect to the selection that will be included in our main experiments. For example, if Llama 2 has multiple models ranging from 7B, 13B, and 70B, we select the one with 7B parameters as this is considered the base model that is accessible by most. To further clarify, we did not perform any finetuning method as these models are already finetuned towards maximizing their instruction-following capabilities.
**Llama 2**[13] is an improved version of the original Llama 1 model [13] with an added mix of publicly available online data and pretrained with over 2T tokens with a context length of 4096. Specifically, we use the 7B model2 finetuned for chat with over 1M human annotations using the Reinforcement Learning from Human Feedback (RLHF) method [10].
Footnote 2: [https://huggingface.co/meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
**FlanT5**[14] is another enhanced instruction-tuned language model built on top of the T5 model [12] with 11B parameters. For this study, we use the FlanT5-Base model3 hosted in Huggingface with 250M parameters and trained with over 14M examples from instruction datasets including Muffin [15], T0-SF [16], and Natural Instructions V2 [14].
Footnote 3: [https://huggingface.co/google/flan-t5-base](https://huggingface.co/google/flan-t5-base)
**BLOOMZ**[13] by BigScience4 is an enhanced version of the multilingual language model BLOOM [12] through finetuning on xP3 which is a compilation of multilingual multitask learning datasets in 46 languages with English prompts. We use the standard 3B model5 hosted on Huggingface for our experiments. We included this multilingual
language model in our study to diversify the models being investigated and see if finetuning on multilingual instruction-tuned datasets can affect the performances for our complexity-specific prompting tasks.
**Longform-T5**Koksal et al. (2023) is a recent model finetuned using the Longform dataset on top of the various architectures such as T5-XL, OPT, and Llama 1. The Longform dataset contains over 27,739 LLM-generated instructions and long text pairs from parsed structured corpora and reformulated NLG tasks derived from existing corpora such as C4 Raffel et al. (2020), WikiHow Koupaee and Wang (2018), BigBench Srivastava et al. (2023), and StackExchange Longpre et al. (2019). We use the standard 3B T5-XL model6 hosted on Huggingface for this study.
Footnote 6: [https://huggingface.co/akoksal/LongForm-T5-XL](https://huggingface.co/akoksal/LongForm-T5-XL)
**Dolly** is one of the earlier instruction-tuned models released subsequently after ChatGPT. The model is finetuned with a publicly accessible dataset containing 15K human-generated prompting pairs collated by Databricks conforming to tasks such as classification, closed and open QA, summarization, and trained on top of EleutherAI's 3B Pythia model Biderman et al. (2023). We use the standard 3B model7 for this study available on Huggingface.
Footnote 7: [https://huggingface.co/databricks/dolly-v2-3b](https://huggingface.co/databricks/dolly-v2-3b)
**ChatGPT (GPT-3.5-Turbo)** is the only closed-source model we consider within our computing budget. We include this model in our experimentation since ChatGPT is globally recognized and one of the few models with a publicly accessible interface. For this study, we use the latest regular-sized GPT-3.5-Turbo context model covering up to 2021 in its training data through the OpenAI API8. We label this model as _closed-source_ since there are no publicly available reports about its data and training procedures.
Footnote 8: [https://platform.openai.com/docs/guides/gpt](https://platform.openai.com/docs/guides/gpt)
### Assessment Standards as Evaluation Metrics
We select two standard metrics used by teachers and educators in assessing the quality and complexity of texts in a classroom setting described below:
**Flesch Kincaid Grade Level (FKGL)**Kincaid et al. (1975) is a simple but long-standing readability formula used in all aspects of text quality assessment both in globally recognized text editing software such as Microsoft Word as well as in text complexity and simplification research Wubben et al. (2012); Shardlow (2014); Scarton and Specia (2018); Alva-Manchego et al. (2020); Maddela et al. (2021); Alva-Manchego et al. (2021); Tanprasert and Kauchak (2021). Derived from the original Flesch Reading Ease formula Flesch (1948), FKGL considers surface-level variables such as the total number of words \(TW\), sentences \(TS\), and syllables \(TSL\). In terms of output, FKGL provides a score \(x\) within the range \([0,18]\), where lower values indicate easier readability (e.g. short stories) and higher values denote increased complexity (e.g. academic papers). We show the formula of FKGL below:
\[FKGL=0.39(\frac{TW}{TS})+11.8(\frac{TSL}{TW})-15.59 \tag{1}\]
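For reference, the formula can be computed with a few lines of Python; the syllable counter below is a rough vowel-group heuristic for illustration rather than a dictionary-based one.

```python
import re

def flesch_kincaid_grade(text):
    """Minimal sketch of the FKGL formula (Eq. 1)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouyAEIOUY]+", w))) for w in words)
    ts, tw = max(1, len(sentences)), max(1, len(words))
    return 0.39 * (tw / ts) + 11.8 * (syllables / tw) - 15.59

print(flesch_kincaid_grade("The cat sat on the mat. It was a sunny day."))
```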
**Common European Framework of Reference for Languages (CEFR)9** is one of the most well-known language learning assessment metrics globally developed by the Council of Europe and is often used as a basis to grade complexity levels of reading materials and educational content for foreign language learners. CEFR uses a six-point reference scale (A1, A2, B1, B2, C1, C2), which denotes increasing levels of complexity when used to grade texts for various learners. In order to identify the CEFR levels of the generated texts of the instruction-following LLMs used in the study, we use the separate SVM classifier model from the work of Xia et al. (2016) trained with the Cambridge Exams dataset composed of CEFR-ready data from A2 to C2. The SVM model was developed by extracting over \(150\)+ linguistic features ranging from traditional, lexico-semantic, parse tree, and discourse-based features and performs at an accuracy of \(0.803\), as reported in the paper. We tried training the feature set using an optimized Random Forest, which obtained a higher accuracy of \(0.836\) and used this model instead for this work.
### The European Language Grid (ELG) Data
For this study, we requested the CEFR corpus from the **European Language Grid (ELG)10** compiled by Breuker (2022) which contains over \(1,200\) text passages from a diverse range of genres such as fiction, science, and history distributed over the six CEFR scales (A1 to C2). From the data, we selected only those text passages that strictly belong to one scale (ex. C2) and disregarded the A1 level due to having only \(24\) documents and to also conform to the CEFR classifier by Xia et al. (2016) used for generation analysis. We balanced the number of entries for each level (60) in order to have a uniform distribution and even comparison for later discussion of results.
Footnote 10: [https://live.european-language-grid.eu/catalogue/corpus/9477](https://live.european-language-grid.eu/catalogue/corpus/9477)
We describe in Table 1 an overview and some basic statistics of the collected ELG dataset. From the Table, a linear relationship can be observed where as the CEFR complexity level increases from A2 to C2, the variables of average word count, sentence count, and corresponding FKGL levels also accumulate.
## 3 Prompt-Based Story Completion
Our first choice of generation task to measure the generation quality of instruction-following language models is the open-ended story completion. We selected this task as it aligns with the natural task of teachers prompting language model-driven interfaces such as ChatGPT for educational content generation such as stories or short narratives Kasneci et al. (2023); Whalen et al. (2023).
### Procedure
For the prompt-based story completion setup, we split each narrative entry from the ELG corpus into prompt-continuation pairs. Each prompt is composed of \(50\)-\(70\) words to provide enough context for the language models, and we set the specifications for each model to generate text with a minimum of \(30\) and a maximum of \(300\) new tokens, respectively. In terms of decoding, we set the nucleus sampling hyperparameter top-\(p\) to \(0.95\) following the recommendation of DeLucia et al. (2021) stating a value of \(0.9\) or higher is the best for narrative generation.
As reported in Table 2, we use four styles of instructional prompting where specific grade levels, the name of the assessment framework, and its description are added iteratively to find out if the increasing information on readability specification will be captured and have a substantial effect on the complexities of instruction-following models' generation quality. We customized the different levels of instructional prompts for both the FKGL and CEFR assessment standards. We replace the {text} token with the prompts from the ELG corpus before sending the entire instruction to each model for generation.
### Results and Insights
Figures 1 and 2 report the performances of the six instruction-tuned models for the story completion task evaluated using the FKGL and CEFR. Actual values from the formula are used for FKGL, while accuracy scores are used to report a model's performance for CEFR. We include additional tables for the mean and standard deviations of FKGL scores in Appendix A.
**Instruction-tuned models struggle in story completion using FKGL specifications**. Using the FKGL as guiding information for generating story completions for Grade 2, none of the models in any of the prompt iterations with increasing readability information specification achieved acceptable performance that is within the range of \(1<\text{FKGL}(x)\)\(<3\). This finding may indicate that formula-based text complexity metrics aside from FKGL, such as SMOG Mc Laughlin (1969), Dale-Chall Dale and Chall (1948), and Coleman-Liau Index Coleman and Liau (1975) that use other forms of predictors beyond total word, sentence, and syllable counts may also not be captured well by instruction-tuned language models unless an explicit series of
\begin{table}
\begin{tabular}{l r r r r} \hline \hline
**Levels** & **Size** & **Ave WC** & **Ave SC** & **Ave FKGL** \\ \hline A2 & 60 & 186.55 & 18.91 & 3.32 \\ B1 & 60 & 264.25 & 15.90 & 6.83 \\ B2 & 60 & 517.71 & 31.71 & 6.91 \\ C1 & 60 & 728.93 & 40.70 & 8.61 \\ C2 & 60 & 749.73 & 37.55 & 9.88 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of ELG dataset for used prompting instruction-following LLMs. Size denotes the number of document instances per level, Ave WC is the average word count, Ave SC is the average sentence count, and Ave FKGL is the average Flesch Kincaid Grade Level score.
computation is provided within the prompts. This limitation may prove to be counter-intuitive, as the desired goal is to have the models approximate the readability levels internally to guide their generations rather than relying on explicit computations from the user, but nonetheless, it remains an interesting research challenge.
Going deeper into the analysis, we look at the mean and standard deviations of each model for each iteration style. Without any specifications of grade level, metric, and description, ChatGPT (GPT-3.5-Turbo) achieved the worst performance with a mean of \(8.832\)\((SD=1.549)\) for its FKGL scores from its generations while FlanT5 obtained the closest to the desired range \(1<\) FKGL(_x_) \(<3\) with \(5.133\)\((SD=2.063)\). Interestingly, while none of the models were able to provide generations within the acceptable boundary for FKGL, we observe that only one model, ChatGPT (GPT-3.5-Turbo), showed stable _improving_ scores with the increasing detailedness of the readability information specification in the prompts with a mean trend of \(8.832\)\(\rightarrow\)\(5.155\)\(\rightarrow\)\(5.224\)\(\rightarrow\)\(4.567\). We attribute the performance of this model to its implementation of RLHF to improve alignment to human preferences across a range of tasks (Ouyang et al., 2022). Moreover, since this model is the only one in the set to have a public-facing interface that teachers and educators use, this finding provides empirical support to the various published recommendations by the education community (Staake, 2023; Herft, 2023) to further _specify_ the readability level and assessment framework
Figure 1: Performance via mean Flesch Kincaid Grade Level (FKGL) scores of each instruction-tuned language model for each prompt specification style for the **story completion subtask**. The **red** line and shading indicate the center and the region of acceptable values that are within the target complexity level of the generated text, which is Grade 2.
Figure 2: Performance via accuracy scores of each instruction-tuned language model for each prompt specification style for the **story completion subtask** on the Common European Framework of Reference for Languages (CEFR) standard. The top performing model is highlighted in **dark blue**.
of choice when using these models for content generation, especially ChatGPT.
**Publicly accessible instruction-tuned models show promising results for alignment with CEFR.** Using CEFR as the guiding standard for readability level specification, we see favorable results from open-sourced models such as BLOOMZ, FlanT5, Llama 2, and Longform, which all include extremely diverse instruction-tuned datasets for their finetuning phase. FlanT5 obtained the best performance for no specification prompts with \(0.85\) accuracy while BLOOMZ performs the best of all models for prompts that specify target grade level and assessment metric name with \(0.84\) and \(0.83\) accuracies, respectively. Longform and Llama 2, on the other hand, have the most observable improvements across the board, where the accuracies for generating aligned story completions with respect to the prompts increases linearly as the information on readability is expanded: \(0.54\to 0.65\to 0.63\to 0.81\) for Longform and \(0.28\to 0.56\to 0.64\to 0.62\) for Llama 2.
In terms of poorly performing models, ChatGPT and Dolly obtained \(0-13\%\) accuracies across all prompts. Upon manual inspection of the generated outputs of these two models, we see a misclassification rate of over \(90\%\) from these models due to the tendency that they produced outputs are one level higher than the target level, which is B1 instead of A2 in the CEFR scale. This finding means that these models lack precision in generation with respect to the prompt readability specifications compared to other open-sourced models like BLOOMZ and Llama 2 for the CEFR scale. While we do not know what datasets were used for training ChatGPT as it is closed-source, we attribute the poor performance of Dolly to the very limited variety of instruction datasets with a size of only 15K used for its finetuning compared to the diverse multi-task data used in FlanT5, Longform, Llama 2, and BLOOMZ (Muennighoff et al., 2023, Chung et al., 2022, Koksal et al., 2023, Touvron et al., 2023)
## 4 Prompt-Based Narrative Simplification
Our second choice of generation task is to measure the capability of instruction-following language models to simple short text passages and narratives into a target readability level. Similar to story completion, this task is also aligned with how teachers can use these models to simplify a piece of educational content if it is too complex for a target learner audience (Kasneci et al., 2023, Whalen et al., 2023, Pu and Demberg, 2023).
### Procedure
For narrative simplification, we select only the advanced levels on the CEFR scale, which are C1 and C2, from the ELG dataset. The justification for this is that since the task is simplification, we want the initial text to come from a higher level. A total of \(120\) advanced-level entries were obtained, and
\begin{table}
\begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline
**Prompt Style** & **Prompt Content** \\ \hline No grade level specifications. & (_Write a story using the following prompt_) \\ & [_Simplify the following narrative_] \\ & [_Simplify the following narrative_] \\ & [_text_] \\ \hline Mentions specific grade level (Grade 2 or A2). & (_Write a story that is readable by Grade 2 learners using the following prompt_) \\ & [_Simplify the following narrative for Grade 2 learners_] \\ & [_text_] \\ & [_Write a story that is readable by A2 learners in the following prompt_) \\ & [_Simplify the following narrative for A2 learners_] \\ \hline Mentions specific grade level and name of the framework (FKG or CEFR). & (_Write a story that is readable by A2 learners in the CEFR scale using the following prompt_) \\ & [_Simplely the following narrative for Grade 2 learners in the CEFR scale using the following prompt_) \\ & [_Simplify the following narrative for Grade 2 learners in the CEFR scale_] \\ \hline Mentions specific grade level, name of framework (FKG), and of CEFR, and description. & (_Write a story that is readable by A2 learners in the CEFR scale using the following prompt. Text assessed at A2 level in CEFR uses basic sentence patterns, explicit information and a limited number of information points_) \\ & [_Simplify the following narrative for Grade 2 readers in the Fleech-Kinacial Grade scale. The Fleech-Kinacial Grade scale looks at total words, total sentences, and total syllables in a text_] \\ & (_Write a story that is readable by A2 learners in the CEFR scale using the following prompt. Text assessed at A2 level in CEFR uses basic sentence patterns, explicit information and a limited number of information points_) \\ & [_Simplify the following narrative for A2 learners in the CEFR scale. Text assessed at A2 level uses basic sentence patterns, explicit information, and limited number of information points_] \\ & [_Simplify the following narrative for A2 learners in the CEFR scale. Text assessed at A2 level uses basic sentence patterns, explicit information, and limited number of information points_] \\ & \{text\} \\ \hline \hline \end{tabular}
\end{table}
Table 2: The various iterations of instructional prompts used for the generation setup of the (**story completion**) and [**narrative simplification**] tasks with respect to information of grade level, framework, and description specifications.
we split each one to keep the first \(100\)-\(150\) words, which are appended to the instructional prompts for simplification. We specify that the models should generate a minimum of \(30\) and a maximum of \(300\) new tokens, with a nucleus sampling hyperparameter top-\(p\) of \(0.95\). Similar to story completion, we use four styles of instructional prompting that vary the specification of the grade level, the name of the assessment framework, and its description, as reported in Table 2.
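For reproducibility, a minimal sketch of this generation setup is shown below, assuming the Hugging Face `transformers` library; the checkpoint, prompt wording, and passage are illustrative placeholders rather than the exact configuration used in our experiments.

```python
# Sketch of the narrative simplification generation setup (illustrative only).
# Assumes the Hugging Face `transformers` library; checkpoint and texts are placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "google/flan-t5-base"  # one of the instruction-tuned model families evaluated
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

passage = "..."  # first 100-150 words of a C1/C2-level ELG passage
prompt = (
    "Simplify the following narrative for A2 learners in the CEFR scale. "
    "Text assessed at A2 level uses basic sentence patterns, explicit information, "
    "and a limited number of information points.\n" + passage
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    min_new_tokens=30,   # generate at least 30 new tokens
    max_new_tokens=300,  # and at most 300
    do_sample=True,      # nucleus sampling
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```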
### Results and Insights
Figures 3 and 4 report the performances of the six instruction-tuned models for the narrative simplification task evaluated using FKGL and CEFR. Actual values from the formula are used for FKGL, while accuracy scores are used to report a model's performance for CEFR. We include additional tables with the means and standard deviations of FKGL scores in Appendix A.
**Instruction-tuned models also struggle in the simplification task using FKGL specifications**. Referring back to the average FKGL scores per CEFR level presented in Table 1, the advanced C1 and C2 levels have means of \(8.91\) and \(9.88\), respectively, while the target level for this narrative simplification task is A2 with \(3.32\). Looking at the performances illustrated in Figure 3, similar to the story completion subtask, we see that controlling for the readability level, regardless of how informative the prompt is, proves to be challenging for all instruction-tuned models evaluated in the study.
Figure 4: Performance via accuracy scores of each instruction-tuned language model for each prompt specification style for the **narrative simplification subtask** on the Common European Framework of Reference for Languages (CEFR) standard. The top performing model is highlighted in **dark blue**.
Figure 3: Performance via mean Flesch Kincaid Grade Level (FKGL) scores of each instruction-tuned language model for each prompt specification style for the **narrative simplification subtask**. The **red** line and shading indicate the center and the region of acceptable values that is within the target complexity level of the generated text, which is Grade 2.
Models including BLOOMZ, Longform, FlanT5, and Dolly all show similar patterns of inconsistency across all four prompt styles with various levels of readability specification. While none of the models were able to produce generations within the acceptable range of \(1<\mathrm{FKGL}(x)<3\) for narrative simplification, the ChatGPT and Llama 2 models show improving scores as the readability information provided with the prompt is enhanced, with \(9.570\to 5.285\to 5.390\to 5.210\) and \(8.221\to 6.137\to 6.471\to 6.339\) for each model, respectively. We also report differences of \(4.36\) and \(1.882\) between the prompt with no specification of the target readability level and the prompt with the readability level, metric name, and description, for ChatGPT and Llama 2, respectively.
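To make this evaluation concrete, the sketch below scores generations with the FKGL formula via the `textstat` package (an assumption on tooling; any FKGL implementation would do) and checks membership in the acceptable band \(1<\mathrm{FKGL}(x)<3\) around the Grade 2 target.

```python
# Sketch of the FKGL-based evaluation: score each generation and check whether it
# falls in the acceptable band 1 < FKGL < 3 around the Grade 2 target (illustrative).
import textstat

def evaluate_fkgl(generations):
    scores = [textstat.flesch_kincaid_grade(text) for text in generations]
    mean_fkgl = sum(scores) / len(scores)
    within_band = sum(1 < s < 3 for s in scores) / len(scores)
    return mean_fkgl, within_band

generations = ["The cat sat on the mat. It was happy."]  # placeholder model outputs
mean_fkgl, frac_ok = evaluate_fkgl(generations)
print(f"mean FKGL = {mean_fkgl:.2f}, fraction in (1, 3) = {frac_ok:.2%}")
```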
From this finding, we echo the same inference as in the story completion task: the reason these models were not able to fully capture the desired reading level in their generations can be attributed to the need for actual computed readability information in the prompt. We also attribute the improvement shown by ChatGPT and Llama 2 to the efficacy of the RLHF algorithm and rejection sampling (Ouyang et al., 2022; Touvron et al., 2023) used for optimizing these models, which may have helped refine generation quality as the prompt becomes more informative. Still, we encourage specifying the necessary information about the target audience's reading level and the type of assessment used when prompting models in order to minimize the generation of overly complex texts.
**Top performing instruction-tuned models for story completion are also good at the narrative simplification task**. Using the CEFR framework to guide instruction-tuned models for narrative simplification obtained better results in general compared to using FKGL. We report the accuracies of models in simplifying advanced-level passages from the C1 and C2 levels of the ELG corpus down to the desired readability level of A2 in Figure 4. From the results, FlanT5 is the best model, with consistent performances across all prompts and an average accuracy of \(98\%\)--even for the ones without specification of the target reading level. We cross-examined existing literature and found several works that support T5-based models' general performance for sentence- and narrative-level simplification in English (Sun et al., 2023; Maddela et al., 2023). The next best-performing models are ChatGPT, BLOOMZ, Longform, and Llama 2, which all showed consistent minor improvements as the prompts became more detailed by adding the specific name of the framework and the characteristics of the target readability level. Lastly, the Dolly model performed the worst for the task, with an accuracy not exceeding \(10\%\). Upon manually reviewing the outputs of this model, we see that most of its generations are classified one level higher, B1, than the target reading level, A2. We attribute this poor performance to the low diversity of the instruction dataset used for Dolly compared to the collection of multitask corpora used for finetuning FlanT5 models (Chung et al., 2022).
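The accuracy computation itself is straightforward; the sketch below tallies how many outputs a CEFR-level classifier assigns to the A2 target, where `predict_cefr_level` is a hypothetical stand-in for whichever trained CEFR classifier is used, not an existing API.

```python
# Sketch of the CEFR accuracy computation for narrative simplification.
# `predict_cefr_level` is a hypothetical stand-in for a trained CEFR classifier
# mapping a text to one of {"A1", "A2", "B1", "B2", "C1", "C2"}.
from collections import Counter

def cefr_accuracy(generations, predict_cefr_level, target="A2"):
    predictions = [predict_cefr_level(text) for text in generations]
    distribution = Counter(predictions)          # e.g., to spot systematic B1 drift
    accuracy = distribution[target] / len(predictions)
    return accuracy, distribution

# Example usage with a dummy classifier that always answers "B1":
acc, dist = cefr_accuracy(["..."] * 10, lambda text: "B1")
print(acc, dist)  # 0.0, Counter({'B1': 10})
```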
## 5 Related Work
The majority of literature on evaluating instruction-tuned models has spotlighted ChatGPT due to its global recognition amongst interdisciplinary fields. Specifically, these evaluation works have focused on aspects such as multilinguality (Bang et al., 2023; Gowriraj et al., 2023; Zhang et al., 2023), reasoning (Qin et al., 2023; Laskar et al., 2023), truthfulness (Laskar et al., 2023), toxicity (Guo et al., 2023; Ouyang et al., 2022) to name a few. In terms of incorporating forms of control to guide generations, related works have explored style (Keskar et al., 2019), tone (Sennrich et al., 2016), topic coherence (Tang et al., 2019; Chang et al., 2021; Krishna et al., 2022), sentiment and emotion (Dathathri et al., 2019; Khalifa et al., 2020), and text complexity (Imperial and Tayyar Madabushi, 2022; Pu and Demberg, 2023; Murgia et al., 2023). The main gap in literature that our study fills is the evaluation of LLMs and their alignment with real-world text assessment standards used by teachers, such as the CEFR framework.
## 6 Conclusion
In this work, we tackled a unique perspective on evaluating the capabilities of instruction-tuned language models by integrating readability-specific information anchored on realistic assessment standards, such as the CEFR framework used by teachers and educators. Our findings expose the strengths and weaknesses of open and closed-source generative models such as Llama 2, FlanT5, and ChatGPT for the story completion and narrative simplification tasks, in which we trace each model's performance back to the quality of the instruction datasets used for finetuning. We hope this study informs both technical and non-technical audiences, especially members of the education community, about the true capabilities of these generative models in producing educational content.
## Limitations
**On the use of FKGL for measuring simplification systems.** We are well aware of the limitations of FKGL for evaluating the performance of simplification systems, as highlighted in Tanprasert and Kauchak (2021). However, our choice of metrics and assessment standards, FKGL and CEFR, reflects what teachers and educators often use when assessing the complexity of texts. Metrics such as SARI (Xu et al., 2016) and BLEU (Papineni et al., 2002), on the other hand, are researcher-facing technical metrics used for engineering and evaluating simplification systems. Nonetheless, combining these technical and non-technical metrics and studying their interactions may be a good future extension of this work.
**On experiments exclusively with English data.** All experiments, findings, and insights in this work apply only to English, as evidenced by the language of the datasets used. Thus, our findings may not generalize if similar research derived from this work is conducted for other languages using other models, such as those trained on multilingual data.
**On the use of base versions of instruction-tuned models.** As mentioned in Section 2, we used the standard sizes of the generative models since we did not have the hardware required to use the largest versions of a model family (e.g., the 70B version of Llama 2). Analyzing the effect of scale on these models' ability to capture readability standards may be pursued as future work of this study.
**On varying parameter sizes of models for comparison.** Our comparison of instruction-tuned model performance for the two tasks is not perfectly controlled with respect to variables such as model size. We note that this is an independent factor, as the developers of these models choose the parameter size of the smallest model they release; for example, the smallest version of FlanT5 has 250M parameters, while the smallest Llama 2 has 7B.
## Ethics Statement
The ELG corpus is publicly accessible through a request form provided by the website. We use the six open and closed-source instruction-tuned models only for the tasks of story completion and narrative simplification in this study. We believe the model generations to be free of harmful content for an average reader.
## Acknowledgements
We thank the anonymous reviewers for their constructive feedback on this work. We also thank Mark Townsend for the assistance with configuring the experiments on the Hex GPU cloud of the Department of Computer Science at the University of Bath. JMI is supported by the UKRI Centre for Doctoral Training in Accountable, Responsible and Transparent AI (ART-AI) [EP/S023437/1] of the University of Bath and the Study Grant Program of National University Philippines.
|
2309.12128 | Convergence and Recovery Guarantees of Unsupervised Neural Networks for
Inverse Problems | Neural networks have become a prominent approach to solve inverse problems in
recent years. While a plethora of such methods was developed to solve inverse
problems empirically, we are still lacking clear theoretical guarantees for
these methods. On the other hand, many works proved convergence to optimal
solutions of neural networks in a more general setting using
overparametrization as a way to control the Neural Tangent Kernel. In this work
we investigate how to bridge these two worlds and we provide deterministic
convergence and recovery guarantees for the class of unsupervised feedforward
multilayer neural networks trained to solve inverse problems. We also derive
overparametrization bounds under which a two-layers Deep Inverse Prior network
with smooth activation function will benefit from our guarantees. | Nathan Buskulic, Jalal Fadili, Yvain Quéau | 2023-09-21T14:48:02Z | http://arxiv.org/abs/2309.12128v3 | # Convergence and Recovery Guarantees of Unsupervised Neural Networks for Inverse Problems
###### Abstract
Neural networks have become a prominent approach to solve inverse problems in recent years. While a plethora of such methods was developed to solve inverse problems empirically, we are still lacking clear theoretical guarantees for these methods. On the other hand, many works proved convergence to optimal solutions of neural networks in a more general setting using overparametrization as a way to control the Neural Tangent Kernel. In this work we investigate how to bridge these two worlds and we provide deterministic convergence and recovery guarantees for the class of unsupervised feedforward multilayer neural networks trained to solve inverse problems. We also derive overparametrization bounds under which a two-layers Deep Inverse Prior network with smooth activation function will benefit from our guarantees.
**Keywords: Inverse problems, Deep Image/Inverse Prior, Overparametrization, Gradient flow, Unsupervised learning**
## 1 Introduction
### Problem Statement
An inverse problem consists in reliably recovering a signal \(\overline{\mathbf{x}}\in\mathbb{R}^{n}\) from noisy indirect observations
\[\mathbf{y}=\mathbf{F}(\overline{\mathbf{x}})+\mathbf{\varepsilon}, \tag{1}\]
where \(\mathbf{y}\in\mathbb{R}^{m}\) is the observation, \(\mathbf{F}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is a forward operator, and \(\varepsilon\) stands for some additive noise. We will denote by \(\overline{\mathbf{y}}=\mathbf{F}(\overline{\mathbf{x}})\) the ideal observations i.e., those obtained in the absence of noise.
In recent years, the use of sophisticated machine learning algorithms, including deep learning, to solve inverse problems has gained a lot of momentum and provides promising results; see e.g., the reviews [1; 2]. The general framework of these methods is to optimize a generator network \(\mathbf{g}:(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{d}\times\mathbb{R}^ {p}\mapsto\mathbf{x}\in\mathbb{R}^{n}\), with some activation function \(\phi\), to transform a given input \(\mathbf{u}\in\mathbb{R}^{d}\) into a vector \(\mathbf{x}\in\mathbb{R}^{n}\). The parameters \(\boldsymbol{\theta}\) of the network are optimized via (possibly stochastic) gradient descent to minimize a loss function \(\mathcal{L}_{\mathbf{y}}:\mathbb{R}^{m}\rightarrow\mathbb{R}_{+},\mathbf{y}(t )\mapsto\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\) which measures the discrepancy between the observation \(\mathbf{y}\) and the solution \(\mathbf{y}(t)=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t)))\) generated by the network at time \(t\geq 0\).
Theoretical understanding of recovery and convergence guarantees for deep learning-based methods is of paramount importance to make their routine use in critical applications reliable [3]. While there is a considerable amount of work on understanding the optimization dynamics of neural network training, especially through the lens of overparametrization, recovery guarantees when using neural networks for inverse problems remain elusive. Some attempts have been made in that direction, but they are usually restricted to very specific settings. One kind of result obtained in [4; 5; 6] is convergence towards the optimal points of a regularized problem, typically with a learned regularizer. However, this does not give guarantees about the true sought-after vector. Another approach, used in Plug-and-Play [7], shows that under strong assumptions on the pre-trained denoiser, one can prove convergence to the true vector. This work is however limited by the constraints on the denoiser, which are not met in many settings.
Our aim in this paper is to help close this gap by explaining when gradient descent consistently and provably finds global minima of \(\mathcal{L}\), and how this translates into recovery guarantees for both \(\overline{\mathbf{y}}\) and \(\overline{\mathbf{x}}\) i.e., in both the observation and the signal spaces. For this, we focus on a continuous-time gradient flow applied to \(\mathcal{L}\):
\[\begin{cases}\boldsymbol{\dot{\theta}}(t)=-\nabla_{\boldsymbol{\theta}} \mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta} (t))))\\ \boldsymbol{\theta}(0)=\boldsymbol{\theta}_{0}.\end{cases} \tag{2}\]
This is an idealistic setting which makes the presentation simpler and it is expected to reflect the behavior of practical and common first-order descent algorithms, as they are known to approximate gradient flows.
In this work, our focus is on an unsupervised method known as Deep Image Prior [8], which we also coin Deep Inverse Prior (DIP) as it is not confined to images. A chief advantage of this method is that it does not need any training data, whereas such data are mandatory in most supervised deep learning-based methods in the literature. In the DIP method, \(\mathbf{u}\) is fixed throughout the optimization/training process, usually as a realization of a random variable. By removing the need for training data, this method focuses on the generation capabilities of the network trained through gradient descent. In turn, this will allow us to get insight into the effect of the network architecture on the reconstruction quality.
### Contributions
We deliver a theoretical analysis of the gradient flow optimization of neural networks, i.e. (2), in the context of inverse problems, and provide various recovery guarantees for general loss functions verifying the Kurdyka-Lojasiewicz (KL) property. We first prove that a network trained with a properly initialized gradient flow converges to an optimal solution in the observation space, with a rate characterized by the desingularizing function appearing in the KL property of the loss function. This result is then converted into a prediction error on \(\overline{\mathbf{y}}\) through an early stopping strategy. More importantly, we present a recovery result in the signal space with an upper bound on the reconstruction error of \(\overline{\mathbf{x}}\). The latter result involves, for instance, a restricted injectivity condition on the forward operator.
We then turn to showing how these results can be applied to the case of a two-layer neural network in the DIP setting where
\[\mathbf{g}(\mathbf{u},\boldsymbol{\theta})=\frac{1}{\sqrt{k}}\mathbf{V}\phi( \mathbf{W}\mathbf{u}),\quad\boldsymbol{\theta}\stackrel{{\mathrm{ def}}}{{=}}(\mathbf{V},\mathbf{W}), \tag{3}\]
with \(\mathbf{V}\in\mathbb{R}^{n\times k}\), \(\mathbf{W}\in\mathbb{R}^{k\times d}\), and \(\phi\) an element-wise nonlinear activation function. The scaling by \(\sqrt{k}\) will become clearer later. We show that for a proper random initialization \(\mathbf{W}(0)\), \(\mathbf{V}(0)\) and sufficient overparametrization, all our conditions are in force to control the eigenspace of the Jacobian of the network as required to obtain the aforementioned convergence properties. We provide a characterization of the overparametrization needed in terms of \((k,d,n)\) and the conditioning of \(\mathbf{F}\).
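To fix ideas, below is a minimal PyTorch sketch of the two-layer DIP generator (3) trained by gradient descent, an explicit Euler discretization of the flow (2), on a toy linear forward operator with the MSE loss; all dimensions, the activation, and the step size are illustrative choices, not the settings of our experiments.

```python
# Sketch of the two-layer DIP generator g(u, theta) = (1/sqrt(k)) V phi(W u) of (3),
# trained by gradient descent (a discretization of the gradient flow (2)).
# The forward operator A, the sizes and the step size are illustrative choices.
import torch

d, k, n, m = 50, 2000, 100, 40          # input dim, width, signal dim, measurements
u = torch.randn(d)                       # fixed random network input
A = torch.randn(m, n) / m**0.5           # linear forward operator F(x) = A x
x_bar = torch.randn(n)                   # unknown signal
y = A @ x_bar + 0.01 * torch.randn(m)    # noisy observations

W = torch.randn(k, d, requires_grad=True)    # random initialization of theta = (V, W)
V = torch.randn(n, k, requires_grad=True)

def g(u, V, W):
    return (V @ torch.tanh(W @ u)) / k**0.5  # smooth activation phi = tanh

lr = 1e-2
for t in range(5000):
    loss = 0.5 * ((A @ g(u, V, W) - y) ** 2).sum()   # L_y(F(g(u, theta)))
    loss.backward()
    with torch.no_grad():
        V -= lr * V.grad
        W -= lr * W.grad
        V.grad.zero_(); W.grad.zero_()
```

In this sketch, the fixed step size plays the role of the discretization step of the continuous-time flow; smaller steps approximate (2) more closely.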
### Relation to Prior Work
_Data-Driven Methods to Solve Inverse Problems_
Data-driven approaches to solve inverse problems come in various forms; see the comprehensive reviews in [1, 2]. The first type trains an end-to-end network to directly map the observations to the signals for a specific problem. While they can provide impressive results, these methods can prove very unstable as they do not use the physics of the problem which can be severely ill-posed. To cope with these problems, several hybrid models that mix model- and data-driven algorithms were developed in various ways. One can learn the regularizer of a variational problem [9] or use Plug-and-Play methods [10] for example. Another family of approaches, which takes inspiration from classical iterative optimization algorithms, is based on unrolling (see [11] for a review of these methods). Still, all these methods require an extensive amount of training data, which may not always be available.
_Deep Inverse Prior_
The DIP model [8] (and its extensions that mitigate some of its empirical issues [12, 13, 14, 15]) is an unsupervised alternative to the supervised approaches briefly reviewed above. The empirical idea is that the architecture of the network acts as an implicit regularizer and will learn a more meaningful transformation before overfitting to artefacts or noise. With an early stopping strategy, one can hope for the network to generate a vector close to the sought-after signal. However, this remains purely empirical, and there is no guarantee that a network trained in such a manner converges in the observation space (and even less in the signal space). The theoretical
recovery guarantees of these methods are not well understood [3] and our work aims at reducing this theoretical gap by analyzing the behaviour of such networks in both the observation and the signal space under some overparametrization condition.
_Theory of Overparametrized Networks_
To construct our analysis, we build upon previous theoretical work on overparametrized networks and their optimization trajectories [16, 17]. The first works that proved convergence to an optimal solution were based on a strong convexity assumption on the loss, which typically fails once the loss is composed with a neural network. A more recent approach is based on a gradient domination inequality, from which a simple integration yields exponential convergence of the gradient flow to a zero-loss solution. This allows to obtain convergence guarantees for networks trained to minimize a mean square error by gradient flow [18] or its discrete counterpart (i.e., gradient descent with fixed step) [19, 20, 21, 22]. The work that we present here is inspired by these works but goes far beyond them. Amongst other differences, we are interested in the challenging situation of inverse problems (presence of a forward operator), and we deal with more general loss functions that obey the Kurdyka-Lojasiewicz inequality (e.g., any semi-algebraic function or even functions definable on an o-minimal structure) [23, 24, 25].
Recently, it has been found that some kernels play a very important role in the analysis of convergence of the gradient flow when used to train neural networks, in particular the positive semi-definite kernel given by \(\mathcal{J}_{\mathbf{g}}(t)\mathcal{J}_{\mathbf{g}}(t)^{\top}\), where \(\mathcal{J}_{\mathbf{g}}(t)\) is the Jacobian of the network at time \(t\). When all the layers of a network are trained, this kernel is a combination of the _Neural Tangent Kernel_ (NTK) [26] and the Random Features Kernel (RF) [27]. If one decides to fix the last layer of the network, then this amounts to just looking at the NTK, which is what most of the previously cited works do. The goal is then to control the eigenvalues of the kernel to ensure that it stays positive definite during training, which entails convergence to a zero-loss solution at an exponential rate. The control of the eigenvalues of the kernel is achieved through a random initialization and the overparametrization of the network. Indeed, for a sufficiently wide network, the parameters \(\boldsymbol{\theta}(t)\) stay near their initialization and the network is well approximated by its linearization (the so-called "lazy" regime [18]). The overparametrization bounds that were obtained are mostly for two-layer networks, as the control of deep networks is much more complex.
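As a purely numerical illustration of this discussion, the kernel \(\mathcal{J}_{\mathbf{g}}(t)\mathcal{J}_{\mathbf{g}}(t)^{\top}\) and its smallest eigenvalue can be monitored with automatic differentiation, as sketched below for a toy two-layer generator (sizes are illustrative); this is a diagnostic, not part of the analysis itself.

```python
# Sketch: computing the kernel J_g J_g^T of a two-layer generator and its
# smallest eigenvalue, which should stay bounded away from zero during training.
import torch
from torch.autograd.functional import jacobian

d, k, n = 20, 500, 30
u = torch.randn(d)
theta = torch.randn(n * k + k * d)       # flattened parameters (V, W)

def g_flat(theta):
    V = theta[: n * k].reshape(n, k)
    W = theta[n * k :].reshape(k, d)
    return (V @ torch.tanh(W @ u)) / k**0.5

J = jacobian(g_flat, theta)              # n x p Jacobian of the network output
kernel = J @ J.T                         # n x n kernel J_g J_g^T
lam_min = torch.linalg.eigvalsh(kernel)[0]
print(f"smallest eigenvalue of J_g J_g^T: {lam_min:.4e}")
```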
However, even if there are theoretical works on the gradient flow-based optimization of neural networks as reviewed above, a similar analysis that accommodates the forward operator, as in inverse problems, remains challenging and open. Our aim is to participate in this endeavour by providing a theoretical understanding of recovery guarantees for neural network-based methods.
This paper is an extension of our previous work [28]. There are however several distinctive and new results in the present work. For instance, [28] only dealt with linear inverse problems, while our results here apply to non-linear ones. Moreover, we provide here a much more general analysis under which we obtain convergence guarantees for a wider class of models than just the DIP one, and for a general class of loss functions, not just the MSE. More importantly, we now show convergence not only in the observation space but also in the signal space. When particularized to the DIP case, we also provide overparametrization bounds for the case where the linear layer of the network is not fixed, which is an additional novelty.
_Paper organization_
The rest of this work is organized as follows. In Section 2 we give the necessary notations and definitions useful for this work. In Section 3 we present our main result with the associated assumptions and proof. In Section 4 we present the overparametrization bound on the DIP model. Finally, in Section 5, we show some numerical experiments that validate our findings, before drawing our conclusions in Section 6.
## 2 Preliminaries
### General Notations
For a matrix \(\mathbf{M}\in\mathbb{R}^{a\times b}\) we denote by \(\sigma_{\min}(\mathbf{M})\) and \(\sigma_{\max}(\mathbf{M})\) its smallest and largest non-zero singular values, and by \(\kappa(\mathbf{M})=\frac{\sigma_{\max}(\mathbf{M})}{\sigma_{\min}(\mathbf{M})}\) its condition number. We also denote by \(\langle,\rangle\) the Euclidean scalar product, \(\left\|\cdot\right\|\) the associated norm (the dimension is implicit from the context), and \(\left\|\cdot\right\|_{F}\) the Frobenius norm of a matrix. With a slight abuse of notation \(\left\|\cdot\right\|\) will also denote the spectral norm of a matrix. We use \(\mathbf{M}^{i}\) (resp. \(\mathbf{M}_{i}\)) as the \(i\)-th row (resp. column) of \(\mathbf{M}\). For two vectors \(\mathbf{x},\mathbf{z}\), \([\mathbf{x},\mathbf{z}]=\{(1-\rho)\mathbf{x}+\rho\mathbf{z}:\ \rho\in[0,1]\}\) is the closed segment joining them. We use the notation \(a\gtrsim b\) if there exists a constant \(C>0\) such that \(a\geq Cb\).
We also define \(\mathbf{y}(t)=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t)))\) and \(\mathbf{x}(t)=\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(t))\) and we recall \(\overline{\mathbf{y}}=\mathbf{F}(\overline{\mathbf{x}})\). The Jacobian of the network is denoted \(\mathcal{J}_{\mathbf{g}}\). \(\mathcal{J}_{\mathbf{g}}(t)\) is a shorthand notation of \(\mathcal{J}_{\mathbf{g}}\) evaluated at \(\boldsymbol{\theta}(t)\). \(\mathcal{J}_{\mathbf{F}}(t)\) is the Jacobian of the forward operator \(\mathbf{F}\) evaluated at \(\mathbf{x}(t)\). The local Lipschitz constant of a mapping on a ball of radius \(R>0\) around a point \(\mathbf{z}\) is denoted \(\operatorname{Lip}_{\mathbb{B}(\mathbf{z},R)}(\cdot)\). We omit \(R\) in the notation when the Lipschitz constant is global. For a function \(f:\mathbb{R}^{n}\to\mathbb{R}\), we use the notation for the sublevel set \([f<c]=\{\mathbf{z}\in\mathbb{R}^{n}:\ f(\mathbf{z})<c\}\) and \([c_{1}<f<c_{2}]=\{\mathbf{z}\in\mathbb{R}^{n}:\ c_{1}<f(\mathbf{z})<c_{2}\}\).
Given \(\mathbf{z}\in\mathcal{C}^{0}(]0,+\infty[;\mathbb{R}^{a})\), the set of cluster points of \(\mathbf{z}\) is defined as
\[\mathfrak{W}(\mathbf{z}(\cdot))=\left\{\widetilde{\mathbf{z}}\in\mathbb{R}^{ a}:\ \exists(t_{k})_{k\in\mathbb{N}}\to+\infty\ \text{s.t.}\ \lim_{k\to\infty}\mathbf{z}(t_{k})=\widetilde{\mathbf{z}}\right\}.\]
For some \(\Theta\subset\mathbb{R}^{p}\), we define \(\Sigma_{\Theta}=\{\mathbf{g}(\mathbf{u},\boldsymbol{\theta}):\ \boldsymbol{\theta}\in\Theta\}\) the set of signals that the network \(\mathbf{g}\) can generate for all \(\theta\) in the parameter set \(\Theta\). \(\Sigma_{\Theta}\) can thus be viewed as a parametric manifold. If \(\Theta\) is closed (resp. compact), so is \(\Sigma_{\Theta}\). We denote \(\operatorname{dist}(\cdot,\Sigma_{\Theta})\) the distance to \(\Sigma_{\Theta}\) which is well defined if \(\Theta\) is closed and non-empty. For a vector \(\mathbf{x}\), \(\mathbf{x}_{\Sigma_{\Theta}}\) is its projection on \(\Sigma_{\Theta}\), i.e. \(\mathbf{x}_{\Sigma_{\Theta}}\in\operatorname{Argmin}_{\mathbf{z}\in\Sigma_{ \Theta}}\left\|\mathbf{x}-\mathbf{z}\right\|\). Observe that \(\mathbf{x}_{\Sigma_{\Theta}}\) always exists but might not be unique. We also define \(T_{\Sigma_{\Theta}}(\mathbf{x})=\overline{\operatorname{conv}}\left(\mathbb{ R}_{+}(\Sigma_{\Theta}-\mathbf{x})\right)\) the tangent cone of \(\Sigma_{\Theta}\) at \(\mathbf{x}\in\Sigma_{\Theta}\).
The minimal (conic) singular value of a matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) w.r.t. the cone \(T_{\Sigma_{\Theta}}(\mathbf{x})\) is then defined as
\[\lambda_{\min}(\mathbf{A};T_{\Sigma_{\Theta}}(\mathbf{x}))=\inf\{\left\| \mathbf{A}\mathbf{z}\right\|/\left\|\mathbf{z}\right\|:\mathbf{z}\in T_{\Sigma_ {\Theta}}(\mathbf{x})\}.\]
### Multilayer Neural Networks
Neural networks produce structured parametric families of functions that have been studied and used for almost 70 years, going back to the late 1950's [29].
**Definition 2.1**.: Let \(d,L\in\mathbb{N}\) and \(\phi:\mathbb{R}\to\mathbb{R}\) an activation map which acts componentwise on the entries of a vector. A fully connected multilayer neural network with input dimension \(d\), \(L\) layers and activation \(\phi\), is a collection of weight matrices \(\big{(}\mathbf{W}^{(l)}\big{)}_{l\in[L]}\) and bias vectors \(\big{(}\mathbf{b}^{(l)}\big{)}_{l\in[L]}\), where \(\mathbf{W}^{(l)}\in\mathbb{R}^{N_{l}\times N_{l-1}}\) and \(\mathbf{b}^{(l)}\in\mathbb{R}^{N_{l}}\), with \(N_{0}=d\), and \(N_{l}\in\mathbb{N}\) is the number of neurons for layer \(l\in[L]\). Let us gather these parameters as
\[\boldsymbol{\theta}=\Big{(}(\mathbf{W}^{(1)},\mathbf{b}^{(1)}),\ldots,( \mathbf{W}^{(L)},\mathbf{b}^{(L)})\Big{)}\in\bigtimes_{l=1}^{L}\big{(}\big{(} \mathbb{R}^{N_{l}\times N_{l-1}}\big{)}\times\mathbb{R}^{N_{l}}\big{)}.\]
Then, a neural network parametrized by \(\boldsymbol{\theta}\) produces a function
\[\mathbf{g}:(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{d}\times\bigtimes_{ l=1}^{L}\big{(}\big{(}\mathbb{R}^{N_{l}\times N_{l-1}}\big{)}\times\mathbb{R}^{N_{l }}\big{)}\mapsto\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\in\mathbb{R}^{N_{ L}},\quad\text{with}\quad N_{L}=n,\]
which can be defined recursively as
\[\begin{cases}\mathbf{g}^{(0)}(\mathbf{u},\boldsymbol{\theta})&=\mathbf{u},\\ \mathbf{g}^{(l)}(\mathbf{u},\boldsymbol{\theta})&=\phi\left(\mathbf{W}^{(l)} \mathbf{g}^{(l-1)}(\mathbf{u},\boldsymbol{\theta})+\mathbf{b}^{(l)}\right), \quad\text{ for }l=1,\ldots,L-1,\\ \mathbf{g}(\mathbf{u},\boldsymbol{\theta})&=\mathbf{W}^{(L)}\mathbf{g}^{(L-1 )}(\mathbf{u},\boldsymbol{\theta})+\mathbf{b}^{(L)}.\end{cases}\]
The total number of parameters is then \(p=\sum_{l=1}^{L}(N_{l-1}+1)N_{l}\). In the rest of this work, \(\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\) is always defined as just described. We will start by studying the general case before turning in Section 4 to a two-layer network, i.e. with \(L=2\).
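For concreteness, the recursion of Definition 2.1 can be transcribed directly as in the sketch below; it uses NumPy with placeholder dimensions and the hyperbolic tangent as an illustrative smooth activation.

```python
# Sketch of the recursion of Definition 2.1 for a fully connected network
# with L layers; weights, biases and the activation phi are illustrative.
import numpy as np

def forward(u, weights, biases, phi=np.tanh):
    """weights[l]: (N_{l+1} x N_l) matrix, biases[l]: N_{l+1} vector, l = 0..L-1."""
    g = u
    L = len(weights)
    for l in range(L - 1):                       # hidden layers with activation
        g = phi(weights[l] @ g + biases[l])
    return weights[L - 1] @ g + biases[L - 1]    # linear output layer

# Example: d = 10, one hidden layer of 64 neurons, output dimension n = 5.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 10)), rng.standard_normal((5, 64))]
biases = [np.zeros(64), np.zeros(5)]
x = forward(rng.standard_normal(10), weights, biases)
```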
### KL Functions
We will work under a general condition on the loss function \(\mathcal{L}\) which includes non-convex losses. More precisely, we will suppose that \(\mathcal{L}\) verifies a Kurdyka-Lojasiewicz-type (KL for short) inequality [25, Theorem 1].
**Definition 2.2** (KL inequality).: A continuously differentiable function \(f:\mathbb{R}^{n}\to\mathbb{R}\) satisfies the KL inequality if there exists \(r_{0}>0\) and a strictly increasing function \(\psi\in\mathcal{C}^{0}([0,r_{0}[)\cap\mathcal{C}^{1}(]0,r_{0}[)\) with \(\psi(0)=0\) such that
\[\psi^{\prime}(f(\mathbf{z})-\min f)\,\|\nabla f(\mathbf{z})\|\geq 1,\quad \text{for all}\quad\mathbf{z}\in[\min f<f<\min f+r_{0}]. \tag{4}\]
We use the shorthand notation \(f\in\text{KL}_{\psi}(r_{0})\) for a function satisfying this inequality.
The KL property basically expresses the fact that the function \(f\) is sharp under a reparameterization of its values. Functions satisfying the KL inequality are also sometimes called gradient dominated functions [30]. The function \(\psi\) is known as the desingularizing function for \(f\). The Lojasiewicz inequality [23, 24] corresponds to the case where the desingularizing function takes the form \(\psi(s)\,=\,cs^{\alpha}\) with \(\alpha\,\in\,[0,1]\). The KL inequality plays a fundamental role in several fields of applied mathematics among which convergence behaviour of (sub-)gradient-like systems and minimization algorithms [31, 32, 33, 34, 35, 36], neural networks [37], partial differential equations [38, 39, 40], to cite a few. The KL inequality is closely related to
error bounds that also play a key role to derive complexity bounds of gradient descent-like algorithms [41].
Let us give some examples of functions satisfying (4); see also [35].
**Example 2.3** (Convex functions with sufficient growth).: Let \(f\) be a differentiable convex function on \(\mathbb{R}^{n}\) such that \(\operatorname{Argmin}(f)\neq\emptyset\). Assume that \(f\) verifies the growth condition
\[f(\mathbf{z})\geq\min f+\varphi(\operatorname{dist}(\mathbf{z},\operatorname {Argmin}(f))),\quad\text{for all}\quad\mathbf{z}\in[\min f<f<\min f+r], \tag{5}\]
where \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is continuous, increasing, \(\varphi(0)=0\) and \(\int_{0}^{r}\frac{\varphi^{-1}(s)}{s}ds<+\infty\). Then by [36, Theorem 30], \(f\in\operatorname{KL}_{\psi}(r)\) with \(\psi(r)=\int_{0}^{r}\frac{\varphi^{-1}(s)}{s}ds\).
**Example 2.4** (Uniformly convex functions).: Suppose that \(f\) is a differentiable uniformly convex function, i.e., \(\forall\mathbf{z},\mathbf{x}\in\mathbb{R}^{n}\),
\[f(\mathbf{x})\geq f(\mathbf{z})+\left\langle\nabla f(\mathbf{z}),\mathbf{x}- \mathbf{z}\right\rangle+\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right) \tag{6}\]
for an increasing function \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\) that vanishes only at \(0\). Thus \(f\) has a unique minimizer, say \(\mathbf{z}^{*}\), see [42, Proposition 17.26]. This example can then be deduced from the previous one since a uniformly convex function obviously obeys (5). However, we here provide an alternative and sharper characterization. We may assume without loss of generality that \(\min f=0\). Applying inequality (6) at \(\mathbf{x}=\mathbf{z}^{*}\) and any \(\mathbf{z}\in[0<f]\), we get
\[f(\mathbf{z}) \leq\left\langle\nabla f(\mathbf{z}),\mathbf{z}-\mathbf{x} \right\rangle-\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right)\] \[\leq\left\|\nabla f(\mathbf{z})\right\|\left\|\mathbf{x}- \mathbf{z}\right\|-\varphi\left(\left\|\mathbf{x}-\mathbf{z}\right\|\right)\] \[\leq\varphi_{+}(\left\|\nabla f(\mathbf{z})\right\|),\]
where \(\varphi_{+}:a\in\mathbb{R}_{+}\mapsto\varphi^{+}(a)=\sup_{x\geq 0}ax- \varphi(x)\) is known as the monotone conjugate of \(\varphi\). \(\varphi_{+}\) is a proper closed convex and non-decreasing function on \(\mathbb{R}_{+}\) that vanishes at \(0\). When \(\varphi\) is strictly convex and supercoercive, so is \(\varphi_{+}\) which implies that \(\varphi_{+}\) is also strictly increasing on \(\mathbb{R}_{+}\). Thus \(f\) verifies Definition 2.2 at any \(\mathbf{z}\in[0<f]\) with \(\psi\) a primitive of \(\frac{1}{\varphi_{+}^{-1}}\), and \(\psi\) is indeed strictly increasing, vanishes at \(0\) and is even concave. A prominent example is the case where \(\varphi:s\in\mathbb{R}_{+}\mapsto\frac{1}{p}s^{p}\), for \(p\in]1,+\infty[\), in which case \(\psi:s\in\mathbb{R}_{+}\mapsto q^{-1/q}s^{1/p}\), where \(1/p+1/q=1\).
**Example 2.5**.: In finite-dimensional spaces, deep results from algebraic geometry have shown that the KL inequality is satisfied by a large class of functions, namely, real semi-algebraic functions and more generally, function definable on an o-minimal structure or even functions belonging to analytic-geometric categories [23, 24, 43, 25, 44]. Many popular losses used in machine learning and signal processing turn out to be KL functions (MSE, Kullback-Leibler divergence and cross-entropy to cite a few).
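As a concrete illustration that is used later in Corollary 3.3, consider the MSE loss; the short computation below (a sketch under the normalization \(\min f=0\)) shows that it is a Lojasiewicz function with exponent \(\alpha=1/2\). Let \(f(\mathbf{z})=\|\mathbf{z}-\mathbf{y}\|^{2}\). Then \(\nabla f(\mathbf{z})=2(\mathbf{z}-\mathbf{y})\), so \(\|\nabla f(\mathbf{z})\|=2\sqrt{f(\mathbf{z})}\), and taking \(\psi(s)=s^{1/2}\), hence \(\psi^{\prime}(s)=\frac{1}{2}s^{-1/2}\), yields

\[\psi^{\prime}(f(\mathbf{z}))\,\|\nabla f(\mathbf{z})\|=\frac{2\sqrt{f(\mathbf{z})}}{2\sqrt{f(\mathbf{z})}}=1\geq 1,\quad\text{for all}\quad\mathbf{z}\in[0<f],\]

so (4) holds with desingularizing function \(\psi(s)=s^{1/2}\), i.e., \(c=1\) and \(\alpha=1/2\).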
## 3 Recovery Guarantees
### Main Assumptions
Throughout this paper, we will work under the following standing assumptions.
**Assumptions on the loss**
1. \(\mathcal{L}_{\mathbf{y}}(\cdot)\in\mathcal{C}^{1}(\mathbb{R}^{m})\) whose gradient is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{m}\).
2. \(\mathcal{L}_{\mathbf{y}}(\cdot)\in\operatorname{KL}_{\psi}(\mathcal{L}_{ \mathbf{y}}(\mathbf{y}(0))+\eta)\) for some \(\eta>0\).
3. \(\min\mathcal{L}_{\mathbf{y}}(\cdot)=0\).
4. \(\exists\Theta\subset\mathbb{R}^{p}\) large enough such that \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\in\operatorname{Im} \left(\mathcal{J}_{\mathbf{F}}(\mathbf{x})\right)\) for any \(\mathbf{v}=\mathbf{F}(\mathbf{x})\) with \(\mathbf{x}\in\Sigma_{\Theta}\).
**Assumption on the activation**
5. \(\phi\in\mathcal{C}^{1}(\mathbb{R})\) and \(\exists B>0\) such that \(\sup_{x\in\mathbb{R}}|\phi^{\prime}(x)|\leq B\) and \(\phi^{\prime}\) is \(B\)-Lipschitz continuous.
**Assumption on the forward operator**
6. \(\mathbf{F}\in\mathcal{C}^{1}(\mathbb{R}^{n};\mathbb{R}^{m})\) whose Jacobian \(\mathcal{J}_{\mathbf{F}}\) is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{n}\).
Let us now discuss the meaning and effects of these assumptions. First, A-1 is made for simplicity to ensure existence and uniqueness of a strong maximal solution (in fact even global thanks to our estimates) of (2) thanks to the Cauchy-Lipschitz theorem (see hereafter). We think this could be relaxed to cover non-smooth losses if we assume path differentiability, hence existence of an absolutely continuous trajectory. This is left to a future work. A notable point in A-2 is that convexity is not always needed for the loss (see the statements of the theorem). Regarding A-3, it is natural yet it would be straightforward to relax it.
Assumption A-4 allows us to leverage the fact that
\[\sigma_{\mathbf{F}}\stackrel{{\mathrm{def}}}{{=}}\inf_{\mathbf{x }\in\Sigma_{\Theta},\mathbf{z}\in\operatorname{Im}\left(\mathcal{J}_{ \mathbf{F}}(\mathbf{x})\right)}\frac{\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{ x})^{\top}\mathbf{z}\right\|}{\left\|\mathbf{z}\right\|}>0, \tag{7}\]
with \(\Theta\) a sufficiently large subset of parameters. Indeed, we will show later that the parameter trajectory \(\boldsymbol{\theta}(t)\) is contained in a ball around \(\boldsymbol{\theta}_{0}\), so a natural choice of \(\Theta\) is that ball (or an enlargement of it).
There are several scenarios of interest where assumption A-4 is verified. This is the case when \(\mathbf{F}\) is an immersion, which implies that \(\mathcal{J}_{\mathbf{F}}(\mathbf{x})\) is surjective for all \(\mathbf{x}\). Other interesting cases are when \(\mathcal{L}_{\mathbf{y}}(\mathbf{v})=\eta\left(\left\|\mathbf{v}-\mathbf{y} \right\|^{2}\right)\), \(\mathbf{F}=\Phi\circ\mathbf{A}\), where \(\eta:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is differentiable and vanishes only at \(0\), and \(\Phi:\mathbb{R}^{m}\rightarrow\mathbb{R}^{m}\) is an immersion1. One easily sees in this case that \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})=2\eta^{\prime}\left( \left\|\mathbf{v}-\mathbf{y}\right\|^{2}\right)(\mathbf{v}-\mathbf{y})\) with \(\mathbf{v}=\Phi(\mathbf{A}\mathbf{x})\), and \(\mathcal{J}_{\mathbf{F}}(\mathbf{x})=\mathcal{J}_{\Phi}(\mathbf{A}\mathbf{x})\mathbf {A}\). It is then
sufficient to require that \(\mathbf{A}\) is surjective. This can be weakened for the linear case, i.e. \(\Phi\) is the identity, in which case it is sufficient that \(\mathbf{y}\in\operatorname{Im}\left(\mathbf{A}\right)\) for A-4 to hold.
Assumption A-5 is key to well-posedness as it ensures, by Definition 2.1 which \(\mathbf{g}(\mathbf{u},\boldsymbol{\theta})\) follows, that \(\mathbf{g}(\mathbf{u},\cdot)\) is \(\mathcal{C}^{1}(\mathbb{R}^{p};\mathbb{R}^{n})\) with a Jacobian that is Lipschitz continuous on bounded sets, which is necessary for the Cauchy-Lipschitz theorem. This constraint on \(\phi\) is met by many activations such as the softmax, sigmoid or hyperbolic tangent. Including the ReLU requires more technicalities that will be avoided here.
Finally, Assumption A-6 on the local Lipschitz continuity of \(\mathcal{J}_{\mathbf{F}}\) is not only important for the well-posedness of (2), but it also turns out to be instrumental when deriving recovery rates (as a function of the noise) in the literature on regularized nonlinear inverse problems; see [45] and references therein.
### Well-posedness
In order for our analysis to hold, the Cauchy problem (2) needs to be well-posed. We start by showing that (2) has a unique maximal solution.
**Proposition 3.1**.: _Assume that A-1, A-5 and A-6 hold. Then there exists \(T(\boldsymbol{\theta}_{0})\in]0,+\infty]\) and a unique maximal solution \(\boldsymbol{\theta}(\cdot)\in\mathcal{C}^{0}([0,T(\boldsymbol{\theta}_{0})[)\) of (2), and \(\boldsymbol{\theta}(\cdot)\) is \(\mathcal{C}^{1}\) on every compact set of the interior of \([0,T(\boldsymbol{\theta}_{0})[\)._
Proof.: Thanks to A-5, one can verify with standard differential calculus applied to \(\mathbf{g}(\mathbf{u},\cdot)\), as given in Definition 2.1, that \(\mathcal{J}_{\mathbf{g}}\) is Lipschitz continuous on the bounded sets of \(\mathbb{R}^{p}\). This together with A-1 and A-6 entails that \(\nabla_{\boldsymbol{\theta}}\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}( \mathbf{u},\cdot))\) is also Lipschitz continuous on the bounded sets of \(\mathbb{R}^{p}\). The claim is then a consequence of the Cauchy-Lipschitz theorem [46, Theorem 0.4.1].
\(T(\boldsymbol{\theta}_{0})\) is known as the maximal existence time of the solution and verifies the alternative: either \(T(\boldsymbol{\theta}_{0})=+\infty\) and the solution is called _global_; or \(T(\boldsymbol{\theta}_{0})<+\infty\) and the solution blows-up in finite time, i.e., \(\|\boldsymbol{\theta}(t)\|\to+\infty\) as \(t\to T(\boldsymbol{\theta}_{0})\). We will show later that the maximal solution of (2) is indeed global; see Section 3.4.4.
### Main Results
We are now in position to state our recovery results.
**Theorem 3.2**.: _Recall \(\sigma_{\mathbf{F}}\) from (7). Consider a network \(\mathbf{g}(\mathbf{u},\cdot)\), a forward operator \(\mathbf{F}\) and a loss \(\mathcal{L}\), such that A-1 to A-6 hold. Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2) where the initialization \(\boldsymbol{\theta}_{0}\) is such that_
\[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))>0\;\;\text{and}\;\;R^{\prime}<R \tag{8}\]
_where \(R^{\prime}\) and \(R\) obey_
\[R^{\prime}=\frac{2}{\sigma_{\mathbf{F}}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}( 0))}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))\;\;\text{and}\;\;R=\frac{ \sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2\mathrm{Lip}_{\mathbb{B}( \boldsymbol{\theta}_{0},R)}(\mathcal{J}_{\mathbf{g}})}. \tag{9}\]
_Then the following holds:_
1. _the loss converges to_ \(0\) _at the rate_ \[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\Psi^{-1}(\gamma(t))\] (10)
_with_ \(\Psi\) _a primitive of_ \(-\psi^{\prime 2}\) _and_ \(\gamma(t)=\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0)) ^{2}}{4}t+\Psi\left(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\right)\)_. Moreover,_ \(\boldsymbol{\theta}(t)\) _converges to a global minimizer_ \(\boldsymbol{\theta}_{\infty}\) _of_ \(\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\cdot)))\)_, at the rate_
\[\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\right\|\leq\frac{2 }{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\psi\left( \Psi^{-1}\left(\gamma(t)\right)\right). \tag{11}\]
_If_ \(\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}(\cdot))=\{\mathbf{y}\}\)_, then_ \(\lim_{t\to+\infty}\mathbf{y}(t)=\mathbf{y}\)_. In addition, if_ \(\mathcal{L}\) _is convex then_
\[\left\|\mathbf{y}(t)-\overline{\mathbf{y}}\right\|\leq 2\left\|\boldsymbol{ \varepsilon}\right\|\quad\text{when}\quad t\geq\frac{4\Psi(\psi^{-1}(\left\| \boldsymbol{\varepsilon}\right\|))}{\sigma_{\mathbf{F}}^{2}\sigma_{\min}( \mathcal{J}_{\mathbf{g}}(0))^{2}}-\Psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0) )). \tag{12}\]
_Assume that_ \(\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}}(\cdot))=\{\mathbf{y}\}\)_,_ \(\mathcal{L}\) _is convex, and that_2__
Footnote 2: We suppose here that \(\operatorname{Argmin}_{\mathbf{z}\in\Sigma^{\prime}}\left\|\mathbf{z}-\overline{\mathbf{x}}\right\|=\{\overline{\mathbf{x}}_{\Sigma^{\prime}}\}\) is a singleton. In fact, we only need that there exists at least one \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\in\operatorname{Argmin}_{\mathbf{z}\in\Sigma^{\prime}}\left\|\mathbf{z}-\overline{\mathbf{x}}\right\|\) such that \(\mu_{\mathbf{F},\Sigma^{\prime}}>0\).
**A-7**.: \(\mu_{\mathbf{F},\Sigma^{\prime}}>0\) _where_ \(\mu_{\mathbf{F},\Sigma^{\prime}}\stackrel{{\mathrm{def}}}{{=}} \inf\limits_{\mathbf{x}\in\Sigma^{\prime}}\frac{\left\|\mathbf{F}(\mathbf{x})- \mathbf{F}(\overline{\mathbf{x}}_{\Sigma^{\prime}})\right\|}{\left\|\mathbf{x} -\overline{\mathbf{x}}_{\Sigma^{\prime}}\right\|}\) _with_ \(\Sigma^{\prime}\stackrel{{\mathrm{def}}}{{=}}\Sigma_{\mathbb{B}_{ R^{\prime}+\left\|\boldsymbol{\theta}_{0}\right\|}(0)}\)_._
_Let_ \(L_{\mathbf{F}}\stackrel{{\mathrm{def}}}{{=}}\max_{\mathbf{x}\in \mathbb{B}(0,2\left\|\overline{\mathbf{x}}\right\|)}\|\mathcal{J}_{\mathbf{F} }(\mathbf{x})\|<+\infty\)_. Then_
\[\left\|\mathbf{x}(t)-\overline{\mathbf{x}}\right\|\leq\frac{2\psi\left(\Psi^{ -1}\left(\gamma(t)\right)\right)}{\mu_{\mathbf{F},\Sigma^{\prime}}\sigma_{ \min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}+\left(1+\frac{L_{ \mathbf{F}}}{\mu_{\mathbf{F},\Sigma^{\prime}}}\right)\operatorname{dist}( \overline{\mathbf{x}},\Sigma^{\prime})+\frac{\left\|\boldsymbol{\varepsilon} \right\|}{\mu_{\mathbf{F},\Sigma^{\prime}}}. \tag{13}\]
### Discussion and Consequences
We first discuss the meaning of the initialization condition \(R^{\prime}<R\). It dictates that \(\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))\) must be smaller than some constant that depends on the operator \(\mathbf{F}\) and the Jacobian of the network at initialization. Intuitively, this requires the initialization of the network to lie in an appropriate convergence basin, i.e., we start close enough to an optimal solution.
#### 3.4.1 Convergence Rate
The first result ensures that under the conditions of the theorem, the network converges towards a zero-loss solution. The convergence speed is given by the application of \(\Psi^{-1}\), which is (strictly) decreasing by definition, to an affine function of time. The function \(\Psi\) only depends on the chosen loss function and its associated Kurdyka-Lojasiewicz inequality. This inequality is verified for a wide class of functions, including all the semi-algebraic ones [25], but the exact formulation of \(\psi\) is not always easy to obtain (see Section 2.3).
In the case where the KL inequality is satisfied with \(\psi(s)=cs^{\alpha}\) (the Lojasiewicz case), we obtain by direct computation the following decay rate of the loss and convergence rate for the parameters:
**Corollary 3.3**.: _If \(\mathcal{L}\) satisfies the Lojasiewicz inequality, that is A-2 holds with \(\psi(s)=cs^{\alpha}\) and \(\alpha\in[0,1]\), then, \(\exists t_{0}\in R^{+}\) such that \(\forall t>t_{0},\gamma(t)>0\) and the loss and the parameters
converge with rate:_
\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\left\{\begin{array}{ll}\left(\frac{1 -2\alpha}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{1}{1-2\alpha}}&\text{if }0< \alpha<\frac{1}{2},\\ \left(\frac{2\alpha-1}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{\alpha}{2 \alpha-1}}&\text{if }\frac{1}{2}<\alpha<1\\ \exp\left(-\frac{4}{c^{2}}\gamma(t)\right)&\text{if }\alpha=\frac{1}{2} \end{array}\right.\]
\[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\|\leq\left\{\begin{array}[ ]{ll}\left(\frac{1-2\alpha}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{\alpha}{ 1-2\alpha}}&\text{if }0<\alpha<\frac{1}{2},\\ \left(\frac{2\alpha-1}{\alpha^{2}c^{2}}\gamma(t)\right)^{-\frac{\alpha}{2 \alpha-1}}&\text{if }\frac{1}{2}<\alpha<1\\ \exp\left(-\frac{4}{c^{2}}\gamma(t)\right)&\text{if }\alpha=\frac{1}{2} \end{array}\right.\]
These results give precise convergence rates of the loss for a wide variety of functions. First, let us observe the particular case \(\alpha=1/2\), which gives exponential convergence to the solution. In practice, a loss that satisfies such a Lojasiewicz inequality is the Mean Squared Error (MSE). For other values of \(\alpha\), we obtain convergence rates in \(O(t^{-\frac{1}{1-2\alpha}})\) or \(O(t^{-\frac{1}{2\alpha-1}})\) depending on the interval in which \(\alpha\) lies. Furthermore, in theory, the parameters of the model converge slightly slower than the loss, with their convergence speed modulated by \(\alpha\).
#### 3.4.2 Early stopping strategy
While the first result allows us to obtain convergence rates to a zero-loss solution, it does so by overfitting the noise inherent to the problem. A classical way to avoid this is to use an early stopping strategy ensuring that our solution lies in a ball around the desired solution. The bound on the time given in (12) guarantees that all the solutions found past that time are no more than \(2\left\|\boldsymbol{\varepsilon}\right\|\) away from the noiseless solution. This bound is obtained by balancing the convergence rate offered by the KL properties of the loss, the loss of the model at initialization, and the level of noise in the problem.
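In a discretized implementation, this suggests a simple stopping rule of discrepancy type: given an estimate of \(\left\|\boldsymbol{\varepsilon}\right\|\), stop as soon as the residual \(\left\|\mathbf{y}(t)-\mathbf{y}\right\|\) drops below it, so that \(\left\|\mathbf{y}(t)-\overline{\mathbf{y}}\right\|\leq 2\left\|\boldsymbol{\varepsilon}\right\|\) by the triangle inequality. The sketch below is only an illustrative practical surrogate for the time bound (12); `step` and `residual_norm` are assumed callbacks supplied by the training loop.

```python
# Sketch of a discrepancy-style early stopping rule: stop the (discretized)
# gradient flow once the residual ||F(g(u, theta)) - y|| falls below the
# estimated noise norm, which then implies ||y(t) - y_bar|| <= 2 ||epsilon||.
def train_with_early_stopping(step, residual_norm, noise_norm, max_iters=50_000):
    """step(): performs one gradient step; residual_norm(): returns the current residual."""
    for t in range(max_iters):
        if residual_norm() <= noise_norm:   # early stopping criterion
            return t                        # iterate is within ~2||epsilon|| of y_bar
        step()
    return max_iters
```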
#### 3.4.3 Signal Recovery Guarantees
Our third result provides a bound on the distance between the solution found at time \(t\) and the true solution \(\overline{\mathbf{x}}\). This bound is a sum of three terms representing three kinds of errors. The first term is an "optimization error", which represents how far \(\mathbf{x}(t)\) is from the solution found at the end of the optimization process. Of course, this decreases to 0 as \(t\) goes to infinity. The second error is a "modeling error" which captures the expressivity of the optimized network, i.e. its ability to generate solutions close to \(\overline{\mathbf{x}}\). Finally, the third term is a "noise error" that depends on \(\left\|\boldsymbol{\varepsilon}\right\|\) which is inherent to the problem at hand.
Obviously, the operator \(\mathbf{F}\) also plays a key role in this bound, where its influence is reflected by three quantities of interest: \(\sigma_{\mathbf{F}}\), \(L_{\mathbf{F}}\) and \(\mu_{\mathbf{F},\Sigma^{\prime}}\). First, \(L_{\mathbf{F}}\) bounds the Jacobian of \(\mathbf{F}\), and hence its Lipschitz constant, near the signals of interest. Moreover, we always have \(\sigma_{\mathbf{F}}>0\), and the dependence of the bound on \(\sigma_{\mathbf{F}}\) (or the ratio \(L_{\mathbf{F}}/\sigma_{\mathbf{F}}\)) reflects the fact that this bound degrades as the Jacobian of \(\mathbf{F}\) over \(\Sigma_{\Theta}\) becomes badly conditioned. Second, \(\mu_{\mathbf{F},\Sigma^{\prime}}\) corresponds to a restricted injectivity condition,
which is a classical and natural assumption if one hopes to recover \(\overline{\mathbf{x}}\) (up to a controlled error). In particular, in the case where \(\mathbf{F}\) is a linear operator \(\mathbf{A}\in\mathbb{R}^{m\times n}\), \(\mu_{\mathbf{F},\Sigma^{\prime}}\) becomes the minimal conic singular value \(\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\) and \(L_{\mathbf{F}}\) is replaced by \(\|\mathbf{A}\|\). (A-7) then amounts to assuming that
\[\ker(\mathbf{A})\cap T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{ \prime}})=\{0\}\,. \tag{14}\]
Assuming the rows of \(\mathbf{A}\) are linearly independent, one easily checks that (14) imposes that \(m\geq\dim(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\). We will give a precise sample complexity bound for the case of compressed sensing in Example 3.4. It is worth mentioning that condition (14) (and (A-7) in some sense) is not uniform as it only requires a control at \(\overline{\mathbf{x}}\) and not over the whole set \(\Sigma^{\prime}\).
Observe that the restricted injectivity condition (A-7) depends on \(\Sigma^{\prime}\) which itself depends on \(R^{\prime}\), that is, the radius of the ball around \(\boldsymbol{\theta}_{0}\) containing the whole trajectory \(\theta(t)\) during the network training (see the proof of Lemma 3.10). On the other hand, \(R^{\prime}\) depends on the loss at initialization, which means that the higher the initial error of the network, the larger the set of parameters it might reach during optimization, and thus the larger the set \(\Sigma^{\prime}\). This discussion clearly reveals an expected phenomenon: there is a trade-off between the restricted injectivity condition on \(\mathbf{F}\) and the expressivity of the network. If the model is highly expressive then \(\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime})\) will be smaller. But this is likely to come at the cost of making \(\mu_{\mathbf{F},\Sigma^{\prime}}\) decrease, as restricted injectivity can be required to hold on a larger subset (cone).
This discussion relates to the work on the instability phenomenon observed in learned reconstruction methods, as discussed in [47, 48]. For instance, when \(\mathbf{F}\) is a linear operator \(\mathbf{A}\), the fundamental problem that creates these instabilities and/or hallucinations in the reconstruction is that the kernel of \(\mathbf{A}\) is non-trivial. Thus a method that can correctly learn to reconstruct signals whose difference lies in or close to the kernel of \(\mathbf{A}\) will necessarily be unstable or hallucinate. In our setting, this is manifested through the restricted injectivity condition, which imposes that the smallest conic singular value is bounded away from \(0\), i.e. \(\mu_{\mathbf{F},\Sigma^{\prime}}=\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))>0\). This is a natural (and minimal) condition in the context of inverse problems to have stable reconstruction guarantees. Note that our condition is non-uniform as it is only required to hold at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\) and not at all points of \(\Sigma^{\prime}\).
In A-11, we generalize the restricted injectivity condition (14) beyond the linear case provided that \(\mathcal{J}_{\mathbf{F}}\) is Lipschitz continuous. This covers many practical cases, for instance that of phase retrieval. Observe that whereas assumption A-7 requires a uniform control of injectivity of \(\mathbf{F}\) on the whole signal class \(\Sigma^{\prime}\), A-11 is less demanding and only requires injectivity of the Jacobian of \(\mathbf{F}\) at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\) on the tangent space of \(\Sigma^{\prime}\) at \(\overline{\mathbf{x}}_{\Sigma^{\prime}}\). However the price is that the recovery bound in Theorem A.1 is only valid for high signal-to-noise regime and \(\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime})\) is small enough. Moreover, the convergence rate in noise becomes \(O(\sqrt{\|\boldsymbol{\varepsilon}\|})\) which is worse than \(O(\|\boldsymbol{\varepsilon}\|)\) of Theorem 3.2.
**Example 3.4** (Compressed sensing with sub-Gaussian measurements).: Controlling the minimum conic singular value is not easy in general. Amongst the cases where results are available, we will look at the compressed sensing framework with linear random measurements. In this setting, the forward operator \(\mathbf{A}\in\mathbb{R}^{m\times n}\) is a random sensing matrix. Exploiting the randomness of \(\mathbf{A}\), a natural question is then how many measurements are sufficient to ensure that \(\lambda_{\min}(\mathbf{A};T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{ \prime}}))>0\) with high probability. In the case of Gaussian and sub-Gaussian measurements, we can exploit the non-uniform results of [49, 50] to derive sample complexity bounds, i.e. lower bounds on \(m\), for this to hold. By using [50, Theorem 6.3], we have the following proposition:
**Proposition 3.5**.: _Assume that each row \(\mathbf{A}^{i}\) is an independent sub-Gaussian vector, that is_
1. \(\mathbb{E}[\mathbf{A}^{i}]=0\)_,_
2. \(\alpha\leq\mathbb{E}[\big{|}\langle\mathbf{A}^{i},\mathbf{w}\rangle\big{|}]\) _for each_ \(\mathbf{w}\in\mathbb{S}^{n-1}\) _with_ \(\alpha>0\)_,_
3. \(\mathbb{P}\left(\big{|}\langle\mathbf{A}^{i},\mathbf{w}\rangle\big{|}\geq \tau\right)\leq 2e^{-\tau^{2}/(2\sigma^{2})}\) _for each_ \(\mathbf{w}\in\mathbb{S}^{n-1}\)_, with_ \(\sigma>0\)_._
_Let \(C\) and \(C^{\prime}\) be positive constants and \(w(K)\) the Gaussian width of the cone \(K\) defined as:_
\[w(K)=\mathbb{E}_{\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})}\left[\sup_{\mathbf{ w}\in K\cap\mathbb{S}^{d-1}}\langle\mathbf{z},\mathbf{w}\rangle\right].\]
_If_
\[m\geq C^{\prime}\left(\frac{\sigma}{\alpha}\right)^{6}w(T_{\Sigma^{\prime}}( \overline{\mathbf{x}}_{\Sigma^{\prime}}))^{2}+2C^{-2}\frac{\sigma^{2}}{ \alpha^{4}}\tau^{2},\]
_then \(\lambda_{\min}(\mathbf{A},T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^ {\prime}}))>0\) with probability at least \(1-\exp(-C\tau^{2})\)._
The Gaussian width is an important tool in high-dimensional convex geometry and can be interpreted as a measure of the "dimension" of a cone. Except in some specific settings (such as when \(K\) is a descent cone of a convex function and other special cases), it is notoriously difficult to compute this quantity; see the discussion in [49]. Another "generic" tool for computing Gaussian widths is based on Dudley's inequality which bounds the width of a set in terms of the covering number of the set at all scales. Estimating the covering number is not easy either in general. This shows the difficulty of computing \(w(T_{\Sigma^{\prime}}(\overline{\mathbf{x}}_{\Sigma^{\prime}}))\) which we leave to a future work.
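As a toy numerical illustration of how such widths can at least be approximated, the sketch below Monte Carlo estimates \(w(K)\) for a closed convex cone \(K\) from its Euclidean projector, using the identity \(\sup_{\mathbf{w}\in K\cap\mathbb{S}^{d-1}}\langle\mathbf{z},\mathbf{w}\rangle=\|P_{K}(\mathbf{z})\|\) (valid whenever \(P_{K}(\mathbf{z})\neq 0\)); the nonnegative orthant is used as an illustrative stand-in cone, not the tangent cone of \(\Sigma^{\prime}\).

```python
# Sketch: Monte Carlo estimate of the Gaussian width of a closed convex cone K,
# w(K) = E[ sup_{w in K, ||w|| = 1} <z, w> ]  approximated by  E[ ||P_K(z)|| ],  z ~ N(0, I).
# The nonnegative orthant (P_K(z) = max(z, 0)) is only an illustrative cone.
import numpy as np

def gaussian_width(project_onto_cone, dim, num_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((num_samples, dim))
    return np.mean(np.linalg.norm(project_onto_cone(z), axis=1))

w = gaussian_width(lambda z: np.maximum(z, 0.0), dim=100)
print(w)  # for the orthant in R^100, roughly sqrt(100 / 2) ~ 7.07
```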
Analyzing recovery guarantees in the compressed sensing framework using unsupervised neural networks such as DIP was proposed in [51, 52]. In [51], the authors restricted their analysis to the case of networks without non-linear activations and without training/optimization. The authors of [52] studied the DIP method, but their optimization algorithm is prohibitively intensive as it requires retraining the DIP network at each iteration. Another distinctive difference with our work is that these existing results are uniform, relying on RIP-type arguments and their specialization to Gaussian measurements.
#### 3.4.4 Existence and Uniqueness of a Global Strong Solution
We have already stated in Section 3.2 that (2) admits a unique maximal solution. Assumption (8) allows us to further specify this solution as strong and global. Indeed, (11) ensures that the trajectory \(\boldsymbol{\theta}(t)\) is uniformly bounded. Let us start by recalling the notion of a strong solution.
**Definition 3.6**.: Denote \(\boldsymbol{\theta}:t\in[0,+\infty[\mapsto\boldsymbol{\theta}(t)\in\mathbb{R}^ {p}\). The function \(\boldsymbol{\theta}(\cdot)\) is a strong global solution of (2) if it satisfies the following properties:
* \(\boldsymbol{\theta}\) is in \(\mathcal{C}^{1}([0,+\infty[;\mathbb{R}^{p})\);
* for almost all \(t\in[0,+\infty[\), (2) holds with \(\boldsymbol{\theta}(0)=\boldsymbol{\theta}_{0}\).
**Proposition 3.7**.: _Assume that A-1-A-6 and (8) are satisfied. Then, for any initial condition \(\boldsymbol{\theta}_{0}\), the evolution system (2) has a unique strong global solution._
Proof.: Proposition 3.1 ensures the existence and uniqueness of a maximal solution. Following the discussion after the proof of Proposition 3.1, if \(\mathbf{\theta}(t)\) is bounded, then we are done. This is precisely what is ensured by Theorem 3.2 under our conditions.
### Proofs
We start with the following lemmas that will be instrumental in the proof of Theorem 3.2.
**Lemma 3.8**.: _Assume that A-1, A-3, A-5 and A-6 hold. Let \(\mathbf{\theta}(\cdot)\) be a solution trajectory of (2). Then,_
1. \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(\cdot))\) _is nonincreasing, and thus converges._
2. _If_ \(\mathbf{\theta}(\cdot)\) _is bounded,_ \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(\cdot))\) _is constant on_ \(\mathfrak{W}(\mathbf{\theta}(\cdot))\)_._
Proof.: Let \(V(t)=\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\).
1. Differentiating \(V(\cdot)\), we have for \(t>0\): \[\dot{V}(t) =\langle\dot{\mathbf{y}}(t),\nabla_{\mathbf{y}(t)}\mathcal{L}_{ \mathbf{y}}(\mathbf{y}(t))\rangle\] \[=\langle\mathcal{J}_{\mathbf{F}}(t)\mathcal{J}_{\mathbf{g}}(t) \dot{\mathbf{\theta}}(t),\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{ y}(t))\rangle\] \[=-\left\langle\mathcal{J}_{\mathbf{F}}(t)\mathcal{J}_{\mathbf{g} }(t)\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top} \nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)),\nabla_{ \mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\rangle\] \[=-\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{ F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)) \right\|^{2}=-\left\|\dot{\mathbf{\theta}}(t)\right\|^{2},\] (15) and thus \(V(\cdot)\) is decreasing. Since it is bounded from below (by \(0\) by assumption), it converges to say \(\mathcal{L}_{\infty}\) (\(0\) in our case).
2. Since \(\mathbf{\theta}(\cdot)\) is bounded, \(\mathfrak{W}(\mathbf{\theta}(\cdot))\) is non-empty. Let \(\mathbf{\theta}_{\infty}\in\mathfrak{W}(\mathbf{\theta}(\cdot))\). Then \(\exists t_{k}\to+\infty\) such that \(\mathbf{\theta}(t_{k})\to\mathbf{\theta}_{\infty}\) as \(k\to+\infty\). Combining claim 1 with continuity of \(\mathcal{L}\), \(\mathbf{F}\) and \(\mathbf{g}(\cdot,\mathbf{u})\), we have \[\mathcal{L}_{\infty}=\lim_{k\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{F}( \mathbf{g}(\mathbf{u},\mathbf{\theta}(t_{k}))))=\mathcal{L}_{\mathbf{y}}(\mathbf{ F}(\mathbf{g}(\mathbf{u},\mathbf{\theta}_{\infty}))).\] Since this is true for any cluster point, the claim is proved.
**Lemma 3.9**.: _Assume that A-1 to A-6 hold. Let \(\mathbf{\theta}(\cdot)\) be a solution trajectory of (2). If for all \(t\geq 0\), \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\frac{\sigma_{\min}(\mathcal{J} _{\mathbf{g}}(0))}{2}>0\), then \(\|\dot{\mathbf{\theta}}(\cdot)\|\in L^{1}([0,+\infty[)\). In turn, \(\lim_{t\to+\infty}\mathbf{\theta}(t)\) exists._
Proof.: From Lemma 3.8(i), we have for \(t\geq 0\):
\[\mathbf{y}(t)\in[0\leq\mathcal{L}_{\mathbf{y}}(\cdot)\leq\mathcal{L}_{ \mathbf{y}}(\mathbf{y}(0))].\]
We may assume without loss of generality that \(\mathbf{y}(t)\in[0<\mathcal{L}_{\mathbf{y}}(\cdot)\leq\mathcal{L}_{\mathbf{y} }(\mathbf{y}(0))]\) since otherwise \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(\cdot))\) is eventually zero which implies, by Lemma 3.8, that \(\dot{\mathbf{\theta}}\) is eventually zero, in which case there is nothing to prove.
We are now in position to use the KL property on \(\mathbf{y}(\cdot)\). We have for \(t>0\):
\[\frac{\mathrm{d}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))}{\mathrm{d}t}= \psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\frac{\mathrm{d} \mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))}{\mathrm{d}t}\]
\[=-\psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))\left\| \mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top}\nabla_{ \mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|^{2}\] \[\leq-\frac{\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{ \mathbf{F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y} (t))\right\|^{2}}{\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t))\right\|}\] \[\leq-\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\sigma_{\mathbf{F }}\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{F}}(t)^{\top} \nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\right\|\] \[\leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}{2}\left\|\dot{\boldsymbol{\theta}}(t)\right\|. \tag{16}\]
where we used A-4 and that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\frac{\sigma_{\min}(\mathcal{J }_{\mathbf{g}}(0))}{2}>0\). Integrating, we get
\[\int_{0}^{t}\left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s\leq\frac{2 }{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\left(\psi( \mathcal{L}_{\mathbf{y}}(\mathbf{y}(0)))-\psi(\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t)))\right). \tag{17}\]
Since \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\) converges thanks to Lemma 3.8(i) and \(\psi\) is continuous and increasing, the right hand side in (17) has a limit. Thus passing to the limit as \(t\to+\infty\), we get that \(\dot{\boldsymbol{\theta}}\in L^{1}([0,+\infty[)\). This in turn implies that \(\lim_{t\to+\infty}\boldsymbol{\theta}(t)\) exists, say \(\boldsymbol{\theta}_{\infty}\), by applying Cauchy's criterion to
\[\boldsymbol{\theta}(t)=\boldsymbol{\theta}_{0}+\int_{0}^{t}\dot{\boldsymbol{ \theta}}(s)\mathrm{d}s.\]
**Lemma 3.10**.: _Assume that A-1 to A-6 hold. Recall \(R\) and \(R^{\prime}\) from (9). Let \(\boldsymbol{\theta}(\cdot)\) be a solution trajectory of (2)._
1. _If_ \(\boldsymbol{\theta}\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\) _then_ \[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}))\geq\sigma_{\min} (\mathcal{J}_{\mathbf{g}}(0))/2.\]
2. _If for all_ \(s\in[0,t]\)_,_ \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(s))\geq\frac{\sigma_{\min}(\mathcal{J }_{\mathbf{g}}(0))}{2}\) _then_ \[\boldsymbol{\theta}(t)\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime}).\]
3. _If_ \(R^{\prime}<R\)_, then for all_ \(t\geq 0\)_,_ \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))/2\)_._
Proof.:
1. Since \(\boldsymbol{\theta}\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\), we have \[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_{\mathbf{g}} (\boldsymbol{\theta}_{0})\right\|\leq\mathrm{Lip}_{\mathbb{B}(\boldsymbol{ \theta}_{0},R)}(\mathcal{J}_{\mathbf{g}})\left\|\boldsymbol{\theta}- \boldsymbol{\theta}_{0}\right\|\leq\mathrm{Lip}_{\mathbb{B}(\boldsymbol{\theta} _{0},R)}(\mathcal{J}_{\mathbf{g}})R\leq\frac{\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))}{2}.\] By using that \(\sigma_{\min}(\cdot)\) is 1-Lipschitz, we obtain \[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}))\geq\sigma_{\min}( \mathcal{J}_{\mathbf{g}}(0))-\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{ \theta})-\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}_{0})\right\|\geq\frac{ \sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2}.\]
2. We have for \(t>0\) \[\frac{1}{2}\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0} \right\|^{2}}{\mathrm{d}t}=\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0} \right\|\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0} \right\|}{\mathrm{d}t}=\left\langle\dot{\boldsymbol{\theta}}(t),\boldsymbol{ \theta}(t)-\boldsymbol{\theta}_{0}\right\rangle,\]
and Cauchy-Schwarz inequality then implies \[\frac{\mathrm{d}\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|}{ \mathrm{d}t}\leq\left\|\dot{\boldsymbol{\theta}}(t)\right\|.\] Combining this with (17) yields \[\left\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{0}\right\|\leq\int_{0}^{t} \left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s\leq\frac{2}{\sigma_{ \min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{\mathbf{F}}}\psi(\mathcal{L}_{ \mathbf{y}}(\mathbf{y}(0))),\] where we argue that \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\) is positive and bounded and \(\psi\) is positive and increasing.
3. Actually, we prove the stronger statement that \(\boldsymbol{\theta}(t)\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\) for all \(t\geq 0\), whence our claim will follow thanks to (i). Let us assume for contradiction that \(R^{\prime}<R\) and \(\exists\;t<+\infty\) such that \(\boldsymbol{\theta}(t)\notin\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime})\). By (ii), this means that \(\exists\;s\leq t\) such that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(s))<\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))/2\). In turn, (i) implies that \(\boldsymbol{\theta}(s)\notin\mathbb{B}(\boldsymbol{\theta}_{0},R)\). Let us define \[t_{0}=\inf\{\tau\geq 0:\boldsymbol{\theta}(\tau)\notin\mathbb{B}( \boldsymbol{\theta}_{0},R)\},\] which is well-defined as it is at most \(s\). Thus, for any small \(\boldsymbol{\varepsilon}>0\) and for all \(t^{\prime}\leq t_{0}-\boldsymbol{\varepsilon}\), \(\boldsymbol{\theta}(t^{\prime})\in\mathbb{B}(\boldsymbol{\theta}_{0},R)\) which, in view of (i) entails that \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})(t^{\prime}))\geq \sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))/2\). In turn, we get from (ii) that \(\boldsymbol{\theta}(t_{0}-\boldsymbol{\varepsilon})\in\mathbb{B}(\boldsymbol{ \theta}_{0},R^{\prime})\). Since \(\boldsymbol{\varepsilon}\) is arbitrary and \(\boldsymbol{\theta}\) is continuous, we pass to the limit as \(\boldsymbol{\varepsilon}\to 0\) to deduce that \(\boldsymbol{\theta}(t_{0})\in\mathbb{B}(\boldsymbol{\theta}_{0},R^{\prime}) \subsetneq\mathbb{B}(\boldsymbol{\theta}_{0},R)\) hence contradicting the definition of \(t_{0}\).
Proof of Theorem 3.2.:
1. We here use a standard Lyapunov analysis with several energy functions. Let us reuse \(V(t)\). Embarking from (15), we have for \(t>0\) \[\dot{V}(t) =-\left\|\mathcal{J}_{\mathbf{g}}(t)^{\top}\mathcal{J}_{\mathbf{ F}}(t)^{\top}\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)) \right\|^{2}\] \[\leq-\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))^{2}\sigma_{ \mathbf{F}}^{2}\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}(\mathbf{y }(t))\right\|^{2},\] where we used A-4. In view of Lemma 3.10(iii), we have \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(t))\geq\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))/2>0\) for all \(t\geq 0\) if the initialization error verifies (8). Using once again A-2, we get \[\dot{V}(t) \leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}\sigma_{ \mathbf{F}}^{2}}{4}\left\|\nabla_{\mathbf{y}(t)}\mathcal{L}_{\mathbf{y}}( \mathbf{y}(t))\right\|^{2}\] \[\leq-\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}\sigma_ {\mathbf{F}}^{2}}{4\psi^{\prime}(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))^{2}}.\] Let \(\Psi\) be a primitive of \(-\psi^{\prime 2}\). Then, the last inequality gives \[\dot{\Psi}(V(t)) =\Psi^{\prime}(V(t))\dot{V}(t)\] \[\geq\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))^{2}}{4}.\]
By integration on \(s\in[0,t]\) alongside the fact that \(\Psi\) and \(\Psi^{-1}\) are (strictly) decreasing functions, we get
\[\Psi(V(t))-\Psi(V(0)) \geq\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{ \mathbf{g}}(0))^{2}}{4}t\] \[V(t) \leq\Psi^{-1}\left(\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}( \mathcal{J}_{\mathbf{g}}(0))^{2}}{4}t+\Psi(V(0))\right),\]
which gives (10).
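As an illustration of how (10) specializes, consider the Lojasiewicz-type desingularizing function \(\psi(s)=c\sqrt{s}\) (the behaviour one expects, for instance, for the MSE loss); the computation below is only a sketch of this special case, with generic constants. Here \(\psi^{\prime}(s)^{2}=c^{2}/(4s)\), so one may take \(\Psi(s)=-\frac{c^{2}}{4}\log s\), which is indeed decreasing. The inequality \(\Psi(V(t))\geq\Psi(V(0))+\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}}{4}t\) then rearranges into

\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))\leq\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\exp\left(-\frac{\sigma_{\mathbf{F}}^{2}\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))^{2}}{c^{2}}\,t\right),\]

i.e. an exponential decay of the loss in this case.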
By Lemma 3.9, \(\boldsymbol{\theta}(t)\) converges to some \(\boldsymbol{\theta}_{\infty}\). Continuity of \(\mathcal{L}_{\mathbf{y}}(\cdot)\), \(\mathbf{F}\) and \(\mathbf{g}(\mathbf{u},\cdot)\) implies that
\[0=\lim_{t\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\lim_{t\to+\infty }\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{ \theta}(t))))=\mathcal{L}_{\mathbf{y}}(\mathbf{F}(\mathbf{g}(\mathbf{u}, \boldsymbol{\theta}_{\infty}))),\]
and thus \(\boldsymbol{\theta}_{\infty}\in\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}} (\mathbf{F}(\mathbf{g}(\mathbf{u},\cdot))))\). To get the rate, we argue as in the proof of Lemma 3.10 (ii), replacing \(\boldsymbol{\theta}_{0}\) by \(\boldsymbol{\theta}_{\infty}\), to obtain
\[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\|\leq\int_{t}^{+\infty} \left\|\dot{\boldsymbol{\theta}}(s)\right\|\mathrm{d}s.\]
We then get by integrating (16) that
\[\|\boldsymbol{\theta}(t)-\boldsymbol{\theta}_{\infty}\| \leq-\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}\int_{t}^{+\infty}\frac{\mathrm{d}\psi(\mathcal{L}_{\mathbf{y}}( \mathbf{y}(s)))}{\mathrm{d}s}\mathrm{d}s\] \[\leq\frac{2}{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\sigma_{ \mathbf{F}}}\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))).\]
Thanks to (10), and using that \(\psi\) is increasing, we arrive at (11).
2. By Lemma 3.9 and continuity of \(\mathbf{F}\) and \(\mathbf{g}(\mathbf{u},\cdot)\), we can infer that \(\mathbf{y}(\cdot)\) also converges to \(\mathbf{y}_{\infty}=\mathbf{F}(\mathbf{g}(\mathbf{u},\boldsymbol{\theta}_{ \infty}))\), where \(\boldsymbol{\theta}_{\infty}=\lim_{t\to+\infty}\boldsymbol{\theta}(t)\). Thus using also continuity of \(\mathcal{L}_{\mathbf{y}}(\cdot)\), we have \[0=\lim_{t\to+\infty}\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\mathcal{L}_{ \mathbf{y}}(\mathbf{y}_{\infty}),\] and thus \(\mathbf{y}_{\infty}\in\operatorname{Argmin}(\mathcal{L}_{\mathbf{y}})\). Since the latter is the singleton \(\{\mathbf{y}\}\) by assumption, we conclude. In order to obtain the early stopping bound, we use [41, Theorem 5] that links the KL property of \(\mathcal{L}_{\mathbf{y}}(\cdot)\) with an error bound. In our case, this reads \[\operatorname{dist}(\mathbf{y}(t),\operatorname{Argmin}(\mathcal{L}_{\mathbf{y }}))=\|\mathbf{y}(t)-\mathbf{y}\|\leq\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y} (t))).\] (18) It then follows that \[\|\mathbf{y}(t)-\overline{\mathbf{y}}\| \leq\|\mathbf{y}(t)-\mathbf{y}\|+\|\mathbf{y}-\overline{\mathbf{y }}\|\] \[\leq\psi(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t)))+\|\boldsymbol{ \varepsilon}\|\]
\[\|\mathbf{F}(\overline{\mathbf{x}})-\mathbf{F}(\overline{\mathbf{x}}_{\Sigma^{ \prime}})\|\leq\max_{\mathbf{z}\in\mathbb{B}(0,2\|\overline{\mathbf{x}}\|)}\| \mathcal{J}_{\mathbf{F}}(\mathbf{z})\|\operatorname{dist}(\overline{\mathbf{x}},\Sigma^{\prime}). \tag{19}\]
## 4 Case of The Two-Layer DIP Network
This section is devoted to studying under which conditions on the neural network architecture the key condition in (8) is fulfilled. Towards this goal, we consider the case of a two-layer DIP network. Therein, \(\mathbf{u}\) is randomly set and kept fixed during the training, and the network is trained to transform this input into a signal that matches the observation \(\mathbf{y}\). In particular, we will provide bounds on the level of overparametrization ensuring that (8) holds, which in turn will provide the subsequent recovery guarantees in Theorem 3.2.
### The Two-Layer Neural Network
We take \(L=2\) in Definition 2.1 and thus consider the network defined in (3):
\[\mathbf{g}(\mathbf{u},\boldsymbol{\theta})=\frac{1}{\sqrt{k}}\mathbf{V}\phi( \mathbf{W}\mathbf{u})\]
with \(\mathbf{V}\in\mathbb{R}^{n\times k}\) and \(\mathbf{W}\in\mathbb{R}^{k\times d}\), and \(\phi\) an element-wise nonlinear activation function. Observe that it is immediate to account for a bias vector in the hidden layer by considering the bias as an extra column of the weight matrix \(\mathbf{W}\), augmenting \(\mathbf{u}\) by \(1\) and then normalizing to unit norm. The normalization is required to comply with A-8 hereafter. The role of the scaling by \(\sqrt{k}\) will become apparent shortly; it is instrumental in concentrating the kernel stemming from the Jacobian of the network.
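To fix ideas, here is a minimal NumPy sketch of this parametrization and of the kernel \(\mathcal{J}_{\mathbf{g}}\mathcal{J}_{\mathbf{g}}^{\top}\) (with respect to \(\boldsymbol{\theta}=(\mathbf{W},\mathbf{V})\)) whose concentration is exploited in Section 4.3. The sigmoid activation, the explicit loop and the names are illustrative choices, and the closed-form expression of the kernel assumes \(\|\mathbf{u}\|=1\) as in A-8 below.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

def g(u, W, V):
    """Two-layer generator g(u, theta) = V phi(W u) / sqrt(k)."""
    k = W.shape[0]
    return (V @ sigmoid(W @ u)) / np.sqrt(k)

def jacobian_kernel(u, W, V):
    """Kernel H = J_g J_g^T with respect to theta = (W, V), assuming |u| = 1:
    H = (1/k) sum_i [ phi'(W^i u)^2 V_i V_i^T + phi(W^i u)^2 I_n ]."""
    k, n = W.shape[0], V.shape[0]
    z = W @ u
    H = np.zeros((n, n))
    for i in range(k):
        vi = V[:, i][:, None]
        H += sigmoid_prime(z[i]) ** 2 * (vi @ vi.T) + sigmoid(z[i]) ** 2 * np.eye(n)
    return H / k
```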
In the sequel, we set \(C_{\phi}=\sqrt{\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi(X)^{2}\right]}\) and \(C_{\phi^{\prime}}=\sqrt{\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi^{\prime}( X)^{2}\right]}\). We will assume without loss of generality that \(\mathbf{F}(0)=0\). This is a very mild assumption that is natural in the context of inverse problems, but can be easily removed if needed. We will also need the following assumptions:
**Assumptions on the network input and initialization**
1. \(\mathbf{u}\) _is a uniform vector on_ \(\mathbb{S}^{d-1}\)_;_
2. \(\mathbf{W}(0)\) _has iid entries from_ \(\mathcal{N}(0,1)\) _and_ \(C_{\phi},C_{\phi^{\prime}}<+\infty\)_;_
3. \(\mathbf{V}(0)\) _is independent from_ \(\mathbf{W}(0)\) _and_ \(\mathbf{u}\) _and has iid columns with identity covariance and_ \(D\)_-bounded centered entries._
### Recovery Guarantees in the Overparametrized Regime
Our main result gives a bound on the level of overparameterization which is sufficient for (8) to hold.
**Theorem 4.1**.: _Suppose that assumptions A-1, A-3, A-5 and A-6 hold. Let \(C\) and \(C^{\prime}\) be two positive constants that depend only on the activation function and \(D\). Let:_
\[L_{\mathbf{F},0}=\max_{\mathbf{x}\in\mathbb{B}\left(0,C\sqrt{n\log(d)}\right) }\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{x})\right\|\]
_and_
\[L_{\mathcal{L},0}=\max_{\mathbf{v}\in\mathbb{B}\left(0,CL_{\mathbf{F},0}\sqrt {n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\mathbf{\overline{x}})\right\|_{ \infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)}\frac{ \left\|\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\right\|}{ \left\|\mathbf{v}-\mathbf{y}\right\|}.\]
_Consider the one-hidden layer network (3) where both layers are trained with the initialization satisfying A-8 to A-10 and the architecture parameters obeying_
\[k\geq C^{\prime}\sigma_{\mathbf{F}}^{-4}n\psi\left(\frac{L_{\mathcal{L},0}}{2 }\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}( \mathbf{\overline{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\| _{\infty}\right)\right)^{2}\right)^{4}.\]
_Then (8) holds with probability at least \(1-2n^{-1}-d^{-1}\)._
Before proving Theorem 4.1, a few remarks are in order.
_Remark 4.2_ (Randomness of \(\Sigma^{\prime}\)).: It is worth observing that since the initialization is random, so is the set of signals \(\Sigma^{\prime}=\Sigma_{\mathbb{B}_{R^{\prime}+\left\|\boldsymbol{\varepsilon} \right\|}(0)}\) by definition, where \(\boldsymbol{\theta}_{0}=(\mathbf{V}(0),\mathbf{W}(0))\). This set is contained in a larger deterministic set with high probability. Indeed, Gaussian concentration gives us, for any \(\delta>0\),
\[\left\|\mathbf{W}(0)\right\|_{F}\leq(1+\delta)\sqrt{kd}\]
with probability larger than \(1-e^{-\delta^{2}kd/2}\). Moreover, since by A-10\(\mathbf{V}(0)\) has independent columns with bounded entries and \(\mathbb{E}\left[\left\|\mathbf{V}_{i}(0)\right\|^{2}\right]=n\), we can apply Hoeffding's inequality
to \(\left\|\mathbf{V}(0)\right\|_{F}^{2}=\sum_{i=1}^{k}\left\|\mathbf{V}_{i}(0)\right\|^ {2}\) to infer that
\[\left\|\mathbf{V}(0)\right\|_{F}\leq(1+\delta)\sqrt{kn}\]
with probability at least \(1-e^{-\delta^{2}kd/(2D^{2})}\). Collecting the above, we have
\[\left\|\boldsymbol{\theta}_{0}\right\|\leq(1+\delta)\sqrt{k}\left(\sqrt{n}+ \sqrt{d}\right),\]
with probability at least \(1-e^{-\delta^{2}kd/2}-e^{-\delta^{2}kd/(2D^{2})}\). In view of the bound on \(R^{\prime}\) (see (22)), this yields that with probability at least \(1-e^{-\delta^{2}kd/2}-e^{-\delta^{2}kd/(2D^{2})}-2n^{-1}-d^{-1}\), \(\Sigma^{\prime}\subset\Sigma_{\mathbb{B}_{\rho}(0)}\), where
\[\rho=\frac{4}{\sigma_{\mathbf{F}}\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}} \psi\left(\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+ \sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+ \left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}\right)+(1+ \delta)\sqrt{k}\left(\sqrt{n}+\sqrt{d}\right).\]
This confirms the expected behaviour that expressivity of \(\Sigma^{\prime}\) is higher as the overparametrization increases.
_Remark 4.3_ (Distribution of \(\mathbf{u}\)).: The generator \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) synthesizes data by transforming the input (latent) random variable \(\mathbf{u}\). As such, it generates signals \(\mathbf{x}\in\Sigma^{\prime}\) which lie in the support of the measure \(\mathbf{g}(\cdot,\boldsymbol{\theta})\#\mu_{\mathbf{u}}\), where \(\mu_{\mathbf{u}}\) is the distribution of \(\mathbf{u}\), and \(\#\) is the push-forward operator. The expressivity of these generative models, also coined push-forward models, in particular GANs, has been recently studied either empirically or theoretically [53, 54, 55, 56, 57]. In particular, this literature highlights the known fact that, since \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) is continuous by construction, the support of \(\mathbf{g}(\cdot,\boldsymbol{\theta})\#\mu_{\mathbf{u}}\) is connected if that of \(\mu_{\mathbf{u}}\) is connected (as in our case). On the other hand, a common assumption in the imaging literature, validated empirically by [58], is that distributions of natural images are supported on low-dimensional manifolds. It is also conjectured that the distribution of natural images may in fact lie on a union of disjoint manifolds rather than on one globally connected manifold; the union of subspaces or manifolds model is indeed a common assumption in signal/image processing. In the latter case, a generator \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) that attempts to cover the different modes (manifolds) of the target distribution from one unimodal latent variable \(\mathbf{u}\) will generate samples out of the real data manifold. There are two main ways to avoid this: either making the support of \(\mu_{\mathbf{u}}\) disconnected (e.g. using a mixture of distributions [54, 59]), or making \(\mathbf{g}(\cdot,\boldsymbol{\theta})\) discontinuous [53]. The former strategy appears natural in our context and it will be interesting to investigate this generalization in a future work.
_Remark 4.4_ (Restricted injectivity).: As argued above, if \(\Sigma^{\prime}\) belongs to a target manifold \(\mathcal{M}\), then the restricted injectivity condition (14) tells us that \(\mathbf{A}\) has to be invertible on the tangent space of the target manifold \(\mathcal{M}\) at the closest point of \(\overline{\mathbf{x}}\) in \(\mathcal{M}\).
_Remark 4.5_ (Dependence on \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\)).: The overparametrization bound on \(k\) depends on \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\) which in turn may depend on \((n,m,d)\). Their estimate is therefore important. For instance, if \(\mathbf{F}\) is globally Lipschitz, as is the case when it is linear, then \(L_{\mathbf{F},0}\) is independent of \((n,m,d)\). As far as \(L_{\mathcal{L},0}\) is concerned, it is of course independent of \((n,m,d)\) if the loss gradient is globally Lipschitz continuous. Another situation of interest is when \(\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})\) verifies
\[\left\|\nabla_{\mathbf{v}}\mathcal{L}_{\mathbf{y}}(\mathbf{v})-\nabla_{ \mathbf{z}}\mathcal{L}_{\mathbf{y}}(\mathbf{z})\right\|\leq\varphi\left(\left\| \mathbf{v}-\mathbf{z}\right\|\right),\quad\forall\mathbf{v},\mathbf{z}\in\mathbb{ R}^{m},\]
where \(\varphi:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) is increasing and vanishes at \(0\). This is clearly weaker than global Lipschitz continuity and covers it as a special case. It also encompasses many important
situations such as e.g. losses with Holderian gradients. It then easily follows, see e.g. [42, Theorem 18.13], that for all \(\mathbf{v}\in\mathbb{R}^{m}\):
\[\mathcal{L}_{\mathbf{y}}(\mathbf{v})\leq\Phi\left(\left\|\mathbf{v}-\mathbf{y} \right\|\right)\quad\text{where}\quad\Phi(s)=\int_{0}^{1}\frac{\varphi(st)}{t} \mathrm{d}t.\]
In this situation, and if \(\mathbf{F}\) is also globally \(L_{\mathbf{F}}\)-Lipschitz, following our line of proof, the overparametrization bound of Theorem 4.1 reads
\[k\geq C^{\prime}\sigma_{\mathbf{F}}^{-4}n\psi\left(\Phi\left(CL_{\mathbf{F}} \sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\| _{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right) \right)^{4}.\]
_Remark 4.6_ (Dependence on the loss function).: If we now take interest in the scaling of the overparametrization bound on \(k\) with respect to \((n,m,d)\) in the general case we obtain that \(k\gtrsim\sigma_{\mathbf{F}}^{-4}n\psi(L_{\mathcal{L},0}(L_{\mathbf{F},0}^{2}n +m))^{4}\). Aside from the possible dependence of \(L_{\mathcal{L},0}\) and \(L_{\mathbf{F},0}\) on the parameters \((n,m,d)\) discussed before, we observe that this bound is highly dependent on the desingularizing function \(\psi\) given by the loss function. In the Lojasiewicz case where \(\psi=cs^{\alpha}\) with \(\alpha\in[0,1]\), one can choose to use a sufficiently small \(\alpha\) to reduce the scaling on the parameters but then one would slow the convergence rate as described in Corollary 3.3 which implies a tradeoff between the convergence rate and the number of parameters to ensure this convergence.
In the special case where \(\alpha=\frac{1}{2}\) which corresponds to the MSE loss, and where \(L_{\mathbf{F},0}\) is of constant order and independent of \((n,m,d)\), then the overparametrization of \(k\) necessary for ensuring convergence to a zero-loss is \(k\gtrsim n^{3}m^{2}\). Another interesting case is when \(\mathbf{F}\) is linear. In that setting, the overparametrization bound becomes \(k\gtrsim\sigma_{\mathbf{F}}^{-4}n\psi(L_{\mathcal{L},0}(\left\|\mathbf{F} \right\|^{2}n+m))^{4}\). By choosing the MSE loss, and thus controlling \(\psi\) to be a square root operator, then we obtain that we need \(k\gtrsim\kappa(\mathbf{F})^{4}n^{3}m^{2}\). The bound is thus more demanding as \(\mathbf{F}\) becomes more and more ill-conditioned. The latter dependency can be interpreted as follows: the more ill-conditioned the original problem is, the larger the network needs to be.
_Remark 4.7_ (Scaling when \(\mathbf{V}\) is fixed).: When the linear layer \(\mathbf{V}\) is fixed and only \(\mathbf{W}\) is trained, the overparametrization bound to guarantee convergence can be improved (see Appendix B and the results in [28]). In this case, one needs \(k\gtrsim\sigma_{\mathbf{F}}^{-2}n\psi(L_{\mathcal{L},0}(L_{\mathbf{F},0}^{2}n +m))^{2}\). In particular, for the MSE loss and an operator such that \(L_{\mathbf{F},0}\) is of constant order (as is the case when \(\mathbf{F}\) is linear), we only need \(k\gtrsim n^{2}m\). The main reason underlying this improvement is that there is no need in this case to control the deviation of \(\mathbf{V}\) from its initial point to compute the local Lipschitz constant of the jacobian of the network. This allows to have a far better Lipschitz constant estimate which turns out to be even global in this case.
_Remark 4.8_ (Effect of input dimension \(d\)).: Finally, the dependence on \(d\) is far smaller (by a log factor) than the one on \(n\) and \(m\). In the way we presented the theorem, it does also affect the probability obtained but it is possible to write the same probability without \(d\) and with a stronger impact of \(n\). This indicates that \(d\) plays a very minor role on the overparametrization level whereas \(k\) is the key to reaching the overparametrized regime we are looking for. In fact, this is demonstrated by our numerical experiments where we obtained the same results by using very small \(d\in[1,10]\) or larger values up to 500, for all our experiments with potentially large \(n\).
### Proofs
We start with the following lemmas that will be instrumental in the proof of Theorem 4.1.
**Lemma 4.9** (Bound on \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\) with both layers trained).: _Consider the one-hidden layer network (3) with both layers trained under assumptions A-5 and A-8-A-10. We have_
\[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\geq\sqrt{C_{\phi}^{2}+C_{\phi^{ \prime}}^{2}}/2\]
_with probability at least \(1-2n^{-1}\) provided that \(k/\log(k)\geq Cn\log(n)\) for \(C>0\) large enough that depends only on \(B\), \(C_{\phi}\), \(C_{\phi^{\prime}}\) and \(D\)._
Proof.: Define the matrix \(\mathbf{H}=\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta}_{0})\mathcal{J}_{ \mathbf{g}}(\boldsymbol{\theta}_{0})^{\top}\). Since \(\mathbf{u}\) is on the unit sphere, \(\mathbf{H}\) reads
\[\mathbf{H}=\frac{1}{k}\sum_{i=1}^{k}\mathbf{H}_{i},\quad\text{ where}\quad \mathbf{H}_{i}\stackrel{{\mathrm{def}}}{{=}}\phi^{\prime}( \mathbf{W}^{i}(0)\mathbf{u})^{2}\mathbf{V}_{i}(0)\mathbf{V}_{i}(0)^{\top}+ \phi(\mathbf{W}^{i}(0)\mathbf{u})^{2}\mathbf{I}_{n}.\]
It then follows that
\[\mathbb{E}\left[\mathbf{H}\right] =\frac{1}{k}\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi^{\prime }(X)^{2}\right]\sum_{i=1}^{k}\mathbb{E}\left[\mathbf{V}_{i}(0)\mathbf{V}_{i}( 0)^{\top}\right]+\mathbb{E}_{X\sim\mathcal{N}(0,1)}\left[\phi(X)^{2}\right] \mathbf{I}_{n}\] \[=(C_{\phi^{\prime}}^{2}+C_{\phi}^{2})\mathbf{I}_{n},\]
where we used A-8, A-9 and orthogonal invariance of the Gaussian distribution, hence \(\mathbf{W}^{i}(0)\mathbf{u}\) are iid in \(\mathcal{N}(0,1)\), as well as A-10 and independence between \(\mathbf{V}(0)\) and \(\mathbf{W}(0)\). Moreover, \(\mathbb{E}\left[\phi(X)\right]\leq C_{\phi}\), and since \(X\sim\mathcal{N}(0,1)\) and in view of A-5, we can upper-bound \(\phi(X)\) using the Gaussian concentration inequality to get
\[\mathbb{P}\left(\phi(X)\geq C_{\phi}\sqrt{\log(nk)}+\tau\right)\leq\mathbb{P} \left(\phi(X)\geq\mathbb{E}\left[\phi(X)\right]+\tau\right)\leq\exp\left(- \frac{\tau^{2}}{2B^{2}}\right)\!. \tag{20}\]
By choosing \(\tau=\sqrt{2}B\sqrt{\log(nk)}\), and taking \(c_{1}=C_{\phi}+\sqrt{2}B\), we get
\[\mathbb{P}\left(\phi(X)\geq c_{1}\sqrt{\log(nk)}\right)\leq(nk)^{-1}. \tag{21}\]
Using a union bound, we obtain
\[\mathbb{P}\left(\max_{i\in[k]}\phi(\mathbf{W}^{i}(0)\mathbf{u})^{2}>c_{1}\log (nk)\right)\leq n(nk)^{-1}\leq n^{-1}.\]
Thus, with probability at least \(1-n^{-1}\) we get
\[\max_{i\in[k]}\lambda_{\max}\left(\mathbf{H}_{i}\right)\leq B^{2}D^{2}n+c_{1} \log(nk)\leq c_{2}n\log(k),\]
where \(c_{2}=B^{2}D^{2}+2c_{1}\). We can then apply the matrix Chernoff inequality [60, Theorem 5.1.1] to get
\[\mathbb{P}\left(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\leq \delta\sqrt{C_{\phi^{\prime}}^{2}+C_{\phi}^{2}}\right)\] \[\leq ne^{-\frac{(1-\delta)^{2}k(C_{\phi^{\prime}}^{2}+C_{\phi}^{ 2})}{c_{2}n\log(k)}}+n^{-1}.\]
Taking \(\delta=1/2\) and \(k\) as prescribed with a sufficiently large constant \(C\), we conclude.
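As a quick numerical sanity check of this concentration phenomenon (not a substitute for the proof), one can draw an initialization satisfying A-8 to A-10, form the kernel \(\mathbf{H}\) above in closed form, and compare \(\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\) with \(\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}/2\). The sketch below uses the sigmoid activation and Rademacher entries for \(\mathbf{V}(0)\) (so \(D=1\)); these choices, the dimensions and the Monte Carlo estimation of \(C_{\phi},C_{\phi^{\prime}}\) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 20, 50, 5000

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
dsigmoid = lambda x: sigmoid(x) * (1.0 - sigmoid(x))

# A-8: unit-norm input; A-9: Gaussian W(0); A-10: centered, bounded,
# identity-covariance columns for V(0) (here Rademacher entries, D = 1).
u = rng.standard_normal(d); u /= np.linalg.norm(u)
W0 = rng.standard_normal((k, d))
V0 = rng.choice([-1.0, 1.0], size=(n, k))

# H = J_g(0) J_g(0)^T = (1/k) sum_i [phi'(W^i u)^2 V_i V_i^T + phi(W^i u)^2 I_n].
z = W0 @ u
a, b = dsigmoid(z), sigmoid(z)
H = (V0 * a) @ (V0 * a).T / k + np.mean(b ** 2) * np.eye(n)
sigma_min = np.sqrt(max(np.linalg.eigvalsh(H).min(), 0.0))

# Monte Carlo estimates of C_phi and C_phi' for the sigmoid activation.
x = rng.standard_normal(100_000)
C_phi = np.sqrt(np.mean(sigmoid(x) ** 2))
C_phi_p = np.sqrt(np.mean(dsigmoid(x) ** 2))
print(sigma_min, np.sqrt(C_phi ** 2 + C_phi_p ** 2) / 2)
# For k large compared to n log(n) log(k), sigma_min should exceed the bound.
```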
**Lemma 4.10** (Local Lipschitz constant of \(\mathcal{J}_{\mathbf{g}}\) with both layers trained).: _Suppose that assumptions A-5, A-8 and A-10 are satisfied. For the one-hidden layer network (3) with both layers trained, we have for \(n\geq 2\) and any \(\rho>0\):_
\[\mathrm{Lip}_{\mathbb{B}(\boldsymbol{\theta}_{0},\rho)}(\mathcal{J}_{ \mathbf{g}})\leq B(1+2(D+\rho))\sqrt{\frac{n}{k}}.\]
Proof.: Let \(\boldsymbol{\theta}\in\mathbb{R}^{k(d+n)}\) (resp. \(\boldsymbol{\widetilde{\theta}}\)) be the vectorized form of the parameters of the network \((\mathbf{W},\mathbf{V})\) (resp. \((\widetilde{\mathbf{W}},\widetilde{\mathbf{V}})\)). For \(\boldsymbol{\theta},\boldsymbol{\widetilde{\theta}}\in\mathbb{B}(\boldsymbol{\theta}_{0},\rho)\), we have
\[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_ {\mathbf{g}}(\boldsymbol{\widetilde{\theta}})\right\|^{2} \leq\frac{1}{k}\left(\sum_{i=1}^{k}\left\|\phi^{\prime}(\mathbf{ W}^{i}\mathbf{u})\mathbf{V}_{i}\mathbf{u}^{\top}-\phi^{\prime}(\widetilde{ \mathbf{W}}^{i}\mathbf{u})\widetilde{\mathbf{V}}_{i}\mathbf{u}^{\top}\right\| _{F}^{2}+\left\|\mathrm{diag}_{n}\left(\phi(\mathbf{W}\mathbf{u})-\phi( \widetilde{\mathbf{W}}\mathbf{u})\right)\right\|_{F}^{2}\right)\] \[\leq\frac{1}{k}\Bigg{(}2\sum_{i=1}^{k}\left(\left\|\phi^{\prime}( \mathbf{W}^{i}\mathbf{u})\left(\mathbf{V}_{i}-\widetilde{\mathbf{V}}_{i} \right)\mathbf{u}^{\top}\right\|_{F}^{2}+\left\|\left(\phi^{\prime}(\mathbf{ W}^{i}\mathbf{u})-\phi^{\prime}(\widetilde{\mathbf{W}}^{i}\mathbf{u})\right) \widetilde{\mathbf{V}}_{i}\mathbf{u}^{\top}\right\|_{F}^{2}\right)\] \[\qquad+\left\|\mathrm{diag}_{n}\left(\phi(\mathbf{W}\mathbf{u})- \phi(\widetilde{\mathbf{W}}\mathbf{u})\right)\right\|_{F}^{2}\Bigg{)}\] \[\leq\frac{1}{k}\left(2B^{2}\sum_{i=1}^{k}\left(\left\|\mathbf{V} _{i}-\widetilde{\mathbf{V}}_{i}\right\|^{2}+\left\|\mathbf{W}^{i}-\widetilde{ \mathbf{W}}^{i}\right\|^{2}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\right) +n\left\|\phi(\mathbf{W}\mathbf{u})-\phi(\widetilde{\mathbf{W}}\mathbf{u}) \right\|^{2}\right)\] \[\leq\frac{1}{k}\left(2B^{2}\left\|\mathbf{V}-\widetilde{\mathbf{ V}}\right\|_{F}^{2}+2B^{2}\sum_{i=1}^{k}\left\|\mathbf{W}^{i}-\widetilde{\mathbf{W}}^{ i}\right\|^{2}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}+B^{2}n\left\| (\mathbf{W}-\widetilde{\mathbf{W}})\mathbf{u}\right\|^{2}\right)\] \[\leq\frac{1}{k}\left(2B^{2}\left\|\mathbf{V}-\widetilde{\mathbf{ V}}\right\|_{F}^{2}+2B^{2}\max_{i}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2} \left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}+B^{2}n\left\|\mathbf{ W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}\right)\] \[\leq\frac{n}{k}B^{2}\left(\left\|\mathbf{V}-\widetilde{\mathbf{ V}}\right\|_{F}^{2}+\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2} \right)+\frac{2}{k}B^{2}\max_{i}\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2} \left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}\] \[=\frac{n}{k}B^{2}\left\|\boldsymbol{\theta}-\boldsymbol{\widetilde{ \theta}}\right\|^{2}+\frac{2}{k}B^{2}\max_{i}\left\|\widehat{\mathbf{V}}_{i} \right\|^{2}\left\|\mathbf{W}-\widetilde{\mathbf{W}}\right\|_{F}^{2}.\]
Moreover, for any \(i\in[k]\):
\[\left\|\widetilde{\mathbf{V}}_{i}\right\|^{2}\leq 2\left\|\mathbf{V}_{i}(0)\right\|^ {2}+2\left\|\widetilde{\mathbf{V}}_{i}-\mathbf{V}_{i}(0)\right\|^{2}\leq 2 \left\|\mathbf{V}_{i}(0)\right\|^{2}+2\left\|\boldsymbol{\theta}-\boldsymbol{ \theta}_{0}\right\|^{2}\leq 2nD^{2}+2\rho^{2},\]
where we used A-10. Thus
\[\left\|\mathcal{J}_{\mathbf{g}}(\boldsymbol{\theta})-\mathcal{J}_{\mathbf{g}}(\boldsymbol{\widetilde{\theta}})\right\|^{2}\leq\frac{n}{k}B^{2}\left(1+4D^{2}+2\rho^{2}\right)\left\|\boldsymbol{\theta}-\boldsymbol{\widetilde{\theta}}\right\|^{2}.\] The claim then follows by taking square roots and using \(\sqrt{1+4D^{2}+2\rho^{2}}\leq 1+2(D+\rho)\).
**Lemma 4.11** (Bound on the initial error).: _Under assumptions A-5, A-6 and A-8 to A-10, the initial error of the network satisfies_
\[\left\|\mathbf{y}(0)-\mathbf{y}\right\|\leq CL_{\mathbf{F},0}\sqrt{n\log(d)} +\sqrt{m}\left(\left\|\mathbf{F}(\mathbf{\overline{x}})\right\|_{\infty}+ \left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right),\]
_with probability at least \(1-d^{-1}\), where \(C\) is a constant that depends only on \(B\), \(C_{\phi}\), and \(D\)._
Proof.: By A-6 and the mean value theorem, we have
\[\left\|\mathbf{y}(0)-\mathbf{y}\right\|\leq\max_{\mathbf{x}\in\mathbb{B}(0, \left\|\mathbf{x}(0)\right\|)}\left\|\mathcal{J}_{\mathbf{F}}(\mathbf{x}) \right\|\left\|\mathbf{x}(0)\right\|+\sqrt{m}\left(\left\|\mathbf{F}(\mathbf{ \overline{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{ \infty}\right),\]
where \(\mathbf{x}(0)=\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(0))=\frac{1}{\sqrt{k}} \sum_{i=1}^{k}\phi(\mathbf{W}^{i}(0)\mathbf{u})\mathbf{V}_{i}(0)\). Moreover, by A-10:
\[\left\|\mathbf{g}(\mathbf{u},\boldsymbol{\theta}(0))\right\|\leq\max_{i} \left\|\mathbf{V}_{i}(0)\right\|\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi( \mathbf{W}^{i}(0)\mathbf{u})\right|\leq D\sqrt{n}\frac{1}{\sqrt{k}}\sum_{i=1} ^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|.\]
We now prove that the last term concentrates around its expectation. First, owing to A-8 and A-9, we can argue using orthogonal invariance of the Gaussian distribution and independence to infer that
\[\mathbb{E}\left[\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0) \mathbf{u})\right|\right]^{2}\leq\frac{1}{k}\mathbb{E}\left[\left(\sum_{i=1}^ {k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\right)^{2}\right]=\mathbb{E }\left[\phi(\mathbf{W}^{1}(0)\mathbf{u})^{2}\right]=C_{\phi}^{2}.\]
In addition, the triangle inequality and Lipschitz continuity of \(\phi\) (see A-5) yields
\[\frac{1}{\sqrt{k}}\left|\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i} \mathbf{u})\right|-\left|\phi(\mathbf{\widetilde{W}}^{i}\mathbf{u})\right| \right| \leq\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i} \mathbf{u})-\phi(\mathbf{\widetilde{W}}^{i}\mathbf{u})\right|\] \[\leq B\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left\|\mathbf{W}^{i} -\mathbf{\widetilde{W}}^{i}\right\|\right)\leq BD\left\|\mathbf{W}-\mathbf{ \widetilde{W}}\right\|_{F}.\]
We then get using the Gaussian concentration inequality that
\[\mathbb{P}\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\geq C_{\phi}\sqrt{\log(d)}+\tau\right)\] \[\leq\mathbb{P}\left(\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\geq\mathbb{E}\left[\frac{1}{\sqrt{k}}\sum_{i=1}^{k}\left|\phi(\mathbf{W}^{i}(0)\mathbf{u})\right|\right]+\tau\right)\leq e^{-\frac{\tau^{2}}{2B^{2}D^{2}}}.\]
Taking \(\tau=\sqrt{2}BD\sqrt{\log(d)}\), we get
\[\|\mathbf{x}(0)\|\leq C\sqrt{n\log(d)}\]
with probability at least \(1-d^{-1}\). Since the event above implies \(\mathbb{B}(0,\|\mathbf{x}(0)\|)\subset\mathbb{B}\left(0,C\sqrt{n\log(d)}\right)\), we conclude.
Proof of Theorem 4.1.: Proving Theorem 4.1 amounts to showing that (8) holds with high probability under our scaling. This will be achieved by combining Lemma 4.9, Lemma 4.10 and Lemma 4.11 as well as the union bound.
From Lemma 4.9, we have
\[\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))\geq\sqrt{C_{\phi}^{2}+C_{\phi^{ \prime}}^{2}}/2\]
with probability at least \(1-2n^{-1}\) provided \(k\geq C_{0}n\log(n)\log(k)\) for \(C_{0}>0\). On the other hand, from Lemma 4.10, and recalling \(R\) from (9), we have that \(R\) must obey
\[R\geq\frac{\sigma_{\min}(\mathcal{J}_{\mathbf{g}}(0))}{2B\left((1+2D)+2R\right)}\sqrt{\frac{k}{n}}\geq\frac{\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}}{8B\left((1/2+D)+R\right)}\sqrt{\frac{k}{n}}.\]
Solving for \(R\), we arrive at
\[R\geq\frac{\sqrt{(1/2+D)^{2}+\frac{\sqrt{(C_{\phi}^{2}+C_{\phi^{\prime}}^{2}) \frac{k}{n}}}{2B}}-(1/2+D)}{2}.\]
Simple algebraic computations and standard bounds on \(\sqrt{1+a}\) for \(a\in[0,1]\) show that
\[R\geq C_{1}\left(\frac{k}{n}\right)^{1/4}\]
whenever \(k\gtrsim n\), \(C_{1}\) being a positive constant that depends only on \(B\), \(C_{\phi}\), \(C_{\phi^{\prime}}\) and \(D\).
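For completeness, one elementary way (not necessarily the exact computation alluded to above) to extract this scaling is the following: writing \(b=\frac{\sqrt{(C_{\phi}^{2}+C_{\phi^{\prime}}^{2})\frac{k}{n}}}{2B}\), the previous display and \(\sqrt{(1/2+D)^{2}+b}\geq\sqrt{b}\) give

\[R\geq\frac{\sqrt{b}-(1/2+D)}{2}\geq\frac{\sqrt{b}}{4}=\frac{(C_{\phi}^{2}+C_{\phi^{\prime}}^{2})^{1/4}}{4\sqrt{2B}}\left(\frac{k}{n}\right)^{1/4},\]

where the second inequality holds as soon as \(\sqrt{b}\geq 2(1/2+D)\), i.e. whenever \(k\gtrsim n\).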
Thanks to A-1 and A-3, we have by the descent lemma, see e.g. [42, Lemma 2.64], that
\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\leq\max_{\mathbf{v}\in[\mathbf{y}, \mathbf{y}(0)]}\frac{\|\nabla\mathcal{L}_{\mathbf{y}}(\mathbf{v})\|}{\| \mathbf{v}-\mathbf{y}\|}\frac{\left\|\mathbf{y}(0)-\mathbf{y}\right\|^{2}}{2}.\]
Combining Lemma 4.11 and the fact that
\[[\mathbf{y},\mathbf{y}(0)]\subset\mathbb{B}(0,\left\|\mathbf{y}\right\|+\left\| \mathbf{y}(0)\right\|)\]
then allows to deduce that with probability at least \(1-d^{-1}\), we have
\[\mathcal{L}_{\mathbf{y}}(\mathbf{y}(0))\leq\frac{L_{\mathcal{L},0}}{2}\left( CL_{\mathbf{F},0}\sqrt{n\log(d)}+\sqrt{m}\left(\left\|\mathbf{F}(\overline{ \mathbf{x}})\right\|_{\infty}+\left\|\boldsymbol{\varepsilon}\right\|_{\infty }\right)\right)^{2}.\]
Therefore, using the union bound and the fact that \(\psi\) is increasing, it is sufficient for (8) to be fulfilled with probability at least \(1-2n^{-1}-d^{-1}\), that
\[\frac{4}{\sigma_{\mathbf{F}}\sqrt{C_{\phi}^{2}+C_{\phi^{\prime}}^{2}}}\psi \left(\frac{L_{\mathcal{L},0}}{2}\left(CL_{\mathbf{F},0}\sqrt{n\log(d)}+ \sqrt{m}\left(\left\|\mathbf{F}(\overline{\mathbf{x}})\right\|_{\infty}+ \left\|\boldsymbol{\varepsilon}\right\|_{\infty}\right)\right)^{2}\right)<C_{1 }\left(\frac{k}{n}\right)^{1/4}, \tag{22}\]
whence we deduce the claimed scaling.
## 5 Numerical Experiments
To validate our theoretical findings, we carried out a series of experiments on two-layer neural networks in the DIP setting. Therein, 25000 gradient descent iterations with a fixed step-size were performed. If the loss reached a value smaller than \(10^{-7}\), we stopped the training and considered that it had converged. For these networks, we only trained the first layer, \(\mathbf{W}\), and fixed the second layer, \(\mathbf{V}\), as this allows for better theoretical scalings as discussed in Remark 4.7. Every network was initialized according to the assumptions of this work, and we used the sigmoid activation function. The entries of \(\overline{\mathbf{x}}\) are drawn from \(\mathcal{N}(0,1)\) while the entries of the linear forward operator \(\mathbf{F}\) are drawn from \(\mathcal{N}(0,1/\sqrt{n})\) to ensure that \(L_{\mathbf{F},0}\) is of constant order.
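For concreteness, the following minimal NumPy sketch reproduces the flavour of this experimental protocol (two-layer network, fixed second layer, sigmoid activation, fixed-step gradient descent on a squared-error loss, noise-free observation). The dimensions, step size, seed and the exact normalization of \(\mathbf{F}\) are illustrative assumptions, not the values used to produce the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, k = 60, 10, 500, 400
step, max_iter, tol = 1e-2, 25000, 1e-7

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
dsigmoid = lambda x: sigmoid(x) * (1.0 - sigmoid(x))

# Ground truth, linear forward operator and noiseless observation.
x_bar = rng.standard_normal(n)
F = rng.standard_normal((m, n)) / np.sqrt(n)   # scaling so the operator stays O(1)
y = F @ x_bar

# Network input and initialization (A-8 to A-10); only W is trained, V is fixed.
u = rng.standard_normal(d); u /= np.linalg.norm(u)
W = rng.standard_normal((k, d))
V = rng.choice([-1.0, 1.0], size=(n, k))

for it in range(max_iter):
    z = W @ u
    x = (V @ sigmoid(z)) / np.sqrt(k)          # g(u, theta)
    r = F @ x - y                              # residual in observation space
    loss = 0.5 * np.dot(r, r)
    if loss < tol:
        break
    # Chain rule: dloss/dW = diag(phi'(Wu)) (V^T F^T r) u^T / sqrt(k)
    grad_W = np.outer(dsigmoid(z) * (V.T @ (F.T @ r)), u) / np.sqrt(k)
    W -= step * grad_W

print(it, loss)
```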
Our first experiment in Figure 1 studies the convergence to a zero-loss solution of networks with different architecture parameters in a noise-free context. The absence of noise allows the networks to converge faster, which is helpful to check convergence within 25000 iterations. We used \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\frac{1}{2}\left\|\mathbf{y}(t)-\mathbf{y}\right\|^{2}\) as it should give a good exponential decay. For each set of architecture parameters, we did 50 runs and computed the frequency at which the network reached the error threshold of \(10^{-7}\). We present two experiments: in the first one we fix \(m=10\) and \(d=500\) and let \(k\) and \(n\) vary, while in the second we fix \(n=60\), \(d=500\) and let \(k\) and \(m\) vary.
Based on Remark 4.7 concerning Theorem B.1, which is a specialisation of Theorem 4.1, for our experimental setting (MSE loss with \(L_{\mathbf{F},0}\) of constant order), one should expect to observe convergence to zero-loss solutions when \(k\gtrsim n^{2}m\). We observe in Figure 1(a) the relationship between \(k\) and \(n\) for a fixed \(m\). In this setup where \(n\gg m\) and \(\mathbf{A}\) is Gaussian, we expect a quadratic relationship, which seems to be the case in the plot. It is however surprising that, with values of \(k\) restricted to the range \([20,1000]\), the network converges to a zero-loss solution with high probability in situations where \(n>k\), which goes against our intuition for these underparametrized cases.
Additionally, Figure 1(b) provides a very different picture when the ratio \(m/n\) moves away from 0. We first clearly see the expected linear relationship between \(k\) and \(m\).
However, we used \(n=60\) in this experiment, and we can see that, for the same range of values of \(k\), the method has much more difficulty converging already for small \(m\). This indicates that the ratio \(m/n\) plays an important role in the level of overparametrization necessary for the network to converge. It is clear from these results that our bounds are not tight, as we observe convergence for lower values of \(k\) than expected.
In our second experiment, presented in Figure 2(a), we look at the signal evolution under different noise levels when the restricted injectivity constraint A-7 is met, in order to verify our theoretical bound on the signal loss. Since our networks can span the entirety of the space \(\mathbb{R}^{n}\), this injectivity constraint becomes a global one, which forces us to use a square matrix as our forward operator; we thus chose \(n=m=10\). Following the discussion about assumption A-4, we choose \(\mathcal{L}_{\mathbf{y}}(\mathbf{y}(t))=\eta(\left\|\mathbf{y}(t)-\mathbf{y}\right\|^{2})\) with \(\eta(s)=s^{p+1}/\big{(}2(p+1)\big{)}\) where \(p\in[0,1]\), with \(p=0.2\) for this specific experiment. We generated once a forward operator with singular values in \(\{\frac{1}{z^{2}+1}\mid z\in[0,9]\}\) and kept the same one for all the runs. To better see the convergence of the signal, we ran these experiments for 200000 iterations. Furthermore, \(\epsilon\) is a noise vector with entries drawn from a uniform distribution \(U(-\beta,\beta)\), with \(\beta\) representing the level of noise.
In this figure, we plot the mean and the standard deviation of 50 runs for each noise level. For comparison, we also show with the dashed line the expectation of the theoretical upper bound, corresponding to \(\mathbb{E}\left[\left\|\varepsilon\right\|/\mu_{\mathbf{F},\Sigma^{\prime}}\right]\geq\frac{\sqrt{m\beta}}{\sqrt{6}\mu_{\mathbf{F},\Sigma^{\prime}}}\). We observe that the gap between this theoretical bound and the mean of the signal loss grows as the noise level increases. This indicates that the noisier the observation, the less tight our bound becomes. We also see different convergence profiles of the signal depending on the noise level, which is to be expected as the network will fit this noise to optimize its loss. Of course, when there is no noise, the signal tends to the ground truth thanks to the injectivity of the forward operator.
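As a sketch of the ingredients specific to this noisy experiment, the snippet below builds a square forward operator with the prescribed singular values \(\{1/(z^{2}+1)\}_{z=0,\dots,9}\) (via a random SVD basis, an illustrative construction), the loss \(\eta(\|\mathbf{v}-\mathbf{y}\|^{2})\) with \(\eta(s)=s^{p+1}/(2(p+1))\) together with its gradient, and the uniform noise model; it can be plugged into the training loop sketched above. Names and seeds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 10
p = 0.2

# Square forward operator with singular values 1/(z^2 + 1), z = 0..9,
# obtained from a random SVD basis (an illustrative construction).
sv = np.array([1.0 / (z ** 2 + 1.0) for z in range(10)])
Q1, _ = np.linalg.qr(rng.standard_normal((m, m)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
F = Q1 @ np.diag(sv) @ Q2.T

def loss(v, y):
    """L_y(v) = eta(|v - y|^2) with eta(s) = s^(p+1) / (2 (p+1))."""
    s = np.dot(v - y, v - y)
    return s ** (p + 1) / (2.0 * (p + 1))

def grad_loss(v, y):
    """Gradient: eta'(|v - y|^2) * 2 (v - y), with eta'(s) = s^p / 2."""
    s = np.dot(v - y, v - y)
    return (s ** p) * (v - y)

# Uniform noise with level beta, as in the experiment description.
beta = 0.1
eps = rng.uniform(-beta, beta, size=m)
```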
Figure 1: Probability of converging to a zero-loss solution for networks with different architecture parameters, confirming our theoretical predictions: linear dependency between \(k\) and \(m\) and at least quadratic dependency between \(k\) and \(n\). The blue line is a quadratic function representing the phase transition fitted on the data.

We continue the study of the effect of the noise on the convergence of the networks in Figure 2(b). We show the convergence profile of the loss depending on the noise level and \(k\). For that, we fixed \(n=1000\), \(m=10\), \(d=10\), \(p=0.1\), ran the optimization of networks with different \(k\) and \(\beta\) values, and took the loss value obtained at the end of the optimization. The results are averaged over 50 runs and help to see that, even if a network with insufficient overparametrization does not converge to a zero-loss solution, the more neurons it has, the better on average the solution in terms of loss value. Moreover, this effect seems to remain true even with noise. It is interesting to see the behavior of the loss in such cases, which are not covered by our theoretical framework.
For our fourth experiment, we are interested in the effect of the parameter \(p\) of the loss described above on the convergence speed. We fixed \(n=1000\), \(m=10\) and \(k=800\) and varied \(p\) between 0 and 1. For each choice of \(p\), we trained 50 networks and show the mean value of the loss at each iteration in Figure 3. We chose to use \(10^{6}\) iteration steps and let the optimization reach a limit of \(10^{-14}\). As expected from Corollary 3.3, smaller \(p\) values lead to a faster convergence rate in general. Indeed, smaller \(p\) values are closer to the case where \(\alpha=1/2\) in the corollary, and higher \(p\) values mean that \(\alpha\) grows away from \(1/2\), which worsens the theoretical rate of convergence.
Figure 2: Effect of the noise on both the signal and the loss convergence in different contexts.

## 6 Conclusion and Future Work

This paper studied the optimization trajectories of neural networks in the inverse problem setting and provided both convergence guarantees for the network and recovery guarantees for the solution. Our results hold for a broad class of loss functions thanks to the Kurdyka-Lojasiewicz inequality. We also demonstrate that, for a two-layer DIP network with smooth activation and sufficient overparametrization, our theoretical guarantees hold with high probability. Our proof relies on bounding the minimum singular value of the Jacobian of the network through an overparametrization that ensures a good initialization of the network. The recovery guarantees are then obtained by decomposing the distance to the signal into different error terms explained by the noise, the optimization and the architecture. Although our bounds are not tight, as demonstrated by the numerical experiments, they provide a step towards the theoretical understanding of neural networks for inverse problem resolution. In the future, we would like to study the multilayer case more thoroughly and adapt our results to take into account the ReLU activation function. Another future direction is to adapt our analysis to the supervised setting and to provide a similar analysis for accelerated optimization methods.
|
2309.05332 | Gas-phase metallicity of local AGN in the GASP and MaNGA surveys: the
role of ram-pressure stripping | Growing evidence in support of a connection between Active Galactic Nuclei
(AGN) activity and the Ram-Pressure Stripping (RPS) phenomenon has been found
both observationally and theoretically in the past decades. In this work, we
further explore the impact of RPS on the AGN activity by estimating the
gas-phase metallicity of nuclear regions and the mass-metallicity relation of
galaxies at $z \leq$ 0.07 and with stellar masses $\log {\rm M}_* / {\rm
M}_\odot \geq 9.0 $, either experiencing RPS or not. To measure oxygen
abundances, we exploit Integral Field Spectroscopy data from the GASP and MaNGA
surveys, photoionization models generated with the code CLOUDY and the code
Nebulabayes to compare models and observations. In particular, we build CLOUDY
models to reproduce line ratios induced by photoionization from stars, AGN, or
a contribution of both. We find that the distributions of metallicity and [O
III]$\lambda$5007 luminosity of galaxies undergoing RPS are similar to the ones
of undisturbed galaxies. Independently of the RPS, we do not find a correlation
between stellar mass and AGN metallicity in the mass range $\log {\rm M}_* /
{\rm M}_\odot \geq 10.4$, while for the star-forming galaxies we observe the
well-known mass-metallicity relation (MZR) between $ 9.0 \leq \log \ {\rm M}_*
/{\rm M}_\odot \leq 10.8$ with a scatter mainly driven by the star-formation
rate (SFR) and a plateau around $\log {\rm M}_* / {\rm M}_\odot \sim 10.5$. The
gas-phase metallicity in the nuclei of AGN hosts is enhanced with respect to
those of SF galaxies by a factor of $\sim$ 0.05 dex regardless of the RPS. | Giorgia Peluso, Mario Radovich, Alessia Moretti, Matilde Mingozzi, Benedetta Vulcani, Bianca Poggianti, Antonino Marasco, Marco Gullieuszik | 2023-09-11T09:26:49Z | http://arxiv.org/abs/2309.05332v1 | Gas-phase metallicity of local AGN in the GASP and MaNGA surveys: the role of ram-pressure stripping
###### Abstract
Growing evidence in support of a connection between Active Galactic Nuclei (AGN) activity and the Ram-Pressure Stripping (RPS) phenomenon has been found both observationally and theoretically in the past decades. In this work, we further explore the impact of RPS on the AGN activity by estimating the gas-phase metallicity of nuclear regions and the mass-metallicity relation of galaxies at \(z\leq 0.07\) and with stellar masses \(\log{\rm M_{*}}/{\rm M_{\odot}}\geq 9.0\), either experiencing RPS or not. To measure oxygen abundances, we exploit Integral Field Spectroscopy data from the GASP and MaNGA surveys, photoionization models generated with the code Cloudy and the code Nebulabayes to compare models and observations. In particular, we build Cloudy models to reproduce line ratios induced by photoionization from stars, AGN, or a contribution of both. We find that the distributions of metallicity and [O iii] \(\lambda 5007\) luminosity of galaxies undergoing RPS are similar to the ones of undisturbed galaxies. Independently of the RPS, we do not find a correlation between stellar mass and AGN metallicity in the mass range \(\log{\rm M_{*}}/{\rm M_{\odot}}\geq 10.4\), while for the star-forming galaxies we observe the well-known mass-metallicity relation (MZR) between \(9.0\leq\log\,{\rm M_{*}}/{\rm M_{\odot}}\leq 10.8\) with a scatter mainly driven by the star-formation rate (SFR) and a plateau around \(\log{\rm M_{*}}/{\rm M_{\odot}}\sim 10.5\). The gas-phase metallicity in the nuclei of AGN hosts is enhanced with respect to those of SF galaxies by a factor of \(\sim 0.05\) dex regardless of the RPS.
Galaxy environments -- Active Galactic Nuclei -- Galaxy chemical evolution

Giorgia Peluso, Mario Radovich, Alessia Moretti, Matilde Mingozzi, Benedetta Vulcani, Bianca M. Poggianti, Antonino Marasco, Marco Gullieuszik
## 1 Introduction
The chemical evolution of a galaxy is regulated by a plethora of processes, from stellar winds and supernovae explosions within the galaxy body (e.g., Larson, 1974; Larson & Dinerstein, 1975; Maiolino & Mannucci, 2019, for a review) to the exchange of material with its environment (e.g., Ellison et al., 2009; Peng & Maiolino, 2014). The global gas-phase metallicity is well-known to be strongly correlated with the assembled stellar mass of a galaxy (e.g., Lequeux et al., 1979) through the so-called mass-metallicity relation (MZR) which has been shown to hold from low \(z\)(e.g., Tremonti et al., 2004; Perez-Montero et al., 2013) to high \(z\)(up to \(z\sim 6.5\) based on recent JWST measurements, Shapley et al., 2023; Curti et al., 2023, 20). At a given stellar mass, Mannucci et al. (2010) found for the first time an anti-correlation between the star-formation rate (SFR) and the metallicity which is the so-called Fundamental MZR (FMZR), while Peng & Maiolino (2014) find that satellite galaxies in denser environments, in terms of local density, are more metal-rich than galaxies in lower-density environments.
In addition to stellar evolution and environmental effects, also the presence of a central Active Galactic Nucleus (AGN) potentially can have an impact on the galaxy metallicity (e.g., Groves et al., 2006) and a relation between the Narrow-Line Region (NLR) metallicity and the (host galaxy) stellar mass has been investigated to test this hypothesis (e.g., Coil et al., 2015; Thomas et al., 2019; Dors et al., 2020; Perez-Diaz et al., 2021; Armah et al., 2023). While some studies do not find a correlation between these two quantities (Dors et al., 2020; Perez-Diaz et al., 2021), others do find a relation both in the local universe (e.g., Thomas et al., 2019; Armah et al., 2023) and at higher redshifts (e.g., Matsuoka et al., 2018, at \(z\sim 3\)).
The effect of the AGN on the metal content of the galaxy's central regions is also highly debated: regardless of the stellar mass of the host galaxy, some works find that the AGN leads to an enrichment of metals, with AGN host galaxies showing higher central metallicity than star-forming galaxies of similar mass (e.g., Coil et al., 2015; Thomas et al., 2019; Perez-Diaz et al., 2021), while other works measure lower metallicity in AGN than in star-forming regions (e.g., Do Nascimento et al., 2022; Armah et al., 2023).
The origin of the metal enrichment may be explained by dust destruction in the Broad Line Region (BLR) which releases metals into the interstellar medium (ISM) (Maiolino & Mannucci, 2019) or _in-situ_ top-heavy IMF star-formation in the accretion disk around the supermassive black hole (SMBH) (e.g., Nayakshin & Sunyaev, 2005; Wang et al., 2011). In the latter scenario, the AGN would foster rapid star formation and quick enrichment of the ISM, which in fact has commonly been observed to be very metal-rich (Maiolino & Mannucci, 2019). AGN-driven outflows of high metallicity gas, observed to be expelled on kpc scales from the BLR (e.g., D'Odorico et al., 2004; Arav et al., 2007), would then enrich also the NLR. Another contribution to the metal enrichment of the gas surrounding the BLR may also come from _in-situ_ star-formation inside the AGN-driven outflows, which has been recently detected by several works (e.g. Maiolino et al., 2017; Gallagher et al., 2019).
On the other side, a possible way to explain the low AGN metallicities measured by some other works is that AGN-driven winds halt the production of metals by quenching star formation in the circumnuclear regions around the galaxy center (Choi et al., 2022). In support of this hypothesis, Armah et al. (2023) find that the AGN X-ray luminosity-NLR metallicity relation anti-correlates with the Eddington ratio, which indicates that the low-luminous AGN (and therefore likely with the weakest feedback) are more actively undergoing ISM enrichment through star formation, as opposed to the most luminous X-ray AGN. Similarly, EAGLE simulations predict that the scatter from the MZR at \(\rm{M_{*}>10^{10.4}M_{\odot}}\) depends on the mass of the central black hole, and in particular that black hole mass and gas-phase metallicity are anti-correlated (Van Loon et al., 2021). AGN feedback can also play a role by removing both gas and metals from the nucleus of the galaxy and dispersing this material to larger radii (Choi et al., 2022).
In this context, a dedicated study of the gas-phase metallicity in the nuclear regions of galaxies hosting an AGN (AGN metallicity, hereafter) and its scaling relation with the host galaxy stellar mass in different environments is still missing. A possible link between the AGN metallicity and the environment may have roots in the fact that environmental processes such as the ram pressure stripping (RPS) phenomenon have been proven to rapidly quench star formation in galaxies falling into clusters (Boselli & Gavazzi, 2006), and that AGN feedback may aid in the quenching of star formation together with ram pressure (Ricarte et al., 2020). The ROMULUS C cosmological simulations of a high-resolution galaxy cluster by Ricarte et al. (2020) find that RPS triggers enhanced gas accretion onto the black hole, which then produces heating and outflows due to AGN feedback. Growing evidence has been found, both observationally (Poggianti et al., 2017; Peluso et al., 2022) and theoretically (Tonnesen et al., 2009; Akerman et al., 2023), in support of the hypothesis that RPS is able to trigger or enhance the AGN activity in cluster galaxies. Recent studies have clearly identified AGN-driven outflows (Radovich et al., 2019) and AGN feedback in action (George et al., 2019) in strongly stripped galaxies.
Another debated topic regards the choice of the most suitable method to derive the NLR metallicity (e.g., Dors et al., 2015). Similarly to what is typically adopted for H ii regions, the direct \(T_{\rm e}\)-method (e.g., Dors et al., 2020), the strong emission-line (SEL) calibrators (e.g., Carvalho et al., 2020; Storchi-Bergmann et al., 1998) and photoionization models (e.g., Thomas et al., 2018) have been exploited in the literature to measure the NLR metallicity. However, although AGN have a high ionization degree, their high metallicity (e.g., Groves et al., 2006) leads to faint auroral lines (such as [O iii] \(\lambda\)4363), hampering the use of the \(T_{\rm e}\)-method.
Many SEL calibrations for the NLR have been computed over the last decades in the literature, derived either with photoionization models (e.g., Carvalho et al., 2020; Storchi-Bergmann et al., 1998) or with the direct method (Flury & Moran, 2020; Dors, 2021). However, calibrators obtained from the same set of measurements (either direct or indirect) for both the case of ionization from star formation and that from an AGN are still not available in the literature.
Some works (e.g., Thomas et al., 2019; Perez-Diaz et al., 2021) have developed consistent AGN and stellar photoionization models and used them to determine the metallicity in both star-forming (SF) and AGN-ionized regions. Observed and predicted line ratios are compared making use of the Bayesian inference with codes such as Nebulabayes (Thomas et al., 2018) or H ii-Chi-Mistry (Perez-Montero, 2014; Perez-Montero et al., 2019).
In this paper, we adopt an approach similar to Thomas et al. (2018) to compute for the first time the gas-phase metallicity of the central regions of a sample of active galaxies affected by RPS. The aim is to look for signs of metal enrichment or decrement in the NLR,
with respect to the case of ionization from star formation. To do so, we draw galaxies from the GASP (Gas Stripping Phenomena in galaxies, Poggianti et al., 2017) survey and from the MaNGA (Mapping Nearby Galaxies at Apache Point Observatory Bundy et al., 2015) survey. GASP is an ESO Large Programme carried out with the spectrograph MUSE to study galaxies in the local universe affected by RP in clusters. The use of Integral Field Spectroscopy (IFS) allows us to derive the global metallicity by exploiting different extraction apertures and to spatially separate regions photo-ionized by stars or by the AGN.
The paper is divided into the following sections. Section 2 presents the data sample and Section 3 presents the photoionization models that are used to compute the metallicity with the SEL method, by comparing observed and predicted line ratios with the code Nebulabayes, as described in detail in Section 4. Our results are presented in Section 5: we investigate the effect of the aperture on our AGN metallicity measurements, and study how the AGN metallicity correlates with different galaxy properties (stellar mass, AGN luminosity) using both RP-stripped galaxies and a control sample of field galaxies that are not disturbed by the environment. In Section 6, we sum up and discuss the results.
We adopt a Chabrier et al. (2003) initial mass function in the mass range 0.1-100 M\({}_{\odot}\). We assume a standard \(\Lambda\)CDM cosmology with \(\Omega_{m}\) = 0.3, \(\Omega_{\Lambda}\) = 0.7 and \(H_{0}\) = 70 km s\({}^{-1}\) Mpc\({}^{-1}\).
## 2 Datasets and Galaxy Samples
The goal of this Section is to build four different samples to study the gas-phase metallicity of galaxies in different physical conditions, exploiting ancillary data from the GASP and MaNGA surveys. To do so, we use the samples already presented in Peluso et al. (2022) (P22 hereafter) and Vulcani et al. (2018). In particular, we build a RPS sample of galaxies either hosting an AGN (AGN-RPS) or not (SF-RPS) and a control sample of galaxies located in the field (AGN-field sample and SF-field sample, i.e. AGN-FS and SF-FS hereafter), thus undisturbed by the RPS. All the galaxies are late-type and have ongoing star formation in the galactic disk. To classify the ionization mechanism acting on the gas, we make use of the so-called BPT diagnostics (Baldwin et al., 1981; Veilleux and Osterbrock, 1987; Kauffmann et al., 2003; Kewley et al., 2001, 2006). Specifically, we use the BPT diagram involving the line ratios [N ii]\(\lambda\)6584/H\(\alpha\) and [O iii]\(\lambda\)5007/H\(\beta\) (i.e., the [N ii]-BPT)1. In this case, the Kewley et al. (2001) relation based on photoionization models is used to delimit the region where Seyfert/LINER spaxels are located, and the empirical Kauffmann et al. (2003) relation to isolate star-forming spaxels. The region in between the two demarcation lines is populated by spaxels with line ratios usually classified as Composite (SF+AGN). We finally use the Sharp et al. (2010) relation to further distinguish Seyfert from LINER line ratios.
Footnote 1: The only exception is the GASP galaxy JW100, for which we use the [S ii]-BPT, involving [S ii]\(\lambda\lambda\)6716,6731/H\(\alpha\) instead of [N ii]\(\lambda\)6584/H\(\alpha\), because at the galaxy’s redshift the [N ii] line is contaminated by a sky line.
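For reference, the spaxel classification just described can be sketched with the standard demarcation curves. The following minimal Python/numpy example (with illustrative function and variable names, and omitting the additional Seyfert/LINER separation) applies the Kauffmann et al. (2003) and Kewley et al. (2001) lines in the [N ii]-BPT plane.

```python
import numpy as np

def kauffmann03(log_n2ha):
    # Empirical upper envelope of star-forming galaxies (Kauffmann et al. 2003)
    return 0.61 / (log_n2ha - 0.05) + 1.3

def kewley01(log_n2ha):
    # Theoretical "maximum starburst" line (Kewley et al. 2001)
    return 0.61 / (log_n2ha - 0.47) + 1.19

def classify_nii_bpt(n2_ha, o3_hb):
    """Label spaxels as 'SF', 'Composite' or 'AGN' from the [NII]/Ha and [OIII]/Hb ratios."""
    x, y = np.log10(n2_ha), np.log10(o3_hb)
    labels = np.full(x.shape, "AGN", dtype=object)
    sf = (x < 0.05) & (y < kauffmann03(x))
    comp = ~sf & (x < 0.47) & (y < kewley01(x))
    labels[sf], labels[comp] = "SF", "Composite"
    return labels
```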
### GASP sample
The GASP survey is a program focused on the study of gas removal processes due to the interaction between the intra-cluster medium (ICM) and the ISM. The survey observed 114 galaxies at \(0.04<z<0.07\) located in clusters, groups and the field, with the integral-field spectrograph MUSE, mounted at the Very Large Telescope (VLT), which has a field of view of \(1^{\prime}\times 1^{\prime}\) and covers a spectral range from 4800 to 9300 A (rest-frame) with a median resolution FWHM \(\sim\) 2.6 A. GASP observations were taken in Wide-Field Mode with natural seeing (WFM- noAO) with an average seeing of \(\sim 1^{\prime\prime}\). More details on the sample selection and data analysis can be found in Poggianti et al. (2017).
Stellar masses range from 10\({}^{9}\) to 3.2 \(\times 10^{11}\)M\({}_{\odot}\) and have been computed with the code SINOPSIS (Fritz et al., 2017) assuming a Chabrier et al. (2003) IMF. In brief, the code SINOPSIS uses a stellar population synthesis technique that reproduces the observed optical spectra of galaxies performing a spectral fitting of the stellar content and extinction, to derive the spatially resolved properties of the stellar populations. As in Vulcani et al. (2018), stellar masses are obtained by summing the stellar mass computed with SINOPSIS inside each spaxel within the galaxy. The emission lines are fitted with the code KUBEVIZ (Fossati et al., 2016), from the continuum subtracted and extinction-corrected MUSE spectrum, as described in Poggianti et al. (2017).
Integrated galaxy global properties such as inclination, position angle, and effective radius (\(R_{\rm e}\)) are measured from I-band MUSE photometry, as described in detail in Franchetto et al. (2020). In particular, the effective radius \(R_{\rm e}\) was computed from the luminosity growth curve \(L(R)\) of the galaxies, obtained by trapezoidal integration of their surface brightness profiles.
For our analysis, we select the MUSE spaxels in which the following emission lines have S/N \(>\) 3: H\(\beta\), [O iii] \(\lambda\)5007, H\(\alpha\), [N ii] \(\lambda\)6584, [S ii] \(\lambda\)6716 and [S ii] \(\lambda\)6731. We exclude the [O i] \(\lambda\)6300 emission line because the flux is often faint (i.e., S/N \(<\) 3 in most of the spaxels of the galaxies) and photoionization models struggle to predict the emission of this line in agreement with the observations (see e.g., Law et al., 2021; Dopita et al., 2013; Dopita, 1997). Another reason to exclude the [O i] line is given by the [O i] excess (Poggianti et al., 2019), typically observed in regions located in the tails of RP-stripped galaxies. Though the mechanism driving the [O i] excess is not yet fully understood, it should mainly affect the outer regions of the emitting clouds, where [O i] is formed.
From the GASP-RPS sample in P22, we select the 11 RP-stripped galaxies with Seyfert or LINER-like nuclei (AGN-RPS) according to the spatially-resolved BPT diagnostics and 39 star-forming RP-stripped galaxies without AGN activity (SF-RPS). We excluded four SF galaxies (JO95, JO156, JO153 and JO149) from the RPS sample in P22, as their irregular I-band morphology prevented a good estimate of their structural parameters (Franchetto et al., 2020), which were necessary to extrapolate the nuclear metallicities as described in detail in Section 5.
Moreover, from the GASP control sample in Vulcani et al. (2018), we consider 15 galaxies located in the field and undisturbed by RP (i.e., SF-FS). We exclude only one galaxy (P19482) from the original sample in Vulcani et al. (2018) which was found to be located in a filament in a subsequent work (Vulcani et al., 2021).
### MaNGA sample
Mapping Nearby Galaxies at Apache Point Observatory (MaNGA, Bundy et al., 2015) is an integral-field spectroscopic survey using the BOSS Spectrograph (Smee et al., 2013) mounted at the 2.5 m Sloan Digital Sky Survey (SDSS) telescope (Gunn et al., 2006), covering a spectral range from 3600 to 10300 A at R \(\sim\) 2000. Briefly (we refer the reader to P22 for more details), we exploit the MaNGA Data Release 15 (DR15, Bundy et al., 2015) and, in particular, we use the Pipe3D v2_4_3 (Sanchez et al., 2016, 2018) catalog to select star-forming galaxies (i.e., with specific star-formation rate sSFR \(>10^{-11}\) yr\({}^{-1}\)) and the visual morphological classification from the MaNGA Value Added Catalogs3 (Hernandez-Toledo et al., 2010) to select late-type galaxies with 0 \(<\) TType \(<\) 12. To ensure that galaxies are not affected by RPS, we consider only galaxies located in haloes with masses \(M_{\rm halo}<10^{13}\) M\({}_{\odot}\), according to the Tempel et al. (2014) environmental catalog; these will be referred to as field galaxies. In this way, we obtain the MaNGA control sample of 782 galaxies presented in P22, with similar properties (such as morphology and sSFR) to the GASP RPS sample but most likely undisturbed by environmental processes.
Footnote 3: [https://www.sdss.org/dr16/data_access/value-added-catalogs/](https://www.sdss.org/dr16/data_access/value-added-catalogs/)
Footnote 7: vac.id=manga-visual-morphologies-from-sdss-and-desi-images
The P22 sample covers a redshift range \(0.0024<z<0.1439\), within which the MaNGA spatial resolution corresponds to a physical size that goes from 0.13 kpc to 6.31 kpc. In fact, the IFU fiber size is 2'' and the reconstructed PSF inside the IFU has an FWHM of 2.5'', which corresponds to the spatial resolution of the observations (Law et al., 2016).
To have approximately the same spatial resolution in MaNGA and GASP (i.e., \(\sim\) 1 kpc), we select the 530 MaNGA galaxies with redshift \(z\leq 0.04\). In this way, 30% of the galaxies are at \(z\sim 0.025\), where the spatial resolution corresponds to \(\sim\) 1 kpc, and the resolution is at most 1.98 kpc (at \(z=0.04\)), which is still within a factor of two of the resolution of GASP.
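As an illustration of how these physical scales follow from the adopted cosmology, the sketch below (assuming astropy's cosmology utilities; the function name is ours) converts the MaNGA PSF FWHM into a proper size at a given redshift.

```python
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology adopted in this paper: H0 = 70 km/s/Mpc, Omega_m = 0.3
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def psf_physical_size(z, fwhm_arcsec=2.5):
    """Proper size (in kpc) subtended by the PSF FWHM at redshift z."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (fwhm_arcsec * u.arcsec * scale).value

print(psf_physical_size(0.04))   # ~2 kpc at the MaNGA redshift cut
```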
Among the 530 galaxies, we include in our sample only those (444/530) belonging to the Primary+Color-Enhanced sample (Bundy et al., 2015), in order to have a smooth distribution in redshift, while ensuring complete coverage of the i-band magnitudes. The Primary+Color-Enhanced sample uniformly covers each galaxy out to 1.5 times the effective radius (1.5 \(R_{\rm e}\)), which perfectly suits our purposes, as we aim at characterizing the galaxy's central regions.
Finally, our sample is further reduced to the 429 galaxies for which we were able to extract the aperture-corrected stellar mass from the Principal Component Analysis (PCA) catalog (Pace et al., 2019). In particular, the aperture correction needed to take into account the galaxy mass residing in the region extending outwards with respect to the 1.5 \(R_{\rm e}\) aperture is recovered with the Color-Mass-To-Light Relations method, which employs relations (such as the one in Pace et al., 2019) between the mass-to-light ratio and the photometric colors of the galaxy's light outside the MaNGA IFU.
The final sample spans the stellar mass range 9.0 \(\leq\) log(M\({}_{*}\)/M\({}_{\odot}\)) \(\leq\) 11.3. Among the 429 galaxies, 52 are Seyfert/LINER (i.e., AGN-FS) according to the
spatially-resolved BPT classification with the [N ii]/H\(\alpha\) versus [O iii]/ H\(\beta\) diagnostic (NII-BPT, Baldwin et al., 1981). 377 galaxies are instead classified as star-forming and were added to the SF-FS. As the galaxies in the SF-FS are observed by either the GASP or the MaNGA survey, we are able to check the consistency of our estimates of metallicity and stellar mass, ensuring that our measurements are not affected by systematic effects (e.g., different data reduction and instruments) caused by the use of two surveys.
We use the online tool MARVIN4(Cherinka et al., 2019) to download both the de-projected coordinates of our targets (_spx_ellcoo_) and the emission line fluxes (_gflux_). The de-projected coordinates are computed using the ellipticity (\(\epsilon\) = 1-b/a) and position angle (\(\theta\)) measured from the r-band surface brightness. The same emission line fluxes listed in Section 2.1 are drawn from the drpall-v2.4_3 and have S/N \(>\) 1.5, which is the value typically adopted in MaNGA (Belfiore et al., 2019). The emission lines are fitted with a Gaussian function and are corrected for stellar absorption, since the Data Analysis Pipeline (DAP; Westfall et al., 2019; Belfiore et al., 2019) simultaneously fits the continuum and emission lines with the latest version of the pPXF software package (Cappellari, 2017). All lines are also corrected for Galactic extinction, using the Schlegel et al. (1998) maps (Westfall et al., 2019) and the reddening law of O'Donnell et al. (1994). Following the same approach used in GASP, we correct the emission lines for host galaxy dust attenuation using the Cardelli et al. (1989) law and assuming an intrinsic Balmer decrement I(H\(\alpha\))/I(H\(\beta\)) = 2.86, appropriate for an electron density \(n_{\rm e}\) = 100 cm\({}^{-3}\) and electron temperature \(T_{\rm e}=10^{4}\) K (Osterbrock, 2006).
Footnote 4: [https://www.sdss.org/dr16/manga/marvin/](https://www.sdss.org/dr16/manga/marvin/)
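As a rough illustration of the Balmer-decrement correction applied to both samples, the sketch below uses approximate Cardelli et al. (1989) coefficients at H\(\beta\) and H\(\alpha\) (for \(R_V=3.1\)); the numerical values and function names are our own assumptions and do not come from the GASP or MaNGA pipelines.

```python
import numpy as np

K_HBETA, K_HALPHA = 3.61, 2.53   # approximate Cardelli et al. (1989) coefficients, R_V = 3.1
BALMER_INT = 2.86                # intrinsic Halpha/Hbeta for n_e = 100 cm^-3, T_e = 1e4 K

def ebv_from_balmer(f_halpha, f_hbeta):
    """Colour excess E(B-V) inferred from the observed Balmer decrement."""
    ebv = 2.5 / (K_HBETA - K_HALPHA) * np.log10((f_halpha / f_hbeta) / BALMER_INT)
    return np.clip(ebv, 0.0, None)   # decrements below 2.86 are treated as zero extinction

def deredden(flux, k_lambda, ebv):
    """Correct an observed emission-line flux for dust attenuation."""
    return flux * 10.0 ** (0.4 * k_lambda * ebv)
```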
## 3 Photoionization Models
Models are generated with Cloudy v17.02 (Ferland et al., 2017) in the case of ionization from stars (H ii models, hereafter) and from an AGN (AGN models, hereafter), so that the metallicity is measured in a homogeneous way from the central AGN region to the star formation-dominated outskirts of galaxies with AGN activity. To compute the metallicity in Composite (AGN+SF) regions, we mix the H ii and AGN models as described in detail in Section 4.
The files used as input by Cloudy are built using the CloudyFSPS library (Byler, 2018), modified to handle both H ii and AGN models. All models span the following parameter space:
* The ionization parameter [\(\log(U)\)] ranges between \(-4\leq\log(U)\leq-1\) with a step of 0.5 dex;
* Gas-phase abundances are those in CloudyFSPS that are based on the solar values from Dopita et al. (2000) (see Byler et al., 2017). With the exception of nitrogen and helium, abundances scale with the gas-phase metallicity (\(\log Z=-1,-0.6,-0.4,-0.3,-0.2,\)\(-0.1,0.0,+0.1,+0.2,+0.3,+0.4,+0.5\)), corresponding to oxygen abundances ranging between \(7.69\leq 12+\log({\rm O/H})\leq 9.19\) (12 + log (O/H) = 8.69 for the solar value). For nitrogen and helium, the relations with \(\log Z\) are those defined in Dopita et al. (2000) to take into account the effects of non-primary nucleosynthesis. The effects of abundance depletion by grains are also taken into account in CloudyFSPS, following Dopita et al. (2000). A minimal sketch of the resulting parameter grid is given after this list.
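A minimal numpy sketch of the sampled (\(\log U\), \(\log Z\)) parameter space, assuming the solar reference 12 + log(O/H) = 8.69 quoted above (variable names are illustrative):

```python
import numpy as np

log_U = np.arange(-4.0, -0.5, 0.5)                        # -4.0 ... -1.0 in steps of 0.5 dex
log_Z = np.array([-1.0, -0.6, -0.4, -0.3, -0.2, -0.1,
                  0.0, 0.1, 0.2, 0.3, 0.4, 0.5])           # metallicity relative to solar

# Corresponding oxygen abundance, with 12 + log(O/H) = 8.69 at solar metallicity
log_OH = 8.69 + log_Z                                      # 7.69 ... 9.19

# Each (log U, log Z) pair defines one photoionization model
grid = [(u, z) for u in log_U for z in log_Z]
print(len(grid))   # 7 x 12 = 84 models per ionizing spectrum
```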
We run our grids of Cloudy models, iterating until convergence and stopping the calculation where the gas temperature falls below 100 K; since in the outer regions the ionization rate may fall below the galactic background rate, cosmic ray background emission (Ferland, 1984) was added as a secondary ionization source. We explored the effect of dust on the line ratios studied here by comparing models with and without dust grains. To this end we assumed for the grains the default size distribution and abundances in the diffuse interstellar medium of our galaxy (van Hoof et al., 2001; Van Hoof et al., 2004; Ferland et al., 2013), described by the grains ISM command in Cloudy. Consistently with Byler et al. (2017), we find that the effect of dust is minimal, with dusty models producing slightly higher [O iii] /H\(\beta\) (i.e. \(\sim 0.19\) dex, on average) at fixed [N ii] /H\(\alpha\) for high metallicities and ionization parameters.
We also explored the effect of varying the dust-to-metal abundance, without observing any significant effect.
Finally, we select models with a gas density of n\({}_{H}\) = 10\({}^{2}\) cm\({}^{-3}\), since they fully reproduce the observed line ratios in our GASP and MaNGA samples, as shown in Figure 1. In particular, in Figure 1 we show the line ratios [O iii] /[S ii] and [N ii] /[S ii] of the SF (top left), Composite (top right) and AGN (bottom) models, together with the observed line ratios inside the spaxels in MaNGA and GASP classified correspondingly by the BPT. The purple-shaded curves outline the density distribution of the observations, which is fully covered by the model grid.
Figure 1: [O iii] /[S ii] vs [N ii] /[S ii] line ratios in case of ionization from SF (top left), AGN+SF (top right), and AGN (bottom). The grey points are the observed line ratios inside the spaxels of the MaNGA and GASP samples together. The distribution of the observed points is outlined by density curves filled with different shades of purple and shown by the grey histograms in the top and right insets. Darker colors indicate regions where the density of data is higher. The black solid lines are the Cloudy models. The H ii models have stellar ages \(t_{*}=4\) Myr, Composite models have the ionization parameter of the stars \(\log U_{\rm{H{\sc ii}}}\) fixed to -3.0 and \(f_{AGN}\) = 0.2, and AGN models have \(\alpha\) = -2.0 (see text for details).
H ii models are generated following the same prescription as in Byler et al. (2017). The python library python-fsps is used to generate the ionizing continuum produced by a Single Stellar Population (SSP). To this end, we use the SSPs produced by the Flexible Stellar Population Synthesis code (FSPS, Conroy et al., 2009) and the MESA Isochrones and Stellar Tracks (MIST; Choi et al., 2016; Dotter, 2016). In Cloudy, the stellar continuum models produced by FSPS are read by the Table STAR command, which also takes as input the stellar age and metallicity5 to be used. For each Cloudy model, the gas-phase metallicity equals the stellar metallicity.
Footnote 5: Stellar metallicities in MIST are defined within \(-2.5\leq\log(Z/Z_{\odot})<+0.5\).
Unlike the Byler et al. (2017) models, we use the version Cloudy v17.02 due to several improvements in the atomic database introduced with respect to Cloudy v13 (Ferland et al., 2013), in particular concerning the rate coefficient for the S\({}^{2+}\) - S\({}^{+}\) dielectronic recombination (Ferland et al., 2017; Badnell et al., 2015; Belfiore et al., 2022).
We test models with stellar ages ranging between 1 Myr \(\leq t_{*}\leq\) 7 Myr (similarly to Byler et al., 2017), and we fix \(t_{*}=\) 4 Myr (see also Mingozzi et al., 2020), as models with stellar ages \(t_{*}\leq\) 4 Myr are perfectly capable of reproducing the line ratios typically observed in H ii regions (in agreement with e.g., Dopita, 1997), while models with \(t_{*}\geq\) 4 Myr generate line ratios (such as [O iii] /H\(\beta\) and [N ii] /H\(\alpha\)) that are too weak to reproduce the entire range of the observed MaNGA and GASP line ratios of our sample.
### AGN models
For AGN photoionization models, we adopt as ionizing source a simple power law continuum (command table power law in Cloudy):
\[\mathrm{S}_{\nu}\propto\begin{cases}\nu^{\alpha}&h\nu_{1}<h\nu<h\nu_{2}\\ \nu^{5/2}&h\nu<h\nu_{1}\\ \nu^{-2}&h\nu>h\nu_{2}\end{cases} \tag{1}\]
where \(h\nu_{1}=9.12\times 10^{-3}\) Ryd and \(h\nu_{2}=3676\) Ryd define the spectral breaks at 10\(\mu\)m and 50 keV, respectively. The slope of the continuum, from the infrared to the X-ray wavelength range, is set equal to \(\alpha\) = -2.0, as with this slope the models are able to reproduce the observations very well. According to the literature, the NLR density is relatively high (i.e., \(n_{\mathrm{e}}\approx 300-500\) cm\({}^{-3}\), Storchi-Bergmann et al., 1998; Feltre et al., 2016; Armah et al., 2023) with respect to the H ii regions in the galaxy disc (10 cm\({}^{-3}\), e.g., Dopita et al., 2013). However, we note that the regions classified as AGN by the BPT can extend well beyond the sub-kpc scale of the NLR (i.e., the so-called extended narrow line region, ENLR, see e.g. Congiu et al., 2017; Chen et al., 2019), thus we consider \(n_{\mathrm{H}}=10^{2}\) cm\({}^{-3}\) as an average value between the high-density (i.e., 500 cm\({}^{-3}\)) and the low-density (i.e., 10 cm\({}^{-3}\)) regime.
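For illustration only, the spectral shape of Equation (1) can be evaluated as a simple piecewise function (up to an arbitrary normalization); in the actual models this is handled internally by Cloudy's table power law command.

```python
import numpy as np

H_NU1, H_NU2 = 9.12e-3, 3676.0   # spectral breaks in Rydberg (10 micron and 50 keV)

def agn_continuum(h_nu, alpha=-2.0):
    """Piecewise power-law shape of the AGN ionizing continuum of Eq. (1)."""
    h_nu = np.asarray(h_nu, dtype=float)
    return np.where(h_nu < H_NU1, h_nu ** 2.5,       # low-energy rise
           np.where(h_nu > H_NU2, h_nu ** -2.0,      # steep high-energy cut-off
                    h_nu ** alpha))                  # mid-range slope alpha = -2.0
```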
### Composite models
We combine the H ii and the AGN models following a similar approach as in Thomas et al. (2018).
The mixed emission is parametrized by f\({}_{AGN}\), defined as:
\[f_{AGN}=\frac{R_{AGN}}{R_{\mathrm{HII}}+R_{AGN}}\]
where \(R\) is the flux of the reference line (i.e. H\(\beta\)), thus \(R_{AGN}\) is the H\(\beta\) flux that arises from the AGN and \(R_{\mathrm{HII}}\) is the H\(\beta\) flux that arises from the H ii regions. In other words, \(f_{AGN}\) is the fraction of the H\(\beta\) flux from the AGN with respect to the total H\(\beta\) flux (i.e. \(R_{\mathrm{HII}}+R_{AGN}\)) of a mixed spectrum, where the emission comes from both stars and AGN.
We obtain the Composite grids with the following steps:
1. we mix the H ii and AGN models with the same metallicity and gas density;
2. the mixed emission line ratios are computed as: \[\left(\frac{L}{R}\right)_{Comp}=\left(\frac{L}{R}\right)_{AGN}\times f_{AGN }+\left(\frac{L}{R}\right)_{\mathrm{HII}}\times(1-f_{AGN})\] where \(L\) is the flux of a generic line.
The \(f_{AGN}\) is a parameter of the Composite models, which ranges from 0.2 (i.e., 80 % of the ionization due to the stars) to 1 (i.e., 100 % of the ionizing photons coming from the AGN), with a step of 0.2.
In the Composite models, \(\log(U)\) indicates the ionization parameter of the AGN emission, while the ionization parameter of the stars \(\log U_{\mathrm{HII}}\) is fixed to -3.0 (i.e., median value observed in pure SF regions), similarly to Thomas et al. (2018).
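In practice the mixing step reduces to an \(f_{AGN}\)-weighted average of the model line ratios; a minimal sketch (with illustrative names) is given below.

```python
import numpy as np

def mix_line_ratio(ratio_agn, ratio_hii, f_agn):
    """Composite line ratio (relative to Hbeta) as an f_AGN-weighted mix of AGN and HII models."""
    return ratio_agn * f_agn + ratio_hii * (1.0 - f_agn)

# f_AGN values used for the Composite grids: from 20% to 100% of Hbeta coming from the AGN
f_agn_grid = np.arange(0.2, 1.01, 0.2)
```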
## 4 Methods
In this section, we show how we set up our Nebulabayes (Thomas et al., 2018) analysis to derive the gas-phase metallicity and the ionization parameter. In brief, the code takes as input a set of emission lines from photoionization models and a set of observed emission lines with their relative errors. The line fluxes are then divided by a reference line, specified by the user. By comparing observations and predictions using the Bayes theorem, the code finds the best model to fit the observations. Nebulabayes is provided with models, generated with the code Mappings 5.1 (Sutherland and Dopita, 2017), for both the H ii and AGN-ionized regions. However, in Appendix A we discuss in detail the reasons that led us to generate and use our own Cloudy models (presented in Section 3).
To obtain the metallicity and ionization parameter computed spaxel-by-spaxel, we apply our SF/Composite/AGN models to the spaxels within the galaxy classified correspondingly by the BPT diagrams, and compare the predicted and observed emission lines [O iii] and [N ii] normalized by the reference line [S ii]. By using the plane [O iii] /[S ii] (sensitive to \(\log U\)) versus [N ii] /[S ii] (sensitive to \(\log Z\)), we are able to distinguish very well models with different values of the ionization parameter and metallicity, as shown in Figure 1. In this Figure, we plot the H ii, AGN, and Composite models on the plane [O iii] /[S ii] vs. [N ii] /[S ii], demonstrating the ability of such lines to unfold the grids.
The [O iii] /[S ii] ratio is sensitive to the variation in \(\log(U)\) because of the different ionization potentials (IP) needed to create the O++ and S+ ions (35.12 eV and 10.36 eV, respectively). Instead, the ions emitting the [N ii] and [S ii] lines have similar IP and thus the ratio [N ii] /[S ii] has little dependence on \(\log(U)\). However, the [N ii] /[S ii] is a good indicator for Z as the growth of N/H scales with Z\({}^{2}\)(Hamann et al., 1993) while S/H is \(\sim\) Z (e.g. Dors et al., 2023).
Particular care is necessary when selecting the emission lines to use in a Nebulabayes analysis (see Thomas et al., 2018, for details). Among all the emission lines covered by our observational samples, we notice that some widely used combinations are particularly affected by the degeneracy between the ionization parameter and the metallicity (Dopita et al., 2013, and diagnostics therein). In particular, Figure 2 shows the [N ii] /H\(\alpha\) versus [O iii] /H\(\beta\) diagram for our observational sample with our own photoionization models overlaid, showing that the models correctly reproduce the observations. However, we also note the well-known folding in the [N ii] -BPT and [S ii] -BPT, which in our models happens around 12 + log (O/H) = 8.6. It follows that, by using the line ratios [O iii] /H\(\alpha\), [N ii] /H\(\alpha\) and [S ii] /H\(\alpha\) to constrain the parameters, we find that Nebulabayes does not converge to a solution for metallicities around 12 + log (O/H) = 8.59 (similarly to Mingozzi et al., 2020), as is clear from the blue histogram in Figure 3, where we show the results for the MaNGA galaxies. Instead, the choice of using the line ratios [O iii] /[S ii] and [N ii] /[S ii] produces the smooth metallicity distribution shown in Figure 3 as a black histogram. We stress that in the latter case the Balmer lines (H\(\alpha\), H\(\beta\)) are not used to constrain the parameter space, but only to estimate the extinction correction as described in detail in Section 2.
As already emphasized by previous studies that used a modeling technique similar to ours (Thomas et al., 2019; Perez-Diaz et al., 2021; Mingozzi et al., 2020), we conclude that the degeneracy could have been broken with the fundamental addition of the [O iii] \(\lambda\)4363 auroral line and/or a constraint on the ionization parameter through known relations (e.g., Perez-Montero, 2014; Diaz et al., 1991) between \(\log(U)\) and 12 + log (O/H) (see Appendix B). However, we remind the reader that in this work it was not possible to include the auroral line [O iii] \(\lambda\)4363, because it is too faint in MaNGA and outside the MUSE spectral range for the GASP targets; moreover, other lines used in relations to constrain \(\log(U)\), such as [O ii] \(\lambda\)3727, [S iii] \(\lambda\)9069 and [S iii] \(\lambda\)9532, are also outside the MUSE wavelength range at the targets' redshifts.
In conclusion, by applying the method presented in this Section, we are able to estimate spatially-resolved maps of the parameters 12 + log (O/H) and \(\log(U)\) in case of ionization by stars, an AGN or a mixed contribution of them using the H ii, AGN and Composite models described in Section 3.
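To make the inference step concrete, the toy sketch below illustrates the Bayesian grid comparison in the spirit of Nebulabayes, with a Gaussian likelihood on the [O iii]/[S ii] and [N ii]/[S ii] ratios and a flat prior; it is a simplified stand-in for the actual code, and all names are illustrative.

```python
import numpy as np

def grid_posterior(obs, err, model_grid):
    """
    obs, err: dicts of observed line ratios and their uncertainties,
              e.g. {"OIII_SII": 0.4, "NII_SII": 0.9}.
    model_grid: dict mapping (logOH, logU) -> dict of predicted line ratios.
    Returns the grid points and their posterior probabilities (flat prior).
    """
    points = list(model_grid.keys())
    lnlike = np.zeros(len(points))
    for i, p in enumerate(points):
        for line, value in obs.items():
            lnlike[i] += -0.5 * ((value - model_grid[p][line]) / err[line]) ** 2
    post = np.exp(lnlike - lnlike.max())
    return points, post / post.sum()
```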
## 5 Results and Discussion
We estimate the metallicity of the galaxy's central regions, from spatially-resolved maps of the oxygen abundance, inside an aperture that scales with the galaxy's mass (or mass-scaled aperture), in order to address the following question:
1. Does a relation between the (host galaxy) stellar mass and metallicity of galaxies with AGN activity exist in RPS galaxies?
To draw the mass-scaled aperture, we compute the projected distances from the galaxy center, using its structural parameters such as the inclination and position angle, and select the spaxels within a projected distance of 0.5 \(R_{\rm e}\).
In the case of AGN host galaxies, we compute the median value of all the 12 + log (O/H) inside the AGN, Composite and SF spaxels contained within the aperture, using the corresponding models (see Section 3). In the case of SF galaxies without AGN activity, we discard the emission classified as Composite, which in general is present in a small fraction of spaxels (i.e., \(\sim\)5% in the SF GASP sample and \(\sim 13\%\) in the SF MaNGA sample), since in this case we cannot assume that its origin is the mixed AGN+SF contribution assumed to generate our Composite models. Then, we compute the metallicities inside the SF spaxels using the H ii models.
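As a sketch of the aperture extraction (with hypothetical array and function names, not the actual GASP/MaNGA pipeline code), the deprojected galactocentric radii and the median metallicity within a chosen radius can be computed as follows.

```python
import numpy as np

def deprojected_radius(dx, dy, pa_deg, incl_deg):
    """Galactocentric radius of each spaxel, deprojected using the position angle and inclination."""
    pa, incl = np.radians(pa_deg), np.radians(incl_deg)
    x = dx * np.cos(pa) + dy * np.sin(pa)                      # coordinate along the major axis
    y = (-dx * np.sin(pa) + dy * np.cos(pa)) / np.cos(incl)    # minor-axis coordinate, deprojected
    return np.hypot(x, y)

def central_metallicity(logoh_map, radius_map, r_max):
    """Median 12 + log(O/H) of the valid spaxels within the chosen aperture."""
    mask = (radius_map <= r_max) & np.isfinite(logoh_map)
    return np.median(logoh_map[mask])
```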
By using the same approach, we also estimate the median metallicity inside a fixed aperture of radius \(r\sim 1\) kpc (always dominated by AGN emission in case of AGN hosts) to address another open question:
* Does the AGN in RPS galaxies show signs of metal enrichment or metal decrement with respect to the same physical region at the center of star-forming galaxies of similar masses?
To understand if the results depend on the RPS, we answer the same questions for the galaxies in the control samples (SF-FS, AGN-FS), which are located in the field and are undisturbed by RP. The results for the field galaxies are interesting on their own, as it is still highly debated in the literature whether an NLR metallicity - stellar mass relation exists (e.g., Thomas et al., 2019; Dors et al., 2020; Perez-Diaz et al., 2021) and if AGN are more or less metal-enriched than star-forming regions (e.g., Perez-Diaz et al., 2021; Armah et al., 2023), since discrepant results have been found even without making a distinction based on the galaxy's environment, as discussed in the Introduction.
Figure 3: Histograms of the \(12+\log\) (O/H) values inside all the spaxels classified as SF by the NII-BPT, in the MaNGA sample. The black histogram shows a smooth distribution in metallicity, obtained when using the [O iii] and [N ii] lines normalized by [S ii]. The blue histogram shows a strong bimodality, with a gap around 12 + log (O/H) \(\sim\) 8.6, and is obtained when normalizing the set of lines H\(\beta\), [N ii], [O iii], H\(\alpha\), [S ii] by H\(\beta\). The bimodality is caused by the log(\(U\)) - log \(Z\) degeneracy of the models observed in the NII-BPT shown in Figure 2 (see Section 4).
Figure 2: H ii models (black lines) in the NII-BPT (_top panel_) and in the SII-BPT diagram (_bottom panel_). The H ii models, generated with Cloudy, have n\({}_{\rm H}\) = 100 cm\({}^{-3}\) and stellar ages \(t_{*}\)= 4 Myr. Density curves, filled with different shades of purple, are drawn to show the distribution of the observed line ratios of the SF spaxels in the MaNGA and GASP samples together. Red lines are the Kauffmann et al. (2003) and Kewley et al. (2001) relations defining the SF regions in the NII-BPT and in the SII-BPT respectively. The black line in the NII-BPT is the Kewley et al. (2001) relationship which distinguishes Composite and Seyfert/LINER. The H ii grids fold, due to the degeneracy between the metallicity and the ionization parameter, around 12 + log (O/H) = 8.6 - 8.7.
### The effect of different extraction apertures on spatially-resolved metallicity maps
To illustrate how the choice of the aperture affects the AGN metallicity, we selected two galaxies from the GASP and MaNGA samples, shown in Figure 4: both galaxies host Seyfert 2-like nuclei according to the BPT and have similar stellar masses. The top panel of Figure 4 shows the NII-BPT diagrams of the field galaxy '8993-12705' (\(z=0.030\), log \(\,{\rm M_{*}/M_{\odot}}\) = 10.96) and a zoom on the cluster galaxy JO201 (\(z=0.0446\), log \(\,{\rm M_{*}/M_{\odot}}\) = 10.79), which is experiencing strong RPS as discussed in Poggianti et al. (2017). The other panels of the same figure show the galaxy maps color-coded according to the NII-BPT classification, the metallicity and the ionization parameter. On the NII-BPT color-coded map, we overlay the yellow projected aperture extending up to 0.5 \(R_{\rm e}\) and the green on-sky aperture extending up to 1 kpc from the galaxy center. The 1 kpc aperture includes a higher or lower fraction of the galaxy's total light depending on the stellar mass, as opposed to the \(r\sim 0.5\)\(R_{\rm e}\) aperture. However, the 1 kpc aperture has the advantage of including predominantly AGN spaxels, while the \(0.5R_{\rm e}\) aperture in some galaxies includes a non-negligible fraction of SF/Composite spaxels. In this sense, the 1 kpc aperture is a better-suited choice to reduce the dependence of the AGN metallicity estimates on processes that are not linked to the presence of the AGN, as shown at the end of Section 5.3.
To show the range of ionization parameter and metallicity spanned by the emission in the 'nuclear regions' of our galaxy samples, in Appendix C we show the [N ii]-BPT diagrams obtained with the emission line ratios within the extraction apertures \(r<0.5\)\(R_{\rm e}\) and \(r<1\) kpc color-coded according to the corresponding values of log(\(U\)) and 12 + log (O/H), for all the GASP and MaNGA galaxies.
Finally, we briefly comment on the metallicity and ionization parameter maps of the two galaxies shown in Figures 4 (c) and (d). Further details on the metallicity profiles of our AGN sample will be discussed in a separate paper.
The galaxy '8993-12705' shows a strong inward increase in metallicity, rising rapidly from 12 + log (O/H) \(\sim\) 8.8 in the outer star-forming regions to 12 + log (O/H) \(\sim\) 9.2 in the galaxy center. The transition from lower to higher metallicities is co-spatial with the increase of the AGN ionization parameter, which jumps from log(\(U\)) \(\sim\) -2.5 to log(\(U\)) \(\sim\) -1.3. Star-forming regions show an average value of log(\(U\)) \(\sim\) -3.2 (see also Thomas et al., 2019). On the other hand, the metallicity in JO201 peaks around 12 + log (O/H) \(\sim\) 9.0 in the galaxy center at the position of the AGN, and the gas in the lower right side of the stripped tail shows similar values as well. We also observe two high-metallicity and high-ionization parameter elongated regions symmetrically oriented with respect to the galaxy center. The peak of the AGN ionization parameter (log(\(U\)) \(\sim\) \(-\)1.6) and of the metallicity (12 + log (O/H) \(\sim\) 9.0) is presumably tracing the actual position of the AGN, more precisely than the NII-BPT classification map in which the AGN-like region extends well beyond the NLR.
### Gas-phase metallicity of the AGN in RP stripped and undisturbed galaxies
Figure 5 shows the metallicity as a function of the host galaxy stellar mass of the AGN-RPS (squares) and AGN-FS (circles) samples. To compute metallicities, we consider the median value of 12 + log (O/H) in all the spaxels within \(r\sim 0.5\)\(R_{\rm e}\) from the galaxy center as a representative value for each galaxy. We have verified that the mass-scaled aperture was always larger than the PSF (e.g., on-sky aperture with diameter \(d~{}\sim 2.5\arcsec\) in MaNGA, and \(d~{}\sim 1\arcsec\) in GASP), and therefore includes a well-resolved galactic region. In support of the robustness of our results to a different choice of the extraction aperture, Figure 3 in Franchetto et al. (2020) clearly shows that the mean (or median) value inside 0.5 \(R_{\rm e}\) is consistent with the median values computed inside smaller apertures or at fixed galactocentric radii in our galaxies (see also Moustakas and Kennicutt, 2006).
Points in Figure 5 are color-coded according to the integrated luminosity of the emission line [O iii] \(\lambda\)5007 (i.e., \(L\)[O iii] hereafter) inside the fixed aperture of \(r\sim 1\) kpc, which is a proxy for the bolometric luminosity of the central AGN (e.g., Berton et al., 2015). We calculated \(L\)[O iii] only for galaxies with at least 10 spaxels within \(r\sim 1\) kpc powered by the AGN according to the BPT diagram, with S/N \(>3\) in GASP or S/N \(>1.5\) in MaNGA for the lines listed in Section 2. This selection restricts our sample to 9/11 AGN in GASP and 48/52 AGN in MaNGA. AGN with no reliable \(L\)[O iii] are shown as dashed white-colored symbols. The \(12+\log\)(O/H) and stellar mass distributions of the AGN-RPS and AGN-FS are shown as grey and white histograms, respectively, in the subpanels.
The two samples span the same range of \(L\)[O iii] and 12 + log (O/H), where the minimum values are \(2.5\times 10^{38}\) L\({}_{\odot}\) and 8.77, respectively, and the maximum values are \(1.2\times 10^{42}\) L\({}_{\odot}\) and 9.22.
Figure 4: NII-BPT diagram and maps of the galaxy ’8993-12705’ (part of the AGN-FS, on the left) and JO201 (part of the AGN-RPS, on the right). (_top panel_) NII-BPT diagram for all the spaxels in the galaxies, where in the case of JO201 we also include the spaxels of the stripped tail. SF spaxels are in red, Composite spaxels are in orange, LINER spaxels are in light blue and Seyfert spaxels are in green. Darker color shades indicate more intense line ratios, and viceversa. The black line is the Kauffmann et al. (2003) relation and the dotted black line is the Kewley et al. (2001) relation. (_middle panel_) Galaxy map color-coded according to the NII-BPT classification on which we draw the \(r\sim 1\) kpc (bright green circle) and \(r\sim 0.5R_{\rm e}\) (yellow circle) apertures. The \(r\sim\)1 kpc aperture is clearly dominated by AGN-only spaxels, while the \(r\sim 0.5R_{\rm e}\) includes a small fraction of SF/Composite spaxels. The typical PSF size is shown in the top-left corner, with a grey circle. (_bottom panels_) Galaxy map color-coded to the values of 12 + log (O/H) and log(\(U\)). The black contours, overlaid on the maps, divide regions classified as AGN/Composite/SF by the NII-BPT. The oxygen abundance 12 + log (O/H) varies between 8.4 and 9.2, the ionization parameter ranges between -4.0 and -1.0.
A 2D Kolmogorov-Smirnov (KS) test could not exclude that the AGN-FS and AGN-RPS samples are drawn from the same parent distribution. This result suggests that the RPS is not playing a crucial role in shaping the metallicity within \(r<0.5\)\(R_{\rm e}\) and the [O iii] luminosity of the AGN (\(r<1\) kpc) in AGN hosts.
Galaxies of the AGN-RPS sample have a slightly higher median \(\log L\)[O iii] \(=41.24^{+0.65}_{-1.28}\) than the AGN-FS, with median \(\log L\)[O iii] \(=40.19^{+0.88}_{-0.57}\). However, the values are consistent within the 16th and 84th percentiles of the \(L\)[O iii] and 12 + log (O/H) distributions. The higher \(L\)[O iii] luminosities of the AGN-RPS sample are presumably due to the preponderance of Seyfert-like nuclei in this sample. In fact, the AGN-RPS has \(\sim\) 50 % (6/11) of Seyfert 2 galaxies, while in the AGN-FS the Seyfert fraction is 16% (9/53).
Next, we study the relationship between the stellar mass and the AGN metallicity in the AGN-RPS and AGN-FS samples, joined together. The Spearman correlation coefficient is R \(\sim\) 0.27 with a p-value of 0.034, thus the test does not allow us to conclude that the two quantities are significantly correlated. We argue that this may partly be due to the fact that the galaxies span a very limited range in stellar mass, as AGN are known to be located only in the most massive systems (e.g., Sanchez et al., 2018; Peluso et al., 2022). We also do not see a clear relationship between the stellar mass and \(L\)[O iii] from Figure 5 and, consistently, the Spearman test gives a correlation coefficient of R \(\sim\) 0.25 with a p-value of 0.07.
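The correlation tests quoted above can be reproduced with scipy; the sketch below uses placeholder arrays, since the actual measurements are not listed here.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder stand-ins for the stellar masses and nuclear 12 + log(O/H) of the AGN hosts
log_mstar_agn = np.array([10.4, 10.6, 10.7, 10.9, 11.0, 11.1])
logoh_agn = np.array([8.95, 9.05, 9.00, 9.10, 9.08, 9.12])

rho, pval = spearmanr(log_mstar_agn, logoh_agn)
print(f"Spearman R = {rho:.2f}, p-value = {pval:.3f}")
```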
### Comparison between metallicities of the nuclear regions in AGN and SF galaxies
To test previous literature findings (Armah et al., 2023; Thomas et al., 2019), we investigate the difference between the metallicity in the nuclear regions of AGN and SF galaxies. Even though the AGN-RPS and AGN-FS show the same MZ distributions (see Section 5.2), in the first part of this section we still present the results separately for the two samples.
Figure 6 shows the MZR of the SF and AGN galaxies, with different symbols for RP-stripped (squares) and non-RP-stripped (circles) galaxies. The metallicity is computed as the median of all the values of 12 + log (O/H) within \(r<0.5\)\(R_{\rm e}\). The AGN galaxies are shown as grey symbols, while the SF galaxies are color-coded according to their SFR (H\(\alpha\))\({}_{1.5\,R_{\rm e}}\), which is the SFR within 1.5 \(R_{\rm e}\) computed with the Kennicutt (1998) relation, SFR (M\({}_{\odot}\)yr\({}^{-1}\)) = 4.6 \(\times 10^{-42}\)\(L_{\rm H\alpha}\) (erg s\({}^{-1}\)), using the reddening-corrected H\(\alpha\)-flux. This is the FoV of the MaNGA SF galaxies (see Section SS2.2), while for the GASP SF galaxies we computed the SFR (H\(\alpha\))\({}_{1.5R_{\rm e}}\) by excluding the spaxels beyond 1.5\(R_{\rm e}\)
To fit the mass-metallicity relation of star-forming galaxies (SF MZR), shown in Figure 6 as the blue dotted line, we join the SF-RPS and SF-FS and we exploit the SF-FS galaxies to obtain the fit also at high stellar masses where the AGN are located. In fact, the SF-RPS sample has only 3/37 galaxies with log M\({}_{*}\)/M\({}_{\odot}>10.5\), since (as seen in P22) the GASP-RPS AGN fraction is 51% in the mass bin log (M\({}_{*}\)/M\({}_{\odot}\)) \(>10\), while the SF-FS has 33/391 galaxies (i.e., 9%) with log (M\({}_{*}\)/M\({}_{\odot}\)) \(>10.8\).
It is worth noticing, though, that the SF-RPS show on average lower metallicities than the SF-FS, but the lowest metallicities of the SF-RPS are consistent with the scatter expected from the Fundamental MZR (Mannucci et al., 2010) as discussed in detail in Appendix D.
To fit the SF MZR, we use the parametrized function from Curti et al. (2020) (see also Mingozzi et al., 2020):
\[12+\log(\rm O/H)=Z_{0}-\gamma/\beta\times\log\left[1+\left(\frac{M}{M_{0}} \right)^{-\beta}\right] \tag{2}\]
where Z\({}_{0}\) is the asymptotic value of metallicity at which the relation saturates, M\({}_{0}\) is the characteristic turnover mass above which the metallicity asymptotically approaches the upper metallicity limit (\(Z_{0}\)) and \(\beta\) quantifies how rapidly the curve approaches its saturation value. For M\({}_{*}<\) M\({}_{0}\), the SF-MZR is a power law of index \(\gamma\). We fix the turnover mass M\({}_{0}\) = 10\({}^{10.1}\)M\({}_{\odot}\). To obtain the best-fit parameters, we use the non-linear least squares (NLS) method, which minimizes the residuals weighted by the uncertainty on the data points (\(\sigma_{y}\)). \(\sigma_{y}\) is taken as the smaller of \(\sigma_{-}\) and \(\sigma_{+}\), where \(\sigma_{-}\) and \(\sigma_{+}\) are the average values of the 16th and 84th percentiles of the metallicity PDF among all the spaxels within 0.5\(R_{\rm e}\).
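A minimal sketch of this fit with scipy's non-linear least squares (using synthetic stand-in data, since the real measurements are not reproduced here) could look as follows; the function implements Equation (2) with the turnover mass fixed to \(10^{10.1}\) M\(_{\odot}\).

```python
import numpy as np
from scipy.optimize import curve_fit

LOG_M0 = 10.1   # turnover mass, fixed to log(M0/Msun) = 10.1

def mzr(log_mstar, z0, gamma, beta):
    """Curti et al. (2020) parametrization of the SF mass-metallicity relation (Eq. 2)."""
    m_ratio = 10.0 ** (log_mstar - LOG_M0)
    return z0 - (gamma / beta) * np.log10(1.0 + m_ratio ** (-beta))

# Synthetic stand-ins for the joined SF-RPS + SF-FS metallicities and their uncertainties
rng = np.random.default_rng(0)
log_mstar_sf = np.linspace(9.0, 11.3, 50)
sigma_y = np.full_like(log_mstar_sf, 0.05)
logoh_sf = mzr(log_mstar_sf, 9.045, 0.754, 1.121) + rng.normal(0.0, 0.05, log_mstar_sf.size)

popt, pcov = curve_fit(mzr, log_mstar_sf, logoh_sf, sigma=sigma_y, p0=[9.0, 0.7, 1.0])
perr = np.sqrt(np.diag(pcov))   # one-sigma errors from the diagonal of the covariance matrix
```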
We obtain the following best-fit parameters: Z\({}_{0}\) = 9.045 \(\pm\) 0.001, \(\gamma=0.754\pm 0.008\) and \(\beta=1.121\pm 0.064\). The one-standard-deviation errors on the parameter estimates are the square roots of the variances, i.e., of the diagonal elements of the covariance matrix. We observe a plateau in the SF MZR at log(M\({}_{*}\)/M\({}_{\odot}\)) \(>10.5\) (similarly to Tremonti et al., 2004), where the metallicity is \(<\)12 + log (O/H)\(>_{0.5\ R_{\rm e}}\)\(\sim\) 9.0 independently of the stellar mass.
Figure 7 shows the residuals of the AGN metallicities from the SF MZR, \(\Delta\) (O/H)\({}_{r<0.5\ R_{\rm e}}\), which is the difference between the metallicity of the AGN and the one computed with equation (2). AGN hosts predominantly lie above the SF-MZR, suggesting that the presence of the AGN is enhancing the oxygen abundance in the galactic nuclei. As expected from the results presented in Section SS5.3, this result is independent of the
presence of RPS, since the AGN-RPS sample shows a similar enhancement in metallicity as the AGN-FS sample. Overall, we find that the median offset of the combined AGN sample (RPS and FS) from the SF MZR is \(\Delta\) (O/H) \({}_{r<0.5~{}R_{\rm e}}\) = 0.047 dex, which is consistent with previous findings (e.g., Thomas et al., 2019).
Interestingly, two galaxies (JO206 and JO171) from the AGN-RPS show a metallicity that is lower than that found in SF galaxies; these are the outliers in Figure 7. The AGN with the lowest metallicity, JO171, is a very peculiar object, as it is a Hoag-like post-merger whose central metallicity is not directly linked with the total mass (Moretti et al., 2018). For the galaxy JO206, instead, the interpretation is less clear and would require further analysis, for example by exploring the possible presence of metal-poor inflows of gas (as recently seen, e.g., in the IR regime by Perez-Diaz et al., 2023) or a particularly strong AGN feedback (Armah et al., 2023).
To have an estimate of the AGN metallicity without a significant contribution from gas ionized by stars, we also computed the metallicities inside the fixed aperture of radius \(r\sim 1\) kpc from the galaxy center, which, in the case of AGN hosts, is always dominated by the emission from Seyfert/LINER-classified spaxels.
Figure 5: MZR of the AGN-RPS (squares) and AGN-FS (circles) color-coded according to their \(L\)[O iii]. We show as dashed white symbols those AGN for which we could not estimate \(L\)[O iii] (see text for details). The 12 + log (O/H) is computed within the mass-scaled aperture (\(r\sim 0.5~{}R_{\rm e}\)) and \(L\)[O iii] is computed within the fixed aperture (\(r\sim 1\) kpc). The error bars are the average values of the 16th and 84th percentiles of the PDF among all the spaxels within \(0.5R_{\rm e}\). The white and grey histograms (in the top and left insets) show the mass and metallicity distributions of the AGN-FS and the AGN-RPS. The two samples have similar ranges of oxygen abundances and \(L\)[O iii].
Figure 8 (left panel) shows the metallicities within 1 kpc for galaxies with \(\log(\rm M_{*}/M_{\odot})>10.4\). We consider separately galaxies in stellar mass bins of 0.2 dex width (i.e., stripes of different colors), since we want to avoid the dependence of the metallicity estimates on the portion of the galaxy covered by the fixed aperture, which changes with the stellar mass.
We compute the median 12 + log (O/H) of the AGN metallicities inside each mass bin ((12 + log O/H)\({}_{\rm AGN,1kpc}\), filled-colored circles) only when there are more than five AGN galaxies inside that bin.

Figure 6: Mass-metallicity relation of the SF (colored points) and AGN (grey points) galaxies, with different symbols for the RPS (squares) and non-RPS (circles) samples. Metallicity is computed as the median value in all the spaxels (AGN/SF/Composite) within 0.5 \(R_{\rm e}\). The blue dotted curve is the best fit for the SF galaxies. SF galaxies are color-coded according to their SFR within 1.5 \(R_{\rm e}\), a proxy for the total SFR. Overall, AGN galaxies have higher metallicities than SF galaxies.

Table 1: Columns: (1) central mass of the mass bins (\(\log\rm M_{0}/M_{\odot}\)) in which there are more than 5 AGN galaxies; (2, 3) median metallicities of the AGN and SF galaxies inside the mass bin, with the 16th/84th percentiles of the distribution ((12+ log O/H)\({}_{\rm AGN,1kpc}\) and (12+ log O/H)\({}_{\rm SF,1kpc}\), respectively); (4) values of \(\Delta\) (O/H)\({}_{r<1\rm kpc}\), obtained as the difference between (12+ log O/H)\({}_{\rm AGN,1kpc}\) and (12+ log O/H)\({}_{\rm SF,1kpc}\); the errors are computed propagating the errors on (12+ log O/H)\({}_{\rm AGN,1kpc}\) and (12+ log O/H)\({}_{\rm SF,1kpc}\).

| \(\log(\rm M_{0}/M_{\odot})\) | (12+ log O/H)\({}_{\rm AGN,1kpc}\) | (12+ log O/H)\({}_{\rm SF,1kpc}\) | \(\Delta\) (O/H)\({}_{r<1\rm kpc}\) |
| --- | --- | --- | --- |
| 10.50 | \(9.069^{+0.152}_{-0.045}\) | \(8.989^{+0.152}_{-0.045}\) | \(0.080^{+0.065}_{-0.055}\) |
| 10.70 | \(9.033^{+0.152}_{-0.045}\) | \(8.989^{+0.152}_{-0.045}\) | \(0.044^{+0.108}_{-0.059}\) |
| 10.90 | \(9.119^{+0.152}_{-0.045}\) | \(9.014^{+0.152}_{-0.045}\) | \(0.104^{+0.059}_{-0.113}\) |
| 11.10 | \(9.069^{+0.152}_{-0.045}\) | \(9.014^{+0.152}_{-0.045}\) | \(0.054^{+0.153}_{-0.046}\) |
Qualitatively, the results do not change depending on the chosen aperture and, as for the mass-scaled aperture, the AGN galaxies show higher metallicities than the SF galaxies. Figure 8 (right panel) shows \(\Delta\)(O/H)\({}_{1kpc}\), which is the difference between the metallicities of the AGN (12+log (O/H)\({}_{\rm AGN,1kpc}\)) and SF (12+log (O/H)\({}_{\rm SF,1kpc}\)) galaxies with similar stellar masses, and which basically quantifies how much the NLR is enriched in metals with respect to a region with the same physical extension but at the center of star-forming galaxies. The red dotted line marks the level at which \(\Delta\)(O/H)\({}_{1kpc}\) = 0. In Table 1 we list the central mass of the bin, (12+log O/H)\({}_{\rm SF,1kpc}\), (12+log O/H)\({}_{\rm AGN,1kpc}\) and \(\Delta\)(O/H)\({}_{r<1kpc}\) in each mass bin. The errors on \(\Delta\)(O/H)\({}_{r<1kpc}\) are calculated considering the errors on (12+log O/H)\({}_{\rm AGN,1kpc}\) and (12+log O/H)\({}_{\rm SF,1kpc}\). The offset \(\Delta\)(O/H)\({}_{r<1kpc}\) is positive in each mass bin and ranges between 0.044 dex and 0.065 dex depending on the stellar mass, which is consistent within the errors with the offset of 0.06 dex measured by Thomas et al. (2019). The aperture used by Thomas et al. (2019) to integrate the metallicity is comparable in extension with the fixed aperture of 1 kpc at our targets' redshift, as discussed in detail in the following section.
### Comparison with the literature
By using a similar approach to ours, Perez-Diaz et al. (2021) find that AGN host galaxies (both Seyferts and LINERs) do not follow a mass-metallicity relation and that Seyfert 2 galaxies have slightly higher chemical abundances than SF galaxies, in the mass range \(9\leq\) log (M\({}_{*}\)/M\({}_{\odot}\)) \(\leq 12\). However, they also find that LINER galaxies have lower abundances than SF galaxies. Perez-Diaz et al. (2021) use Bayesian inference to compare Cloudy v17.01 models and observations by exploiting the code HCM (Perez-Montero, 2014; Perez-Montero et al., 2019) in a sample of 143 SF, LINER and Seyfert galaxies observed with the Palomar Spectroscopic Survey. One of the main differences with our analysis is that Perez-Diaz et al. (2021) consider galaxies independently of their environment. On the contrary, we consider here the effects of AGN in determining the metallicity of their host galaxy in the dense cluster environment, even if our sample is biased towards galaxies showing optical signatures of RPS.
Figure 7: Residuals of the AGN-RPS (squares) and AGN-FS (circles) metallicity from the SF MZR, as a function of the galaxy stellar mass, color-coded according to \(L\)[O iii] as in Figure 5. To compute the error bars on \(\Delta\) (O/H), we consider the errors on the AGN metallicity and the errors on the SF MZR, computed as described in Section §5.3. The horizontal black solid line remarks the level of \(\Delta\)(O/H) = 0. AGN hosts show \(\Delta\)(O/H)\(>0\) on average, except for 2/11 galaxies in the AGN-RPS sample that have lower metallicity than SF galaxies.
The field sample of galaxies is, instead, complete. With this caveat in mind, we find an offset between SF and AGN metallicities consistent with that found in Thomas et al. (2019), who use the code Nebulabayes and SDSS data to compute the MZR in a sample of 7,669 Seyfert 2 galaxies and 231,429 SF galaxies. They also find that the active galaxies follow a mass-metallicity relation in the mass range \(10.1\leq\log\ {\rm M_{*}/M_{\odot}}\leq 11.3\), since the nuclear metallicity in Sy2s increases by \(\sim 0.1\) dex over a stellar mass range of 1.3 dex. It is worth noticing, though, that the value of 0.1 dex is of the same order as the errors on the metallicity estimates derived with Nebulabayes (see e.g., Table 1 of this paper). The offset of the oxygen abundance in Sy2s with respect to the MZR of the star-forming galaxies is \(\sim 0.09\) dex, but reduces to \(\sim 0.06\) dex when considering the contribution to the offset coming from the fact that the metallicity in the Seyfert 2 and star-forming samples was constrained using different emission lines. The scatter of 0.06 dex is consistent (within the error bars) with the scatter measured in this work using the \(r\sim 1\) kpc aperture (i.e., 2 kpc in diameter), which indeed ranges between 0.04 and 0.07 dex. Our fixed aperture has a diameter of \(\sim 2.5^{\prime\prime}\) at our targets' redshifts, which is fairly similar to the Sloan fiber diameter of \(3^{\prime\prime}\) used by Thomas et al. (2019).
Nonetheless, other works find opposite results. Armah et al. (2023) find lower values of the 12 + log (O/H) abundance (with a mean difference of 0.2-0.5 dex) in AGN hosts than in SF galaxies, using an unbiased sample of Seyfert nuclei in the local universe (\(z\leq 0.31\)) from the BAT AGN Spectroscopic Survey (BASS, Oh et al., 2022), which selects AGN based on their hard X-ray emission (14-195 keV). These authors compute the AGN metallicities using the Carvalho et al. (2020) and Storchi-Bergmann et al. (1998) calibrators, based on photoionization models. By using a similar approach, Do Nascimento et al. (2022) study the metallicity profiles of a sample of 107 Seyfert galaxies using the spatially resolved data from the SDSS-IV MaNGA and the Carvalho et al. (2020) and Storchi-Bergmann et al. (1998) calibrators. They compute the integrated AGN metallicity within the central \(2.5^{\prime\prime}\) and compare it with the value extrapolated from the radial oxygen abundance profile of H ii regions in the galaxy disc. The oxygen abundance in the H ii regions is obtained with the calibrator from Perez-Montero and Contini (2009). We find 9 AGN galaxies in common with the sample of Do Nascimento et al. (2022) (which is indeed the number of Seyfert galaxies in our AGN-FS drawn from the MaNGA survey, see also Section 5.2).
Figure 8: (left panel) Mass-metallicity relation of the AGN-RPS galaxies (edge-colored squares), AGN-FS galaxies (edge-colored circles), SF-RPS galaxies (edge-black squares) and SF-FS galaxies (edge-black circles) with stellar masses \(\log({\rm M_{*}/M_{\odot}})\geq 10.4\), where the metallicities are the median values of 12 + log (O/H) within 1 kpc from the galaxy centers. The filled-colored circles are the median metallicity (12 + log (O/H)\({}_{AGN,1kpc}\)) of the AGN inside the mass bin (i.e., strips of different colors) with the errors given by the 16th/84th percentile of the distribution. The filled-colored stars are the median metallicities (12 + log (O/H)\({}_{SF,1kpc}\)) of the SF galaxies inside the mass bin. (right panel) Difference between (12 + log (O/H)\({}_{AGN,1kpc}\)) and (12 + log (O/H)\({}_{SF,1kpc}\)) as a function of stellar mass. Within the same physical region, galaxies hosting AGN are more enriched in metals than those without AGN activity.
We measure the integrated metallicity inside an on-sky aperture of 2.5'' centered on the galaxy, as in Do Nascimento et al. (2022), but using the metallicity maps obtained in this work. We find that the median difference between the 12 + log (O/H) measured in Do Nascimento et al. (2022) and ours is \(-\)0.4 dex when considering their estimates with the Carvalho et al. (2020) (C20) calibrator, and \(-\)0.43 dex when considering their oxygen abundances computed with Storchi-Bergmann et al. (1998) (SB98). This is larger than the average difference between the NLR metallicity and the extrapolated value found by the authors, which ranges from 0.16 to 0.30 dex. By estimating the metallicities with the SB98 and C20 calibrators inside all the AGN spaxels in our MaNGA and GASP samples, we find that the values of 12 + log(O/H) computed with our method and with these other calibrators are well correlated with each other (i.e., \(r=0.48\) in the case of 12 + log(O/H)\({}_{\rm SB98}\) and \(r=0.57\) in the case of 12 + log (O/H)\({}_{\rm C20}\)). However, following the approach of Perez-Diaz et al. (2021), we find an offset of 0.387 dex with RMSE = 0.12 dex between 12 + log (O/H)\({}_{\rm C20}\) and 12 + log (O/H)\({}_{\rm Nebulabayes}\), while we find an offset of 0.391 dex with RMSE = 0.11 dex between 12 + log (O/H)\({}_{\rm SB98}\) and 12 + log (O/H)\({}_{\rm Nebulabayes}\). Therefore, we conclude that the Carvalho et al. (2020) and Storchi-Bergmann et al. (1998) calibrators give systematically lower values of metallicity than the method applied throughout this work, and this is the reason for the discrepancy between our findings and those in Do Nascimento et al. (2022). We stress that even higher offsets are found in the literature when comparing different methods: for example, the offset found by Perez-Diaz et al. (2021) when comparing their method with the code NebulaBayes (but coupled with the Mappings models, instead of the Cloudy models adopted by us) is 0.8 dex. Perez-Diaz et al. (2021) attributed this offset to the different power-law slope adopted in the Mappings models and in their models (\(\alpha=-2.0\) and \(\alpha=-0.8\), respectively). However, in our case \(\alpha=-2.0\) produces a significantly lower offset. We therefore conclude that a detailed treatment of the possible effects that lead to these discrepancies involves a complex combination of the assumptions underlying each model, whose discussion is beyond the scope of this paper.
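For clarity, one simple convention for quantifying an offset and RMSE between two sets of metallicity estimates is sketched below; the exact definitions adopted in Perez-Diaz et al. (2021) and in our comparison may differ in detail, so this is only illustrative.

```python
import numpy as np

def offset_and_rmse(logoh_calibrator, logoh_nebulabayes):
    """Median offset between two metallicity estimates and the RMS scatter about that offset."""
    diff = np.asarray(logoh_calibrator) - np.asarray(logoh_nebulabayes)
    offset = np.median(diff)
    rmse = np.sqrt(np.mean((diff - offset) ** 2))
    return offset, rmse
```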
## 6 Summary
In this paper, we have investigated the effect of RPS on the AGN metallicity of 11 Seyfert/LINER galaxies, by comparing their mass-metallicity distribution with that of 52 Seyfert/LINER galaxies undisturbed by RP. We also studied the impact of the presence of a central AGN on the metal content of galactic nuclei, both with and without RPS, by exploring the difference between the metallicity at the centers of AGN and SF galaxies. To do so, we exploit IFU data from the GASP and MaNGA surveys, and we measure their metallicities using the Nebulabayes code and a set of AGN, Composite, and H ii photoionization models generated with the code Cloudy v17.02.
Our main findings are summarized as follows:
* AGN galaxies, whether experiencing RPS or not, generally have the same distribution in the mass-metallicity diagram and span the same range of \(L\)[O iii] luminosity. This result suggests that the stripping does not significantly impact the integrated metallicity and [O iii] luminosity of the central AGN, at least when looking at a relatively large sample of galaxies;
* The AGN-RPS and AGN-FS galaxies do not seem to follow a mass-metallicity relation, as shown in Figure 5, within the short range of stellar masses they cover.
* Thanks to the use of IFU data, we were able to test our results by integrating the metallicities inside different extraction apertures. Independently of the extraction aperture and of RPS, AGN galaxies show, on average, enhanced metallicity with respect to SF galaxies at fixed stellar mass. The difference between the metallicity at the centers of AGN and SF galaxies reaches values up to 0.2 dex when using the aperture with \(r\sim 0.5~{}R_{\rm e}\), while the median difference between metallicities computed with the 1 kpc aperture ranges from 0.04 dex to 0.07 dex, depending on the host galaxy's stellar mass.
In summary, our results show that the presence of the AGN implies higher metallicities in the nuclei of galaxies, but that RPS does not play a role in changing either the AGN metallicity or the [O iii] luminosity.
## 7 Acknowledgements
Based on observations collected at the European Organization for Astronomical Research in the Southern Hemisphere under ESO program 196.B-0578. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 833824). We acknowledge financial contribution from the grant PRIN MIUR 2017 n.20173ML3WW_001 (PI Cimatti). We thank the
anonymous referee for their comments that have helped us to improve the paper. This work made use of the KUBEVIZ software which is publicly available at [https://github.com/matteofox/kubeviz/](https://github.com/matteofox/kubeviz/). The development of the KUBEVIZ code was supported by the Deutsche Forschungsgemeinschaft via Project IDs: WI3871/1-1 and WI3871/1-2.
This work makes use of data from SDSS-IV. Funding for SDSS has been provided by the Alfred P. Sloan Foundation and Participating Institutions. Additional funding toward SDSS-IV has been provided by the U.S. Department of Energy Office of Science. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org. This research made use of Marvin, a core Python package and web framework for MaNGA data, developed by Brian Cherinka, Jose Sanchez-Gallego, and Brett Andrews (Cherinka et al., 2019). SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU)/University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional/MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. The MaNGA data used in this work are publicly available at [http://www.sdss.org/dr15/manga/manga-data/](http://www.sdss.org/dr15/manga/manga-data/).
|
2310.00329 | Chain decay and rates disorder in the totally asymmetric simple
exclusion process | We theoretically study the Totally Asymmetric Exclusion Process (TASEP) with
quenched jumping rates disorder and finite lifetime chain. TASEP is widely used
to model the translation of messenger RNAs by Ribosomes in protein synthesis.
The exact solution of the TASEP model is analytically and computationally
intractable for biologically relevant system parameters, and even the canonical
Mean-Field (MF) approach of solving coupled non-linear differential equations
is computationally expensive at the scale of relevant biological data
analysis. In this article, we provide an alternative approach to computing the MF
steady state solution via a computationally efficient system of non-linear
algebraic equations. We further outline a framework for including correlations
progressively via the exact solution of small-size TASEP systems. The leading-order
approximation in the biologically relevant entry-rate-limited regime shows
remarkable agreement with the full Monte-Carlo simulation result for a wide
range of system parameter space. These results could be of importance to the
kinetic rates inference in Ribo-Seq data analysis and other related problems. | Yahaya Ibrahim, Jérôme Dorignac, Fred Geniet, Carole Chevalier, Jean-Charles Walter, Nils-Ole Walliser, Andrea Parmeggiani, John Palmeri | 2023-09-30T10:24:04Z | http://arxiv.org/abs/2310.00329v1 | # Chain decay and rates disorder in the totally asymmetric simple exclusion process
###### Abstract
We theoretically study the Totally Asymmetric Exclusion Process (TASEP) with quenched jumping-rate disorder and a finite chain lifetime. TASEP is widely used to model the translation of messenger RNAs by ribosomes in protein synthesis. The exact solution of the TASEP model is analytically and computationally intractable for biologically relevant system parameters, and even the canonical Mean-Field (MF) approach of solving coupled non-linear differential equations is computationally expensive at the scale of relevant biological data analysis. In this article, we provide an alternative approach to computing the MF steady-state solution via a computationally efficient system of non-linear algebraic equations. We further outline a framework for including correlations progressively via the exact solution of small-size TASEP systems. The leading-order approximation in the biologically relevant entry-rate-limited regime shows remarkable agreement with full Monte-Carlo simulation results over a wide range of the system parameter space. These results could be of importance to kinetic-rate inference in Ribo-Seq data analysis and other related problems.
## I Introduction
The totally asymmetric exclusion process (TASEP) is a paradigmatic model of non-equilibrium statistical mechanics [3; 5; 9]. It is a simple model that captures the essential features of many non-equilibrium systems, such as traffic flow [1; 11], diffusion in biological membranes, and protein synthesis. The TASEP is a one-dimensional lattice model with particles that can hop to their nearest neighbor site in the forward direction, but not in the backward direction. The particles are hard-core, meaning that no two particles can occupy the same site at the same time (excluded volume interaction). The TASEP is driven by a difference in the particle density between the two ends of the lattice [4]. TASEP has been studied extensively using both analytical and numerical methods [2; 5; 8]. It has been shown that the TASEP exhibits a variety of non-equilibrium phenomena, such as phase transitions and jamming. The TASEP can be used to model protein synthesis by considering the ribosomes as particles on a lattice, where the lattice sites represent codons on the messenger RNA (mRNA) [7]. The ribosomes can hop to the next codon on the mRNA in the forward direction, but not backwards. The TASEP has been used to model a variety of aspects of protein synthesis, such as the ribosome flow rate (current), the distribution of ribosomes on the mRNA, and the effect of collisions/jamming on protein synthesis [6].
Here, we study the TASEP with quenched jump-rate disorder and a finite degradation rate of the chain. We give an alternative framework for approximating the system correlations and outline a computationally efficient iterative solution method, thereby circumventing the coupled non-linear differential equations.
The article is organized as follows. In the next section, we outline the TASEP model and introduce notation. The main results are presented in Section III, while we discuss the implications of the results and conclude in Section IV.
## II The TASEP model
We consider a one-dimensional lattice with a total of \(N\) lattice sites. We further denote the presence of a particle at a lattice site \(j\) with \(\sigma_{j}=1\) and its absence with \(\sigma_{j}=0\). A particle, with a footprint \(\ell\) lattice sites long, jumps from one site to another on the lattice with the following dynamical rules:
1. A particle enters the lattice at a rate \(\alpha\) with its _trailing edge_ on the first site (\(j=1\)), provided the first \(\ell\) sites are empty (\(\sigma_{1}=\cdots=\sigma_{\ell}=0\)).
2. A particle advances from site \(j\) to site \(j+1\) with a rate \(w_{j}\), provided sites \(j+1\) through \(j+\ell\) are empty (\(\sigma_{j+1}=\cdots=\sigma_{j+\ell}=0\)).
3. From lattice site \(j=N-\ell+1\) onward, particles exit the lattice incrementally and unhindered with a rate \(w_{j}\).
4. When the trailing edge of the particle is at site \(N\), the particle exits with a rate \(\beta\).
These dynamical rules lead to the master equation [5; 9]
\[\frac{d}{dt}\widetilde{P}\left(\mathcal{C},t\right)=\sum_{\mathcal{C}^{\prime }}W_{\mathcal{C}\leftarrow\mathcal{C}^{\prime}}\,\widetilde{P}\left( \mathcal{C}^{\prime}\right)-\sum_{\mathcal{C}^{\prime}}W_{\mathcal{C}^{\prime }\leftarrow\mathcal{C}}\,\widetilde{P}\left(\mathcal{C}\right)\, \tag{1}\]
where \(W_{\mathcal{C}\leftarrow\mathcal{C}^{\prime}}\) is the transition rate from state \(\mathcal{C}^{\prime}\) to state \(\mathcal{C}\) and \(\widetilde{P}\left(\mathcal{C},t\right)\) is the probability of finding the system in configuration \(\mathcal{C}\equiv\left(\sigma_{1},\cdots,\sigma_{N}\right)\) at time \(t\). \(\sigma_{j}=1\) if site \(j\) is occupied by the _trailing_ edge of the particle and \(\sigma_{j}=0\) if empty.
Throughout this article, we assume that the chain has a constant degradation rate \(\Omega\) and an exponential age distribution, \(\Omega\exp\left(-\Omega\,t\right)\), such that the age averaged probabilities are [12; 13]
\[P\left(\mathcal{C},\Omega\right)\equiv\int_{0}^{\infty}\widetilde{P}\left( \mathcal{C},t\right)\,\Omega\exp\left(-\Omega\,t\right)dt. \tag{2}\]
Therefore, the chain's age averaged master equation now reads
\[\Omega P\left(\mathcal{C},\Omega\right)-\Omega P(\mathcal{C},t=0)=\] \[\sum_{\mathcal{C}^{\prime}}W_{\mathcal{C}\leftarrow\mathcal{C}^{ \prime}}\,P\left(\mathcal{C}^{\prime},\Omega\right)-\sum_{\mathcal{C}^{\prime }}W_{\mathcal{C}^{\prime}\leftarrow\mathcal{C}}\,P\left(\mathcal{C},\Omega \right)\, \tag{3}\]
where we set the initial condition \(P(\mathcal{C},t=0)\) to an empty chain, i.e. \(P(\mathcal{C},t=0)=P\left(\sigma_{1}=\cdots=\sigma_{N}=0\right)\).
Solving for the \(P\left(\mathcal{C},\Omega\right)\) will allow us to obtain moments of the particle site occupation, \(\sigma_{j}\), such as the marginal probability \(\left\langle\sigma_{j}\right\rangle\equiv\sum_{\mathcal{C}}\,\sigma_{j}P\left( \mathcal{C},\Omega\right)\).
The configuration space, \(\mathcal{C}\), grows exponentially with the system size, \(N\). Therefore, solving for \(P\left(\mathcal{C}\right)\) becomes computationally expensive for physically relevant system sizes [1].
The marginals \(\left\langle\sigma_{j}\right\rangle\) and \(\left\langle\sigma_{j}\sigma_{j+\ell}\right\rangle\) satisfy the following system of algebraic equations [1; 10]
\[\Omega\left\langle\sigma_{j}\right\rangle=\overbrace{w_{j-1}\Big{(}\left\langle\sigma_{j-1}\right\rangle-\left\langle\sigma_{j-1}\sigma_{j-1+\ell}\right\rangle\Big{)}}^{\text{flux into site }j}\\ -\underbrace{w_{j}\Big{(}\left\langle\sigma_{j}\right\rangle-\left\langle\sigma_{j}\sigma_{j+\ell}\right\rangle\Big{)}}_{\text{flux out of site }j}\, \tag{4}\]
for \(1\leqslant j\leqslant N\), with fixed boundary conditions \(\sigma_{0}=1\) and \(\sigma_{N+1}=\sigma_{N+2}=\cdots=\sigma_{N+\ell}=0\) for the left and right boundaries respectively. Henceforth, we use interchangeably the notations \(w_{0}\equiv\alpha\) for the entry (initiation) rate and \(w_{N}\equiv\beta\) for the exit (termination) rate.
## III Results
Equations (4) are not closed: they couple the marginals \(\left\langle\sigma_{j}\right\rangle\) to the two-site correlations \(\left\langle\sigma_{j-1}\sigma_{j+\ell-1}\right\rangle\) and \(\left\langle\sigma_{j}\sigma_{j+\ell}\right\rangle\), so these correlations must be approximated to close the system of equations.
\begin{table}
\begin{tabular}{|c|l|} \hline Dimensionless ratio & Description \\ \hline \hline \(\Omega/w_{min}\in\left[0,\infty\right)\) & Ratio quantifying the chain degradation rate relative to the weakest jump rate in the bulk \\ \(\alpha/w_{min}\in\left(0,\infty\right)\) & Quantifies the onset of the phase transition out of the Low Density phase \\ \(\Delta\equiv 1-w_{min}/w_{max}\in\left[0,1\right)\) & Width of the jump-rate disorder distribution; it quantifies the chance that a slow bond occurs right after a fast bond, thereby inducing a traffic jam \\ \hline \hline \end{tabular}
\end{table}
Table 1: Relevant dimensionless ratios for the five kinetic rates: \(\alpha,w_{min},w_{max},\beta\) and the chain degradation rate \(\Omega\). Note that unless otherwise stated, we assume the exit rate \(\beta\) to be non-limiting (i.e. \(\beta\gg\alpha,w_{min},w_{max},\Omega\)).
The canonical approach is to build a hierarchy of equations for marginals involving a larger number of sites (i.e., the BBGKY hierarchy) and to truncate the system at a desired level of accuracy. With this approach, increasing the accuracy of the approximation comes with a linear increase in the number of equations to be solved. Secondly, the effectiveness of the approach depends on the quality of the approximation of the truncating correlation function.
Here, we take an entirely different approach: we model the two-site correlation marginal \(\langle\sigma_{j}\sigma_{j+\ell}\rangle\) with the exact solution of a few-site TASEP system. For \(N=2\) and \(N=3\) sites, the analytic calculation is tractable (see the Appendix). For larger system sizes, fast linear solvers could be used to solve for the correlation.
#### ii.1.1 \(\ell=1\) particles
For \(\ell=1\), the marginal probability that site \(j+1\) is empty is \((1-\rho_{j+1})\), while the marginal probability that site \(j\) is occupied is simply \(\rho_{j}\). The probability of two particles colliding (i.e., site \(j+1\) being occupied given that site \(j\) is also occupied) is approximated by assuming mutual independence of the sites [7]
\[\langle\sigma_{j}\sigma_{j+1}\rangle\to\rho_{j}\,\rho_{j+1}. \tag{5}\]
This is the canonical Mean-Field (MF) approximation. It's important to note that we expect weak contributions from these terms in the entry limited regime of the dynamics.
This MF approximation greatly simplifies the problem from an exponentially large system of \(\sim 2^{N}\) linear equations (eqn. (3)) to a relatively small system of \(N\) non-linear coupled algebraic equations:
\[\Omega\,\rho_{j}=w_{j-1}\rho_{j-1}\left(1-\rho_{j}\right)-w_{j}\rho_{j}\left( 1-\rho_{j+1}\right)\, \tag{6}\]
for \(1\leqslant j\leqslant N\), and at the boundary, \(\rho_{0}=1\) and \(\rho_{N+1}=0\).
Figure 2: (Chain decay, \(\Omega/w_{min}>0\)) Densities, \(\rho_{j}\), and scatter plots for sample sequence drawn from \(w_{j}\in[3,80]\), \(\alpha=0.1\), \(\beta=30\), with \(\ell=9\), and \(N=100\) total lattice sites. (top left) \(\rho_{j}\) plot for \(\Omega=0\). The red dashed line is Eqn. (10) fit of the Monte Carlo simulation. (top right) Scatter plot of data points in the top left figure. (bottom left) \(\rho_{j}\) plot for \(\Omega=0.5\). The dashed red line is the Eqn. (10) fit. (bottom right) Scatter plot of data points in the bottom left figure.
#### ii.1.2 \(\ell\geqslant 2\) (extended) particles
In a similar manner, for extended particles that cover more than a single site, the marginal probability that a site \(j+\ell\) is empty is \(\left(1-\sum_{s=1}^{\ell}\rho_{j+s}\right)\), while the marginal probability that site \(j\) is occupied is simply \(\rho_{j}\). MacDonald et al. [7] provided an accurate approximation of the collision marginal
\[\langle\sigma_{j}\sigma_{j+\ell}\rangle\to\frac{\rho_{j}\,\rho_{j+\ell}}{\left(1-\sum_{s=1}^{\ell}\rho_{j+s}\right)+\rho_{j+\ell}}. \tag{7}\]
There are \(\ell\) modes of occupying each lattice site \(j\), plus the empty mode [7]. Essentially, the joint probability \(\langle\sigma_{j}\sigma_{j+\ell}\rangle\) is replaced by a product of the marginal probability of a particle occupying site \(j\) and the conditional probability that site \(j+\ell\) is also occupied while site \(j\) is either empty or occupied by the leading edge of the particle.
Substituting the correlation \(\langle\sigma_{j}\sigma_{j+\ell}\rangle\) in eqn. (4), it follows that
\[\Omega\,\rho_{j}=w_{j-1}\rho_{j-1}\left(1-R_{j-1+\ell}\right)-w_{j}\rho_{j} \left(1-R_{j+\ell}\right)\, \tag{8}\]
for \(1\leqslant j\leqslant N-\ell\) and
\[R_{j+\ell}=\frac{\rho_{j+\ell}}{\left(1-\sum_{s=1}^{\ell}\rho_{j+s}\right)+ \rho_{j+\ell}},\quad 1\leqslant j\leqslant N-\ell\]
while at the boundaries \(\rho_{0}=1\) and \(R_{N-\ell+1}=\cdots=R_{N+\ell}=0\).
#### ii.1.3 Iterative method of solution
Instead of solving the system of \(N\) coupled differential equations, the age-averaged densities can be computed directly from the \(N\) coupled algebraic equations:
\[\rho_{j}=\frac{w_{j-1}\rho_{j-1}\left(1-R_{j-1+\ell}+\rho_{j}\right)}{\Omega+ w_{j-1}\rho_{j-1}+w_{j}\left(1-R_{j+\ell}\right)} \tag{9}\]
where, starting with an appropriate initial guess, say \(\rho_{j}=\min(\alpha,\{w_{j}\},\beta)/(1+\ell)\), the system converges to the solution after a few iterations. This provides a computationally efficient way to compute the densities and other statistics of interest.
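For concreteness, here is a minimal sketch (ours, not from the paper) of this iteration for the simplest case \(\ell=1\), where eq. (9) reduces to \(\rho_{j}=w_{j-1}\rho_{j-1}/\big(\Omega+w_{j-1}\rho_{j-1}+w_{j}(1-\rho_{j+1})\big)\); any fixed point of this map solves eq. (6). The parameter values are illustrative only (the figures in this paper use \(\ell=9\)).

```python
import numpy as np

def mf_densities_l1(w, alpha, beta, omega, tol=1e-12, max_iter=10_000):
    """Iterative solution of the age-averaged mean-field equations for l = 1.

    w     : bulk hopping rates (w_1, ..., w_{N-1})
    alpha : entry rate w_0,  beta : exit rate w_N,  omega : degradation rate
    """
    rates = np.concatenate(([alpha], w, [beta]))      # w_0, ..., w_N
    N = len(rates) - 1
    rho = np.full(N, rates.min() / 2.0)               # initial guess min(alpha, {w_j}, beta)/(1+l)
    for _ in range(max_iter):
        rho_old = rho.copy()
        ext = np.concatenate(([1.0], rho, [0.0]))     # boundary: rho_0 = 1, rho_{N+1} = 0
        for j in range(1, N + 1):
            inflow = rates[j - 1] * ext[j - 1]        # w_{j-1} * rho_{j-1}
            rho[j - 1] = inflow / (omega + inflow + rates[j] * (1.0 - ext[j + 1]))
            ext[j] = rho[j - 1]                       # Gauss-Seidel-style sweep
        if np.max(np.abs(rho - rho_old)) < tol:
            break
    return rho

# illustrative disordered rates in the entry-limited regime
rng = np.random.default_rng(0)
w_bulk = rng.uniform(3.0, 80.0, size=99)              # w_1, ..., w_99, so N = 100
rho_mf = mf_densities_l1(w_bulk, alpha=0.1, beta=30.0, omega=0.5)
```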
### Entry limited (low density) expansion
Of special interest is the entry limited regime (or Low Density phase) in the parameter space due to its relevance to the biological process of ribosome translation of mRNA []. From eqn. (6), and expanding the densities in the small parameter \(\epsilon=\alpha/(\Omega+w_{1})\), we obtain the first order contribution in the small parameter, \(\epsilon\),
\[\rho_{j} = \frac{\alpha}{w_{j}}\prod_{n=1}^{j}\left(\frac{w_{n}}{\Omega+w_{n}}\right)\ +\ \mathcal{O}\left(\epsilon^{2}\right)\, \tag{10}\]
for \(1\leqslant j\leqslant N-1\) (see the Appendix for details). Interestingly, this first-order contribution captures the essential physics of the entry-limited regime, showing excellent quantitative agreement with the full Monte Carlo simulations (see Figs. 2 and 4) over a broad range of parameter values. An expansion to second order in \(\epsilon\) can be found in the Appendix. For brevity, we keep here only the first-order contribution.
#### ii.1.1 Reduced throughput
To quantify the effects of both the chain decay and the jumping-rate disorder, we define a dimensionless throughput ratio (TR) as \(\text{TR}=J_{\text{out}}/J_{\text{in}}\), where \(J_{\text{in}}\) and \(J_{\text{out}}\) are the entry and exit fluxes of particles, respectively. In the low density limit \((\alpha/w_{min}\ll 1)\),
\[\text{TR}=\prod_{n=1}^{N}\left(\frac{w_{n}}{\Omega+w_{n}}\right). \tag{11}\]
Notably, for stable chains with infinite lifetime (\(\Omega=0\)), \(\text{TR}=1\), while chains with finite lifetime (\(\Omega>0\)) have reduced throughput, \(0<\text{TR}<1\). For uniform hopping rates, \(w_{n}=\bar{w}\), \(\text{TR}=\exp\left(-N\log\left(1+\Omega/\bar{w}\right)\right)\); the particle current \(J\) thus decays exponentially with the chain length \(N\). For long chains, the throughput is exponentially sensitive to the ratio \(\Omega/w_{min}\), and, less obviously, to the disorder score \(\Delta=1-w_{min}/w_{max}\) (see Table 1).
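The first-order profile (10) and the throughput ratio (11) are straightforward to evaluate; the short sketch below (ours, purely illustrative) also checks the uniform-rate expression quoted above.

```python
import numpy as np

def first_order_profile(w, alpha, omega):
    """Eq. (10): rho_j = (alpha / w_j) * prod_{n=1}^{j} w_n / (Omega + w_n)."""
    return (alpha / w) * np.cumprod(w / (omega + w))

def throughput_ratio(w, omega):
    """Eq. (11): TR = prod_n w_n / (Omega + w_n)."""
    return np.prod(w / (omega + w))

w, omega = np.full(200, 10.0), 0.5                 # uniform rates w_n = 10, N = 200
tr = throughput_ratio(w, omega)
assert np.isclose(tr, np.exp(-len(w) * np.log(1.0 + omega / 10.0)))
rho_lo = first_order_profile(w, alpha=0.1, omega=omega)
```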
#### ii.1.2 Approximation error distribution
The first-order approximation reproduces the exact densities well over a wide range of the parameter space in the low density phase. We define the relative error
\[\text{Approximation-Error}=\left\{\frac{\rho_{j}^{MC}-\rho_{j}^{approx}}{\rho_ {j}^{MC}}\right\}. \tag{12}\]
We plot a sample distribution of the relative errors, in percent, in Fig. 5.
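For reference, the Monte-Carlo densities \(\rho_{j}^{MC}\) can be generated with a standard continuous-time (Gillespie) simulation. The sketch below is ours and covers only \(\ell=1\) (the figures in the paper use \(\ell=9\)); it samples the age average by resetting the lattice to the empty configuration at rate \(\Omega\) and time-averaging the occupations, which reproduces the exponential age weighting of eq. (2).

```python
import numpy as np

def tasep_mc_density(w, alpha, beta, omega, t_max=5.0e3, seed=1):
    """Gillespie estimate of the age-averaged densities for the l = 1 TASEP
    with chain degradation (implemented as Poissonian resetting at rate omega)."""
    rng = np.random.default_rng(seed)
    N = len(w) + 1                                  # sites 1..N; w = (w_1, ..., w_{N-1})
    sigma = np.zeros(N, dtype=bool)
    occ_time = np.zeros(N)
    t = 0.0
    while t < t_max:
        moves = [(omega, ("reset",))]               # chain decays, replaced by an empty chain
        if not sigma[0]:
            moves.append((alpha, ("in",)))
        for j in range(N - 1):
            if sigma[j] and not sigma[j + 1]:
                moves.append((w[j], ("hop", j)))
        if sigma[-1]:
            moves.append((beta, ("out",)))
        rates = np.array([m[0] for m in moves])
        dt = rng.exponential(1.0 / rates.sum())
        occ_time += sigma * dt                      # accumulate occupation before the move
        t += dt
        action = moves[rng.choice(len(moves), p=rates / rates.sum())][1]
        if action[0] == "in":
            sigma[0] = True
        elif action[0] == "hop":
            sigma[action[1]], sigma[action[1] + 1] = False, True
        elif action[0] == "out":
            sigma[-1] = False
        else:
            sigma[:] = False
    return occ_time / t

rng0 = np.random.default_rng(0)
rho_mc = tasep_mc_density(rng0.uniform(3.0, 80.0, size=99), alpha=0.1, beta=30.0, omega=0.5)
```

Longer runs (larger `t_max`) or averaging over several seeds reduce the statistical noise in \(\rho_{j}^{MC}\).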
## IV Discussion and conclusions
We have studied the totally asymmetric exclusion process with quenched jump-rate disorder and a finite lifetime of the chain. We outlined an alternative approach that efficiently computes chain-age-averaged densities from algebraic equations rather than from the comparatively expensive differential equations. In addition, we provided a straightforward framework to progressively include correlations by solving computationally inexpensive few-site TASEP systems. In the biologically relevant Low Density phase, we found that the leading-order asymptotic expansion in the entry rate is accurate over a wide range of system parameters. Plots demonstrating the agreement with Monte-Carlo simulation results are shown in Figs. 2 and 4.
The success of the entry-limited approximation is complemented by the fact that a finite degradation rate of the chain favors a low-density profile downstream (see Fig. 1). Therefore, profiles like that of Fig. 3 will not appear for a finite degradation rate \(\Omega\neq 0\) and a sufficiently long chain.
Even though the TASEP is an analytically complex problem with rich physics, simple analytic expressions that are accurate over a wide range of parameter space can be found. In addition, we outlined an alternative path to including longer-range correlations by exploiting the exact solution of few-site TASEP systems. This approach could find applications in highly correlated systems.
## V Appendix
### Entry limited dynamics: Perturbative expansion in \(\epsilon\)
We re-write eqn. (6) from the main text in the dimensionless form
\[\rho_{j}=\lambda_{j-1}\rho_{j-1}\left(1-\rho_{j}\right)+\rho_{j}\rho_{j+1}- \chi_{j}\rho_{j}\rho_{j+1}\, \tag{13}\]
where we define dimensionless parameters
\[\lambda_{j}=\frac{w_{j}}{\Omega+w_{j+1}}\quad\text{and}\quad\chi_{j}=\frac{ \Omega}{\Omega+w_{j}}. \tag{14}\]
Note that the \(\chi_{j}\)'s are bounded (\(0\leqslant\chi_{j}\leqslant 1,\forall\,j\)) since \(w_{j}>0\ \forall\,j\).
There are two interesting limiting cases: the stable, infinite-lifetime chain, \(\Omega\to 0\), where \(\lambda_{j}\to w_{j}/w_{j+1}\) and \(\chi_{j}\to 0\); and the short-lived chain limit, \(\Omega\to\infty\), where \(\lambda_{j}\to 0\), \(\chi_{j}\to 1\), and the densities vanish, \(\rho_{j}\to 0\ \forall j\).
In this entry limited regime, \(\lambda_{0}\) is the small dimensionless parameter of interest (since \(\lambda_{0}\ll\lambda_{j}\) for any \(\Omega>0\)). Hence, we let \(\epsilon\equiv\lambda_{0}=\alpha/(\Omega+w_{1})\) and then expand the densities in powers of the small parameter \(\epsilon\):
\[\rho_{j}=\epsilon\,\rho_{j}^{(1)}+\epsilon^{2}\,\rho_{j}^{(2)}+\cdots \tag{15}\]
_First order contribution (\(\epsilon^{1}\)):_ The first order contribution to the densities are \(\rho_{1}^{(1)}=1\) and
\[\rho_{j}^{(1)}\ =\ \prod_{n=1}^{j-1}\left(\frac{w_{n}}{\Omega+w_{n+1}}\right)\, \quad\text{for}\quad 2\leqslant j\leqslant N. \tag{16}\]
Therefore, to linear order in the small parameter \(\epsilon\), the densities are \(\rho_{j}\approx\epsilon\rho_{j}^{(1)}\) and
\[\rho_{j}\ \approx\ \prod_{n=0}^{j-1}\left(\frac{w_{n}}{\Omega+w_{n+1}} \right)\, \tag{17}\]
for \(1\leqslant j\leqslant N\), where we recall that \(\epsilon=\alpha/(\Omega+w_{1})\) and \(w_{0}\equiv\alpha\). Therefore, the corresponding particles entry flux is \(J_{\text{in}}=\alpha\left(\Omega+w_{1}-\alpha\right)/(\Omega+w_{1})\).
_Second order contribution (\(\epsilon^{2}\)):_ The second order contributions accounts for the exclusion effects (and the extended size effects (see the Supporting Information)) where now the densities are
\[\rho_{j}^{(2)}=\rho_{j}^{(1)}\left(\rho_{j+1}^{(1)}-\sum_{m=1}^{j}\chi_{m}\, \rho_{m+1}^{(1)}-\rho_{1}^{(1)}\right)\, \tag{18}\]
for \(1\leqslant j\leqslant N-1\), while for \(j=N\),
\[\rho_{N}^{(2)}=\rho_{N}^{(1)}\left(\rho_{N}^{(1)}-\sum_{m=1}^{N-1}\chi_{m}\, \rho_{m+1}^{(1)}-\rho_{1}^{(1)}\right). \tag{19}\]
### Few sites exact TASEP solutions
We present the exact analytic solutions of the \(N=2\) and \(N=3\) TASEP systems (\(\ell=1\)). For brevity we introduce the shorthand
\[\tau_{\alpha}=\frac{1}{\alpha}\quad,\quad\tau_{j}=\frac{1}{w_{j}}\quad\text{ and}\quad\tau_{\beta}=\frac{1}{\beta}\]
It should be emphasized here that when using these expressions to approximate the correlations, \(\alpha\) and \(\beta\) should be interpreted as
\[\alpha\equiv w_{j-1}\rho_{j-1}\quad\text{and}\quad\beta\equiv w_{j+S-1}(1-\rho _{j+S}) \tag{20}\]
where \(S\) is the chosen small system size.
#### ii.2.1 Two (2) sites
For two sites, we have the distribution
\[\begin{bmatrix}\text{P}_{00}\\ \text{P}_{01}\\ \text{P}_{10}\\ \text{P}_{11}\end{bmatrix}=\frac{1}{\mathcal{Z}_{2}}\begin{bmatrix}\tau_{\alpha}^{2}\\ \tau_{\alpha}\tau_{\beta}\\ \tau_{1}\left(\tau_{\alpha}+\tau_{\beta}\right)\\ \tau_{\beta}^{2}\end{bmatrix}\, \tag{21}\]
where \(\mathcal{Z}_{2}=\tau_{\alpha}^{2}+\tau_{\alpha}\tau_{\beta}+\tau_{1}\left(\tau_{\alpha}+\tau_{\beta}\right)+\tau_{\beta}^{2}\) and the current reads
\[J^{(2\text{-sites})}\equiv\alpha\text{P}_{00}+\alpha\text{P}_{01}=\frac{1}{\mathcal{Z}_{2}}\left(\tau_{\alpha}+\tau_{\beta}\right). \tag{22}\]
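As an independent check of (21)-(22) (ours, not part of the paper; note that, consistently with the expressions above, the degradation rate does not enter these few-site building blocks), one can solve the four-state master equation numerically:

```python
import numpy as np

alpha, w1, beta = 0.7, 2.0, 1.3
ta, t1, tb = 1 / alpha, 1 / w1, 1 / beta

# generator on the states (00, 01, 10, 11), column convention dP/dt = Q P
Q = np.zeros((4, 4))
def add_rate(src, dst, rate):
    Q[dst, src] += rate
    Q[src, src] -= rate

add_rate(0, 2, alpha)   # 00 -> 10 : entry
add_rate(1, 3, alpha)   # 01 -> 11 : entry
add_rate(1, 0, beta)    # 01 -> 00 : exit
add_rate(2, 1, w1)      # 10 -> 01 : internal hop
add_rate(3, 2, beta)    # 11 -> 10 : exit

vals, vecs = np.linalg.eig(Q)                    # stationary state = kernel of Q
P = np.real(vecs[:, np.argmin(np.abs(vals))])
P /= P.sum()

Z2 = ta**2 + ta * tb + t1 * (ta + tb) + tb**2
P_exact = np.array([ta**2, ta * tb, t1 * (ta + tb), tb**2]) / Z2
assert np.allclose(P, P_exact)                              # eq. (21)
assert np.isclose(alpha * (P[0] + P[1]), (ta + tb) / Z2)    # current, eq. (22)
```

The same construction on the eight three-site configurations can be used to check (23)-(25).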
#### ii.3.2 Three (3) sites
The \(N=3\) case is a bit more involved but still tractable.
\[\begin{bmatrix}\text{P}_{000}\\ \text{P}_{001}\\ \text{P}_{010}\\ \text{P}_{011}\\ \text{P}_{100}\\ \text{P}_{101}\\ \text{P}_{110}\\ \text{P}_{111}\end{bmatrix}=\frac{1}{\mathcal{Z}_{3}}\begin{bmatrix}\tau_{\alpha}^{3}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)\\ \tau_{\alpha}^{2}\tau_{\beta}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)\\ \tau_{\alpha}\tau_{2}\Big(\tau_{\alpha}+\tau_{\beta}\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)\\ \tau_{\alpha}\tau_{\beta}^{2}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big)\\ \tau_{1}^{2}\Big(\tau_{\alpha}+\tau_{\beta}\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big)+\tau_{\alpha}^{2}\tau_{1}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)\\ \tau_{\beta}\tau_{1}\Big(\tau_{\alpha}+\tau_{\beta}\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big)\\ \tau_{2}^{2}\Big(\tau_{\alpha}+\tau_{\beta}\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)+\tau_{\beta}^{2}\tau_{2}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big)\\ \tau_{\beta}^{3}\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big)\end{bmatrix} \tag{23}\]
where the partition function
\[\mathcal{Z}_{3}=\Big(\tau_{\alpha}^{2}\left[\tau_{\alpha}+\tau_{\beta}+\tau_{1}\right]+\tau_{2}\left(\tau_{\alpha}+\tau_{\beta}\right)\left(\tau_{\alpha}+\tau_{2}\right)\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big)\\ +\Big(\tau_{\beta}^{2}\left[\tau_{\alpha}+\tau_{\beta}+\tau_{2}\right]+\tau_{1}\left(\tau_{\alpha}+\tau_{\beta}\right)\left(\tau_{\beta}+\tau_{1}\right)\Big)\Big(\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta}\tau_{2}\Big). \tag{24}\]
The current then reads
\[J^{\text{(3-sites)}}=\frac{1}{\mathcal{Z}_{3}}\left(\tau_{\alpha}+\tau_{\beta} \right)\left(\tau_{\alpha}+\tau_{2}\right)\Big{(}\tau_{\alpha}\tau_{\beta}+\tau _{\alpha}\tau_{1}+\tau_{\beta}\tau_{1}\Big{)}+\frac{1}{\mathcal{Z}_{3}}\tau_{ \beta}^{2}\Big{(}\tau_{\alpha}\tau_{\beta}+\tau_{\alpha}\tau_{2}+\tau_{\beta} \tau_{2}\Big{)}. \tag{25}\]
|
2308.16701 | A unified approach to exotic cluster structures on simple Lie groups | We propose a new approach to building log-canonical coordinate charts for any
simply-connected simple Lie group $\G$ and arbitrary Poisson-homogeneous
bracket on $\G$ associated with Belavin--Drinfeld data. Given a pair of
representatives $r, r'$ from two arbitrary Belavin--Drinfeld classes, we build
a rational map from $\G$ with the Poisson structure defined by two
appropriately selected representatives from the standard class to $\G$ equipped
with the Poisson structure defined by the pair $r, r'$. In the $A_n$ case, we
prove that this map is invertible whenever the pair $r, r'$ is drawn from
aperiodic Belavin--Drinfeld data, as defined in~\cite{GSVple}. We further apply
this construction to recover the existence of a regular complete cluster
structure compatible with the Poisson structure associated with the pair $r,
r'$ in the aperiodic case. | Misha Gekhtman, Michael Shapiro, Alek Vainshtein | 2023-08-31T13:07:43Z | http://arxiv.org/abs/2308.16701v1 | # A unified approach to exotic cluster structures on simple Lie groups
###### Abstract.
We propose a new approach to building log-canonical coordinate charts for any simply-connected simple Lie group \(\mathcal{G}\) and arbitrary Poisson-homogeneous bracket on \(\mathcal{G}\) associated with Belavin-Drinfeld data. Given a pair of representatives \(r,r^{\prime}\) from two arbitrary Belavin-Drinfeld classes, we build a rational map from \(\mathcal{G}\) with the Poisson structure defined by two appropriately selected representatives from the standard class to \(\mathcal{G}\) equipped with the Poisson structure defined by the pair \(r,r^{\prime}\). In the \(A_{n}\) case, we prove that this map is invertible whenever the pair \(r,r^{\prime}\) is drawn from aperiodic Belavin-Drinfeld data, as defined in [13]. We further apply this construction to recover the existence of a regular complete cluster structure compatible with the Poisson structure associated with the pair \(r,r^{\prime}\) in the aperiodic case.
Key words and phrases:Poisson-Lie group, cluster algebra, Belavin-Drinfeld triple 2010 Mathematics Subject Classification: 53D17,13F60
## 1. Introduction
Shortly after cluster algebras were discovered by Fomin and Zelevinsky, important ties emerged between the new theory and Poisson geometry. As was first observed in [9] and then expounded upon in [10], cluster algebras carry natural Poisson structures compatible with cluster transformations. This, in turn, helps in uncovering cluster structures in rings of regular functions on Poisson varieties of interest in Lie theory. In particular, the cluster structure constructed in [2] for (double Bruhat cells of) a simply-connected simple Lie group \(\mathcal{G}\) was shown in [10, Ch. 4.3] to be compatible with the standard Poisson-Lie structure on \(\mathcal{G}\). This led to a question, posed in [11], of existence of what we called _exotic_ cluster structures on \(\mathcal{G}\), i.e. cluster structures non-isomorphic to the standard one and compatible with other Poisson-Lie brackets. Although the answer to this question is negative in general--an example to that effect was constructed in [11] in the case of \(SL_{2}\)--we conjectured that the answer is affirmative in the case of Poisson--Lie structures corresponding to quasi-triangular solutions of the classical Yang--Baxter equation classified by Belavin and Drinfeld in [1]. Up to an automorphism, each such solution, called an \(r\)-matrix, is parametrized by discrete data consisting of an isometry between two subsets of positive roots in the root system of the Lie algebra of \(\mathcal{G}\) and a continuous parameter that can be described as an element of the tensor square of the Cartan subalgebra that satisfies a system of linear equations governed by the discrete data. The discrete data determines a _Belavin-Drinfeld class_ of \(r\)-matrices and corresponding Poisson-Lie brackets, and continuous data specifies a particular \(r\)-matrix and bracket within this class. Given two such brackets on \(\mathcal{G}\) associated with representatives of two Belavin-Drinfeld classes, one can define a Poisson-Lie group \(\mathcal{G}\times\mathcal{G}\) equipped with the direct product Poisson structure and then construct
for the standard cluster structure in terms of elements of \((\mathcal{G},\{\cdot,\cdot\}_{r,r^{\prime}})\) and whose coefficients serve as coefficients for _generalized exchange relations_ in a compatible generalized cluster structure on \((\mathcal{G},\{\cdot,\cdot\}_{r,r^{\prime}})\). We do not discuss these results in the current paper and reserve them for future publications.
The paper is organized as follows. Section 2 contains a brief overview of the necessary background, including Berenstein-Fomin-Zelevinsky factorization parameters in simple Lie groups and Poisson-Lie groups and Poisson-homogeneous structures on simple Lie groups arising from the Belavin-Drinfeld classification of quasi-triangular \(r\)-matrices. The main result of the paper--construction of the rational Poisson map described above--is presented in Section 3 (Theorem 3.1). Section 4 deals with the case \(\mathcal{G}=SL_{n}\). Here, we show that in the case of _aperiodic_ Belavin-Drinfeld data (the notion we introduced in [13]), the Poisson map of Section 3 has a rational inverse. Explicit formulas for the inverse are obtained in terms of minors forming an initial cluster for the _standard_ cluster structure on \(\mathcal{G}=SL_{n}\) (Theorem 4.4). These formulas allow us to construct a regular complete cluster structure compatible with the Poisson structure associated with a pair of representatives \(r,r^{\prime}\) from two arbitrary Belavin-Drinfeld classes satisfying the aperiodicity condition (Theorems 4.11, 4.14, and 4.17).
Our research was supported in part by the NSF research grants DMS #1702054 and DMS #2100785 and by the 2022, 2023 Mercator Research Fellowship, Heidelberg University (M. G.), NSF research grants DMS #1702115 and DMS #2100791 (M. S.), and ISF grant #876/20 (A. V.).
The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme "Cluster algebras and representation theory" (Fall 2021) where the work on this paper was conceived. This programme was supported by EPSRC grant no EP/R014604/1. In addition, M.G. thanks the members of the CCBC for their support during the stay in Cambridge. While working further on this project, we benefited from support from several institutions and programs: Mathematical Institute of the University of Heidelberg (M. S., A. V., Summer 2022), University of Haifa (M. G., M. S., Summer 2022), Michigan State University (A. V., Fall 2022), University of Notre Dame (A. V., Spring 2023), Max Planck Institute for Mathematics, Bonn (A. V., Spring 2023), and Research in Pairs Program at the Mathematisches Forschungsinstitut Oberwolfach (M. G., M. S., A. V., Summer 2023), where the project was completed. We are grateful to all these institutions for their hospitality and outstanding working conditions they provided. Special thanks are due to Vladimir Hinich and Anna Melnikov for valuable discussions.
## 2. Preliminaries
### Factorizations in Lie groups
Let \(\mathcal{G}\) be a semisimple complex Lie group of rank \(r\), \(\mathfrak{g}\) be its Lie algebra with the Cartan decomposition \(\mathfrak{g}=\mathfrak{n}_{+}\oplus\mathfrak{h}\oplus\mathfrak{n}_{-}\), \(e_{i},h_{i},f_{i}\), \(i\in[1,r]\) be the standard generators of \(\mathfrak{g}\). We denote by \(\mathfrak{b}_{+}=\mathfrak{n}_{+}\oplus\mathfrak{h}\) the Borel subalgebra of \(\mathfrak{g}\) and by \(\mathfrak{b}_{-}=\mathfrak{n}_{-}\oplus\mathfrak{h}\) the opposite Borel subalgebra. The corresponding subgroups in \(\mathcal{G}\) are denoted \(\mathcal{N}_{+}\), \(\mathcal{H}\), \(\mathcal{N}_{-}\), \(\mathcal{B}_{+}\), and \(\mathcal{B}_{-}\).
Let \(\mathcal{W}\) be the Weyl group of \(\mathcal{G}\); it is generated by simple reflections \(s_{1},\ldots,s_{r}\). A reduced word for \(w\in\mathcal{W}\) is a sequence of indices \(\mathbf{i}=(i_{1},\ldots,i_{m})\) of the shortest possible length such that \(w=s_{i_{1}}\cdots s_{i_{m}}\). Following [3], for any reduced word \(\mathbf{i}\) for the longest element \(w_{0}\in\mathcal{W}\) we can write a generic element \(N\in\mathcal{N}_{+}\) in a unique
way as a product \(N=x_{i_{1}}(t_{1})\cdots x_{i_{m}}(t_{m})\) where \(t_{i}\) are nonzero complex numbers and \(x_{i}(t)=\exp(te_{i})\). A similar factorization with \(x_{i}(t)\) replaced by \(\exp(tf_{i})\) holds for a generic element in \(\mathcal{N}_{-}\).
Let \(J\subset[1,r]\), \(\mathcal{W}_{J}\) be the subgroup of \(\mathcal{W}\) generated by reflections \(s_{j}\), \(j\in J\), \(\mathcal{W}^{J}=\{w\in\mathcal{W}\colon l(ws_{j})>l(w)\text{ for any }j\in J\}\) be the quotient. By [4, Prop. 2.4.4], every \(w\in\mathcal{W}\) has a unique factorization \(w=w^{J}\cdot w_{J}\) such that \(w^{J}\in\mathcal{W}^{J}\), \(w_{J}\in\mathcal{W}_{J}\), and \(l(w)=l(w^{J})+l(w_{J})\). We apply this result to \(w_{0}\) and rewrite the reduced word \(\mathbf{i}\) as the concatenation of the reduced words \(\mathbf{i}^{J}\) and \(\mathbf{i}_{J}\). Consequently, this yields a factorization of an arbitrary element \(N\in\mathcal{N}+\) as \(N=N^{\prime}N^{\prime\prime}\) where \(N^{\prime\prime}\) belongs to the unipotent subgroup \(\mathcal{N}^{J}_{+}\) that corresponds to \(J\). Similarly, an element \(\tilde{N}\in\mathcal{N}_{-}\) can be factored as \(\tilde{N}=\tilde{N}^{\prime\prime}\tilde{N}^{\prime}\) with \(\tilde{N}^{\prime\prime}\in\mathcal{N}^{J}_{-}\).
### Poisson-Lie groups
A reductive complex Lie group \(\mathcal{G}\) equipped with a Poisson bracket \(\{\cdot,\cdot\}\) is called a _Poisson-Lie group_ if the multiplication map \(\mathcal{G}\times\mathcal{G}\ni(X,Y)\mapsto XY\in\mathcal{G}\) is Poisson. Perhaps, the most important class of Poisson-Lie groups is the one associated with quasitriangular Lie bialgebras defined in terms of _classical R-matrices_ (see, e. g., [5, Ch. 1], [14] and [15] for a detailed exposition of these structures).
Let \(\mathfrak{g}\) be the Lie algebra corresponding to \(\mathcal{G}\), \(\langle\cdot,\cdot\rangle\) be an invariant nondegenerate form on \(\mathfrak{g}\), and let \(\mathfrak{t}\in\mathfrak{g}\otimes\mathfrak{g}\) be the corresponding Casimir element. For an arbitrary element \(r=\sum_{i}a_{i}\otimes b_{i}\in\mathfrak{g}\otimes\mathfrak{g}\) denote
\[[[r,r]]=\sum_{i,j}[a_{i},a_{j}]\otimes b_{i}\otimes b_{j}+\sum_{i,j}a_{i} \otimes[b_{i},a_{j}]\otimes b_{j}+\sum_{i,j}a_{i}\otimes a_{j}\otimes[b_{i},b _{j}]\]
and \(r^{21}=\sum_{i}b_{i}\otimes a_{i}\). A _classical R-matrix_ is an element \(r\in\mathfrak{g}\otimes\mathfrak{g}\) that satisfies _the classical Yang-Baxter equation (CYBE)_\([[r,r]]=0\) together with the condition \(r+r^{21}=\mathfrak{t}\). The Poisson-Lie bracket on \(\mathcal{G}\) that corresponds to \(r\) can be written as
\[\begin{split}\{f_{1},f_{2}\}_{r}&=\langle R_{+}( \nabla^{L}f_{1}),\nabla^{L}f_{2}\rangle-\langle R_{+}(\nabla^{R}f_{1}),\nabla^ {R}f_{2}\rangle\\ &=\langle R_{-}(\nabla^{L}f_{1}),\nabla^{L}f_{2}\rangle-\langle R _{-}(\nabla^{R}f_{1}),\nabla^{R}f_{2}\rangle,\end{split} \tag{2.1}\]
where \(R_{+},R_{-}\in\operatorname{End}\mathfrak{g}\) are given by \(\langle R_{+}\eta,\zeta\rangle=\langle r,\eta\otimes\zeta\rangle\), \(-\langle R_{-}\zeta,\eta\rangle=\langle r,\eta\otimes\zeta\rangle\) for any \(\eta,\zeta\in\mathfrak{g}\) and \(\nabla^{L}\), \(\nabla^{R}\) are the right and the left gradients of functions on \(\mathcal{G}\) with respect to \(\langle\cdot,\cdot\rangle\) defined by
\[\left\langle\nabla^{R}f(X),\xi\right\rangle=\left.\frac{d}{dt}\right|_{t=0}f(e ^{t\xi}X),\quad\left\langle\nabla^{L}f(X),\xi\right\rangle=\left.\frac{d}{dt} \right|_{t=0}f(Xe^{t\xi})\]
for any \(\xi\in\mathfrak{g}\), \(X\in\mathcal{G}\).
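As an elementary illustration of these definitions (ours, not part of the paper), the following sketch verifies numerically, in the defining representation of \(\mathfrak{sl}_{2}\) and with the trace form, that the standard R-matrix \(r=e\otimes f+\tfrac{1}{4}\,h\otimes h\) satisfies \(r+r^{21}=\mathfrak{t}\) and the CYBE; tensor products are represented by Kronecker products, which is faithful on \(\mathfrak{g}^{\otimes 3}\).

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])       # Chevalley generators of sl(2)
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def comm(x, y):
    return x @ y - y @ x

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

# r = sum_i a_i (x) b_i with pairs (a_i, b_i) = (e, f), (h/2, h/2)
pairs = [(e, f), (0.5 * h, 0.5 * h)]
r = sum(np.kron(a, b) for a, b in pairs)
r21 = sum(np.kron(b, a) for a, b in pairs)

# Casimir element of the trace form: t = e (x) f + f (x) e + (1/2) h (x) h
t = np.kron(e, f) + np.kron(f, e) + 0.5 * np.kron(h, h)
assert np.allclose(r + r21, t)

# classical Yang-Baxter equation [[r, r]] = 0
cybe = sum(kron3(comm(a1, a2), b1, b2)
           + kron3(a1, comm(b1, a2), b2)
           + kron3(a1, a2, comm(b1, b2))
           for a1, b1 in pairs for a2, b2 in pairs)
assert np.allclose(cybe, 0)
```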
The classification of classical R-matrices for simple complex Lie groups was given by Belavin and Drinfeld in [1]. Let \(\mathcal{G}\) be a simple complex Lie group, \(\Phi\) be the root system associated with its Lie algebra \(\mathfrak{g}\), \(\Phi^{+}\) be the set of positive roots, and \(\Pi\subset\Phi^{+}\) be the set of positive simple roots. A _Belavin-Drinfeld triple_\(\mathbf{\Gamma}=(\Gamma_{1},\Gamma_{2},\gamma)\) (in what follows, a _BD triple_) consists of two subsets \(\Gamma_{1},\Gamma_{2}\) of \(\Pi\) and an isometry \(\gamma\colon\Gamma_{1}\to\Gamma_{2}\) nilpotent in the following sense: for every \(\alpha\in\Gamma_{1}\) there exists \(m\in\mathbb{N}\) such that \(\gamma^{j}(\alpha)\in\Gamma_{1}\) for \(j\in[0,m-1]\), but \(\gamma^{m}(\alpha)\notin\Gamma_{1}\).
The isometry \(\gamma\) yields an isomorphism, also denoted by \(\gamma\), between the Lie subalgebras \(\mathfrak{g}^{\Gamma_{1}}\) and \(\mathfrak{g}^{\Gamma_{2}}\) that correspond to \(\Gamma_{1}\) and \(\Gamma_{2}\). It is uniquely defined by the property \(\gamma e_{\alpha}=e_{\gamma(\alpha)}\) for \(\alpha\in\Gamma_{1}\), where \(e_{\alpha}\) is the Chevalley generator corresponding to the root \(\alpha\). The isomorphism \(\gamma^{*}\colon\mathfrak{g}^{\Gamma_{2}}\to\mathfrak{g}^{\Gamma_{1}}\) is defined as the adjoint to \(\gamma\) with respect to the form \(\langle\cdot,\cdot\rangle\). It is given by \(\gamma^{*}e_{\gamma(\alpha)}=e_{\alpha}\) for \(\gamma(\alpha)\in\Gamma_{2}\). Both \(\gamma\) and
\(\gamma^{*}\) can be extended to maps of \(\mathfrak{g}\) to itself by applying first the orthogonal projection on \(\mathfrak{g}^{\Gamma_{1}}\) (respectively, on \(\mathfrak{g}^{\Gamma_{2}}\)) with respect to \(\langle\cdot,\cdot\rangle\); clearly, the extended maps remain adjoint to each other. Note that the restrictions of \(\gamma\) and \(\gamma^{*}\) to the positive and the negative nilpotent subalgebras \(\mathfrak{n}_{+}\) and \(\mathfrak{n}_{-}\) of \(\mathfrak{g}\) are Lie algebra homomorphisms of \(\mathfrak{n}_{+}\) and \(\mathfrak{n}_{-}\) to themselves, and \(\gamma(e_{\pm\alpha})=0\) for all \(\alpha\in\Pi\setminus\Gamma_{1}\). Further, if \(\mathcal{G}\) is simply connected, \(\gamma\) can be lifted to \(\boldsymbol{\gamma}=\exp\gamma\); note that \(\boldsymbol{\gamma}\) is defined only on \(\mathcal{N}_{+}\) and \(\mathcal{N}_{-}\) and is a group homomorphism.
By the classification theorem, each classical R-matrix is equivalent to an R-matrix from a _Belavin-Drinfeld class_ defined by a BD triple \(\boldsymbol{\Gamma}\). The operator \(R_{+}^{\boldsymbol{\Gamma}}\) corresponding to a member of this class is given by
\[R_{+}^{\boldsymbol{\Gamma}}=R_{0}^{\boldsymbol{\Gamma}}+\frac{1}{1-\gamma}\pi_{>}-\frac{\gamma^{*}}{1-\gamma^{*}}\pi_{<},\]
where \(\pi_{>}\), \(\pi_{<}\) are projections of \(\mathfrak{g}\) onto \(\mathfrak{n}_{+}\) and \(\mathfrak{n}_{-}\), and \(R_{0}^{\boldsymbol{\Gamma}}\) acts on \(\mathfrak{h}\) (see [13] for more details).
In what follows we will use a Poisson bracket on \(\mathcal{G}\) that is a generalization of the bracket (2.1). Let \(r,r^{\prime}\) be two classical R-matrices, and \(R_{+},R_{+}^{\prime}\) be the corresponding operators, then we write
\[\{f_{1},f_{2}\}_{r,r^{\prime}}=\langle R_{+}^{\prime}(\nabla^{L}f_{1}),\nabla ^{L}f_{2}\rangle-\langle R_{+}(\nabla^{R}f_{1}),\nabla^{R}f_{2}\rangle. \tag{2.2}\]
By [14, Proposition 12.11], the above expression defines a Poisson bracket, which is not Poisson-Lie unless \(r=r^{\prime}\), in which case \(\{f_{1},f_{2}\}_{r,r}\) evidently coincides with \(\{f_{1},f_{2}\}_{r}\). The bracket (2.2) defines a Poisson homogeneous structure on \(\mathcal{G}\) with respect to the left and right multiplication by Poisson-Lie groups \((\mathcal{G},\{\cdot,\cdot\}_{r^{\prime}})\) and \((\mathcal{G},\{\cdot,\cdot\}_{r})\), respectively.
## 3. Poisson map
### The main construction
We will write \(\mathcal{G}_{r,r^{\prime}}\) for the Poisson manifold (\(\mathcal{G}\), \(\{\cdot,\cdot\}_{r,r^{\prime}}\)). Fix a pair of R-matrices \(r^{\Gamma^{\mathrm{r}}}\), \(r^{\Gamma^{\mathrm{c}}}\) from the BD classes defined by \(\boldsymbol{\Gamma}^{\mathrm{r}}\) and \(\boldsymbol{\Gamma}^{\mathrm{c}}\), respectively. Additionally, fix two R-matrices \(r^{\varnothing}_{\Gamma^{\mathrm{r}}}\), \(r^{\varnothing}_{\Gamma^{\mathrm{c}}}\) from the standard BD class (corresponding to the empty triple) so that \(R_{0}\) for \(r^{\Gamma^{\mathrm{r}}}\) and \(r^{\varnothing}_{\Gamma^{\mathrm{r}}}\) coincide, and \(R_{0}\) for \(r^{\Gamma^{\mathrm{c}}}\) and \(r^{\varnothing}_{\Gamma^{\mathrm{c}}}\) coincide. Our aim is to build a rational Poisson map \(h:\mathcal{G}_{r^{\varnothing}_{\Gamma^{\mathrm{r}}},r^{\varnothing}_{\Gamma^{\mathrm{c}}}}\to\mathcal{G}_{r^{\Gamma^{\mathrm{r}}},r^{\Gamma^{\mathrm{c}}}}\).
Take \(U\in\mathcal{G}\) and consider its Gauss decomposition \(U=U_{-}U_{0}U_{+}\) (so, in fact, \(U\) lies in an open dense subset in \(\mathcal{G}\)). We further factor \(U_{-}=V^{\mathrm{r}}\tilde{U}_{-}\) with \(V^{\mathrm{r}}\in\mathcal{N}_{-}^{\Gamma^{\mathrm{r}}_{1}}\) and \(U_{+}=\tilde{U}_{+}V^{\mathrm{c}}\) with \(V^{\mathrm{c}}\in\mathcal{N}_{+}^{\Gamma^{\mathrm{c}}_{2}}\), as explained in Section 2.1. Next, choose \(W^{\mathrm{r}}\in\mathcal{G}^{\Gamma^{\mathrm{r}}_{1}}\) and \(W^{\mathrm{c}}\in\mathcal{G}^{\Gamma^{\mathrm{c}}_{2}}\) such that \(W^{\mathrm{r}}\mathcal{B}_{-}^{\Gamma^{\mathrm{r}}_{1}}(W^{\mathrm{r}})^{-1}= \mathcal{B}_{+}^{\Gamma^{\mathrm{r}}_{1}}\) and \(W^{\mathrm{c}}\mathcal{B}_{+}^{\Gamma^{\mathrm{c}}_{2}}(W^{\mathrm{c}})^{-1}= \mathcal{B}_{-}^{\Gamma^{\mathrm{c}}_{2}}\). The elements \(W^{\mathrm{r}}\) and \(W^{\mathrm{c}}\) may be chosen, for example, as representatives of the longest elements of the Weyl groups of \(\mathcal{G}^{\Gamma^{\mathrm{r}}_{1}}\) and \(\mathcal{G}^{\Gamma^{\mathrm{c}}_{2}}\), respectively, via the procedure described in [7, Sect. 1.4]. Write
\[V^{\mathrm{r}}W^{\mathrm{r}}=(V^{\mathrm{r}}W^{\mathrm{r}})_{+}(V^{\mathrm{r}}W ^{\mathrm{r}})_{0,-},\qquad W^{\mathrm{c}}V^{\mathrm{c}}=(W^{\mathrm{c}}V^{ \mathrm{c}})_{+,0}(W^{\mathrm{c}}V^{\mathrm{c}})_{-}\]
and set \(\bar{V}^{\mathrm{r}}=(V^{\mathrm{r}}W^{\mathrm{r}})_{+}\in\mathcal{N}_{+}^{ \Gamma^{\mathrm{r}}_{1}}\), \(\bar{V}^{\mathrm{c}}=(W^{\mathrm{c}}V^{\mathrm{c}})_{-}\in\mathcal{N}_{-}^{ \Gamma^{\mathrm{c}}_{2}}\).
Define
\[H^{\mathrm{r}}=H^{\mathrm{r}}(U) =\cdots(\boldsymbol{\gamma}^{\mathrm{r}})^{3}(\bar{V}^{\mathrm{r}}) (\boldsymbol{\gamma}^{\mathrm{r}})^{2}(\bar{V}^{\mathrm{r}})\boldsymbol{\gamma}^ {\mathrm{r}}(\bar{V}^{\mathrm{r}})\in\mathcal{N}_{+}^{\Gamma^{\mathrm{r}}_{2}},\] \[H^{\mathrm{c}}=H^{\mathrm{c}}(U) =\cdots((\boldsymbol{\gamma}^{\mathrm{c}})^{*})^{3}(\bar{V}^{\mathrm{ c}})((\boldsymbol{\gamma}^{\mathrm{c}})^{*})^{2}(\bar{V}^{\mathrm{c}})( \boldsymbol{\gamma}^{\mathrm{c}})^{*}(\bar{V}^{\mathrm{c}})\in\mathcal{N}_{-}^{ \Gamma^{\mathrm{c}}_{1}}\]
(the products above are finite due to the nilpotency of \(\gamma^{\mathrm{r}}\) and \(\gamma^{\mathrm{c}}\)).
**Theorem 3.1**.: _The map \(h:\mathcal{G}_{r^{\varnothing}_{\Gamma^{\mathrm{r}}},r^{\varnothing}_{\Gamma^{ \mathrm{c}}}}\to\mathcal{G}_{r^{\Gamma^{\mathrm{r}}},r^{\Gamma^{\mathrm{c}}}}\) defined by \(h(U)=H^{\mathrm{r}}(U)UH^{\mathrm{c}}(U)\) is a rational Poisson map._
Proof.: Fix an arbitrary \(r^{\varnothing}\) from the standard BD class and define a map \(h^{\mathrm{r}}:\mathcal{G}_{r^{\varnothing}_{\Gamma^{\mathrm{r}}},r^{ \varnothing}}\to\mathcal{G}_{r^{\Gamma^{\mathrm{r}}},r^{\varnothing}}\) via \(h^{\mathrm{r}}(U)=H^{\mathrm{r}}(U)U\). For the same \(r^{\varnothing}\) as above define a map \(h^{\mathrm{c}}:\mathcal{G}_{r^{\varnothing},r^{\varnothing}_{\Gamma^{\mathrm{c} }}}\to\mathcal{G}_{r^{\varnothing},r^{\Gamma^{\mathrm{c}}}}\) via \(h^{\mathrm{c}}(U)=UH^{\mathrm{c}}(U)\).
**Theorem 3.2**.: _The maps \(h^{\mathrm{r}}\) and \(h^{\mathrm{c}}\) are rational Poisson maps._
The proof of Theorem 3.2 is given in the next subsection.
Since the Borel subgroup \(\mathcal{B}_{+}\subset\mathcal{G}\) and the opposite Borel subgroup \(\mathcal{B}_{-}\subset\mathcal{G}\) are Poisson submanifolds, the restrictions of \(h^{\mathrm{c}}\) to \(\mathcal{B}_{+}\) and of \(h^{\mathrm{r}}\) to \(\mathcal{B}_{-}\) are Poisson maps as well; their images \(h^{\mathrm{c}}(\mathcal{B}_{+})\) and \(h^{\mathrm{r}}(\mathcal{B}_{-})\) are called _twisted Borels_.
Note that the following diagram is commutative:
\[\begin{array}{ccc}\mathcal{G}_{r^{\varnothing}_{\Gamma^{\mathrm{r}}},r^{\varnothing}}\times\mathcal{G}_{r^{\varnothing},r^{\varnothing}_{\Gamma^{\mathrm{c}}}}&\xrightarrow{(h^{\mathrm{r}},h^{\mathrm{c}})}&\mathcal{G}_{r^{\Gamma^{\mathrm{r}}},r^{\varnothing}}\times\mathcal{G}_{r^{\varnothing},r^{\Gamma^{\mathrm{c}}}}\\ \downarrow g&&\downarrow g\\ \mathcal{G}_{r^{\varnothing}_{\Gamma^{\mathrm{r}}},r^{\varnothing}_{\Gamma^{\mathrm{c}}}}&\xrightarrow{h}&\mathcal{G}_{r^{\Gamma^{\mathrm{r}}},r^{\Gamma^{\mathrm{c}}}}\end{array} \tag{3.1}\]
where \(g:(U^{-},U^{+})\mapsto U^{-}U^{+}=U\) is the multiplication map. Indeed, since \(h^{\mathrm{r}}(U^{-})=H^{\mathrm{r}}(U^{-})U^{-}\) and \(h^{\mathrm{c}}(U^{+})=U^{+}H^{\mathrm{c}}(U^{+})\), we get
\[g\circ(h^{\mathrm{r}},h^{\mathrm{c}})(U^{-},U^{+})=H^{\mathrm{r}}(U^{-})U^{-} U^{+}H^{\mathrm{c}}(U^{+})=H^{\mathrm{r}}(U^{-})UH^{\mathrm{c}}(U^{+}).\]
Recall that \(H^{\mathrm{r}}(U)\) depends only on the first term of the Gauss decomposition, and \(H^{\mathrm{c}}(U)\) only on its last term, hence \(H^{\mathrm{r}}(U^{-})=H^{\mathrm{r}}(U)\), \(H^{\mathrm{c}}(U^{+})=H^{\mathrm{c}}(U)\), and
\[h\circ g(U^{-},U^{+})=h(U)=H^{\mathrm{r}}(U)UH^{\mathrm{c}}(U). \tag{3.2}\]
**Proposition 3.3**.: _For any three R-matrices \(r\), \(r^{\prime}\), \(r^{\prime\prime}\), the multiplication map \(g:\mathcal{G}_{r^{\prime},r}\times\mathcal{G}_{r,r^{\prime\prime}}\to \mathcal{G}_{r^{\prime},r^{\prime\prime}}\) is Poisson._
Proof.: Let \(\lambda_{X}\) denote the left translation by \(X\) and \(\rho_{Y}\) denote the right translation by \(Y\). We have to check the identity
\[\{\rho_{Y}f^{1},\rho_{Y}f^{2}\}_{r^{\prime},r}(X)+\{\lambda_{X}f^{1},\lambda_{ X}f^{2}\}_{r,r^{\prime\prime}}(Y)=\{f^{1},f^{2}\}_{r^{\prime},r^{\prime\prime}}(Z) \tag{3.3}\]
for \(Z=XY\). Note that \(\nabla_{X}(\rho_{Y}f)(X)=Y\nabla_{Z}f(Z)\) and \(\nabla_{Y}(\lambda_{X}f)(Y)=\nabla_{Z}f(Z)X\). Consequently,
\[\nabla^{R}(\rho_{Y}f)(X) =X\nabla_{X}(\rho_{Y}f)(X)=Z\nabla_{Z}f(Z)=\nabla^{R}f(Z),\] \[\nabla^{L}(\rho_{Y}f)(X) =\nabla_{X}(\rho_{Y}f)(X)X=\operatorname{Ad}_{Y}(\nabla_{Z}f(Z)Z )=\operatorname{Ad}_{Y}\nabla^{L}f(Z),\] \[\nabla^{R}(\lambda_{X}f)(Y) =Y\nabla_{Y}(\lambda_{X}f)(Y)=\operatorname{Ad}_{Y}(\nabla_{Z}f(Z )Z)=\operatorname{Ad}_{Y}\nabla^{L}f(Z),\] \[\nabla^{L}(\lambda_{X}f)(Y) =\nabla_{Y}(\lambda_{X}f)(Y)Y=\nabla_{Z}f(Z)Z=\nabla^{L}f(Z).\]
We thus have
\[\{\rho_{Y}f^{1},\rho_{Y}f^{2}\}_{r^{\prime},r}(X)=\langle R_{+} \nabla^{L}(\rho_{Y}f^{1}),\nabla^{L}(\rho_{Y}f^{2})\rangle-\langle R^{\prime}_ {+}\nabla^{R}(\rho_{Y}f^{1}),\nabla^{R}(\rho_{Y}f^{2})\rangle\\ =\langle R_{+}\operatorname{Ad}_{Y}\nabla^{L}f^{1},\operatorname{ Ad}_{Y}\nabla^{L}f^{2}\rangle-\langle R^{\prime}_{+}\nabla^{R}f^{1},\nabla^{R}f^{2}\rangle\]
and
\[\{\lambda_{X}f^{1},\lambda_{X}f^{2}\}_{r,r^{\prime\prime}}(Y)= \langle R^{\prime\prime}_{+}\nabla^{L}(\lambda_{X}f^{1}),\nabla^{L }(\lambda_{X}f^{2})\rangle-\langle R_{+}\nabla^{R}(\lambda_{X}f^{1}),\nabla^{R }(\lambda_{X}f^{2})\rangle\] \[=\langle R^{\prime\prime}_{+}\nabla^{L}f^{1},\nabla^{L}f^{2} \rangle-\langle R_{+}\operatorname{Ad}_{Y}\nabla^{L}f^{1},\operatorname{Ad}_{Y }\nabla^{L}f^{2}\rangle,\]
which proves (3.3).
A particular case of this claim for \(r=r^{\prime}\) or \(r=r^{\prime\prime}\) is given in Proposition 5.2.18 of [14]. Note: in their notation, our \(\{\cdot,\cdot\}_{r,r^{\prime}}\) is \(\{\cdot,\cdot\}_{r^{\prime},-r}\).
Since \(h^{\mathrm{r}}\) and \(h^{\mathrm{c}}\) are Poisson, and \(g\) is Poisson and surjective, we conclude that \(h\) is Poisson.
### Proof of Theorem 3.2
We only present the proof for \(h^{\mathrm{r}}\), since the proof for \(h^{\mathrm{c}}\) is similar. To make the formulas more readable, in this Section we use the following notation: \(\boldsymbol{\Gamma}=\boldsymbol{\Gamma}^{\mathrm{r}}\), \(\Gamma_{i}=\Gamma_{i}^{\mathrm{r}}\), \(\gamma=\gamma^{\mathrm{r}}\), \(\mathcal{G}_{\pm}=\mathcal{G}_{\pm}^{\mathrm{r}}\), \(V=V^{\mathrm{r}}\), \(W=W^{\mathrm{r}}\), \(\bar{V}=\bar{V}^{\mathrm{r}}\), \(H=H^{\mathrm{r}}\).
Our first goal is to invert \(h^{\mathrm{r}}\). We start by finding \(\bar{V}\) via \(H\). Since \(H\in\mathcal{N}_{+}^{\Gamma_{2}}\) and \(\boldsymbol{\gamma}\) is a homomorphism we have
\[\boldsymbol{\gamma}(H)=\cdots\boldsymbol{\gamma}^{4}(\bar{V})\boldsymbol{ \gamma}^{3}(\bar{V})\boldsymbol{\gamma}^{2}(\bar{V})=H\boldsymbol{\gamma}( \bar{V})^{-1},\]
and so \(\boldsymbol{\gamma}(\bar{V})=\boldsymbol{\gamma}(H^{-1})H\), which gives
\[\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(\bar{V})=\boldsymbol{\gamma}^{*} \boldsymbol{\gamma}(H^{-1})\boldsymbol{\gamma}^{*}(H).\]
Recall that \(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}\) acts on \(\mathcal{N}_{+}\) as the projection to \(\mathcal{N}_{+}^{\Gamma_{1}}\), so \(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(\bar{V})=\bar{V}\) and hence
\[\bar{V}=\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1})\boldsymbol{\gamma} ^{*}(H). \tag{3.4}\]
Next, we find \(V\) via \(H\). Recall that \(\bar{V}=(VW)_{+}\), hence \(\bar{V}(VW)_{0,-}=VW\), and therefore
\[\bar{V}W^{-1}\left(W(VW)_{0,-}W^{-1}\right)=V.\]
Applying the Gauss decomposition once again we get
\[(\bar{V}W^{-1})_{-}(\bar{V}W^{-1})_{0,+}\left(W(VW)_{0,-}W^{-1}\right)=V. \tag{3.5}\]
The last bracket on the left belongs to \(\mathcal{B}_{+}\), while \(V\in\mathcal{N}_{-}\), so the second and the third bracket cancel each other and we get \(V=(\bar{V}W^{-1})_{-}\), which together with (3.4) gives
\[V=(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1})\boldsymbol{\gamma}^{*}( H)W^{-1})_{-}.\]
Consider two parabolic subalgebras of \(\mathfrak{g}\) determined by \(\boldsymbol{\Gamma}\): \(\mathfrak{p}_{+}^{\boldsymbol{\Gamma}}\) contains \(\mathfrak{b}_{+}\) and all the negative root spaces in \(\mathfrak{g}^{\Gamma_{1}}\), while \(\mathfrak{p}_{-}^{\boldsymbol{\Gamma}}\) contains \(\mathfrak{b}_{-}\) and all the positive root spaces in \(\mathfrak{g}^{\Gamma_{2}}\). Denote by \(\mathcal{P}_{\pm}^{\boldsymbol{\Gamma}}\) the corresponding parabolic subgroups of \(\mathcal{G}\), and let \(\mathcal{Z}=\mathcal{P}_{+}^{\boldsymbol{\Gamma}}\cap\mathcal{P}_{-}^{\boldsymbol{\Gamma}}\). Note that the corresponding subalgebra \(\mathfrak{p}_{+}^{\boldsymbol{\Gamma}}\cap\mathfrak{p}_{-}^{\boldsymbol{\Gamma}}\) is a seaweed subalgebra introduced for type A in [6]. There is a commutative diagram
where \(\tilde{\mathcal{G}}^{\sigma_{1},\sigma_{2}}\) is the reduced double Bruhat cell corresponding to \(\sigma_{1}=w_{0}^{\Gamma_{1}}w_{0}\), \(\sigma_{2}=w_{0}^{\Gamma_{2}}w_{0}\) with \(w_{0}\), \(w_{0}^{\Gamma_{1}}\) and \(w_{0}^{\Gamma_{2}}\) being the longest elements of the corresponding Weyl groups, and \(g\) is the product similarly to (3.1). So, to invert \(h^{\mathrm{r}}\) on \(\mathcal{G}\) it is enough to invert it on \(\mathcal{Z}\) and to invert the vertical arrow on the right. Note that
reduced double Bruhat cells are not Poisson submanifolds. For this reason, to prove that \((h^{\mathrm{r}})^{-1}\) is Poisson on the whole \(\mathcal{G}\) provided it is Poisson on \(\mathcal{Z}\) we use, similarly to (3.1), the commutative diagram
where \(g\) on both sides is Poisson by Proposition 3.3.
To invert \(h^{\mathrm{r}}\) on \(\mathcal{Z}\) note that for \(U\in\mathcal{Z}\) one has \(\tilde{U}_{-}=\mathbf{1}\), and hence
\[Z=h^{\mathrm{r}}(U)=HVU_{0,+}=(HV)_{-}(HV)_{0,+}U_{0,+}=Z_{-}Z_{0,+},\]
(one more open condition), so that
\[Z_{-}=(HV)_{-}=(H(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1}) \boldsymbol{\gamma}^{*}(H)W^{-1})_{-})_{-}.\]
Clearly, \((AB_{-})_{-}=(AB)_{-}\), since \(AB=AB_{-}B_{0,+}=(AB_{-})_{-}(AB_{-})_{0,+}B_{0,+}\), so
\[Z_{-}=(H\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1})\boldsymbol{\gamma} ^{*}(H)W^{-1})_{-}.\]
Recall that \(H\in\mathcal{N}_{+}^{\Gamma_{2}}\), hence \(\boldsymbol{\gamma}^{*}(H)W^{-1}\in\mathcal{G}^{\Gamma_{1}}\). On the other hand, the projection of \(H\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1})\) to \(\mathcal{N}_{+}^{\Gamma_{1}}\) is given by
\[\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H\boldsymbol{\gamma}^{*} \boldsymbol{\gamma}(H^{-1}))=\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H) \boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(H^{-1})=\mathbf{1}\]
since \(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}\) is an idempotent, and so \(Z_{-}=(\boldsymbol{\gamma}^{*}(H)W^{-1})_{-}\). Using the same trick as in (3.5) in the opposite direction we get \(\boldsymbol{\gamma}^{*}(H)=(Z_{-}W)_{+}=\bar{Z}_{+}\) for \(\bar{Z}=Z_{-}W\). Note that \(\bar{Z}_{+}\in\mathcal{N}_{+}^{\Gamma_{1}}\). Since \(\boldsymbol{\gamma}\boldsymbol{\gamma}^{*}\) acts on \(\mathcal{N}_{+}\) as the projection to \(\mathcal{N}_{+}^{\Gamma_{2}}\), we get \(\boldsymbol{\gamma}\boldsymbol{\gamma}^{*}(H)=H=\boldsymbol{\gamma}(\bar{Z}_ {+})\), and finally, \(U=(h^{\mathrm{r}})^{-1}(Z)=H^{-1}Z=\boldsymbol{\gamma}(\bar{Z}_{+}^{-1})Z\). Thus, we have inverted \(h^{\mathrm{r}}\) on \(\mathcal{Z}\).
To proceed further we need to find the variation \(\delta U\). Recall that \(Z=Z_{-}Z_{0,+}\), hence \(T_{Z}\mathcal{G}=(T_{Z_{-}}\mathcal{N}_{-})Z_{0,+}\oplus Z_{-}(T_{Z_{0,+}} \mathcal{B}_{+})\), or, in other words, \(\delta Z=\delta Z_{-}Z_{0,+}+Z_{-}\delta Z_{0,+}\). Here and in what follows we admit a common abuse of notation and write \(gv\) instead of \((\lambda_{g})_{*}(v)\) for the left translation of a tangent vector \(v\) by a group element \(g\) and \(vg\) instead of \((\rho_{g})_{*}(v)\) for the right translation. Note that \(Z_{-}^{-1}\delta Z_{-}\in\mathfrak{n}_{-}\) since the left translation by \(Z_{-}^{-1}\) identifies \(\mathfrak{n}_{-}=T_{1}\mathcal{N}_{-}\) with \(T_{Z_{-}}\mathcal{N}_{-}\). Similarly, \(Z^{-1}\delta Z\in\mathfrak{g}\) and \(Z_{0,+}^{-1}\delta Z_{0,+}\in\mathfrak{b}_{+}\). Therefore,
\[\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z=Z_{-}^{-1}\delta Z_{-}+\operatorname {Ad}_{Z_{0,+}}Z_{0,+}^{-1}\delta Z_{0,+}.\]
The first term on the right belongs to \(\mathfrak{n}_{-}\) and the second to \(\mathfrak{b}_{+}\), hence we get \((\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}=Z_{-}^{-1}\delta Z_{-}\); here and in what follows we write \(A_{<}\) for \(\pi_{<}(A)\), etc.
Similarly, \(\bar{Z}=\bar{Z}_{+}\bar{Z}_{0,-}\), and hence
\[\operatorname{Ad}_{\bar{Z}_{0,-}}\bar{Z}^{-1}\delta\bar{Z}=\bar{Z}_{+}^{-1}\delta\bar{Z}_{+}+\operatorname{Ad}_{\bar{Z}_{0,-}}\bar{Z}_{0,-}^{-1}\delta\bar{Z}_{0,-}.\]
Here the first term on the right belongs to \(\mathfrak{n}_{+}\) and the second to \(\mathfrak{b}_{-}\), hence
\[\bar{Z}_{+}^{-1}\delta\bar{Z}_{+}=(\operatorname{Ad}_{\bar{Z}_{0,-}}\bar{Z}^{-1}\delta\bar{Z})_{>}=(\operatorname{Ad}_{\bar{Z}_{0,-}W^{-1}}Z_{-}^{-1}\delta Z_{-})_{>}\\ =\left(\operatorname{Ad}_{\tilde{Z}}(\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}\right)_{>}\]
with \(\tilde{Z}=\bar{Z}_{0,-}W^{-1}\), since \(\bar{Z}=Z_{-}W\) and \(\bar{Z}^{-1}\delta\bar{Z}=\operatorname{Ad}_{W^{-1}}Z_{-}^{-1}\delta Z_{-}\).
Finally, \(U=\boldsymbol{\gamma}(\bar{Z}_{+}^{-1})Z\), so
\[U^{-1}\delta U=U^{-1}\delta(\boldsymbol{\gamma}(\bar{Z}_{+}^{-1}))Z+U^{-1}\boldsymbol{\gamma}(\bar{Z}_{+}^{-1})\delta Z\\ =-U^{-1}\boldsymbol{\gamma}(\bar{Z}_{+}^{-1})\delta(\boldsymbol{\gamma}(\bar{Z}_{+}))\boldsymbol{\gamma}(\bar{Z}_{+}^{-1})Z+Z^{-1}\delta Z=-\operatorname{Ad}_{U^{-1}}\left(\gamma(\bar{Z}_{+}^{-1}\delta\bar{Z}_{+})\right)+Z^{-1}\delta Z\\ =-\operatorname{Ad}_{U^{-1}}\left(\gamma\left(\left(\operatorname{Ad}_{\tilde{Z}}(\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}\right)_{>}\right)\right)+Z^{-1}\delta Z.\]
Let us compute the gradients of \(\hat{f}(Z)=f\circ(h^{r})^{-1}(Z)\). We start with
\[\langle\nabla f(U)U,U^{-1}\delta U\rangle\\ =\langle\nabla f(U)U,Z^{-1}\delta Z\rangle-\left\langle\nabla f(U)U,\operatorname{Ad}_{U^{-1}}\left(\gamma\left(\left(\operatorname{Ad}_{\tilde{Z}}(\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}\right)_{>}\right)\right)\right\rangle\\ =\langle\nabla f(U)U,Z^{-1}\delta Z\rangle-\left\langle\operatorname{Ad}_{U}(\nabla f(U)U),\gamma\left(\left(\operatorname{Ad}_{\tilde{Z}}(\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}\right)_{>}\right)\right\rangle.\]
Note that \(\langle\alpha,\gamma(\beta_{>})\rangle=\langle\gamma^{*}(\alpha_{<}),\beta\rangle\) for any \(\alpha,\beta\in\mathfrak{g}\), since
\[\langle\alpha,\gamma(\beta_{>})\rangle=\langle\alpha_{<},\gamma( \beta_{>})\rangle+\langle\alpha_{\geq},\gamma(\beta_{>})\rangle=\langle\alpha_ {<},\gamma(\beta_{>})\rangle\\ =\langle\gamma^{*}(\alpha_{<}),\beta_{>}\rangle=\langle\gamma^{* }(\alpha_{<}),\beta_{>}\rangle+\langle\gamma^{*}(\alpha_{<}),\beta_{\leq} \rangle=\langle\gamma^{*}(\alpha_{<}),\beta\rangle,\]
so that
\[\left\langle\operatorname{Ad}_{U}(\nabla f(U)U),\gamma\left(\left(\operatorname{Ad}_{\tilde{Z}}(\operatorname{Ad}_{Z_{0,+}}Z^{-1}\delta Z)_{<}\right)_{>}\right)\right\rangle\\ =\left\langle\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),\operatorname{Ad}_{\tilde{Z}}\left(\operatorname{Ad}_{Z_{0,+}}(Z^{-1}\delta Z)\right)_{<}\right\rangle\\ =\left\langle\operatorname{Ad}_{\tilde{Z}^{-1}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),\left(\operatorname{Ad}_{Z_{0,+}}(Z^{-1}\delta Z)\right)_{<}\right\rangle.\]
Further, \(\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right) \in\mathfrak{n}_{-}\), hence \(\operatorname{Ad}_{\bar{Z}_{0,-}^{-1}}\left(\gamma^{*}\left(\left(\operatorname {Ad}_{U}(\nabla f(U)U)\right)_{<}\right)\right)\in\mathfrak{n}_{-}\), so that \(\operatorname{Ad}_{W}\operatorname{Ad}_{\bar{Z}_{0,-}^{-1}}\left(\gamma^{*} \left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right)\right)\in \mathfrak{n}_{+}\). Therefore
\[\left\langle\operatorname{Ad}_{\tilde{Z}^{-1}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),\left(\operatorname{Ad}_{Z_{0,+}}(Z^{-1}\delta Z)\right)_{<}\right\rangle\\ =\left\langle\operatorname{Ad}_{\tilde{Z}^{-1}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),\operatorname{Ad}_{Z_{0,+}}(Z^{-1}\delta Z)\right\rangle\\ =\left\langle\operatorname{Ad}_{Z_{0,+}^{-1}}\operatorname{Ad}_{\tilde{Z}^{-1}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),Z^{-1}\delta Z\right\rangle,\]
so finally
\[\langle\nabla f(U)U,U^{-1}\delta U\rangle =\left\langle\nabla f(U)U-\operatorname{Ad}_{Z_{0,+}^{-1}}\operatorname{Ad}_{\tilde{Z}^{-1}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),Z^{-1}\delta Z\right\rangle\] \[=\left\langle\nabla f(U)U-\operatorname{Ad}_{Z^{-1}\bar{Z}_{+}}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right),Z^{-1}\delta Z\right\rangle\]
since \(Z_{0,+}^{-1}W\bar{Z}_{0,-}^{-1}=Z^{-1}\bar{Z}_{+}\).
Recall that \(\langle\nabla\hat{f}(Z)Z,Z^{-1}\delta Z\rangle=\langle\nabla f(U)U,U^{-1} \delta U\rangle\), hence
\[\nabla^{L}\hat{f} =\nabla\hat{f}(Z)Z=\nabla f(U)U-\operatorname{Ad}_{Z^{-1}\bar{Z}_{+ }}\gamma^{*}\left(\left(\operatorname{Ad}_{U}(\nabla f(U)U)\right)_{<}\right)\] \[\nabla^{R}\hat{f} =\operatorname{Ad}_{Z}(\nabla\hat{f}(Z)Z)=\operatorname{Ad}_{Z}( \nabla f(U)U)-\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}\left(\left(\operatorname {Ad}_{U}(\nabla f(U)U)\right)_{<}\right).\]
To prove Theorem 3.2 we need to verify that \(\{\hat{f}_{1},\hat{f}_{2}\}_{r^{\Gamma},r^{\varnothing}}(Z)=\{f_{1},f_{2}\}_{r^{\varnothing}_{\Gamma},r^{\varnothing}}(U)\) for \(U=(h^{\mathrm{r}})^{-1}(Z)\). Recall that
\[\{\hat{f}_{1},\hat{f}_{2}\}_{r^{\Gamma},r^{\varnothing}} =\langle R_{+}^{\varnothing}\nabla^{L}\hat{f}_{1},\nabla^{L}\hat{f}_{2}\rangle-\langle R_{+}^{\Gamma}\nabla^{R}\hat{f}_{1},\nabla^{R}\hat{f}_{2}\rangle,\] \[\{f_{1},f_{2}\}_{r^{\varnothing}_{\Gamma},r^{\varnothing}} =\langle R_{+}^{\varnothing}\nabla^{L}f_{1},\nabla^{L}f_{2}\rangle-\langle(R_{+})_{\mathbf{\Gamma}}^{\varnothing}\nabla^{R}f_{1},\nabla^{R}f_{2}\rangle, \tag{3.6}\]
with
\[R_{+}^{\mathbf{\Gamma}} =R_{0}^{\mathbf{\Gamma}}+\frac{1}{1-\gamma}\pi_{>}-\frac{\gamma^{*} }{1-\gamma^{*}}\pi_{<},\] \[R_{+}^{\varnothing} =R_{0}^{\varnothing}+\pi_{>},\qquad(R_{+})_{\mathbf{\Gamma}}^{ \varnothing}=R_{0}^{\mathbf{\Gamma}}+\pi_{>}. \tag{3.7}\]
Further, \(R_{0}^{\mathbf{\Gamma}}=(\frac{1}{2}+S^{\mathbf{\Gamma}})\pi_{0}\), where \(\pi_{0}\) is the projection on \(\mathfrak{h}\) and \(S^{\mathbf{\Gamma}}\) is a skew-symmetric operator on \(\mathfrak{h}\) satisfying \(S^{\mathbf{\Gamma}}(1-\gamma)=\frac{1}{2}(1+\gamma)\) on \(\mathfrak{h}^{\Gamma_{1}}\). Consequently, on \(\mathfrak{h}^{\Gamma_{2}}\)
\[S^{\mathbf{\Gamma}}(\gamma^{*}-1)=S^{\mathbf{\Gamma}}(1-\gamma)\gamma^{*}= \frac{1}{2}(1+\gamma)\gamma^{*}=\frac{1}{2}(\gamma^{*}+1),\]
and hence
\[R_{0}^{\mathbf{\Gamma}}(\gamma^{*}-1)=\gamma^{*}\quad\text{on }\mathfrak{h}^{ \Gamma_{2}}. \tag{3.8}\]
Finally, \((R_{0}^{\mathbf{\Gamma}})^{*}=(\frac{1}{2}-S^{\mathbf{\Gamma}})\pi_{0}=\pi_{0}-R_{0}^{\mathbf{\Gamma}}\), so that
\[(R_{0}^{\mathbf{\Gamma}})^{*}(1-\gamma^{*})=1\quad\text{on }\mathfrak{h}^{ \Gamma_{2}}. \tag{3.9}\]
Introduce some notation:
\[\alpha_{f}=\nabla^{L}f=\nabla f(U)U,\qquad\beta_{f}=\nabla^{R}f=\operatorname{ Ad}_{U}\alpha_{f}.\]
Further,
\[\zeta_{f}=\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f})_{<},\qquad\xi_{f}=\operatorname{Ad}_{Z^{-1}}\zeta_{f}.\]
Note that \(\xi_{f}\in\mathfrak{n}_{+}\) since \(Z^{-1}\bar{Z}_{+}=Z_{0,+}^{-1}W\bar{Z}_{0,-}^{-1}\). Finally, denote
\[\operatorname{Ad}_{Z}\alpha_{f}=\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f}=\eta_{f},\]
so that
\[\nabla^{L}\hat{f}=\alpha_{f}-\xi_{f},\qquad\nabla^{R}\hat{f}=\eta_{f}-\zeta_{ f}.\]
We will need the following technical result.
**Lemma 3.4**.: _For any \(X\in\mathcal{G}^{\Gamma_{1}}\) and \(\eta\in\mathfrak{g}\) holds \(\gamma(\operatorname{Ad}_{X}\eta)=\operatorname{Ad}_{\boldsymbol{\gamma}(X)} \gamma(\eta)\)._
Proof.: Write \(\eta=\lambda+\theta\) with \(\lambda\in\mathfrak{g}^{\Gamma_{1}}\) and \(\theta\) orthogonal to \(\mathfrak{g}^{\Gamma_{1}}\) with respect to \(\langle\cdot,\cdot\rangle\), then \(\gamma(\eta)=\gamma(\lambda)\). Next, \(\gamma(\operatorname{Ad}_{X}\eta)=\gamma(\operatorname{Ad}_{X}\lambda)+\gamma(\operatorname{Ad}_{X}\theta)\). Note that \(\gamma(\operatorname{Ad}_{X}\lambda)=\operatorname{Ad}_{\boldsymbol{\gamma}(X)}\gamma(\lambda)=\operatorname{Ad}_{\boldsymbol{\gamma}(X)}\gamma(\eta)\) since \(\gamma\) is a Lie algebra isomorphism on \(\mathfrak{g}^{\Gamma_{1}}\). It remains to prove that \(\operatorname{Ad}_{X}\theta\) is orthogonal to \(\mathfrak{g}^{\Gamma_{1}}\), and thus \(\gamma(\operatorname{Ad}_{X}\theta)=0\). This is equivalent to \(\langle\operatorname{ad}_{\xi}\theta,\zeta\rangle=0\) for any \(\xi,\zeta\in\mathfrak{g}^{\Gamma_{1}}\). Clearly, \(\langle[\xi,\theta],\zeta\rangle=-\langle\theta,[\xi,\zeta]\rangle=0\) since \([\xi,\zeta]\in\mathfrak{g}^{\Gamma_{1}}\).
From (3.6) and (3.7) follows that the non-diagonal part of \(\{\hat{f}_{1},\hat{f}_{2}\}_{\mathbf{\Gamma},\varnothing}\) equals
\[\langle(\nabla^{L}\hat{f}_{1})_{>},\nabla^{L}\hat{f}_{2}\rangle-\left\langle \frac{1}{1-\gamma}(\nabla^{R}\hat{f}_{1})_{>},\nabla^{R}\hat{f}_{2}\right\rangle +\left\langle\frac{\gamma^{*}}{1-\gamma^{*}}(\nabla^{R}\hat{f}_{1})_{<},\nabla^ {R}\hat{f}_{2}\right\rangle. \tag{3.10}\]
The first expression in (3.10) is equal to
\[\langle(\alpha_{f_{1}}-\xi_{f_{1}})_{>},\alpha_{f_{2}}-\xi_{f_{2}}\rangle= \langle(\alpha_{f_{1}})_{>},\alpha_{f_{2}}\rangle-\langle(\xi_{f_{1}})_{>}, \alpha_{f_{2}}\rangle-\langle(\alpha_{f_{1}})_{>},\xi_{f_{2}}\rangle+\langle( \xi_{f_{1}})_{>},\xi_{f_{2}}\rangle.\]
Recall that \(\xi_{f}\in\mathfrak{n}_{+}\), so the last two expressions above vanish. The second expression equals
\[\langle(\xi_{f_{1}})_{>},\alpha_{f_{2}}\rangle=\langle\xi_{f_{1}},\alpha_{f_{2 }}\rangle=\langle\operatorname{Ad}_{Z^{-1}}\zeta_{f_{1}},\alpha_{f_{2}}\rangle= \langle\zeta_{f_{1}},\operatorname{Ad}_{Z}\alpha_{f_{2}}\rangle=\langle\zeta_ {f_{1}},\eta_{f_{2}}\rangle,\]
so that finally
\[\langle(\nabla^{L}\hat{f}_{1})_{>},\nabla^{L}\hat{f}_{2}\rangle=\langle(\alpha_ {f_{1}})_{>},\alpha_{f_{2}}\rangle-\langle\zeta_{f_{1}},\eta_{f_{2}}\rangle.\]
The second expression in (3.10) is equal to
\[\left\langle\frac{1}{1-\gamma}(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\eta _{f_{2}}-\zeta_{f_{2}}\right\rangle\] \[=\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\eta_{f_{2}}-\zeta_{ f_{2}}\right\rangle+\left\langle\frac{\gamma}{1-\gamma}(\eta_{f_{1}}-\zeta_{f_{1}})_{> },\eta_{f_{2}}-\zeta_{f_{2}}\right\rangle\] \[=\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\eta_{f_{2}}-\zeta_{ f_{2}}\right\rangle+\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\frac{ \gamma^{*}}{1-\gamma^{*}}(\eta_{f_{2}}-\zeta_{f_{2}})\right\rangle.\]
Note that
\[\frac{\gamma^{*}}{1-\gamma^{*}}(\eta_{f_{2}}-\zeta_{f_{2}}) =\frac{1}{1-\gamma^{*}}\left(\gamma^{*}(\eta_{f_{2}})-\zeta_{f_{2 }}\right)+\zeta_{f_{2}}\] \[=\frac{1}{1-\gamma^{*}}\left(\gamma^{*}(\operatorname{Ad}_{ \boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f_{2}})-\operatorname{Ad}_{\bar{Z}_{+ }}\gamma^{*}(\beta_{f_{2}})_{<}\right)+\zeta_{f_{2}}\] \[=\frac{1}{1-\gamma^{*}}\left(\operatorname{Ad}_{\boldsymbol{ \gamma}^{*}\boldsymbol{\gamma}(\bar{Z}_{+})}\gamma^{*}(\beta_{f_{2}})- \operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{2}})_{<}\right)+\zeta_{f_ {2}}\] \[=\frac{1}{1-\gamma^{*}}\left(\operatorname{Ad}_{\bar{Z}_{+}} \gamma^{*}(\beta_{f_{2}})_{\geq}\right)+\zeta_{f_{2}},\]
since \(\boldsymbol{\gamma}^{*}\boldsymbol{\gamma}(\bar{Z}_{+})=\bar{Z}_{+}\). Consequently,
\[\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\frac{\gamma^{*}}{1 -\gamma^{*}}(\eta_{f_{2}}-\zeta_{f_{2}})\right\rangle\] \[=\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\frac{1}{1-\gamma^ {*}}\left(\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{2}})_{\geq} \right)+\zeta_{f_{2}}\right\rangle\] \[=\left\langle(\eta_{f_{1}}-\zeta_{f_{1}})_{>},\zeta_{f_{2}} \right\rangle,\]
since \(\bar{Z}_{+}\in\mathcal{N}_{+}\) and hence \(\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{2}})_{\geq}\in\mathfrak{b }_{+}\), so that finally
\[\left\langle\frac{1}{1-\gamma}(\nabla^{R}\hat{f}_{1})_{>},\nabla^{R}\hat{f}_{ 2}\right\rangle=\left\langle(\eta_{f_{1}})_{>},\eta_{f_{2}}\right\rangle- \left\langle(\zeta_{f_{1}})_{>},\eta_{f_{2}}\right\rangle.\]
The third expression in (3.10) is treated similarly to the second one:
\[\left\langle\frac{\gamma^{*}}{1-\gamma^{*}}(\eta_{f_{1}}-\zeta_{ f_{1}})_{>},\eta_{f_{2}}-\zeta_{f_{2}}\right\rangle\] \[=\left\langle\frac{1}{1-\gamma^{*}}\left(\operatorname{Ad}_{\bar{ Z}_{+}}\gamma^{*}(\beta_{f_{1}})_{\geq}\right)_{<}+(\zeta_{f_{1}})_{<},\eta_{f_{2}}- \zeta_{f_{2}}\right\rangle\] \[=\left\langle(\zeta_{f_{1}})_{<},\eta_{f_{2}}\right\rangle- \left\langle(\zeta_{f_{1}})_{<},\zeta_{f_{2}}\right\rangle.\]
since \(\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}})_{\geq}\in\mathfrak{b }_{+}\).
Further, \(-\langle\zeta_{f_{1}},\eta_{f_{2}}\rangle+\langle(\zeta_{f_{1}})_{>},\eta_{f_{2}}\rangle+\langle(\zeta_{f_{1}})_{<},\eta_{f_{2}}\rangle=-\langle(\zeta_{f_{1}})_{0},(\eta_{f_{2}})_{0}\rangle\). Note that for \(A\in\mathcal{N}_{+}\) and \(\beta\in\mathfrak{g}\) one has \((\operatorname{Ad}_{A}\beta)_{<}=(\operatorname{Ad}_{A}\beta_{<})_{<}\) and for \(\xi\in\mathfrak{n}_{+}\) one has
\[\langle\operatorname{Ad}_{A}\beta_{<},\xi\rangle=\langle\operatorname{Ad}_{A}\beta,\xi\rangle,\] hence \[-\langle(\eta_{f_{1}})_{>},\eta_{f_{2}}\rangle-\langle(\zeta_{f_{1}})_{<},\zeta_{f_{2}}\rangle=-\langle\eta_{f_{1}},(\eta_{f_{2}})_{<}\rangle-\langle\zeta_{f_{1}},(\zeta_{f_{2}})_{>}\rangle\] \[=-\left\langle\eta_{f_{1}},\left(\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f_{2}}\right)_{<}\right\rangle-\left\langle\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}})_{<},(\zeta_{f_{2}})_{>}\right\rangle\] \[=-\left\langle\eta_{f_{1}},\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right\rangle+\left\langle\eta_{f_{1}},\left(\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right)_{\geq}\right\rangle-\left\langle\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}}),(\zeta_{f_{2}})_{>}\right\rangle.\]
The first term above is equal to
\[-\left\langle\eta_{f_{1}},\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right\rangle=-\left\langle\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f_{1}},\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right\rangle\\ =-\langle\beta_{f_{1}},(\beta_{f_{2}})_{<}\rangle=-\langle(\beta_{f_{1}})_{>},\beta_{f_{2}}\rangle.\]
The second term above is equal to
\[\left\langle\eta_{f_{1}},\left(\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right)_{\geq}\right\rangle=\left\langle\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f_{1}},\left(\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right)_{\geq}\right\rangle\] \[=\left\langle\gamma^{*}\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}\beta_{f_{1}},\gamma^{*}\left(\operatorname{Ad}_{\boldsymbol{\gamma}(\bar{Z}_{+})}(\beta_{f_{2}})_{<}\right)_{\geq}\right\rangle\] \[=\left\langle\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}}),\left(\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{2}})_{<}\right)_{\geq}\right\rangle=\left\langle\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}}),(\zeta_{f_{2}})_{\geq}\right\rangle,\]
which together with the third term gives
\[\left\langle\left(\operatorname{Ad}_{\bar{Z}_{+}}\gamma^{*}(\beta_{f_{1}})\right)_{0},(\zeta_{f_{2}})_{0}\right\rangle=\langle\gamma^{*}(\eta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle.\]
Therefore, the total contribution of the non-diagonal terms equals to
\[\langle(\alpha_{f_{1}})_{>},\alpha_{f_{2}}\rangle-\langle(\beta_{f_{1}})_{>},\beta_{f_{2}}\rangle-\langle(\zeta_{f_{1}})_{0},(\eta_{f_{2}})_{0}\rangle+\langle\gamma^{*}(\eta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle.\]
On the other hand, the total contribution of the non-diagonal terms to \(\{f_{1},f_{2}\}_{r^{\varnothing}_{\mathbf{\Gamma}},r^{\varnothing}}\) equals to
\[\langle(\alpha_{f_{1}})_{>},\alpha_{f_{2}}\rangle-\langle(\beta_{f_{1}})_{>}, \beta_{f_{2}}\rangle,\]
so it remains to prove that
\[\langle R_{0}^{\varnothing}(\alpha_{f_{1}}-\xi_{f_{1}}),(\alpha_{f_{2}}-\xi_{f_{2}})_{0}\rangle-\langle R_{0}^{\boldsymbol{\Gamma}}(\eta_{f_{1}}-\zeta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle\\ =\langle R_{0}^{\varnothing}(\alpha_{f_{1}}),(\alpha_{f_{2}})_{0}\rangle-\langle R_{0}^{\boldsymbol{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle+\langle(\zeta_{f_{1}})_{0},(\eta_{f_{2}})_{0}\rangle-\langle\gamma^{*}(\eta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle. \tag{3.11}\]
Recall that \(\xi_{f}\in\mathfrak{n}_{+}\), hence \((\xi_{f})_{0}\) vanishes. Therefore, the first term on the left in (3.11) equals \(\langle R_{0}^{\varnothing}(\alpha_{f_{1}}),(\alpha_{f_{2}})_{0}\rangle\), which coincides with the first term on the right.
Further, \(\zeta_{f}=\operatorname{Ad}_{\bar{Z}_{+}}(\gamma^{*}(\beta_{f}))-\operatorname{Ad}_{\bar{Z}_{+}}(\gamma^{*}(\beta_{f})_{\geq})\), hence \((\zeta_{f})_{0}=\gamma^{*}(\eta_{f})_{0}-\gamma^{*}(\beta_{f})_{0}\) since \(\bar{Z}_{+}\in\mathcal{N}_{+}\) and \(\gamma^{*}(\beta_{f})_{\geq}\in\mathfrak{b}_{+}\), so that finally
\[(\eta_{f}-\zeta_{f})_{0}=(1-\gamma^{*})(\eta_{f}-\beta_{f})_{0}+(\beta_{f})_{0}.\]
Note that \((\eta_{f}-\beta_{f})_{0}\in\mathfrak{h}^{\Gamma_{2}}\), consequently, in view of (3.8), the second term in (3.11) can be rewritten as
\[\langle R_{0}^{\mathbf{\Gamma}}(\eta_{f_{1}}-\zeta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle\] \[=\langle R_{0}^{\mathbf{\Gamma}}(1-\gamma^{*})(\eta_{f_{1}}-\beta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle\] \[=-\langle(\zeta_{f_{1}})_{0},(\eta_{f_{2}})_{0}\rangle+\langle\gamma^{*}(\eta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle-\langle\gamma^{*}(\beta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle.\]
The first two terms in the last line above are cancelled by the last two terms in the right hand side of (3.11). So, (3.11) is reduced to
\[\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle-\langle\gamma^{*}(\beta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle=\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle.\]
The first term in the left hand side above can be rewritten using (3.9) as
\[\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\eta_{f_{2}}-\zeta_{f_{2}})_{0}\rangle=\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(1-\gamma^{*})(\eta_{f_{2}}-\beta_{f_{2}})_{0}\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle\] \[=\langle(\beta_{f_{1}})_{0},(R_{0}^{\mathbf{\Gamma}})^{*}(1-\gamma^{*})(\eta_{f_{2}}-\beta_{f_{2}})\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle\] \[=\langle(\beta_{f_{1}})_{0},(\eta_{f_{2}}-\beta_{f_{2}})_{0}\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle\] \[=\langle\gamma^{*}(\beta_{f_{1}})_{0},(\zeta_{f_{2}})_{0}\rangle+\langle R_{0}^{\mathbf{\Gamma}}(\beta_{f_{1}}),(\beta_{f_{2}})_{0}\rangle,\]
which proves (3.11).
## 4. The \(A_{n}\) case
In this Section we assume that \(\mathcal{G}=SL_{n}\), and hence \(\Gamma_{1}\) and \(\Gamma_{2}\) can be identified with subsets of \([1,n-1]\). Note that the isometry condition on \(\gamma\) implies that if \(i,i+1\in\Gamma_{1}\) then \(\gamma(i+1)=\gamma(i)\pm 1\). We say that \(\mathbf{\Gamma}\) is _oriented_ if \(i,i+1\in\Gamma_{1}\) yields \(\gamma(i+1)=\gamma(i)+1\). In other words, the orientation of every subset of \(\Gamma_{1}\) that consists of consecutive roots is preserved by \(\gamma\). In [13] we treated the case when both BD triples \(\Gamma^{\mathrm{r}}\) and \(\Gamma^{\mathrm{c}}\) are oriented. In this paper we lift this restriction and consider the general case.
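The orientation condition is easy to test mechanically. The following small sketch (not from the paper) encodes \(\gamma\) as a dictionary \(\Gamma_{1}\to\Gamma_{2}\); the two inputs are the triples used in the \(GL_{7}\) example further below:

```python
def is_oriented(gamma):
    """gamma: dict Gamma_1 -> Gamma_2 describing a BD triple in type A.
    Oriented means gamma(i+1) = gamma(i) + 1 whenever i and i+1 both lie in Gamma_1."""
    return all(gamma[i + 1] == gamma[i] + 1 for i in gamma if i + 1 in gamma)

print(is_oriented({1: 4, 2: 3, 5: 1}))   # False: this triple reverses the component [1, 2]
print(is_oriented({3: 2, 4: 3, 6: 5}))   # True:  this triple preserves the component [3, 4]
```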
### Combinatorial data
Let us briefly recall the combinatorial constructions introduced in [13], together with their non-oriented analogs (see [13] for more details and examples).
For any \(i\in[1,n]\) put
\[i_{+}=\min\{j\in[1,n]\setminus\Gamma_{1}\colon\ j\geq i\},\qquad i_{-}=\max \{j\in[0,n]\setminus\Gamma_{1}\colon\ j<i\}.\]
The interval \(\Delta(i)=[i_{-}+1,i_{+}]\) is called the \(X\)_-run_ of \(i\). Clearly, all distinct \(X\)-runs form a partition of \([1,n]\). The \(X\)-runs are numbered consecutively from left to right. The dual partition of \([1,n]\) into \(X^{\dagger}\)_-runs_ is defined via \(\Delta^{\dagger}(i)=[n-i_{+}+1,n-i_{-}]\); the \(X^{\dagger}\)-runs are numbered consecutively from right to left. In a similar way, \(\Gamma_{2}\) defines another two partitions of \([1,n]\) into \(Y\)-runs \(\bar{\Delta}(i)\) and \(Y^{\dagger}\)-runs \(\bar{\Delta}^{\dagger}(i)\).
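For concreteness, here is a small computational sketch (not taken from the paper) of the partition into runs and its dual; the example reproduces the run data of \(\boldsymbol{\Gamma}^{\mathrm{r}}\) used in the worked example below.

```python
def runs(n, gamma1):
    """X-runs [i_- + 1, i_+] determined by Gamma_1, as (start, end) pairs, left to right."""
    g = set(gamma1)
    out, start = [], 1
    for j in range(1, n + 1):
        if j not in g:                   # j is the right endpoint of the current run
            out.append((start, j))
            start = j + 1
    return out

def dual_runs(n, gamma1):
    """The dual partition Delta^dagger = [n - i_+ + 1, n - i_-]."""
    return [(n - b + 1, n - a + 1) for (a, b) in runs(n, gamma1)]

# Gamma_1^r = {1, 2, 5} for n = 7, as in the example below
print(runs(7, {1, 2, 5}))        # [(1, 3), (4, 4), (5, 6), (7, 7)]
print(dual_runs(7, {1, 2, 5}))   # [(5, 7), (4, 4), (2, 3), (1, 1)]
```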
Runs of length one are called trivial. The map \(\gamma\) induces a bijection between the set of pairs of nontrivial \(X\)- and \(X^{\dagger}\)-runs and the set of pairs of nontrivial \(Y\)- and \(Y^{\dagger}\)-runs. Abusing notation, we denote this bijection by the same letter \(\gamma\) and say that \((\bar{\Delta}_{i},\bar{\Delta}_{i}^{\dagger})=\gamma(\Delta_{j},\Delta_{j}^{\dagger})\) if there exists \(k\in\Delta_{j}\) such that \(\bar{\Delta}(\gamma(k))=\bar{\Delta}_{i}\). The inverse of the bijection \(\gamma\) is naturally denoted \(\gamma^{*}\).
The _BD graph_\(G_{\mathbf{\Gamma}}\) is defined as follows. The vertices of \(G_{\mathbf{\Gamma}}\) are two copies of the set of positive simple roots identified with \([1,n-1]\). One of the sets is called the _upper_ part of the graph, and the other is called the _lower_ part. A vertex \(i\in\Gamma_{1}\) is connected with an _inclined_ edge to the vertex \(\gamma(i)\in\Gamma_{2}\). Finally, vertices \(i\) and \(n-i\)
in the same part are connected with a _horizontal_ edge. If \(n=2k\) and \(i=n-i=k\), the corresponding horizontal edge is a loop.
Given a pair of BD triples \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\), one can define a BD graph \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) as follows. Take \(G_{\mathbf{\Gamma}^{\mathrm{r}}}\) with all inclined edges directed downwards and \(G_{\mathbf{\Gamma}^{\mathrm{c}}}\) in which all inclined edges are directed upwards. Superimpose these graphs by identifying the corresponding vertices. In the resulting graph, for every pair of vertices \(i,n-i\) in either top or bottom row there are two edges joining them. We give these edges opposite orientations. If \(n\) is even, then we retain only one loop at each of the two vertices labeled \(\frac{n}{2}\). The result is a directed graph \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) on \(2(n-1)\) vertices. For example, consider the case of \(GL_{7}\) with \(\mathbf{\Gamma}^{\mathrm{r}}=(\{1,2,5\},\{1,3,4\},1\mapsto 4,2\mapsto 3,5 \mapsto 1)\) and \(\mathbf{\Gamma}^{\mathrm{c}}=(\{3,4,6\},\{2,3,5\},3\mapsto 2,4\mapsto 3,6 \mapsto 5)\). The corresponding graph \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) is shown on the left in Fig. 1. For horizontal edges, no direction is indicated, which means that they can be traversed in both directions.
A directed path in \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) is called _alternating_ if horizontal and inclined edges in the path alternate. In particular, an edge is a (trivial) alternating path. An alternating path with coinciding endpoints and an even number of edges is called an _alternating cycle_. We can decompose the set of directed edges of \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) into a disjoint union of maximal alternating paths and alternating cycles. If the resulting collection contains no alternating cycles, we call the pair \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\)_aperiodic_. For the graph in Fig. 1, the corresponding maximal alternating paths are \(52\bar{3}\bar{4}\), \(25\bar{1}\bar{6}\), \(\bar{5}\bar{2}34\), \(\bar{2}\bar{5}61\bar{4}\bar{3}43\), \(16\), and \(\bar{6}\bar{1}\) (here vertices in the lower part are marked with a bar for better visualization). None of them is an alternating cycle, so the corresponding pair is aperiodic.
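The decomposition into maximal alternating paths can be carried out mechanically. The following sketch (not from the paper) labels the vertices ('u', i) and ('l', i) for the upper and lower parts and assumes the convention, consistent with the example above, that the inclined edges of \(G_{\boldsymbol{\Gamma}^{\mathrm{c}}}\) run from \(\gamma^{\mathrm{c}}(i)\) in the lower part up to \(i\) in the upper part; the loops arising for even \(n\) are not treated.

```python
def bd_graph_edges(n, gamma_r, gamma_c):
    """Directed edges of G_{Gamma^r, Gamma^c} as triples (kind, tail, head), kind in {'h', 'i'}."""
    edges = []
    for part in ('u', 'l'):
        for i in range(1, n):
            if i != n - i:
                edges.append(('h', (part, i), (part, n - i)))
    for i, j in gamma_r.items():                  # inclined edges of G_{Gamma^r}, directed down
        edges.append(('i', ('u', i), ('l', j)))
    for i, j in gamma_c.items():                  # inclined edges of G_{Gamma^c}, directed up
        edges.append(('i', ('l', j), ('u', i)))
    return edges

def maximal_alternating_paths(edges):
    succ = {}
    for e in edges:
        for f in edges:
            if f[1] == e[2] and f[0] != e[0]:     # kinds must alternate; the continuation is unique
                succ[e] = f
    continuations = set(succ.values())
    paths = []
    for e in edges:
        if e in continuations:
            continue                              # e is not the first edge of a maximal path
        path, cur = [e], e
        while cur in succ:
            cur = succ[cur]
            path.append(cur)
        paths.append([path[0][1]] + [f[2] for f in path])
    covered = sum(len(p) - 1 for p in paths)
    aperiodic = (covered == len(edges))           # uncovered edges would lie on alternating cycles
    return paths, aperiodic

gamma_r = {1: 4, 2: 3, 5: 1}
gamma_c = {3: 2, 4: 3, 6: 5}
paths, aperiodic = maximal_alternating_paths(bd_graph_edges(7, gamma_r, gamma_c))
for p in paths:
    print(p)
print("aperiodic:", aperiodic)    # True; the six printed paths match the list above
```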
Every horizontal directed edge in an upper part of the BD graph defines a pair of _blocks_ carved out from two \(n\times n\) matrices: a matrix of indeterminates \(X=(x_{ij})\) and the dual matrix \(X^{\dagger}\) obtained via conjugation of the cofactor matrix of \(X\) by \(w_{0}\mathbf{J}\) with \(\mathbf{J}=\mathrm{diag}((-1)^{i})_{i=1}^{n}\) and \(w_{0}\) being the matrix of the longest permutation. The rows of \(X\) are partitioned into \(X\)-runs with respect to \(\mathbf{\Gamma}^{\mathrm{r}}\), and the columns of \(X\), into \(X\)-runs with respect to \(\mathbf{\Gamma}^{\mathrm{c}}\). The rows and columns of \(X^{\dagger}\) are partitioned into the corresponding dual \(X^{\dagger}\)-runs: rows with respect to \(\mathbf{\Gamma}^{\mathrm{r}}\) and columns with respect to \(\mathbf{\Gamma}^{\mathrm{c}}\). A block in \(X\) is a submatrix \(X^{[1,\beta]}_{[\alpha,n]}\) whose row and column sets are unions of consecutive \(X\)-runs; a block in \(X^{\dagger}\) is defined similarly via \(X^{\dagger}\)-runs. The \(X\)-block that corresponds to a horizontal directed edge \(i\rightarrow(n-i)\) is the minimal block in \(X\) that contains the subdiagonal through the entries \((n-i+1,1)\) and \((n,i)\). These entries are called the _exit point_ and the _entrance point_ of the \(X\)-block, respectively. Note that the exit point of an \(X\)-block belongs to its uppermost \(X\)-run
(with respect to \(\mathbf{\Gamma}^{\mathrm{r}}\)), and its entrance point belongs to the rightmost \(X\)-run (with respect to \(\mathbf{\Gamma}^{\mathrm{c}}\)). The \(X^{\dagger}\)-block that corresponds to the same horizontal edge is the minimal block in \(X^{\dagger}\) that contains the subdiagonal through the entries \((i+1,1)\) and \((n,n-i)\) called the exit and the entrance points of the \(X^{\dagger}\)-block; these points have similar extremal properties as the corresponding points of an \(X\)-block. It is easy to see that if \((i,1)\) and \((i^{\dagger},1)\) are the exit points of the \(X\)- and \(X^{\dagger}\)-blocks corresponding to the same horizontal edge then \(i+i^{\dagger}=n+2\).
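To illustrate how the minimal blocks are carved out, here is a small sketch (not from the paper) that computes the row and column ranges of the \(X\)- and \(X^{\dagger}\)-blocks attached to a horizontal edge \(i\to(n-i)\) in the upper part; on the data of the example below it reproduces the blocks \(X^{[1,5]}\) and \((X^{\dagger})^{[1,2]}_{[5,7]}\).

```python
def runs(n, gamma1):
    g = set(gamma1)
    out, start = [], 1
    for j in range(1, n + 1):
        if j not in g:
            out.append((start, j))
            start = j + 1
    return out

def run_of(rs, k):
    return next(r for r in rs if r[0] <= k <= r[1])

def x_block(n, i, row_runs, col_runs):
    """Row and column ranges of the minimal block X^{[1,b]}_{[a,n]} containing
    the subdiagonal through (n - i + 1, 1) and (n, i)."""
    a = run_of(row_runs, n - i + 1)[0]        # rows are completed to a union of runs
    b = run_of(col_runs, i)[1]                # columns are completed to a union of runs
    return (a, n), (1, b)

n = 7
row_runs = runs(n, {1, 2, 5})                 # row X-runs, from Gamma_1^r
col_runs = runs(n, {3, 4, 6})                 # column X-runs, from Gamma_1^c
print(x_block(n, 5, row_runs, col_runs))      # ((1, 7), (1, 5)): the block X^{[1,5]}

# the X^dagger-block for the same edge uses the dual runs and the
# subdiagonal through (i + 1, 1) and (n, n - i)
dual = lambda rs: [(n - b + 1, n - a + 1) for (a, b) in rs]
a = run_of(dual(row_runs), 5 + 1)[0]
b = run_of(dual(col_runs), n - 5)[1]
print(((a, n), (1, b)))                       # ((5, 7), (1, 2)): the block (X^dagger)^{[1,2]}_{[5,7]}
```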
In a similar way, every horizontal directed edge in the lower part of the BD graph defines a pair of blocks carved out from an \(n\times n\) matrix of indeterminates \(Y=(y_{ij})\) and the dual matrix \(Y^{\dagger}\) obtained from the cofactor matrix of \(Y\) by the same procedure as above. The rows and columns of \(Y\) are partitioned into \(Y\)-runs with respect to \(\mathbf{\Gamma}^{\mathrm{r}}\) and \(\mathbf{\Gamma}^{\mathrm{c}}\), respectively. The rows and columns of \(Y^{\dagger}\) are partitioned into the corresponding \(Y^{\dagger}\)-runs. A block in \(Y\) is a submatrix \(Y^{[\bar{\beta},n]}_{[1,\bar{\alpha}]}\) whose row and column sets are unions of consecutive \(Y\)-runs; a block in \(Y^{\dagger}\) is defined similarly via \(Y^{\dagger}\)-runs. The \(Y\)-block that corresponds to a horizontal directed edge \(i\to(n-i)\) is the minimal block in \(Y\) that contains the superdiagonal through the entries \((1,n-i+1)\) and \((i,n)\). The \(Y^{\dagger}\)-block that corresponds to the same horizontal edge is the minimal block in \(Y^{\dagger}\) that contains the superdiagonal through the entries \((1,i+1)\) and \((n-i,n)\). The exit and the entrance points retain their meaning and have similar extremal properties: namely, the exit point belongs to the leftmost \(Y\)- or \(Y^{\dagger}\)-run with respect to \(\mathbf{\Gamma}^{\mathrm{c}}\), and the entrance point belongs to the lower \(Y\)- or \(Y^{\dagger}\)-run with respect to \(\mathbf{\Gamma}^{\mathrm{r}}\). If \((1,j)\) and \((1,j^{\dagger})\) are the exit points of the \(Y\)- and \(Y^{\dagger}\)-blocks corresponding to the same horizontal edge then \(j+j^{\dagger}=n+2\).
For the BD graph shown in Fig. 1, the rows of \(X\) are partitioned into the \(X\)-runs \(\Delta^{\mathrm{r}}_{1}=[1,3]\), \(\Delta^{\mathrm{r}}_{2}=[4,4]\), \(\Delta^{\mathrm{r}}_{3}=[5,6]\), and \(\Delta^{\mathrm{r}}_{4}=[7,7]\); the first and the third are nontrivial. The columns of \(X\) are partitioned into the \(X\)-runs \(\Delta^{\mathrm{c}}_{1}=[1,1]\), \(\Delta^{\mathrm{c}}_{2}=[2,2]\), \(\Delta^{\mathrm{c}}_{3}=[3,5]\), \(\Delta^{\mathrm{c}}_{4}=[6,7]\); the last two are nontrivial. Consequently, the dual partition of rows and columns of \(X^{\dagger}\) is given by \((\Delta^{\mathrm{r}}_{1})^{\dagger}=[5,7]\), \((\Delta^{\mathrm{r}}_{2})^{\dagger}=[4,4]\), \((\Delta^{\mathrm{r}}_{3})^{\dagger}=[2,3]\), \((\Delta^{\mathrm{r}}_{4})^{\dagger}=[1,1]\), and \((\Delta^{\mathrm{c}}_{1})^{\dagger}=[7,7]\), \((\Delta^{\mathrm{c}}_{2})^{\dagger}=[6,6]\), \((\Delta^{\mathrm{c}}_{3})^{\dagger}=[3,5]\), \((\Delta^{\mathrm{c}}_{4})^{\dagger}=[1,2]\). Thus, the \(X\)-block defined by the edge \(5\to 2\) in the upper part is the submatrix \(X^{[1,5]}\), and the corresponding \(X^{\dagger}\)-block is the submatrix \((X^{\dagger})^{[1,2]}_{[5,7]}\). Similarly, the rows of \(Y\) are partitioned into the \(Y\)-runs \(\bar{\Delta}^{\mathrm{r}}_{1}=[1,2]\), \(\bar{\Delta}^{\mathrm{r}}_{2}=[3,5]\), \(\bar{\Delta}^{\mathrm{r}}_{3}=[6,6]\), and \(\bar{\Delta}^{\mathrm{r}}_{4}=[7,7]\); the first two of them are nontrivial. The columns of \(Y\) are partitioned into the \(Y\)-runs \(\bar{\Delta}^{\mathrm{c}}_{1}=[1,1]\), \(\bar{\Delta}^{\mathrm{c}}_{2}=[2,4]\), \(\bar{\Delta}^{\mathrm{c}}_{3}=[5,6]\), \(\bar{\Delta}^{\mathrm{c}}_{4}=[7,7]\); the second and the third are nontrivial. Consequently, the dual partition of rows and columns of \(Y^{\dagger}\) is given by \((\bar{\Delta}^{\mathrm{r}}_{1})^{\dagger}=[6,7]\), \((\bar{\Delta}^{\mathrm{r}}_{2})^{\dagger}=[3,5]\), \((\bar{\Delta}^{\mathrm{r}}_{3})^{\dagger}=[2,2]\), \((\bar{\Delta}^{\mathrm{r}}_{4})^{\dagger}=[1,1]\), and \((\bar{\Delta}^{\mathrm{c}}_{1})^{\dagger}=[7,7]\), \((\bar{\Delta}^{\mathrm{c}}_{2})^{\dagger}=[4,6]\), \((\bar{\Delta}^{\mathrm{c}}_{3})^{\dagger}=[2,3]\), \((\bar{\Delta}^{\mathrm{c}}_{4})^{\dagger}=[1,1]\). Thus, the \(Y\)-block defined by the edge \(\bar{3}\to\bar{4}\) in the lower part is the submatrix \(Y^{[5,7]}_{[1,5]}\), and the corresponding \(Y^{\dagger}\)-block is the submatrix \((Y^{\dagger})^{[4,7]}_{[1,5]}\).
Every maximal alternating path defines a pair of matrices glued from blocks defined above that correspond to horizontal edges of the path. There are two types of gluing: row-to-row gluing governed by the BD triple \(\mathbf{\Gamma}^{\mathrm{r}}\) and column-to-column gluing governed by the BD triple \(\mathbf{\Gamma}^{\mathrm{c}}\). The first situation occurs when we consider three consecutive edges in an alternating path such that the first of them is a horizontal edge \(i\to(n-i)\) in the upper part, the second one is an inclined edge \((n-i)\to k\) with \(k=\gamma^{\mathrm{r}}(n-i)\), and the third one is the horizontal edge \(k\to(n-k)\)
in the lower part. Assume that \(n-i\) belongs to an \(X\)-run \(\Delta_{j}^{\mathrm{r}}\) and \(k\) belongs to a \(Y\)-run \(\bar{\Delta}_{m}^{\mathrm{r}}\); as explained above, this means that \((\bar{\Delta}_{m}^{\mathrm{r}},(\bar{\Delta}_{m}^{\mathrm{r}})^{\dagger})= \gamma^{\mathrm{r}}(\Delta_{j}^{\mathrm{r}},(\Delta_{j}^{\mathrm{r}})^{\dagger})\). Note that each \(X\)-run defined by \(\boldsymbol{\Gamma}^{\mathrm{r}}\) contains a connected component of \(\Gamma_{1}^{\mathrm{r}}\), while each \(Y\)-run defined by \(\boldsymbol{\Gamma}^{\mathrm{r}}\) contains a connected component of \(\Gamma_{2}^{\mathrm{r}}\). If the restriction of \(\gamma^{\mathrm{r}}\) to this connected component is oriented we glue \(\Delta_{j}^{\mathrm{r}}\) to \(\bar{\Delta}_{m}^{\mathrm{r}}\) and \((\Delta_{j}^{\mathrm{r}})^{\dagger}\) to \((\bar{\Delta}_{m}^{\mathrm{r}})^{\dagger}\). If the restriction of \(\gamma^{\mathrm{r}}\) reverses the orientation, we glue \(\Delta_{j}^{\mathrm{r}}\) to \((\bar{\Delta}_{m}^{\mathrm{r}})^{\dagger}\) and \((\Delta_{j}^{\mathrm{r}})^{\dagger}\) to \(\bar{\Delta}_{m}^{\mathrm{r}}\).
For example, consider the path \(52\bar{3}\bar{4}\) in the BD graph shown in Fig. 1. The corresponding blocks were described above. The exit point of the \(X\)-block \(X^{[1,5]}\) belongs to \(\Delta_{1}^{\mathrm{r}}=[1,3]\), and the corresponding connected component of \(\Gamma_{1}^{\mathrm{r}}\) is \([1,2]\). The entry point of the \(Y\)-block \(Y_{[1,5]}^{[5,7]}\) belongs to \(\bar{\Delta}_{2}^{\mathrm{r}}=[3,5]\), and the corresponding connected component of \(\Gamma_{2}^{\mathrm{r}}\) is \([3,4]\). The map \(\gamma^{\mathrm{r}}\) on \([1,2]\) reverses the orientation, so \(\Delta_{1}^{\mathrm{r}}\) is glued to \((\bar{\Delta}_{2}^{\mathrm{r}})^{\dagger}=[3,5]\) and \((\Delta_{1}^{\mathrm{r}})^{\dagger}=[5,7]\) is glued to \(\bar{\Delta}_{2}^{\mathrm{r}}\). The resulting matrices are shown in Fig. 2. All entries outside the blocks are equal to zero. The numbers of the rows that are glued are indicated in the figure.
The column-to-column gluing occurs when we consider three consecutive edges in an alternating path such that the first of them is a horizontal edge \(i\to(n-i)\) in the lower part, the second one is an inclined edge \((n-i)\to k\) with \(n-i=\gamma^{\mathrm{c}}(k)\), and the third one is the horizontal edge \(k\to(n-k)\) in the upper part. Assume that \(n-i\) belongs to a \(Y\)-run \(\bar{\Delta}_{j}^{\mathrm{c}}\) and \(k\) belongs to an \(X\)-run \(\Delta_{m}^{\mathrm{c}}\); as explained above, this means that \((\bar{\Delta}_{j}^{\mathrm{c}},(\bar{\Delta}_{j}^{\mathrm{c}})^{\dagger})= \gamma^{\mathrm{c}}(\Delta_{m}^{\mathrm{c}},(\Delta_{m}^{\mathrm{c}})^{\dagger})\). If the restriction of \(\gamma^{\mathrm{c}}\) to the connected component of \(\Gamma_{1}^{\mathrm{c}}\) contained in \(\Delta_{m}^{\mathrm{c}}\) is oriented we glue \(\bar{\Delta}_{j}^{\mathrm{c}}\) to \(\Delta_{m}^{\mathrm{c}}\) and \((\bar{\Delta}_{j}^{\mathrm{c}})^{\dagger}\) to \((\Delta_{m}^{\mathrm{c}})^{\dagger}\). If the restriction of \(\gamma^{\mathrm{c}}\) reverses the orientation, we glue \(\bar{\Delta}_{j}^{\mathrm{c}}\) to \((\Delta_{m}^{\mathrm{c}})^{\dagger}\) and \((\bar{\Delta}_{j}^{\mathrm{c}})^{\dagger}\) to \(\Delta_{m}^{\mathrm{c}}\).
For example, consider the path \(\bar{5}\bar{2}34\) in the BD graph shown in Fig. 1. The exit point of the \(Y\)-block \(Y_{[1,5]}^{[2,7]}\) belongs to \(\bar{\Delta}_{2}^{\mathrm{c}}=[2,4]\), and the corresponding connected component of \(\Gamma_{2}^{\mathrm{c}}\) is \([2,3]\). The entrance point of the \(X\)-block \(X_{[5,7]}^{[1,5]}\) belongs to \(\Delta_{3}^{\mathrm{c}}=[3,5]\), and the corresponding connected component of \(\Gamma_{1}^{\mathrm{c}}\) is \([3,4]\). The map \(\gamma^{\mathrm{c}}\) on \([3,4]\) preserves the orientation, so \(\bar{\Delta}_{2}^{\mathrm{c}}\) is glued to \(\Delta_{3}^{\mathrm{c}}\) and \((\bar{\Delta}_{2}^{\mathrm{c}})^{\dagger}=[4,6]\) is glued to \((\Delta_{3}^{\mathrm{c}})^{\dagger}=[3,5]\). The resulting matrices are shown in Fig. 3. All entries outside the blocks are equal to zero. The numbers of the columns that are glued are indicated in the figure.
Figure 2. The pair of matrices corresponding to the path \(52\bar{3}\bar{4}\)

In general, the number of blocks in the obtained pair of matrices is equal to the number of horizontal edges in the alternating path. Clearly, the entrance point of the first block is its lower right corner, while the exit point of the last block is its upper left corner, so each of the obtained matrices is square. The pair of matrices defined by the alternating path \(\bar{2}\bar{5}61\bar{4}\bar{3}43\) is shown in Fig. 4.
Let \(\mathcal{L}\) and \(\mathcal{L}^{\dagger}\) be two matrices corresponding to a maximal alternating path in \(G_{\mathbf{\Gamma}^{\mathbf{r}},\mathbf{\Gamma}^{\mathbf{c}}}\). Every initial segment of this path that ends with a horizontal edge defines a pair of distinguished trailing submatrices of \(\mathcal{L}\) and \(\mathcal{L}^{\dagger}\) that are built of blocks that correspond to the horizontal edges in this segment. In the example shown in Fig. 4, the initial segment \(\bar{2}\bar{5}61\) defines a \(7\times 4\) submatrix in the lower right corner of the matrix on the left and an \(8\times 7\) submatrix in the lower right corner of the matrix on the right; both these submatrices consist of two blocks.
For every pair of matrices as above we consider the set of principal trailing minors with the following property: the upper left corner of the minor belongs to an \(X\)-block or to a \(Y\)-block. For example, for the pair of matrices in Fig. 2 these are the first three principal trailing minors of the second matrix and the last five of the first one. For the pair of matrices in Fig. 3 these are all principal trailing minors of the first matrix, while none of the second are used.
_Remark 4.1_.: The situation described in the last example, when one of the matrices in the pair is built solely of \(X\)- and \(Y\)-blocks, and the other one is built solely of \(X^{\dagger}\)- and \(Y^{\dagger}\)-blocks occurs for all pairs if both \(\mathbf{\Gamma}^{r}\) and \(\mathbf{\Gamma}^{c}\) are oriented. For this reason we only needed one block matrix for each alternating path in our constructions in [13].
Figure 4. The pair of matrices corresponding to the path \(\bar{2}\bar{5}61\bar{4}\bar{3}43\)
Figure 3. The pair of matrices corresponding to the path \(\bar{5}\bar{2}34\)
Consider a trailing minor as above, and assume that its upper left corner contains an entry \(x_{ij}\) (for \(i>j\)) or \(y_{ij}\) (for \(i<j\)). We denote this minor by \(\mathbf{f}_{ij}(X,Y)\) and its restriction to the diagonal \(X=Y=Z\) by \(f_{ij}(Z)\). Additionally, define \(f_{ii}(Z)\) as the trailing minor of \(Z\) in rows and columns \([i,n]\).
The following claim follows immediately from the construction.
**Proposition 4.2**.: _For any pair \((i,j)\in[1,n]\times[1,n]\) there exists a unique function \(\mathbf{f}_{ij}\). The upper left corner of the corresponding minor belongs to the \(X\)-block \(X(i,j)\) defined by the horizontal edge \((n-i+j)\to(i-j)\) in the upper part of the graph (for \(i>j\)), or to the \(Y\)-block \(Y(i,j)\) defined by the horizontal edge \((n+i-j)\to(j-i)\) in the lower part of the graph (for \(i<j\))._
It follows from Proposition 4.2 that we can unambiguously define \(\mathcal{L}(i,j)\) for \(i\neq j\) as the matrix corresponding to the maximal alternating path that goes through the horizontal edge \((n-i+j)\to(i-j)\) in the upper part of the graph (for \(i>j\)), or through the horizontal edge \((n+i-j)\to(j-i)\) in the lower part of the graph (for \(i<j\)); let \(\mathcal{L}^{\dagger}(i,j)\) stand for the other matrix defined by the same path. For example, the pair of matrices shown in Fig. 4 can be described as \(\mathcal{L}(4,1)\) and \(\mathcal{L}^{\dagger}(4,1)\), or \(\mathcal{L}(4,7)\) and \(\mathcal{L}^{\dagger}(4,7)\), or \(\mathcal{L}^{\dagger}(2,1)\) and \(\mathcal{L}(2,1)\), etc. It is clear from the definition that the matrices \(\mathcal{L}(i,j)\) and \(\mathcal{L}^{\dagger}(i,j)\) depend only on the difference \(i-j\).
Theorem 3.4 in [13] claims that in the oriented case the family \(\{f_{ij}\}\) forms a log-canonical coordinate system on \(SL_{n}\) with respect to the Poisson bracket defined by the pair \(\mathbf{\Gamma}^{\mathrm{r}}\), \(\mathbf{\Gamma}^{\mathrm{c}}\). The proof is very technical and occupies \(40\) pages. Below we deduce a generalization of this result, which covers both the oriented and the non-oriented cases, from Theorem 3.1.
### The basis
The goal of this Section is the proof of the following generalization of Theorem 3.4 in [13].
Define \(F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}=\{f_{ij}(Z):i, j\in[1,n],(i,j)\neq(1,1)\}\).
**Theorem 4.3**.: _Let \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\) be an aperiodic pair of BD triples, then the family \(F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) forms a log-canonical coordinate system on \(SL_{n}\) with respect to the Poisson bracket \(\{\cdot,\cdot\}_{r^{\Gamma^{\mathrm{r}}},r^{\Gamma^{\mathrm{c}}}}\)._
Proof.: Let \(F_{ij}(A)\) denote the trailing minor of \(A\) whose upper left corner contains the entry \(a_{ij}\). By Theorem 4.18 in [10] (see also Theorem 5.2 in [9]) functions \(F_{ij}\) are log-canonical with respect to the standard Sklyanin bracket. The proof of Theorem 4.3 is an immediate consequence of this fact, Theorem 3.1, and the following statement.
For an arbitrary pair \((i,j)\), \(i\neq j\), consider the pair of matrices \(\mathcal{L}(i,j)\) and \(\mathcal{L}^{\dagger}(i,j)\). We say that an exit point \((k,1)\) of an \(X\)-block (or an exit point \((1,m)\) of a \(Y\)-block) is _subordinate_ to \((i,j)\) if either it or the exit point \((k^{\dagger},1)\) of the dual \(X^{\dagger}\)-block (the exit point \((1,m^{\dagger})\) of the dual \(Y^{\dagger}\)-block, respectively) belongs to the main diagonal of the matrix \(\mathcal{L}(i,j)\) and lies below or to the right of the block \(X(i,j)\) (or \(Y(i,j)\)). For example, consider the entry \((3,6)\) that belongs to the block \(Y(3,6)\) in the left matrix in Fig. 4. The exit points subordinate to \((3,6)\) are \((2,1)\) and \((1,6)\) in the matrix on the right, since the exit points \((7,1)\) and \((1,3)\) of the corresponding dual blocks lie to the right of \(Y(3,6)\). The exit point \((1,4)\) is not subordinate to \((3,6)\).
Let \((k_{1},1),\ldots,(k_{s},1)\) and \((1,m_{1}),\ldots,(1,m_{t})\) be all exit points subordinate to \((i,j)\) (note that \(s-t\) is either \(0\) or \(\pm 1\)).
**Theorem 4.4**.: _Let \(h\) be the Poisson map defined in Theorem 3.1, and let \(f^{h}_{ij}(U)=f_{ij}\circ h(U)\), then_
\[f^{h}_{ij}(U)=F_{ij}(U)\prod_{p=1}^{s}F_{k_{p},1}(U)\prod_{q=1}^{t}F_{1,m_{q}}(U). \tag{4.1}\]
_Remark 4.5_.: For \(i=j\) (4.1) holds trivially as \(f^{h}_{ii}(U)=F_{ii}(U)\).
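For reference, the trailing minors \(F_{ij}\) entering (4.1) are easy to evaluate numerically; a minimal helper (a sketch, not the paper's code) is given below.

```python
import numpy as np

def trailing_minor(A, i, j):
    """F_{ij}(A): determinant of the largest square trailing submatrix of A
    whose upper left corner is the entry a_{ij} (indices are 1-based)."""
    n = A.shape[0]
    size = n - max(i, j) + 1
    return np.linalg.det(A[i - 1:i - 1 + size, j - 1:j - 1 + size])

# e.g. F_{1,1} is det(A) and F_{n,1} is the lower left entry
A = np.random.rand(5, 5)
assert np.isclose(trailing_minor(A, 1, 1), np.linalg.det(A))
assert np.isclose(trailing_minor(A, 5, 1), A[4, 0])
```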
Proof.: We start by stating an invariance property of the functions \(\mathbf{f}_{ij}\) that is a direct generalization of the invariance property (4.11) in [13].
**Proposition 4.6**.: _Let \(f=\mathbf{f}_{ij}\) for some \((i,j)\), then for any \(N_{+}\in\mathcal{N}_{+}\) and \(N_{-}\in\mathcal{N}_{-}\)_
\[f(N_{+}X(\boldsymbol{\gamma}^{\mathrm{c}})^{*}(N_{-}),\boldsymbol{\gamma}^{\mathrm{r}}(N_{+})YN_{-})=f(X,Y).\]
Proof.: It follows from the construction of the matrices described above that if an \(X\)-block is multiplied on the left by \(N_{+}\) then to keep the same value of \(f\) the \(Y\)-block immediately to the left of this \(X\)-block should be multiplied on the left by \(\boldsymbol{\gamma}^{\mathrm{r}}(N_{+})\). Similarly, if a \(Y\)-block is multiplied on the right by \(N_{-}\), the \(X\)-block immediately above it should be multiplied on the right by \((\boldsymbol{\gamma}^{\mathrm{c}})^{*}(N_{-})\).
_Remark 4.7_.: In fact, it follows from the proof that one can choose different matrices \(N_{+}\) for different \(X\)-blocks, and different matrices \(N_{-}\) for different \(Y\)-blocks.
Applying this Proposition with \(N_{+}=(H^{\mathrm{r}}\bar{V}^{\mathrm{r}})^{-1}\) and \(N_{-}=(\bar{V}^{\mathrm{c}}H^{\mathrm{c}})^{-1}\), we get
\[f_{ij}(Z)=\mathbf{f}_{ij}(Z,Z)=\mathbf{f}_{ij}(H^{\mathrm{r}}UH^{\mathrm{c}},H^{\mathrm{r}}UH^{\mathrm{c}})=\mathbf{f}_{ij}((\bar{V}^{\mathrm{r}})^{-1}U,U(\bar{V}^{\mathrm{c}})^{-1}). \tag{4.2}\]
The proof of Theorem 4.4 proceeds by induction on the number \(b\) of blocks in the submatrix of \(\mathcal{L}(i,j)\) that defines the minor \(f_{ij}\). If \(b=1\), that is, the upper left corner of the above minor lies in the lower right block of \(\mathcal{L}(i,j)\), then by (4.2) it is either an \(X\)-block for \(X=(\bar{V}^{\mathrm{r}})^{-1}U\) or a \(Y\)-block for \(Y=U(\bar{V}^{\mathrm{c}})^{-1}\). In both cases (4.1) holds trivially, since left multiplication by a matrix from \(\mathcal{N}_{+}\) and right multiplication by a matrix from \(\mathcal{N}_{-}\) do not change the minors in question.
For \(b>1\), consider the lower right block \(B\) of \(\mathcal{L}(i,j)\) and assume first that it is an \(X\)-block. The block Laplace expansion of \(f_{ij}\) by the last block column involves minors of \(X\) in the first \(c\) columns, where \(c\) is the width of \(B\). Clearly, such minors are not affected by multiplication of \(X\) on the right by a matrix in \(\mathcal{N}_{+}\). Let \([k,m]\) be the upper \(X\)-run of \(B\) that contains its exit point \((k_{s},1)\) (so that \(c=n-k_{s}+1\)). Consider once again the Gauss decomposition \(U=U_{-}U_{0,+}\) and refactor \(U_{-}\) as follows. Let \(s_{[p,q]}\) stand for the product of reflections \(s_{p}s_{p+1}\dots s_{q}\) for \(p<q\) and \(s_{p}s_{p-1}\dots s_{q}\) for \(p>q\). Besides, let \(\mathcal{N}_{-}(r)\) stand for the subset of matrices in \(\mathcal{N}_{-}\) such that the \(r\times r\) submatrix in the lower left corner is upper triangular. The factor \(U_{-}\) corresponds to the reduced expression \(s_{[1,n-1]}s_{[1,n-2]}\dots s_{[1,2]}s_{1}\) for the longest permutation \(w_{0}\). First, we use \(2\)-moves to rewrite it as
\[(s_{[1,m-1]}s_{[1,m-2]}\dots s_{[1,2]}s_{1})(s_{[m,1]}s_{[m+1,1]}\dots s_{[n-1, 1]}).\]
Next, we replace the left reduced expression above by its opposite
\[s_{[m-1,1]}s_{[m-1,2]}\dots s_{[m-1,m-2]}s_{m-1}.\]
Finally, we use \(2\)-moves to rewrite it as
\[(s_{[m-1,k]}s_{[m-1,k+1]}\dots s_{[m-1,m-2]}s_{m-1})(s_{[k-1,m-1]}s_{[k-2,m-1] }\dots s_{[1,m-1]}).\]
The corresponding factorization of \(U_{-}\) is \(U_{-}=U_{-}^{[k,m]}U_{L}U_{R}\) with \(U_{-}^{[k,m]}\in\mathcal{N}_{-}^{[k,m]}\), \(U_{L}\in\mathcal{N}_{-}^{[1,m]}(m-k+1)\), and \(U_{R}^{-1}\in\mathcal{N}_{-}(m)\) (the latter inclusion can be observed from the fact that the corresponding reduced word is \(s_{[1,n-1]}s_{[1,n-2]}\cdots s_{[1,m]}\)). Consequently, we get \(U=U_{-}^{[k,m]}U_{L}U_{R}U_{0,+}\), which can be further refactored as \(U=U_{-}^{[k,m]}U_{L}U_{0,+}^{\prime}U_{R}^{\prime}\) with \(U_{0,+}^{\prime}\in\mathcal{B}_{+}\) and \((U_{R}^{\prime})^{-1}\in\mathcal{N}_{-}(n-m)\).
Recall that \((\bar{V}^{\mathrm{r}})^{-1}\) has a block-diagonal structure, and its blocks correspond to \(X\)-runs defined by \(\mathbf{\Gamma}^{\mathrm{r}}\). The block that corresponds to the \(X\)-run \([k,m]\) is \((U_{-}^{[k,m]}w_{0}^{[k,m]})_{+}^{-1}\). Note that
\[(U_{-}^{[k,m]}w_{0}^{[k,m]})_{+}^{-1}U_{-}^{[k,m]}=(U_{-}^{[k,m]}w _{0}^{[k,m]})_{+}^{-1}(U_{-}^{[k,m]}w_{0}^{[k,m]})w_{0}^{[k,m]}=\\ (U_{-}^{[k,m]}w_{0}^{[k,m]})_{0,-}w_{0}^{[k,m]},\]
and hence
\[X=(\bar{V}^{r})^{-1}U=\hat{V}(U_{-}^{[k,m]}w_{0}^{[k,m]})_{0,-}w_{0}^{[k,m]}U_{L}U_{0,+}^{\prime}U_{R}^{\prime}.\]
Note that \(\hat{V}\) retains all blocks of \((\bar{V}^{r})^{-1}\) except for the one corresponding to \([k,m]\), so that
\[\hat{V}(U_{-}^{[k,m]}w_{0}^{[k,m]})_{0,-}w_{0}^{[k,m]}=\begin{bmatrix}\hat{V}_ {1}&0&0\\ 0&\hat{V}_{2}&0\\ 0&0&\hat{V}_{3}\end{bmatrix}\]
with \(\hat{V}_{1}\) upper triangular of size \((k-1)\times(k-1)\), \(\hat{V}_{2}\) lower anti-triangular of size \((m-k+1)\times(m-k+1)\), and \(\hat{V}_{3}\) upper triangular of size \((n-m)\times(n-m)\). Further,
\[U_{L}=\begin{bmatrix}U_{L}^{11}&0&0\\ U_{L}^{21}&U_{L}^{22}&0\\ 0&0&\mathbf{1}\end{bmatrix}\]
where \(U_{L}^{11}\) is of size \((k-1)\times(m-k+1)\), \(U_{L}^{22}\) is of size \((m-k+1)\times(m-k+1)\), and \(U_{L}^{21}\) is upper triangular of size \((m-k+1)\times(m-k+1)\) since \(U_{L}\in\mathcal{N}_{-}^{[1,m]}(m-k+1)\). Consequently
\[\hat{V}(U_{-}^{[k,m]}w_{0}^{[k,m]})_{0,-}w_{0}^{[k,m]}U_{L}=\begin{bmatrix} \star&\star&0\\ \hat{U}_{-}^{21}&\star&0\\ 0&0&\hat{U}_{-}^{33}\end{bmatrix} \tag{4.3}\]
where \(\hat{U}_{-}^{21}\) is lower anti-triangular of size \((m-k+1)\times(m-k+1)\), \(\hat{U}_{-}^{33}\) is upper triangular of size \((n-m)\times(n-m)\), and shapes of all submatrices denoted by \(\star\) are not relevant for this discussion.
To proceed further, we need the following technical statement.
**Lemma 4.8**.: _Let \(M\) be an \(n\times n\) matrix such that \(M^{-1}\in\mathcal{N}_{-}(r)\). Write \(M\) as \(M=\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix}\) where \(M_{12}\) is of size \(r\times r\), then \(C(M)=M_{12}-M_{11}M_{21}^{-1}M_{22}\) is upper triangular._
Proof.: Indeed, write \(M^{-1}\) as \(M^{-1}=\begin{bmatrix}\tilde{M}_{11}&0\\ \tilde{M}_{21}&\tilde{M}_{22}\end{bmatrix}\) where \(\tilde{M}_{21}\) is of size \(r\times r\), and hence upper triangular. Then \(M_{11}\tilde{M}_{11}+M_{12}\tilde{M}_{21}=\mathbf{1}\) and \(M_{21}\tilde{M}_{11}+M_{22}\tilde{M}_{21}=0\), so that \((M_{12}-M_{11}M_{21}^{-1}M_{22})\tilde{M}_{21}=\mathbf{1}\), and the claim follows.
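As a quick sanity check, outside the formal argument, Lemma 4.8 can be verified numerically on random data. The sketch below assumes only numpy; the sizes \(n\), \(r\) and the random seed are arbitrary choices, and a matrix in \(\mathcal{N}_{-}(r)\) is produced by making the lower left \(r\times r\) corner of a random lower unipotent matrix upper triangular.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 7, 3

# A random W in N_-(r): lower unipotent, with an upper triangular r x r lower left corner.
W = np.tril(rng.standard_normal((n, n)), -1) + np.eye(n)
W[n - r:, :r] = np.triu(W[n - r:, :r])

M = np.linalg.inv(W)                        # then M^{-1} = W lies in N_-(r)

# Block partition of M with M_12 of size r x r (rows 1..r, columns n-r+1..n).
M11, M12 = M[:r, :n - r], M[:r, n - r:]
M21, M22 = M[r:, :n - r], M[r:, n - r:]

C = M12 - M11 @ np.linalg.solve(M21, M22)   # C(M) = M_12 - M_11 M_21^{-1} M_22
print(np.allclose(np.tril(C, -1), 0))       # True: C(M) is upper triangular
```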
We apply this Lemma to the matrix \(M=U_{R}^{\prime}\) with \(r=m\), and get that \(C(U_{R}^{\prime})\) is an \(m\times m\) upper triangular matrix. Recall that multiplying \(X\) on the right by a matrix in \(\mathcal{N}_{+}\) does not affect \(f_{ij}\); we thus multiply \(X\) by \(K=\begin{bmatrix}\mathbf{1}&-M_{21}^{-1}M_{22}\\ 0&\mathbf{1}\end{bmatrix}\). Note that \(U_{R}^{\prime}K=\begin{bmatrix}\star&C(U_{R}^{\prime})\\ \star&0\end{bmatrix}\); multiplication by \(U_{0,+}^{\prime}\) on the left does not change the shape of the result, that is, the upper right \(m\times m\) submatrix remains upper triangular. Comparing this with (4.3) yields
\[XK=\begin{bmatrix}\star&\star&\star\\ \star&\tilde{X}&\star\\ \star&0&0\end{bmatrix}\]
where \(\tilde{X}\) is an \((m-k+1)\times(m-k+1)\) lower anti-triangular matrix and the submatrix in the lower left corner is of size \((n-m)\times(n-m)\). Consequently, any minor of \(XK\) in the first \(c\) columns and rows \(R\cup[m+1,n]\) for \(R\subset[k,m]\), \(|R|=c-n+m=m-k_{s}+1\), vanishes unless \(R=[k_{s},m]\). The corresponding minor is exactly \(F_{k_{s},1}(XK)=F_{k_{s},1}(X)=F_{k_{s},1}(U)\). In the Laplace expansion for \(f_{ij}\) by the last block column it is multiplied by the minor \(f_{ij}^{\prime}\) similar to \(f_{ij}\). It has \(b-1\) blocks: the block \(B\) is deleted and the previous \(Y\)- or \(Y^{\dagger}\)-block is truncated by deletion of the last \(m-k_{s}+1\) rows. All the exit points subordinate to \((i,j)\) remain the same except for \((k_{s},1)\) that disappears. So, by induction (4.1) holds for \(f_{ij}^{\prime}\) with \(s-1\) factors in the first product, hence it holds for \(f_{ij}\circ h(U)=f_{ij}^{\prime}\circ h(U)\cdot F_{k_{s},1}(U)\).
Assume now that the lower right block \(B\) of \(\mathcal{L}(i,j)\) is an \(X^{\dagger}\)-block. In this case the Laplace expansion by the last block column involves minors of \(X^{\dagger}\). By Jacobi's complementary minor formula for the minors of the adjugate matrix,
\[|X_{I}^{J}|=|(X^{\dagger})\overline{\frac{w_{0}J}{w_{0}\bar{I}}}|, \tag{4.4}\]
where bar stands for the complement and \(w_{0}\) moves each index \(p\) to \(n-p+1\). The sign \((-1)^{\Sigma I+\Sigma J}\) in Jacobi's formula is compensated by the conjugation by the signature matrix \(\mathbf{J}\). Let \([m^{\dagger},k^{\dagger}]\) be the upper \(X^{\dagger}\)-run of \(B\). The minors involved in the Laplace expansion lie in the first \(c^{\dagger}=n-k_{s}^{\dagger}+1\) columns and in rows \(R^{\dagger}\cup[k^{\dagger}+1,n]\) for \(R^{\dagger}\subset[m^{\dagger},k^{\dagger}]\) and \(|R^{\dagger}|=k^{\dagger}-k_{s}^{\dagger}+1\). By (4.4) such minors correspond bijectively to the minors of the \(X\)-block studied above since \(|R|+|R^{\dagger}|=m-k+1=k^{\dagger}-m^{\dagger}+1\). Consequently, all of them vanish except for the one that corresponds to \(R^{\dagger}=[k_{s}^{\dagger},k^{\dagger}]\), which is equal to \(F_{k_{s},1}(U)\).
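As an aside, not used in the argument, the classical fact behind (4.4) — Jacobi's relation between the minors of the adjugate matrix and the complementary minors of the original matrix — can be checked numerically. The sketch below assumes only numpy; the matrix size and the index sets \(I\), \(J\) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.standard_normal((n, n))
adjA = np.linalg.det(A) * np.linalg.inv(A)       # adjugate of A

I, J = (0, 2, 3), (1, 2, 5)                      # 0-based row and column sets of equal size k
k = len(I)
Ic = tuple(sorted(set(range(n)) - set(I)))       # complements
Jc = tuple(sorted(set(range(n)) - set(J)))

lhs = np.linalg.det(adjA[np.ix_(I, J)])
sign = (-1) ** (sum(I) + sum(J))                 # parity of Sigma I + Sigma J (0-based shift is even)
rhs = sign * np.linalg.det(A) ** (k - 1) * np.linalg.det(A[np.ix_(Jc, Ic)])
print(np.isclose(lhs, rhs))                      # True
```

For a unimodular matrix the factor \(\det(A)^{k-1}\) disappears, and, as noted above, the sign \((-1)^{\Sigma I+\Sigma J}\) is precisely what the conjugation by the signature matrix \(\mathbf{J}\) absorbs in (4.4).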
The case when the lower right block \(B\) of \(\mathcal{L}(i,j)\) is a \(Y\)-block is treated similarly to the case of an \(X\)-block. In this case \(U_{+}\) is refactored and \(Y\) is multiplied from the left by a lower triangular matrix \(K^{\prime}\) so that
\[K^{\prime}Y=\begin{bmatrix}\star&\star&\star\\ \star&\tilde{Y}&0\\ \star&\star&0\end{bmatrix}\]
where \(\tilde{Y}\) is a lower anti-triangular matrix whose size is equal to the size of the leftmost \(Y\)-run of \(B\), so that the only non-vanishing minor of \(B\) involved in the Laplace expansion by the last block row is \(F_{1,m_{t}}(U)\). The case when \(B\) is a \(Y^{\dagger}\)-block is treated via Jacobi's complementary minor formula exactly as above.
Note that the double product in the right hand side of (4.1) that defines the ratio \(f^{h}_{ij}(U)/F_{ij}(U)\) depends only on the difference \(i-j\); we denote it \(t_{i-j}(U)\). Consequently,
\[\frac{f^{h}_{i+1,j+1}(U)}{f^{h}_{ij}(U)}=\frac{F_{i+1,j+1}(U)}{F_{ij}(U)} \tag{4.5}\]
for \(1\leq i,j<n\). Further,
\[\begin{split} t_{i-n}(U)&=\begin{cases}f^{h}_{( \gamma^{\mathrm{r}})^{*}(i)+1,1}(U)&\text{if $i\in\Gamma^{\mathrm{r}}_{2}$,}\\ 1&\text{otherwise,}\end{cases}\\ t_{n-j}(U)&=\begin{cases}f^{h}_{1,\gamma^{\mathrm{c}}(j)+1}(U)&\text{if $ j\in\Gamma^{\mathrm{c}}_{1}$,}\\ 1&\text{otherwise.}\end{cases}\end{split} \tag{4.6}\]
If \((\gamma^{\mathrm{r}})^{*}\) keeps the orientation of the connected component of \(\Gamma^{\mathrm{r}}_{2}\) that contains \(i\), or, respectively, \(\gamma^{\mathrm{c}}\) keeps the orientation of the connected component of \(\Gamma^{\mathrm{c}}_{1}\) that contains \(j\), the above formulas follow immediately from (4.1). If the orientation is reversed, it is enough to note that by (4.4), each minor in the product that defines \(t_{i-j}(U)\) can be replaced by the corresponding minor of the dual block.
It follows from the proof of Theorem 4.4 that formulas similar to (4.1) are valid for certain other minors of matrices \(\mathcal{L}\) and \(\mathcal{L}^{\dagger}\) restricted to the diagonal \(X=Y=Z\). Slightly abusing notation, we will write \(\mathcal{L}\circ h(U)\) instead of \(\mathcal{L}(h(U),h(U))\), etc. In particular, let \(\mathcal{L}(i,1)\) be of size \(N\times N\), and let \(p\) be such that \(\mathcal{L}(i,1)_{pp}=x_{i1}\). Recall that this entry of \(\mathcal{L}(i,1)\) belongs to an \(X\)-block \(X^{[1,\beta]}_{[\alpha,n]}\) with \(\alpha=(i-1)_{-}+1\). Similarly, let \(\mathcal{L}^{\dagger}(i,1)\) be of size \(N^{\dagger}\times N^{\dagger}\), and let \(p^{\dagger}\) be such that \(\mathcal{L}^{\dagger}(i,1)_{p^{\dagger}p^{\dagger}}=x_{i^{\dagger}1}\), and let \((X^{\dagger})^{[1,\beta^{\dagger}]}_{[\alpha^{\dagger},n]}\) be the \(X^{\dagger}\)-block dual to \(X^{[1,\beta]}_{[\alpha,n]}\).
**Proposition 4.9**.: (i) _Let \(I\subset[\alpha,n]\) be an arbitrary subset of size \(n-i+1\), then_
\[\det\mathcal{L}(i,1)^{[p,N]}_{(I-i+p)\cup[n-i+1+p,N]}\circ h(U)=\det h^{\mathrm{ r}}(U)^{[1,n-i+1]}_{I}\cdot t_{i-1}(U), \tag{4.7}\]
_where \(I+\gamma\) denotes the shift of \(I\) by \(\gamma\)._
(ii) _Let \(I^{\dagger}\subset[\alpha^{\dagger},n]\) be an arbitrary subset of size \(n-i^{\dagger}+1\), then_
\[\det\mathcal{L}^{\dagger}(i,1)^{[p^{\dagger},N^{\dagger}]}_{(I^{\dagger}-i^{ \dagger}+p^{\dagger})\cup[n-i^{\dagger}+1+p^{\dagger},N^{\dagger}]}\circ h(U)= \det(h^{\mathrm{r}}(U)^{\dagger})^{[1,n-i^{\dagger}+1]}_{I^{\dagger}}\cdot t_{ i-1}(U). \tag{4.8}\]
Note that (4.7) for \(I=[i,n]\) coincides with (4.1) for \(j=1\). There are similar formulas for the minors of \(\mathcal{L}(1,j)\) and \(\mathcal{L}^{\dagger}(1,j)\), but we will not reproduce them here.
_Remark 4.10_.: Formulas (4.7) and (4.8) can be generalized even further. They remain valid if one replaces the top block in the left hand side with the corresponding block of an arbitrary \(n\times n\) matrix \(A\) and \(h^{\mathrm{r}}(U)\) in the right hand side with \(AH^{\mathrm{c}}(U)^{-1}\).
### The quiver
The goal of this Section is to describe the quiver \(Q_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) and to prove that the seed \(\Sigma=(F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}},Q_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}})\) defines a cluster structure \(\mathcal{C}_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) compatible with a Poisson bracket \(\{\cdot,\cdot\}_{r^{\mathbf{\Gamma}^{\mathrm{r}}},r^{\mathbf{\Gamma}^{\mathrm{c}}}}\). This provides a generalization of Theorem 3.19 in [13] that dealt only with oriented BD data. Moreover, the proof is much simpler than the one in [13]. It is based on Theorem 4.4 and avoids complicated calculations.
The quiver has \(n^{2}-1\) vertices labeled \((i,j)\). The function attached to a vertex \((i,j)\) is \(f_{ij}\). It is convenient to describe the quiver with an additional dummy frozen vertex \((1,1)\) that corresponds to \(f_{11}=|X|=1\). In fact, the latter quiver corresponds to the cluster structure on \(GL_{n}\) defined by \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\).
Recall first how the quiver \(Q_{\varnothing,\varnothing}\) looks. This quiver corresponds to the standard cluster structure built for the open double Bruhat cell in [2] and extended to the whole group in [11]. All vertices in the first row and column are frozen, all other vertices are mutable. The quiver \(Q_{\varnothing,\varnothing}\) for \(SL_{7}\) is presented in Fig. 5.
The quiver \(Q_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) is obtained from \(Q_{\varnothing,\varnothing}\) in the following way. For every row \(X\)-run \([k,m]\), the vertex \((k,1)\) remains frozen, and all other vertices \((k+1,1),\ldots,(m,1)\) become mutable. If \(\gamma^{\mathrm{r}}\) preserves the orientation of the connected component \([k,m-1]\in\Gamma_{1}^{\mathrm{r}}\) then the following two paths are added: \((m,1)\rightarrow(m-1,1)\rightarrow\cdots\rightarrow(k,1)\) and \((\gamma^{\mathrm{r}}(k),n)\rightarrow(k+1,1)\rightarrow(\gamma^{\mathrm{r}}( k+1),n)\rightarrow(k+2,1)\rightarrow\cdots\rightarrow(\gamma^{\mathrm{r}}(m-1),n) \rightarrow(m,1)\rightarrow(\gamma^{\mathrm{r}}(m-1)+1,n)\). If \(\gamma^{\mathrm{r}}\) reverses the orientation of the connected component \([k,m-1]\) then the following two paths are added: \((k+1,1)\rightarrow(k,1)\) and \((\gamma^{\mathrm{r}}(m-1),n)\rightarrow(m,1)\rightarrow(\gamma^{\mathrm{r}}( m-2),n)\rightarrow(m-1,1)\rightarrow\cdots\rightarrow(\gamma^{\mathrm{r}}(k),n) \rightarrow(k+1,1)\rightarrow(\gamma^{\mathrm{r}}(k)+1,n)\).
Similarly, for every column \(Y\)-run \([p,q]\), the vertex \((1,p)\) remains frozen, and all other vertices \((1,p+1),\ldots,(1,q)\) become mutable. If \((\gamma^{\mathrm{c}})^{*}\) preserves the orientation of the connected component \([p,q-1]\in\Gamma_{2}^{\mathrm{c}}\) that corresponds to the run \([p,q]\) then the following two paths are added: \((1,q)\rightarrow(1,q-1)\rightarrow\cdots\rightarrow(1,p)\) and \((n,(\gamma^{\mathrm{c}})^{*}(p))\rightarrow(1,p+1)\rightarrow(n,(\gamma^{ \mathrm{c}})^{*}(p+1))\rightarrow(1,p+2)\rightarrow\cdots\rightarrow(n,( \gamma^{\mathrm{c}})^{*}(q-1))\rightarrow(1,q)\rightarrow(n,(\gamma^{\mathrm{c }})^{*}(q-1)+1)\). If \((\gamma^{\mathrm{c}})^{*}\) reverses the orientation of the connected component \([p,q-1]\) then the following two paths are added: \((1,p+1)\rightarrow(1,p)\) and \((n,(\gamma^{\mathrm{c}})^{*}(q-1))\rightarrow(1,q)\rightarrow(n,(\gamma^{ \mathrm{c}})^{*}(q-2))\rightarrow(1,q-1)\rightarrow\cdots\rightarrow(n,( \gamma^{\mathrm{c}})^{*}(p))\rightarrow(1,p+1)\rightarrow(n,(\gamma^{ \mathrm{c}})^{*}(p)+1)\).
Consider our running example. As explained above, \(Y\)-runs defined by \(\gamma^{\mathrm{c}}\) are \([1,1]\), \([2,4]\), \([5,6]\), and \([7,7]\). Consequently, vertices \((1,1)\), \((1,2)\), \((1,5)\), and \((1,7)\)
remain frozen and vertices \((1,3)\), \((1,4)\), and \((1,6)\) become mutable. Further, \((\gamma^{\rm c})^{*}\) preserves the orientation of all connected components, hence the following paths are added: \((1,4)\to(1,3)\to(1,2)\) and \((7,3)\to(1,3)\to(7,4)\to(1,4)\to(7,5)\) for the component \([2,3]\) and \((1,6)\to(1,5)\) and \((7,6)\to(1,6)\to(7,7)\) for the component \([5,5]\). Similarly, \(X\)-runs defined by \(\gamma^{\rm r}\) are \([1,3]\), \([4,4]\), \([5,6]\), and \([7,7]\). Consequently, vertices \((4,1)\), \((5,1)\), and \((7,1)\) remain frozen and vertices \((2,1)\), \((3,1)\), and \((6,1)\) become mutable. Further, \(\gamma^{\rm r}\) reverses the orientation of the connected component \([1,2]\) and (trivially) preserves the orientation of the connected component \([5,5]\), hence the following paths are added: \((2,1)\to(1,1)\) and \((3,7)\to(3,1)\to(4,7)\to(2,1)\to(5,7)\) for the component \([1,2]\) and \((6,1)\to(5,1)\) and \((1,7)\to(6,1)\to(2,7)\) for the component \([5,5]\). The resulting graph is presented in Fig. 6. Vertices shown as dotted circles are copies of the existing vertices and are placed to make the figure easier to comprehend. The edges of additional paths are shown by paler arrows.
**Theorem 4.11**.: _Let \((\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c})\) be an aperiodic pair of BD triples. Then the seed \((F_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}},Q_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}})\) defines a cluster structure compatible with the Poisson bracket \(\{\cdot,\cdot\}_{r^{\mathbf{\Gamma}^{\rm c}},r^{\mathbf{\Gamma}^{\rm r}}}\) on \(SL_{n}\) for any pair of R-matrices \(r^{\mathbf{\Gamma}^{\rm c}}\), \(r^{\mathbf{\Gamma}^{\rm r}}\) from the BD classes defined by \(\mathbf{\Gamma}^{\rm c}\), \(\mathbf{\Gamma}^{\rm r}\), respectively._
Proof.: The proof is based on the characterization of pairs of compatible Poisson and cluster structures given in [9] and on Theorems 3.1 and 4.4 above.
Recall the definition of cluster \(y\)-variables associated with a seed \((\mathcal{F}=(F_{v})_{v\in Q},Q)\) (see [9, 8]): for any mutable \(v\in Q\)
\[y_{v}=\frac{\prod\limits_{v\to u}F_{u}}{\prod\limits_{w\to v}F_{w}}, \tag{4.9}\]
where \(\to\) means an arrow in the quiver \(Q\).
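For instance, if the only arrows through a mutable vertex \(v\) are \(v\to u_{1}\), \(v\to u_{2}\), and \(w\to v\), then (4.9) gives \(y_{v}=F_{u_{1}}F_{u_{2}}/F_{w}\).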
For \(i,j\in[2,n]\), let \(y_{ij}\) and \(Y_{ij}\) be the \(y\)-variables that correspond to the vertex \((i,j)\) in the seeds \((F_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}},Q_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}})\) and \((F_{\varnothing,\varnothing},Q_{\varnothing,\varnothing})\), respectively; recall that \(F_{\varnothing,\varnothing}=(F_{ij})_{i,j=1}^{n}\).
**Lemma 4.12**.: _For any \(i,j\in[2,n]\),_
\[y_{ij}^{h}(U)=Y_{ij}(U), \tag{4.10}\]
_where \(y_{ij}^{h}(U)=y_{ij}\circ h(U)\)._
Proof.: For \(i,j\in[2,n-1]\) the neighborhoods of the vertex \((i,j)\) in \(Q_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}}\) and \(Q_{\varnothing,\varnothing}\) are identical, and
\[y_{ij}^{h}(U)=\frac{f_{i+1,j+1}^{h}(U)}{f_{i-1,j-1}^{h}(U)}\cdot\frac{f_{i-1,j }^{h}(U)}{f_{i,j+1}^{h}(U)}\cdot\frac{f_{i,j-1}^{h}(U)}{f_{i+1,j}^{h}(U)}=Y_{ ij}(U)\]
by (4.1) and (4.5).
For \(j\in[2,n-1]\) the neighborhoods of the vertex \((n,j)\) in \(Q_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}}\) and \(Q_{\varnothing,\varnothing}\) are identical unless \(j\) or \(j-1\) belongs to \(\Gamma_{1}^{\rm c}\), in which case the former contains one or two additional vertices. No matter which case occurs, we can use (4.5) and the second formula in (4.6) to write
\[y_{nj}^{h}(U) =\frac{t_{n-j}(U)}{f_{n-1,j-1}^{h}(U)}\cdot\frac{f_{n-1,j}^{h}(U) }{f_{n,j+1}^{h}(U)}\cdot\frac{f_{n,j-1}^{h}(U)}{t_{n-j+1}(U)}\] \[=\frac{1}{F_{n-1,j-1}(U)}\cdot\frac{F_{n-1,j}(U)}{F_{n,j+1}(U)} \cdot F_{n,j-1}(U)=Y_{nj}(U).\]
A similar argument based on the first formula in (4.6) applies in the case of the vertex \((i,n)\), \(i\in[2,n-1]\). To treat the vertex \((n,n)\) we use both arguments.
For brevity, in what follows we write \(\{\cdot,\cdot\}\) for \(\{\cdot,\cdot\}_{r_{\Gamma^{c}}^{\varnothing},r_{\Gamma^{c}}^{\varnothing}}\) and \(\{\cdot,\cdot\}_{\mathbf{\Gamma}}\) for \(\{\cdot,\cdot\}_{r^{\Gamma^{c}},r^{\Gamma^{r}}}\). By Theorem 4.5 in [10], to prove Theorem 4.11 it suffices to check relation
\[\{\bar{y}_{ij},\bar{f}_{\hat{\imath}\hat{\jmath}}\}=\sum_{(u,v)\xrightarrow{\Gamma}(i,j)}\{\bar{f}_{uv},\bar{f}_{\hat{\imath}\hat{\jmath}}\}_{\mathbf{\Gamma}}-\sum_{(i,j)\xrightarrow{\Gamma}(u,v)}\{\bar{f}_{uv},\bar{f}_{\hat{\imath}\hat{\jmath}}\}_{\mathbf{\Gamma}}=\begin{cases}\lambda&\text{ for }(\hat{\imath},\hat{\jmath})=(i,j),\\ 0&\text{ otherwise}\end{cases} \tag{4.11}\]
for all pairs \((i,j),(\hat{\imath},\hat{\jmath})\) such that \(f_{ij}\) is not frozen, where \(\lambda\neq 0\) is fixed, \(\xrightarrow{\Gamma}\) is an arrow in \(Q_{\mathbf{\Gamma}^{\rm r},\mathbf{\Gamma}^{\rm c}}\), and the bar over a function stands for the logarithm of this function.
For \(i,j\in[2,n]\), Theorems 3.1 and 4.4 together with Lemma 4.12 imply
\[\{\bar{y}_{ij},\bar{f}_{\hat{\imath}\hat{\jmath}}\}_{\mathbf{\Gamma}}=\{\bar{Y}_{ij},\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}=\{\bar{Y}_{ij},\bar{F}_{\hat{\imath}\hat{\jmath}}\}=\begin{cases}1&\text{ for }(i,j)=(\hat{\imath},\hat{\jmath}),\\ 0&\text{ otherwise}.\end{cases}\]
Here the second equality uses the fact that \(t_{\hat{\imath}-\hat{\jmath}}(U)\) is a product of frozen variables for the standard cluster structure defined by the seed \((F_{\varnothing,\varnothing},Q_{\varnothing,\varnothing})\), and therefore has a zero Poisson bracket with \(Y_{ij}(U)\). The third equality follows from the compatibility of the log-canonical basis \((F_{ij})\) with the standard cluster structure, see the proof of Theorem 4.18 in [10, p.98]. Note that the bracket in this theorem has the opposite sign, which is compensated by the opposite direction of the quiver, see [10, p.32].
Consider now the case \(1<i<n\), \(j=1\). Assume first that \(i-1\in\Gamma_{1}^{\mathrm{r}}\), \(i\notin\Gamma_{1}^{\mathrm{r}}\), and \(\gamma^{\mathrm{r}}\) preserves the orientation of the connected component of \(\Gamma_{1}^{\mathrm{r}}\) that contains \(i-1\). In this subcase the index set in the first sum in (4.11) consists of the vertices \((\gamma^{\mathrm{r}}(i-1),n)\) and \((i,2)\), and the index set of the second sum consists of the vertices \((i+1,2)\), \((i-1,1)\), and \((\gamma^{\mathrm{r}}(i-1)+1,n)\). Further, \(\bar{t}_{\gamma^{\mathrm{r}}(i-1)-n}=\bar{F}_{i1}+\bar{t}_{i-1}\), and \(\bar{t}_{\gamma^{\mathrm{r}}(i-1)+1-n}=0\). Consequently, the left hand side of (4.11) boils down to
\[\left\{\bar{F}_{i1}-\bar{F}_{i-1,1}+\bar{F}_{i2}-\bar{F}_{i+1,2}+\bar{F}_{\gamma^{\mathrm{r}}(i-1),n}-\bar{F}_{\gamma^{\mathrm{r}}(i-1)+1,n},\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\right\}=\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}.\]
Assume now that \(i-1,i\in\Gamma_{1}^{\mathrm{r}}\), and \(\gamma^{\mathrm{r}}\) preserves the orientation of the connected component of \(\Gamma_{1}^{\mathrm{r}}\) that contains \(i-1\), so that \(\gamma^{\mathrm{r}}(i-1)+1=\gamma^{\mathrm{r}}(i)\). In this subcase the vertex \((i+1,1)\) is added to the index set in the first sum in (4.11). Further, the condition \(\bar{t}_{\gamma^{\mathrm{r}}(i-1)+1-n}=0\) is replaced by \(\bar{t}_{\gamma^{\mathrm{r}}(i)-n}=\bar{F}_{i+1,1}+\bar{t}_{i}\). Consequently, the left hand side of (4.11) is given by the same expression \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\) as before.

Assume next that \(i-1\in\Gamma_{1}^{\mathrm{r}}\), \(i-2\notin\Gamma_{1}^{\mathrm{r}}\) and \(\gamma^{\mathrm{r}}\) reverses the orientation of the connected component of \(\Gamma_{1}^{\mathrm{r}}\) that contains \(i-1\). In this subcase the index sets for both sums in (4.11) are the same as in the first subcase, and the tails \(t,\bar{t}\) satisfy the same conditions. Consequently, the left hand side of (4.11) is given by the same expression \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\) as before.

Finally, assume that \(i-2,i-1\in\Gamma_{1}^{\mathrm{r}}\) and \(\gamma^{\mathrm{r}}\) reverses the orientation of the connected component of \(\Gamma_{1}^{\mathrm{r}}\) that contains \(i-1\), so that \(\gamma^{\mathrm{r}}(i-1)+1=\gamma^{\mathrm{r}}(i-2)\). In this subcase the vertex \((i-1,1)\) is deleted from the index set in the second sum in (4.11). Further, the condition \(\bar{t}_{\gamma^{\mathrm{r}}(i-1)+1-n}=0\) is replaced by \(\bar{t}_{\gamma^{\mathrm{r}}(i-2)-n}=\bar{F}_{i-1,1}+\bar{t}_{i-2}\). Consequently, the left hand side of (4.11) is given by the same expression \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\) as before.
To evaluate \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\), we start by studying the bracket \(\{\cdot,\cdot\}_{0}=\{\cdot,\cdot\}_{r^{\varnothing}_{\varnothing},r^{\varnothing}_{\varnothing}}\) where \(r^{\varnothing}_{\varnothing}\) corresponds to \(R^{\varnothing}_{0}=\frac{1}{2}\pi_{\ast}\). For an arbitrary \(i\in[1,n]\) and a subset \(I\subset[1,n]\) define
\[\mathrm{sign}(i-I)=\begin{cases}-1&\text{ if }i\text{ is less than the minimal element in }I,\\ 0&\text{ if }i\in I,\\ 1&\text{ if the maximal element in }I\text{ is less than }i;\end{cases}\]
otherwise \(\mathrm{sign}(i-I)\) is not defined.
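For instance, for \(n=7\) and the interval \(I=[3,5]\) one has \(\mathrm{sign}(1-I)=\mathrm{sign}(2-I)=-1\), \(\mathrm{sign}(i-I)=0\) for \(i\in[3,5]\), and \(\mathrm{sign}(6-I)=\mathrm{sign}(7-I)=1\), while for a set that is not an interval, such as \(I=\{2,5\}\), the value \(\mathrm{sign}(3-I)\) is not defined.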
**Lemma 4.13**.: _If \(\mathrm{sign}(i-I)\) and \(\mathrm{sign}(j-J)\) are defined and satisfy the inequality \(|\mathrm{sign}(i-I)+\mathrm{sign}(j-J)|\leq 1\) then_
\[\{\bar{u}_{i\bar{j}},|\bar{U}_{I}^{J}|\}_{0}=\frac{1}{2}\left(\mathrm{sign}(i -I)+\mathrm{sign}(j-J)\right).\]
Proof.: Follows immediately from [10, equation (8.21)] and [10, Lemma 4.7].
Further, for an arbitrary pair of functions \(f_{1},f_{2}\) we have \(\Delta(f_{1},f_{2})=\{f_{1},f_{2}\}-\{f_{1},f_{2}\}_{0}=\langle S^{\mathbf{\Gamma}^{\mathbf{c}}}\pi_{\bullet}\nabla^{L}f_{1},\nabla^{L}f_{2}\rangle-\langle S^{\mathbf{\Gamma}^{\mathbf{r}}}\pi_{\bullet}\nabla^{R}f_{1},\nabla^{R}f_{2}\rangle\). A straightforward computation gives \(\Delta(\bar{u}_{ij},\bar{u}_{kl})=s_{lj}^{\mathbf{\Gamma}^{\mathbf{c}}}-s_{ki}^{\mathbf{\Gamma}^{\mathbf{r}}}\), hence
\[\Delta(\overline{\det U_{I}^{J}},\overline{\det U_{I^{\prime}}^{J^{\prime}}}) =\sum_{i\in J,j\in J^{\prime}}s_{ij}^{\mathbf{\Gamma}^{\mathbf{c}}}-\sum_{i\in I,j\in I^{\prime}}s_{ij}^{\mathbf{\Gamma}^{\mathbf{r}}}. \tag{4.12}\]
It follows from Lemma 4.13 that
\[\{\bar{F}_{i1}-\bar{F}_{i-1,1},\bar{F}_{\hat{\imath}\hat{\jmath}}\}_{0}=\begin{cases}\frac{1}{2}&\text{for $2\leq\hat{\imath}\leq i-1$, $n-i+3\leq\hat{\jmath}\leq n-i+1+\hat{\imath}$},\\ -\frac{1}{2}&\text{for $i\leq\hat{\imath}\leq n$, $\hat{\imath}-i+2\leq\hat{\jmath}\leq n-i+2$},\\ 0&\text{otherwise}.\end{cases}\]
Further,
\[\{\bar{F}_{i2}-\bar{F}_{i+1,2},\bar{F}_{\hat{\imath}\hat{\jmath}}\}_{0}=\begin{cases}\frac{1}{2}&\text{for $i+1\leq\hat{\imath}\leq n$, $\hat{\imath}-i+2\leq\hat{\jmath}\leq n-i+2$},\\ &\text{or $(\hat{\imath},\hat{\jmath})=(i,1)$, or $(\hat{\imath},\hat{\jmath})=(1,n-i+2)$},\\ -\frac{1}{2}&\text{for $3\leq\hat{\imath}\leq i$, $n-i+3\leq\hat{\jmath}\leq n-i+\hat{\imath}$},\\ 0&\text{otherwise},\end{cases}\]
and
\[\{\bar{F}_{\gamma^{\mathrm{r}}(i-1),n}-\bar{F}_{\gamma^{\mathrm{r}}(i-1)+1,n},\bar{F}_{\hat{\imath}\hat{\jmath}}\}_{0}=\begin{cases}-\frac{1}{2}&\text{for $\hat{\imath}=\gamma^{\mathrm{r}}(i-1)+1$}\\ &\text{or $1\leq\hat{\imath}\leq\gamma^{\mathrm{r}}(i-1)$, $\hat{\jmath}=n-\gamma^{\mathrm{r}}(i-1)+\hat{\imath}$},\\ 0&\text{otherwise}.\end{cases}\]
Consequently, \(\big\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}\big\}_{0}\) vanishes if \((\hat{\imath},\hat{\jmath})\) does not belong to the rows \(\hat{\imath}=i\) and \(\hat{\imath}=\gamma^{\mathrm{r}}(i-1)+1\) or to the diagonals \(\hat{\jmath}-\hat{\imath}=n-\gamma^{\mathrm{r}}(i-1)\) and \(\hat{\jmath}-\hat{\imath}=n-i+1\). Both rows and the first of the diagonals contribute \(-\frac{1}{2}\), the second diagonal contributes \(\frac{1}{2}\). Therefore, for \(i-1>\gamma^{\mathrm{r}}(i-1)\) the second diagonal intersects the row \(\hat{\imath}=\gamma^{\mathrm{r}}(i-1)+1\) and the contributions cancel at \((\gamma^{\mathrm{r}}(i-1)+1,\gamma^{\mathrm{r}}(i-1)+i-n)\), while for \(i-1<\gamma^{\mathrm{r}}(i-1)\) the first diagonal intersects the row \(\hat{\imath}=i\) and the contributions at \((i,n+i-\gamma^{\mathrm{r}}(i-1))\) add to \(-1\). Finally, the value of the bracket at \((i,1)\) equals \(1/2\).
To compute \(\Delta(\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}})\) note that the column sets for the minors involved with the positive sign are \([1,n-i+1]\), \([2,n-i+2]\), and \([n,n]\), while for the minors involved with the negative sign they are \([1,n-i+2]\), \([2,n-i+1]\), and \([n,n]\). Consequently, the contribution of the elements of \(S^{\mathbf{\Gamma}^{\mathbf{c}}}\) in (4.12) vanishes. The row sets for the minors involved with the positive sign are \([i,n]\), \([i,n]\), and \([\gamma^{\mathrm{r}}(i-1),\gamma^{\mathrm{r}}(i-1)]\), while for the minors involved with the negative sign they are \([i-1,n]\), \([i+1,n]\), and \([\gamma^{\mathrm{r}}(i-1)+1,\gamma^{\mathrm{r}}(i-1)+1]\). It follows from (4.12) that

\[\Delta(\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}})=\sum_{l}\left(s_{i-1,l}^{\mathbf{\Gamma}^{\mathbf{r}}}-s_{il}^{\mathbf{\Gamma}^{\mathbf{r}}}-s_{\gamma^{\mathrm{r}}(i-1),l}^{\mathbf{\Gamma}^{\mathbf{r}}}+s_{\gamma^{\mathrm{r}}(i-1)+1,l}^{\mathbf{\Gamma}^{\mathbf{r}}}\right)\]

where \(l\) belongs to the row set of the minor that defines \(F_{\hat{\imath}\hat{\jmath}}\). Recall that \(S^{\mathbf{\Gamma}^{\mathbf{r}}}\) is skew symmetric and \(S^{\mathbf{\Gamma}^{\mathbf{r}}}(1-\gamma^{\mathrm{r}})h_{\alpha}=\frac{1}{2}(1+\gamma^{\mathrm{r}})h_{\alpha}\), hence

\[s_{i-1,l}^{\mathbf{\Gamma}^{\mathbf{r}}}-s_{il}^{\mathbf{\Gamma}^{\mathbf{r}}}-s_{\gamma^{\mathrm{r}}(i-1),l}^{\mathbf{\Gamma}^{\mathbf{r}}}+s_{\gamma^{\mathrm{r}}(i-1)+1,l}^{\mathbf{\Gamma}^{\mathbf{r}}}=\begin{cases}\frac{1}{2}&\text{for $l=i$ or $l=\gamma^{\mathrm{r}}(i-1)+1$},\\ -\frac{1}{2}&\text{for $l=i-1$ or $l=\gamma^{\mathrm{r}}(i-1)$},\\ 0&\text{otherwise}.\end{cases}\]
Consequently, \(\Delta(\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}})\) vanishes if \((\hat{\imath},\hat{\jmath})\) does not belong to the same rows \(\hat{\imath}=i\) and \(\hat{\imath}=\gamma^{\mathrm{r}}(i-1)+1\) or to the same diagonals \(\hat{\jmath}-\hat{\imath}=n-\gamma^{\mathrm{r}}(i-1)\) and \(\hat{\jmath}-\hat{\imath}=n-i+1\). This time both rows contribute \(\frac{1}{2}\), and both diagonals contribute \(-\frac{1}{2}\). For \(i-1>\gamma^{\mathrm{r}}(i-1)\) the second diagonal intersects the row \(\hat{\imath}=\gamma^{\mathrm{r}}(i-1)+1\) and the contributions cancel at \((\gamma^{\mathrm{r}}(i-1)+1,\gamma^{\mathrm{r}}(i-1)+i-n)\), while for \(i-1<\gamma^{\mathrm{r}}(i-1)\) the first diagonal intersects the row \(\hat{\imath}=i\) and the contributions at \((i,n+i-\gamma^{\mathrm{r}}(i-1))\) add to \(1\).

Combining this result with the previous computations for the bracket \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}\}_{0}\) we see that \(\big\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}\big\}\) equals \(1\) for \((\hat{\imath},\hat{\jmath})=(i,1)\), \(-1\) on the diagonal \(\hat{\jmath}-\hat{\imath}=n-\gamma^{\mathrm{r}}(i-1)\) and vanishes otherwise. Consequently, \(\{\Phi,\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\) equals \(1\) on the diagonal \(\hat{\jmath}-\hat{\imath}=n-\gamma^{\mathrm{r}}(i-1)\) (those \((\hat{\imath},\hat{\jmath})\) for which \((i,1)\) is subordinate and \((1,n-\gamma^{\mathrm{r}}(i-1)+1)\) is not subordinate) and vanishes otherwise. Thus, \(\{\Phi,\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\}\) equals \(1\) for \((\hat{\imath},\hat{\jmath})=(i,1)\) and vanishes otherwise, which completes the verification of (4.11) in this case.
The case \(i=1\), \(1<j<n\) is treated along the same lines. In this case the left hand side in (4.11) boils down to
\[\left\{\bar{F}_{1j}-\bar{F}_{1,j-1}+\bar{F}_{2j}-\bar{F}_{2,j+1}+\bar{F}_{n,(\gamma^{\mathrm{c}})^{*}(j-1)}-\bar{F}_{n,(\gamma^{\mathrm{c}})^{*}(j-1)+1},\bar{F}_{\hat{\imath}\hat{\jmath}}+\bar{t}_{\hat{\imath}-\hat{\jmath}}\right\}.\]
The latter is treated in a similar way as above. In this case the contribution of the elements of \(S^{\Gamma^{\mathrm{r}}}\) vanishes, and the required result follows from \(S^{\Gamma^{\mathrm{c}}}((\gamma^{\mathrm{c}})^{*}-1)h_{\alpha}=\frac{1}{2}(1+ (\gamma^{\mathrm{c}})^{*})h_{\alpha}\) and the skew symmetry of \(S^{\Gamma^{\mathrm{c}}}\).
Cases \(i=n\), \(j=1\) and \(i=1\), \(j=n\) are treated similarly taking into account \(\bar{t}_{n-1}=\bar{F}_{1,\gamma^{\mathrm{c}}(1)+1}+\bar{t}_{-\gamma^{\mathrm{ c}}(1)}\) for \(1\in\Gamma^{\mathrm{c}}_{1}\) and \(\bar{t}_{1-n}=\bar{F}_{(\gamma^{\mathrm{r}})^{*}(1)+1,1}+\bar{t}_{(\gamma^{ \mathrm{r}})^{*}(1)}\) for \(1\in\Gamma^{\mathrm{r}}_{2}\).
### Regularity
Recall that a cluster structure in the field of rational functions on a quasi-affine variety is called _regular_ if every variable in every cluster is a regular function. By [13, Proposition 3.11], to prove regularity it is enough to exhibit a regular cluster such that all adjacent clusters are regular as well. The goal of this section is to extend the regularity result of Theorem 6.1 in [13] to the general case of an aperiodic pair \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\).
**Theorem 4.14**.: _For any mutable cluster variable \(f_{ij}\in F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\), the adjacent variable \(f^{\prime}_{ij}\) is a regular function on \(SL_{n}\)._
Proof.: We start with the following auxiliary statement. Assume that \(i-1\in\Gamma^{\mathrm{r}}_{1}\) and that \(\gamma^{\mathrm{r}}\) reverses the orientation of the connected component of \(\Gamma^{\mathrm{r}}_{1}\) that contains \(i-1\). Let this component be \([i-1-s,i-1+t]\), \(s+t>0\). Consider the pair of dual matrices \(\mathcal{L}(i,1)\) and \(\mathcal{L}^{\dagger}(i,1)\) restricted to the diagonal \(X=Y\). Abusing notation, we denote them by the same symbols \(\mathcal{L}(i,1)\) and \(\mathcal{L}^{\dagger}(i,1)\). This should not lead to confusion since from now on we will only deal with matrices subject to this restriction.
Denote by \(M\) and \(M^{\dagger}\) the pair of square trailing submatrices of \(\mathcal{L}(i,1)\) and \(\mathcal{L}^{\dagger}(i,1)\), respectively, such that the entry in the upper left corner of \(M\) is \(x_{i1}\), and the entry in the upper left corner of \(M^{\dagger}\) is \(x_{i^{\dagger}1}^{\dagger}\); recall that by definition, \(f_{i1}=\det M\). Let \(r\) denote the size of \(M\) and \(r^{\dagger}\) denote the size of \(M^{\dagger}\). Note that the first row of \(M\) is an initial segment of the row \(X_{i}\). For \(1\leq j\leq s\) and \(0\leq k\leq t-1\) define an \(r\times r\) matrix \(M(j,k)\) via deleting row \(k+1\) from \(M\) and adding the corresponding segment of row \(X_{i-j}\) on top of the obtained matrix. Similarly, the first row of \(M^{\dagger}\) is an initial segment of the row \(X_{i^{\dagger}}^{\dagger}\); define an \(r^{\dagger}\times r^{\dagger}\) matrix \(M^{\dagger}(j,k)\) via adding the corresponding segment of row \(X_{i^{\dagger}-k-1}^{\dagger}\) on top of \(M^{\dagger}\) and deleting row \(j+1\) of the obtained matrix.
**Lemma 4.15**.: _For any \(1\leq j\leq s\) and \(0\leq k\leq t-1\)_
\[\det M(j,k)=\det M^{\dagger}(j,k).\]
Proof.: Define \(I(j,k)=(i-j)\cup([i,n]\setminus(i+k))\) and \(I^{\dagger}(j,k)=\left([i^{\dagger},n]\cup(i^{\dagger}-k-1)\right)\setminus(i^ {\dagger}+j-1)\), then
\[M(j,k) =\mathcal{L}(i,1)^{[p,N]}_{(I(j,k)-i+p)\cup[n-i+1+p,N]},\] \[M^{\dagger}(j,k) =\mathcal{L}^{\dagger}(i,1)^{[p^{\dagger},N^{\dagger}]}_{(I^{ \dagger}(j,k)-i^{\dagger}+p^{\dagger})\cup[n-i^{\dagger}+1+p^{\dagger},N^{ \dagger}]},\]
and hence \(\det M(j,k)\) and \(\det M^{\dagger}(j,k)\) are particular cases of minors studied in Proposition 4.9. Note that \(I^{\dagger}(j,k)=\overline{w_{0}I(j,k)}\), so by (4.4) and Proposition 4.9 it follows that \(\det M(j,k)\circ h(U)=\det M^{\dagger}(j,k)\circ h(U)\). It remains to note that \(h\) is invertible, as explained in Section 3.2.
_Remark 4.16_.: (i) The statement of the lemma remains true for \((j,k)=(0,0)\), in which case \(I(0,0)=[i,n]\) and \(I^{\dagger}(0,0)=[i^{\dagger},n]\), so that \(M(0,0)=M\) and \(M^{\dagger}(0,0)=M^{\dagger}\); the proof goes without any changes. The obtained equality gives an alternative representation \(f_{i1}=\det M^{\dagger}\).
(ii) In fact, the statement of the lemma holds also for the corresponding minors of \(\mathcal{L}(X,Y)\) and \(\mathcal{L}^{\dagger}(X,Y)\) and can be proved directly by using block-Laplace expansions.
We can now proceed with the proof of Theorem 4.14. Assume first that we want to prove the regularity of \(f^{\prime}_{ij}\) for \(1<i<n\), \(1<j<n\). Recall that the approach suggested in [13] consists of the following steps. If \(p=\text{deg}f_{ij}<\text{deg}f_{i-1,j}=m\), we define an \(m\times(m+1)\) submatrix \(A\) of \(\mathcal{L}(i-1,j)\) such that \(A_{12}=x_{i-1,j}\). Note that
\[\begin{split}& f_{i-1,j}=\det A^{\hat{1}},\qquad f_{i,j+1}=\det A ^{\hat{1}\hat{2}}_{\hat{1}},\\ & f_{i-1,j-1}\cdot\det B=\det A^{\widehat{m+1}},\ \ f_{ij}\cdot\det B=\det A^{\hat{1}\widehat{m+1}}_{\hat{1}}\end{split} \tag{4.13}\]
with \(B=A^{[p+2,m]}_{[p+2,m]}\); here and in what follows "hatted" subscripts and superscripts indicate deleted rows and columns, respectively.
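For the reader's convenience, recall that in this notation the classical Desnanot--Jacobi identity for a square matrix \(D\) of size \(d\) reads

\[\det D\cdot\det D^{\hat{1}\hat{d}}_{\hat{1}\hat{d}}=\det D^{\hat{1}}_{\hat{1}}\cdot\det D^{\hat{d}}_{\hat{d}}-\det D^{\hat{d}}_{\hat{1}}\cdot\det D^{\hat{1}}_{\hat{d}};\]

the identities invoked below are its analogues for matrices of sizes \(d\times(d+1)\) and \((d+1)\times d\) (see Section 6.1 in [13]).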
Applying the Desnanot-Jacobi identity for matrices of size \(d\times(d+1)\) we get
\[f_{i-1,j}\cdot\det\bar{A}^{\hat{2}}_{\hat{1}}+f_{i-1,j-1}f_{i,j+1}=f_{ij}\cdot \det A^{\hat{2}} \tag{4.14}\]
where \(\bar{A}=A^{[1,p+1]}_{[1,p+1]}\) has the property \(f_{i-1,j-1}=\det\bar{A}\).
If \(p=\text{deg}f_{ij}\geq\text{deg}f_{i-1,j}=m\), we define a \((p+1)\times(p+2)\) matrix \(A\) by taking the submatrix of \(\mathcal{L}(i-1,j-1)\) whose upper left entry equals \(x_{i-1,j-1}\) and adding on the right the column \([0,\dots,0,1]^{T}\). Similarly to (4.13), we have
\[\begin{split}& f_{i-1,j}\cdot\det B=\det A^{\hat{1}},\ \ f_{i,j+1}\cdot\det B=\det A^{\hat{1}\hat{2}}_{\hat{1}},\\ & f_{i-1,j-1}=\det A^{\widehat{p+2}},\qquad f_{ij}=\det A^{\hat{1}\widehat{p+2}}_{\hat{1}}\end{split} \tag{4.15}\]
with \(B=A^{[m+2,p+2]}_{[m+1,p+1]}\). Applying the same Desnanot-Jacobi identity we arrive at the same equation (4.14), see Section 6.1 in [13] for more details.
Next, we compare \(\text{deg}f_{ij}\) with \(\text{deg}f_{i,j-1}\) and consider in a similar way two cases \(\text{deg}f_{ij}<\text{deg}f_{i,j-1}\) and \(\text{deg}f_{ij}\geq\text{deg}f_{i,j-1}\), both producing equation
\[f_{ij}\cdot\det C^{\hat{2}}_{\hat{1}}+f_{i+1,j+1}f_{i,j-1}=f_{i+1,j}\cdot\det A ^{\hat{2}} \tag{4.16}\]
where \(C\) is the square submatrix of \(\mathcal{L}(i,j-1)\) with the property \(f_{i,j-1}=\det C\) and \(\bar{A}\) is the same as in (4.14). The linear combination of (4.14) and (4.16) with coefficients \(f_{i+1,j}\) and \(f_{i-1,j}\), respectively, yields
\[f_{ij}(f_{i+1,j}\det A^{\hat{2}}-f_{i-1,j}\det C_{\hat{1}}^{\hat{2}})=f_{i-1,j- 1}f_{i,j+1}f_{i+1,j}+f_{i-1,j}f_{i,j-1}f_{i+1,j+1}.\]
Combining this with the description of the quiver \(Q_{\mathbf{\Gamma}^{r},\mathbf{\Gamma}^{c}}\) given in the previous section we see that \(f_{ij}^{\prime}=f_{i+1,j}\det A^{\hat{2}}-f_{i-1,j}\det C_{\hat{1}}^{\hat{2}}\) is a regular function. Note that the above reasoning does not depend on whether \(\gamma^{r}\) and \(\gamma^{c}\) reverse orientation or preserve it.
Consider now \(f_{in}^{\prime}\) for \(1\leq i\leq n\). Assume first that both \(i-1\) and \(i\) belong to \(\Gamma_{2}^{r}\) and that \((\gamma^{r})^{*}\) preserves the orientation of the corresponding connected component of \(\Gamma_{2}^{r}\), that is, \((\gamma^{r})^{*}(i)=(\gamma^{r})^{*}(i-1)+1\). Then the above reasoning remains valid with \(f_{i,j+1}\) in (4.13)-(4.15) replaced by \(f_{(\gamma^{r})^{*}(i),1}\) and \(f_{i+1,j+1}\) in (4.16) replaced by \(f_{(\gamma^{r})^{*}(i)+1,1}\). The resulting equation reads
\[f_{in}(f_{i+1,n}\det A^{\hat{2}}-f_{i-1,n}\det C_{\hat{1}}^{\hat{2}})\\ =f_{i-1,n-1}f_{(\gamma^{r})^{*}(i),1}f_{i+1,n}+f_{i-1,n}f_{i,n-1}f_{(\gamma^{r})^{*}(i)+1,1},\]
and hence \(f_{in}^{\prime}=f_{i+1,n}\det A^{\hat{2}}-f_{i-1,n}\det C_{\hat{1}}^{\hat{2}}\) is a regular function. If \((\gamma^{r})^{*}\) reverses the orientation of the connected component of \(\Gamma_{2}^{r}\) that contains \(i-1\) and \(i\) then \((\gamma^{r})^{*}(i-1)=(\gamma^{r})^{*}(i)+1\). Using the alternative representation of \(f_{(\gamma^{r})^{*}(i-1),1}\) and \(f_{(\gamma^{r})^{*}(i-1)+1,1}\) provided by Remark 4.16, we apply the same reasoning as above with \(f_{i,j+1}\) in (4.13)-(4.15) replaced by \(f_{(\gamma^{r})^{*}(i-1),1}\) and \(f_{i+1,j+1}\) in (4.16) replaced by \(f_{(\gamma^{r})^{*}(i-1)+1,1}\). The resulting equation reads
\[f_{in}(f_{i+1,n}\det A^{\hat{2}}-f_{i-1,n}\det C_{\hat{1}}^{\hat {2}})\\ =f_{i-1,n-1}f_{(\gamma^{r})^{*}(i-1),1}f_{i+1,n}+f_{i-1,n}f_{i,n-1 }f_{(\gamma^{r})^{*}(i-1)+1,1},\]
which yields the same regular expression for \(f_{in}^{\prime}\). If \(i-1\notin\Gamma_{2}^{r}\) then \(f_{i,j+1}\) in all formulas above is replaced by \(1\), which corresponds to a vertex of degree \(5\). Similarly, if \(i\notin\Gamma_{2}^{r}\) then \(f_{i+1,j+1}\) in all formulas above is replaced by \(1\), which again corresponds to a vertex of degree \(5\). If both conditions hold simultaneously then both functions are replaced by \(1\), which corresponds to a vertex of degree \(4\).
Consider now \(f_{i1}^{\prime}\) for \(1\leq i\leq n\). Assume first that both \(i-1\) and \(i\) belong to \(\Gamma_{1}^{r}\) and that \(\gamma^{r}\) preserves the orientation of the corresponding connected component of \(\Gamma_{1}^{r}\), that is, \(\gamma^{r}(i)=\gamma^{r}(i-1)+1\). Then the above reasoning remains valid with \(f_{i-1,j-1}\) in (4.13)-(4.15) replaced by \(f_{\gamma^{r}(i-1),n}\) and \(f_{i,j-1}\) in (4.16) replaced by \(f_{\gamma^{r}(i),n}\). The resulting equation reads
\[f_{i1}(f_{i+1,1}\det A^{\hat{2}}-f_{i-1,1}\det C_{\hat{1}}^{\hat{2}})=f_{\gamma^{r}(i-1),n}f_{i+1,1}f_{i2}+f_{i-1,1}f_{i+1,2}f_{\gamma^{r}(i),n},\]

and hence \(f_{i1}^{\prime}=f_{i+1,1}\det A^{\hat{2}}-f_{i-1,1}\det C_{\hat{1}}^{\hat{2}}\) is a regular function. If \(i\notin\Gamma_{1}^{r}\) then \(f_{\gamma^{r}(i),n}\) above is replaced by \(f_{\gamma^{r}(i-1)+1,n}\) while \(f_{i+1,1}\) is replaced by \(1\), which corresponds to a vertex of degree \(5\).
Consider now the case when both \(i-1\) and \(i-2\) belong to \(\Gamma_{1}^{r}\) and \(\gamma^{r}\) reverses the orientation of the corresponding connected component of \(\Gamma_{1}^{r}\), that is, \(\gamma^{r}(i-2)=\gamma^{r}(i-1)+1\). Assume first that \(\text{deg}f_{i1}\geq\text{deg}f_{i-1,1}\). Consider the \((p+1)\times p\) trailing submatrix \(A\) of \(\mathcal{L}(i,1)\) defined by the property \(A_{21}=x_{i1}\). Note that \(A_{[1,m]}^{[1,m]}\) for some
\(m\leq p\) is the submatrix of \(\mathcal{L}(i-1,1)\) whose determinant equals \(f_{i-1,1}\); we denote it \(\bar{M}\) to distinguish it from \(M\) that plays the same role for \(\mathcal{L}(i,1)\). Consequently,
\[\begin{split}& f_{i1}=\det A_{\hat{1}},\qquad f_{i+1,2}=\det A_{\hat{1}\hat{2}}^{\hat{1}},\\ & f_{i-1,1}\cdot\det B=\det A_{\widehat{p+1}},\ \ f_{i2}\cdot\det B=\det A_{\hat{1}\widehat{p+1}}^{\hat{1}}\end{split} \tag{4.17}\]
with \(B=A_{[m+1,p]}^{[m+1,p]}\). Applying the Desnanot-Jacobi identity for matrices of size \((d+1)\times d\) we get
\[f_{i1}\cdot\det\bar{M}_{\hat{2}}^{\hat{1}}+f_{i-1,1}f_{i+1,2}=f_{i2}\cdot\det A _{\hat{2}}. \tag{4.18}\]
Next, consider the \((p^{\dagger}+1)\times(p^{\dagger}+1)\) trailing submatrix \(A^{\dagger}\) of \(\mathcal{L}(\gamma^{\mathrm{r}}(i-1),n)\) defined by the property \(A_{11}^{\dagger}=x_{\gamma^{\mathrm{r}}(i-1),n}\). Note that \(\mathrm{deg}f_{\gamma^{\mathrm{r}}(i-2),n}=1+\mathrm{deg}f_{i-1,1}\leq 1+ \mathrm{deg}f_{i1}=\mathrm{deg}f_{\gamma^{\mathrm{r}}(i-1),n}\), and hence \((A^{\dagger})_{[2,m^{\dagger}+2]}^{[1,m^{\dagger}+1]}\) for some \(m^{\dagger}\) is the submatrix of \(\mathcal{L}(\gamma^{\mathrm{r}}(i-2),n)\) whose determinant equals \(f_{\gamma^{\mathrm{r}}(i-2),n}\). Additionally, \((A^{\dagger})_{\hat{1}}^{\hat{1}}\) is exactly the submatrix \(M^{\dagger}\) of \(\mathcal{L}^{\dagger}(i,1)\) defined above, and \(\bar{M}^{\dagger}=(A^{\dagger})_{[3,m^{\dagger}+2]}^{[2,m^{\dagger}+1]}\) plays the same role for \(\mathcal{L}^{\dagger}(i-1,1)\). Consequently, using the alternative description of \(f_{i1}\) and \(f_{i-1,1}\) provided by Remark 4.16, we get
\[\begin{split}& f_{\gamma^{\mathrm{r}}(i-1),n}=\det A^{\dagger},\qquad\qquad f_{i1}=\det(A^{\dagger})_{\hat{1}}^{\hat{1}},\\ & f_{\gamma^{\mathrm{r}}(i-2),n}\cdot\det B^{\dagger}=\det(A^{\dagger})_{\hat{1}}^{\widehat{p^{\dagger}+1}},\ \ f_{i-1,1}\cdot\det B^{\dagger}=\det(A^{\dagger})_{\hat{1}\hat{2}}^{\hat{1}\widehat{p^{\dagger}+1}}\end{split} \tag{4.19}\]
with \(B^{\dagger}=(A^{\dagger})_{[m^{\dagger}+2,p^{\dagger}]}^{[m^{\dagger}+2,p^{ \dagger}]}\). Applying the Desnanot-Jacobi identity for square matrices we get
\[f_{i1}\cdot\det(\bar{A}^{\dagger})_{\hat{2}}=f_{\gamma^{\mathrm{r}}(i-1),n}f_ {i-1,1}+f_{\gamma^{\mathrm{r}}(i-2),n}\cdot\det(A^{\dagger})_{\hat{2}}^{\hat{1}} \tag{4.20}\]
with \(\bar{A}^{\dagger}=(A^{\dagger})_{[1,m^{\dagger}+2]}^{[1,m^{\dagger}+1]}\). The linear combination of (4.18) and (4.20) with coefficients \(f_{\gamma^{\mathrm{r}}(i-1),n}\) and \(f_{i+1,2}\), respectively, yields
\[\begin{split} f_{i1}(f_{\gamma^{\mathrm{r}}(i-1),n}\cdot\det\bar{M}_{\hat{2}}^{\hat{1}}+f_{i+1,2}\cdot\det(\bar{A}^{\dagger})_{\hat{2}})\\ =f_{i2}f_{\gamma^{\mathrm{r}}(i-1),n}\cdot\det A_{\hat{2}}+f_{i+1,2}f_{\gamma^{\mathrm{r}}(i-2),n}\cdot\det(A^{\dagger})_{\hat{2}}^{\hat{1}}.\end{split} \tag{4.21}\]

Note that \(A_{\hat{2}}\) is \(M(1,0)\) and \((A^{\dagger})_{\hat{2}}^{\hat{1}}\) is \(M^{\dagger}(1,0)\) for the dual pair \(\mathcal{L}(i,1)\), \(\mathcal{L}^{\dagger}(i,1)\), hence by Lemma 4.15 we get \(\det A_{\hat{2}}=\det(A^{\dagger})_{\hat{2}}^{\hat{1}}\), and so the right hand side of (4.21) factors.
Further, expand \(f_{\gamma^{\mathrm{r}}(i-1),n}\) in the left hand side of (4.21) by the first column as
\[f_{\gamma^{\mathrm{r}}(i-1),n}=\sum_{j=0}^{s}(-1)^{j}x_{\gamma^{\mathrm{r}}(i-1 )+j,n}\det M^{\dagger}(j,0)=\sum_{j=0}^{s}(-1)^{j}x_{\gamma^{\mathrm{r}}(i-1)+ j,n}\det M(j,0)\]
via Lemma 4.15. Similarly, expand \(\det(\bar{A}^{\dagger})_{2}\) in the left hand side of (4.21) by the first column as
\[\begin{split}\det(\bar{A}^{\dagger})_{2}=x_{\gamma^{\mathrm{r}}(i- 1),n}\det\bar{M}^{\dagger}+\sum_{j=2}^{s}(-1)^{j-1}x_{\gamma^{\mathrm{r}}(i-1 )+j,n}\det\bar{M}^{\dagger}(j-1,1)\\ =x_{\gamma^{\mathrm{r}}(i-1),n}\det\bar{M}+\sum_{j=2}^{s}(-1)^{j-1 }x_{\gamma^{\mathrm{r}}(i-1)+j,n}\det\bar{M}(j-1,1)\end{split}\]
via Lemma 4.15; note that the exit point for the block that defines \(\bar{M}\) is \((i-1,1)\), so \(\bar{M}(j-1,1)\) has on top the same segment of row \(X_{j-i}\) that \(M(j,1)\) does. Substituting into the left hand side of (4.21) gives
\[f_{\gamma^{\mathrm{r}}(i-1),n}\cdot\det\bar{M}_{\hat{2}}^{\hat{1}}+f_{i+1,2}\cdot\det(\bar{A}^{\dagger})_{\hat{2}}\\ =x_{\gamma^{\mathrm{r}}(i-1),n}\left(\det M\det\bar{M}_{\hat{2}}^{\hat{1}}+f_{i+1,2}\det\bar{M}\right)-x_{\gamma^{\mathrm{r}}(i-1)+1,n}\det M(1,0)\det\bar{M}_{\hat{2}}^{\hat{1}}\\ +\sum_{j=2}^{s}(-1)^{j}x_{\gamma^{\mathrm{r}}(i-1)+j,n}\left(\det M(j,0)\det\bar{M}_{\hat{2}}^{\hat{1}}-f_{i+1,2}\det\bar{M}(j-1,1)\right). \tag{4.22}\]
Consider the coefficient at \(x_{\gamma^{\mathrm{r}}(i-1),n}\) in (4.22). Recall that by (4.17), \(M=A_{\hat{1}},\ f_{i+1,2}=\det A_{\hat{1}\hat{2}}^{\hat{1}},\ \det\bar{M}_{\hat{2}}^{\hat{1}}\det B=\det A_{\hat{2}\widehat{p+1}}^{\hat{1}},\ \det\bar{M}\det B=\det A_{\widehat{p+1}}\) and \(\det\bar{M}_{\hat{1}}^{\hat{1}}\det B=\det A_{\hat{1}\widehat{p+1}}^{\hat{1}}\), so that the Desnanot-Jacobi identity for the \((p+1)\times p\) matrix \(A\) yields

\[\det M\det\bar{M}_{\hat{2}}^{\hat{1}}+f_{i+1,2}\det\bar{M}=\det M(1,0)\det\bar{M}_{\hat{1}}^{\hat{1}}.\]

To treat the coefficient at \(x_{\gamma^{\mathrm{r}}(i-1)+j,n}\) in (4.22), consider the \((p+1)\times p\) matrix \(A(j)\) obtained by adding the initial segment of \(X_{i-j}\) on top of \(M(1,0)\). Then \(M(j,0)=A(j)_{\hat{2}}\), \(f_{i+1,2}=\det A(j)_{\hat{1}\hat{2}}^{\hat{1}},\ \det\bar{M}_{\hat{2}}^{\hat{1}}\det B=\det A(j)_{\hat{1}\widehat{p+1}}^{\hat{1}},\ \det\bar{M}(j-1,1)\det B=\det A(j)_{\widehat{p+1}}\) and \(\det\bar{M}(j-1,1)_{\hat{2}}^{\hat{1}}\det B=\det A(j)_{\hat{2}\widehat{p+1}}^{\hat{1}}\), so that the Desnanot-Jacobi identity for the \((p+1)\times p\) matrix \(A(j)\) yields

\[\det M(j,0)\det\bar{M}_{\hat{2}}^{\hat{1}}-f_{i+1,2}\det\bar{M}(j-1,1)=\det M(1,0)\det\bar{M}(j-1,1)_{\hat{2}}^{\hat{1}}.\]
Substitution of the obtained formulas into (4.21) and cancellation of \(\det M(1,0)\) in both sides yields
\[f_{i1}\left(x_{\gamma^{\mathrm{r}}(i-1),n}\det\bar{M}_{\hat{1}}^{\hat{1}}+\sum_{j=1}^{s}(-1)^{j}x_{\gamma^{\mathrm{r}}(i-1)+j,n}\det\bar{M}(j-1,1)_{\hat{2}}^{\hat{1}}\right)\\ =f_{i2}f_{\gamma^{\mathrm{r}}(i-1),n}+f_{i+1,2}f_{\gamma^{\mathrm{r}}(i-2),n},\]
which means that \(f_{i1}^{\prime}\) is a regular function, and the degree of the vertex \((i,1)\) is \(4\).
If \(i-2\notin\Gamma_{1}^{\mathrm{r}}\) the above reasoning remains valid with \(f_{\gamma^{\mathrm{r}}(i-2),n}\) replaced by \(f_{\gamma^{\mathrm{r}}(i-1)+1,n}f_{i-1,1}\), which yields a vertex of degree \(5\).

The case when \(\text{deg}f_{i1}<\text{deg}f_{i-1,1}\) is treated in a similar way. We consider an \(m\times m\) submatrix \(A\) of \(\mathcal{L}(i-1,1)\) characterized by \(A_{11}=x_{i-1,1}\) and an \((m^{\dagger}+1)\times m^{\dagger}\) submatrix \(A^{\dagger}\) of \(\mathcal{L}^{\dagger}(i-1,1)\) characterized by \(A_{21}^{\dagger}=x_{\gamma^{\mathrm{r}}(i-2),n}\). Reasoning along the same lines we arrive at

\[f_{i1}(f_{\gamma^{\mathrm{r}}(i-1),n}\cdot\det M_{\hat{2}}^{\hat{1}}+f_{i+1,2}\cdot\det A_{\hat{2}}^{\dagger})\\ =f_{i2}f_{\gamma^{\mathrm{r}}(i-1),n}\cdot\det\bar{A}_{\hat{2}}+f_{i+1,2}f_{\gamma^{\mathrm{r}}(i-2),n}\cdot\det(\bar{A}^{\dagger})_{\hat{2}}^{\hat{1}},\]
which coincides with (4.21) up to switching \(M\) with \(\bar{M}\), etc.
Functions \(f_{nj}\) and \(f_{1j}\) are treated in a similar way with an analog of Lemma 4.15.
### Completeness
Recall that a cluster structure in the ring of regular functions of an algebraic variety is called _complete_ if the corresponding upper cluster algebra is naturally isomorphic to this ring. The goal of this section is to extend the completeness result of Theorem 3.3(ii) in [13] to the general case of an aperiodic pair \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\). As explained in Section 3.4 of [13], this amounts to extending Theorem 7.1 in [13] and to claiming the following Laurent property.
**Theorem 4.17**.: _Let \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\) be an aperiodic pair of Belavin-Drinfeld triples and \(\mathcal{C}=\mathcal{C}_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{ \mathrm{c}}}\) be the cluster structure on \(SL_{n}\) defined by the seed \((F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}},Q_{\mathbf{ \Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}})\). Then every matrix entry can be written as a Laurent polynomial in the initial cluster \(F_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) and in any cluster adjacent to it._
Proof.: We will adjust the inductive argument of the corresponding proof in [13] to allow for non-oriented BD data. In the process, we will use Theorem 4.4 and formulas (4.5), (4.6) to streamline the necessary technical results of [13, Section 7.1] even in the oriented case.
Recall that the induction is on the total size \(|\Gamma_{1}^{\mathrm{r}}|+|\Gamma_{1}^{\mathrm{c}}|\) of the pair \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\). Since each step of induction involves either \(\mathbf{\Gamma}^{\mathrm{r}}\) or \(\mathbf{\Gamma}^{\mathrm{c}}\), but not both, we will only consider the case of reducing the size of \(\mathbf{\Gamma}^{\mathrm{r}}\); the other case can be treated similarly.
The induction step involves removing the first or the last root \(\alpha\) of a connected component of \(\Gamma_{1}^{\mathrm{r}}\), removing its image in \(\Gamma_{2}^{\mathrm{r}}\), and modifying \(\gamma^{\mathrm{r}}\) accordingly. We denote the BD triple resulting from the operation above by \(\tilde{\mathbf{\Gamma}}^{\mathrm{r}}=(\tilde{\Gamma}_{1}^{\mathrm{r}},\tilde{\Gamma}_{2}^{\mathrm{r}},\tilde{\gamma}^{\mathrm{r}})\). Below, for any object associated with the pair \((\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\), we decorate with a tilde the notation for its counterpart associated with \((\tilde{\mathbf{\Gamma}}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}})\). Since the total size of this pair is smaller, we assume that \(\tilde{\mathcal{C}}=\mathcal{C}_{\tilde{\mathbf{\Gamma}}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) possesses the above-mentioned Laurent property.
Let \(F=\{f_{ij}(Z)\colon i,j\in[1,n]\}\) and \(\tilde{F}=\{\tilde{f}_{ij}(Z)\colon i,j\in[1,n]\}\) be initial clusters for \(\mathcal{C}\) and \(\tilde{\mathcal{C}}\), respectively, and \(Q\) and \(\tilde{Q}\) be the corresponding quivers. It is easy to see that all maximal alternating paths in \(G_{\mathbf{\Gamma}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) are preserved in \(G_{\tilde{\mathbf{\Gamma}}^{\mathrm{r}},\mathbf{\Gamma}^{\mathrm{c}}}\) except for the path that goes through the directed inclined edge \(\alpha\to\gamma^{\mathrm{r}}(\alpha)\). The latter one is split into two: the initial segment up to the vertex \(\alpha\) and the closing segment starting with the vertex \(\gamma^{\mathrm{r}}(\alpha)\). Consequently, the only difference between \(Q\) and \(\tilde{Q}\) is that the vertex \(v=(\alpha+1,1)\) that corresponds to the chosen endpoint of \([k,m-1]\) is mutable in \(Q\) and frozen in \(\tilde{Q}\), and that the neighborhoods of this vertex in \(Q\) and \(\tilde{Q}\) are different. This allows to invoke Proposition 7.4 in [13]. Namely, define
\[\lambda_{ij}=\frac{\mathrm{deg}f_{ij}(Z)-\mathrm{deg}\tilde{f}_{ij}(\tilde{Z}) }{\mathrm{deg}\tilde{f}_{\alpha+1,1}(\tilde{Z})}\]
and choose \(\Phi=\{\tilde{f}_{\alpha+1,1}^{\lambda_{ij}}\tilde{f}_{ij}\}\) as an initial cluster associated with \(Q\); note that we do not go beyond polynomials since it will be shown below that the \(\lambda_{ij}\) defined as above are integers. If \(\tilde{\varphi}\) is obtained via a sequence of mutations avoiding \(v\) applied to the seed \((\tilde{F},\tilde{Q})\), then the same sequence of mutations applied to the seed \((\Phi,Q)\) yields \(\varphi=\tilde{f}_{\alpha+1,1}^{\lambda}\tilde{\varphi}\) for some integer \(\lambda\).
To implement the induction step, we need the following statement which is a simultaneous extension of Theorems 7.2 and 7.3 in [13] to the case of arbitrary aperiodic pairs of Belavin-Drinfeld triples.
**Theorem 4.18**.: _There exists a unipotent upper triangular matrix \(C=C(\tilde{Z})\) whose entries are rational functions in \(\tilde{x}_{ij}\) with denominators equal to powers of \(\tilde{f}_{\alpha+1,1}(\tilde{Z})\)
_such that \(Z=C\tilde{Z}\) and_
\[f_{ij}(Z)=\begin{cases}\tilde{f}_{ij}(\tilde{Z})\tilde{f}_{\alpha+1,1}(\tilde{Z} )&\text{ if }(\alpha+1,1)\text{ is subordinate to }(i,j)\text{ for }(\mathbf{\Gamma}^{\mathbf{r}},\mathbf{\Gamma}^{ \mathbf{c}})\text{,}\\ \tilde{f}_{ij}(\tilde{Z})&\text{ otherwise.}\end{cases}\]
It follows from Theorem 4.18 that \(\lambda_{ij}\) defined above is equal to \(1\) if \((\alpha+1,1)\) is subordinate to \((i,j)\) for \((\mathbf{\Gamma}^{\mathbf{r}},\mathbf{\Gamma}^{\mathbf{c}})\), and equal to \(0\) otherwise. Since, additionally, \(\tilde{f}_{\alpha+1,1}(\tilde{Z})=f_{\alpha+1,1}(Z)\), we conclude that any Laurent polynomial in \(\tilde{F}\) is also a Laurent polynomial in \(F\), and any Laurent polynomial in variables of the cluster in \(\tilde{\mathcal{C}}\) obtained by mutation of \(\tilde{F}\) in a direction _other than_\(v=(\alpha+1,1)\) is also a Laurent polynomial in the cluster in \(\mathcal{C}\) obtained by mutation of \(F\) in the same direction. By inductive assumption, every matrix entry \(\tilde{z}_{ij}\) can be expressed as a Laurent polynomial in \(\tilde{F}\) or any cluster adjacent to it. The first claim of Theorem 4.18 then implies that for any of these clusters except the one obtained by mutation in the direction \(v\), the entries of \(C=C(\tilde{Z})\), and therefore of \(Z=C\tilde{Z}\), are Laurent polynomials in the corresponding cluster in \(\mathcal{C}\). To verify the claim of Theorem 4.17 for the cluster in \(\mathcal{C}\) obtained by mutation of the initial one in the direction \(v\), we apply the same induction step to a different root in \(\mathbf{\Gamma}^{\mathbf{r}}\) or, if \(\mathbf{\Gamma}^{\mathbf{r}}=\{\alpha\}\), apply a similar procedure to a root in \(\mathbf{\Gamma}^{\mathbf{c}}\). In the latter case, we use an analogue of Theorem 4.18 that can be easily obtained by transposition. The case \(|\Gamma_{1}^{\mathbf{r}}|+|\Gamma_{1}^{\mathbf{c}}|=1\) serves as the base of induction; it was handled in Section 7.3 of [13]. Thus, to complete the proof of Theorem 4.17 we only need to finish
Proof of Theorem 4.18.: First, we compare functions \(f_{ij}(Z)\) and \(\tilde{f}_{ij}(\tilde{Z})\) in the initial seeds of the two cluster structures using formula (4.1). The pair of indices \((i,j)\), \(i\neq j\), defines uniquely a directed horizontal edge \(e(i,j)=(n-i+j)\to(i-j)\) in the upper part of the BD graph for \(i>j\) and a directed horizontal edge \(e(i,j)=(n+i-j)\to(j-i)\) in the lower part of the BD graph for \(i<j\). Note that although the functions themselves depend on the whole BD graph, the right hand side of this formula can be read off directly from the maximal alternating path through \(e(i,j)\) and does not depend on the rest of the graph. Indeed, each factor in the right hand side of (4.1) corresponds to a minor of \(U\) defined by a directed horizontal edge preceding the edge \(e(i,j)\) in the alternating path. Further, the exit points of all such blocks are subordinate to \((i,j)\) for both \((\mathbf{\Gamma}^{\mathbf{r}},\mathbf{\Gamma}^{\mathbf{c}})\) and \((\tilde{\mathbf{\Gamma}}^{\mathbf{r}},\mathbf{\Gamma}^{\mathbf{c}})\). As an immediate consequence, we conclude that if maximal alternating paths that correspond to \((i,j)\) in \(G\) and \(\tilde{G}\) coincide up to and including \(e(i,j)\) then \(f_{ij}^{h}(U)=\tilde{f}_{ij}^{\tilde{h}}(U)\), which immediately yields
\[f_{ij}(Z)=\tilde{f}_{ij}(\tilde{Z}), \tag{4.23}\]
since \(\tilde{Z}=\tilde{h}\circ h^{-1}(Z)\).
It is easy to see that all maximal alternating paths in \(G\) are preserved in \(\tilde{G}\) except for the path \(P\) that goes through the directed inclined edge \(\alpha\to\gamma^{\mathbf{r}}(\alpha)\). The latter one is split into two: the initial segment up to the vertex \(\alpha\) and the closing segment starting with the vertex \(\gamma^{\mathbf{r}}(\alpha)\). Using (4.1) and the reasoning above, we conclude that if the inclined edge \(\alpha\to\gamma^{\mathbf{r}}(\alpha)\) precedes \(e(i,j)\) in \(P\) then
\[f_{ij}(Z)=\tilde{f}_{ij}(\tilde{Z})\tilde{f}_{\alpha+1,1}(\tilde{Z}), \tag{4.24}\]
since the horizontal edge in \(P\) that immediately precedes \(\alpha\to\gamma^{\mathbf{r}}(\alpha)\) is \((n-\alpha)\to\alpha\), which corresponds to \(f_{\alpha+1,1}\). Note that in this case the exit point \((\alpha+1,1)\) is subordinate to \((i,j)\) for \((\mathbf{\Gamma}^{\mathbf{r}},\mathbf{\Gamma}^{\mathbf{c}})\).
Recall that by (3.2), \(Z=H^{\mathrm{r}}(U)UH^{\mathrm{c}}(U)\) and \(\tilde{Z}=\tilde{H}^{\mathrm{r}}(U)U\tilde{H}^{\mathrm{c}}(U)\). Additionally, \(\tilde{H}^{\mathrm{c}}(U)=H^{\mathrm{c}}(U)\) since \(\boldsymbol{\Gamma}^{\mathrm{c}}\) is the same in both cases. Consequently,
\[Z=H^{\mathrm{r}}(U)\tilde{H}^{\mathrm{r}}(U)^{-1}\tilde{Z}=C\tilde{Z}. \tag{4.25}\]
To complete the proof of Theorem 4.18 we have to check that the entries of the matrix \(C\) as functions in \(\tilde{Z}\) obtained via \(U=\tilde{h}^{-1}(\tilde{Z})\) are rational with denominators equal to powers of \(\tilde{f}_{\alpha+1,1}(\tilde{Z})\).
Assume that \([k,m-1]\) is the connected component of \(\Gamma_{1}^{\mathrm{r}}\) to which the induction step is being applied; denote \(p=m-k+1\). Define the subgroup \(\mathcal{K}\subset\mathcal{N}_{+}^{\Gamma_{1}^{\mathrm{r}}}\) via
\[\mathcal{K}=\begin{cases}\left\{\operatorname{diag}\left(\boldsymbol{1}_{k-1 },\begin{bmatrix}\boldsymbol{1}_{p-1}&\xi^{T}\\ 0&1\end{bmatrix},\boldsymbol{1}_{n-m}\right)\right\}&\quad\text{for $\alpha=m-1$,}\\ \left\{\operatorname{diag}\left(\boldsymbol{1}_{k-1},\begin{bmatrix}1&\xi\\ 0&\boldsymbol{1}_{p-1}\end{bmatrix},\boldsymbol{1}_{n-m}\right)\right\}&\quad \text{for $\alpha=k$,}\end{cases} \tag{4.26}\]
where \(\xi=(\xi_{1},\ldots,\xi_{p-1})\). Every \(p\times p\) unipotent upper triangular matrix \(A\) can be uniquely factored in every one of the following four ways:
\[\begin{split} A&=\begin{bmatrix}A_{1}&0\\ 0&1\end{bmatrix}\begin{bmatrix}\boldsymbol{1}_{p-1}&\xi_{1}^{T}\\ 0&1\end{bmatrix}=\begin{bmatrix}\boldsymbol{1}_{p-1}&\xi_{2}^{T}\\ 0&1\end{bmatrix}\begin{bmatrix}A_{1}&0\\ 0&1\end{bmatrix}\\ &=\begin{bmatrix}1&0\\ 0&A_{2}\end{bmatrix}\begin{bmatrix}1&\xi_{3}\\ 0&\boldsymbol{1}_{p-1}\end{bmatrix}=\begin{bmatrix}1&\xi_{4}\\ 0&\boldsymbol{1}_{p-1}\end{bmatrix}\begin{bmatrix}1&0\\ 0&A_{2}\end{bmatrix},\end{split} \tag{4.27}\]
where \(A_{1},A_{2}\) are \((p-1)\times(p-1)\) unipotent upper triangular matrices and \(\xi_{1},\ldots,\xi_{4}\) are \((p-1)\)-vectors. Consequently, every element \(V\in\mathcal{N}_{+}^{\Gamma_{1}^{\mathrm{r}}}\) can be uniquely factored as \(V=T(V)K_{1}(V)=K_{2}(V)T(V)\) with \(T(V)\in\mathcal{N}_{+}^{\tilde{\Gamma}_{1}^{\mathrm{r}}}\) and \(K_{1}(V),K_{2}(V)\in\mathcal{K}\). Recall that \((\boldsymbol{\gamma}^{\mathrm{r}})^{*}\boldsymbol{\gamma}^{\mathrm{r}}\) acts on \(\mathcal{N}_{+}\) as the projection to \(\mathcal{N}_{+}^{\Gamma_{1}^{\mathrm{r}}}\), which allows to define \(T(V)\) and \(K(V)\) for any \(V\in\mathcal{N}_{+}\) as \(T((\boldsymbol{\gamma}^{\mathrm{r}})^{*}\boldsymbol{\gamma}^{\mathrm{r}}(V))\) and \(K((\boldsymbol{\gamma}^{\mathrm{r}})^{*}\boldsymbol{\gamma}^{\mathrm{r}}(V))\), respectively.
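The uniqueness of the four factorizations in (4.27) is elementary but easy to lose track of; the following is a small illustrative check (not part of the original argument) for \(p=3\), written in Python with SymPy, where the vectors \(\xi_{1},\ldots,\xi_{4}\) are computed explicitly for a generic unipotent upper triangular matrix.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
A = sp.Matrix([[1, a, b], [0, 1, c], [0, 0, 1]])   # generic unipotent upper triangular, p = 3

A1 = A[:2, :2]          # (p-1) x (p-1) upper-left block
A2 = A[1:, 1:]          # (p-1) x (p-1) lower-right block
I2 = sp.eye(2)

def right_factor(xi):   # [[1_{p-1}, xi^T], [0, 1]]
    return I2.row_join(sp.Matrix(2, 1, xi)).col_join(sp.Matrix([[0, 0, 1]]))

def left_factor(xi):    # [[1, xi], [0, 1_{p-1}]]
    return sp.Matrix([[1, xi[0], xi[1]]]).col_join(sp.zeros(2, 1).row_join(I2))

F1 = sp.diag(A1, 1) * right_factor([b - a*c, c])   # A = diag(A1, 1) * [[1, xi_1^T], [0, 1]]
F2 = right_factor([b, c]) * sp.diag(A1, 1)         # A = [[1, xi_2^T], [0, 1]] * diag(A1, 1)
F3 = sp.diag(1, A2) * left_factor([a, b])          # A = diag(1, A2) * [[1, xi_3], [0, 1]]
F4 = left_factor([a, b - a*c]) * sp.diag(1, A2)    # A = [[1, xi_4], [0, 1]] * diag(1, A2)

for F in (F1, F2, F3, F4):
    assert (F - A).expand() == sp.zeros(3, 3)
print("all four factorizations reproduce A")
```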
We start with the following relation between \(\bar{V}^{\mathrm{r}}\) and \(\bar{\bar{V}}^{\mathrm{r}}\), as defined in Section 3.1.
**Lemma 4.19**.: \(T(\bar{V}^{\mathrm{r}})=\bar{\bar{V}}^{\mathrm{r}}\)_._
Proof.: Let us start by comparing \(V^{\mathrm{r}}\) and \(\tilde{V}^{\mathrm{r}}\). By construction, they are block-diagonal matrices with lower unipotent blocks that differ only in the block with the row and column set \(\Delta=[k,m]\); let us call it the \(\Delta\)-block. For \(V^{\mathrm{r}}\), this block coincides with the \(\Delta\)-block of the factor \(U_{-}^{[k,m]}\in\mathcal{N}_{-}^{[k,m]}\) in the factorization \(U_{-}=U_{-}^{[k,m]}U_{L}U_{R}\), see the proof of Theorem 4.4 above. We denote this block \(B\); it is a lower unipotent \(p\times p\) matrix.
Let \(s_{\{k,m-1\}}=s_{[m-2,k]}s_{[m-2,k+1]}\ldots s_{m-2}\) be the reduced expression for \(w_{0}^{[k,m-1]}\) analogous to one used in the proof of Theorem 4.4, and \(s_{\{k+1,m\}}=s_{[m-1,k+1]}\)\(s_{[m-1,k+2]}\ldots s_{m-1}\) be a similar reduced expression for \(w_{0}^{[k+1,m]}\). Note that both products \(s_{\{k,m-1\}}s_{[m-1,k]}\) and \(s_{\{k+1,m\}}s_{[k,m-1]}\) constitute reduced expressions for \(w_{0}^{[k,m]}\). The two corresponding factorizations of \(B\) are
\[B=\begin{cases}\begin{bmatrix}\tilde{B}&0\\ 0&1\end{bmatrix}S^{-1}&\quad\text{for $\alpha=m-1$,}\\ \begin{bmatrix}1&0\\ 0&\tilde{B}\end{bmatrix}S&\quad\text{for $\alpha=k$,}\end{cases} \tag{4.28}\]
where \(S\) is a lower unipotent bidiagonal \(p\times p\) matrix with generically nonzero subdiagonal entries. In both cases \(\tilde{B}\) is the \(\tilde{\Delta}\)-block for \(\tilde{V}^{\mathrm{r}}\).
We can now compare \(\bar{V}^{\mathrm{r}}\) and \(\bar{\tilde{V}}^{\mathrm{r}}\). Let \(\bar{B}\) and \(\bar{\tilde{B}}\) be the corresponding \(\Delta\)- and \(\tilde{\Delta}\)-blocks. Recall that \(\bar{V}^{\mathrm{r}}=(V^{\mathrm{r}}w_{0}^{[1,n]})_{+}\), so that \(\bar{B}=(Bw_{0}^{[1,p]})_{+}\). A straightforward check shows that \(S^{-1}w_{0}^{[1,p]}\) can be refactored as
\[S^{-1}w_{0}^{[1,p]}=\begin{bmatrix}w_{0}^{[1,p-1]}&\star\\ 0&1\end{bmatrix}T_{1}\]
with \(T_{1}\in\mathcal{B}_{-}\). Consequently, (4.28) for \(\alpha=m-1\) yields
\[\bar{B}=\begin{pmatrix}\begin{bmatrix}\tilde{B}&0\\ 0&1\end{bmatrix}\begin{bmatrix}w_{0}^{[1,p-1]}&\star\\ 0&1\end{bmatrix}T_{1}\end{pmatrix}_{+}=\begin{bmatrix}\tilde{B}w_{0}^{[1,p-1]} &\star\\ 0&1\end{bmatrix}_{+}=\begin{bmatrix}\bar{\tilde{B}}&\star\\ 0&1\end{bmatrix}.\]
Taking into account that all other blocks for \(\bar{V}^{\mathrm{r}}\) and \(\bar{\tilde{V}}^{\mathrm{r}}\) are identical, we get the statement of the Lemma for \(\alpha=m-1\).
Further, a straightforward check shows that \(Sw_{0}^{[1,p]}\) can be refactored as
\[Sw_{0}^{[1,p]}=\begin{bmatrix}1&\star\\ 0&w_{0}^{[1,p-1]}\end{bmatrix}T_{2}\]
with \(T_{2}\in\mathcal{B}_{-}\). Consequently, (4.28) for \(\alpha=k\) yields
\[\bar{B}=\begin{pmatrix}\begin{bmatrix}1&0\\ 0&\tilde{B}\end{bmatrix}\begin{bmatrix}1&\star\\ 0&w_{0}^{[1,p-1]}\end{bmatrix}T_{2}\end{pmatrix}_{+}=\begin{bmatrix}1&\star\\ 0&\tilde{B}w_{0}^{[1,p-1]}\end{bmatrix}_{+}=\begin{bmatrix}1&\star\\ 0&\tilde{B}\end{bmatrix}.\]
Once again, all other blocks for \(\bar{V}^{\mathrm{r}}\) and \(\bar{\tilde{V}}^{\mathrm{r}}\) are identical, hence we get the statement of the Lemma for \(\alpha=k\).
Our next step is to make relation (4.25) between \(Z\) and \(\tilde{Z}\) more explicit.
**Lemma 4.20**.: (i) _There exists \(K\in\mathcal{K}\) such that \(\boldsymbol{\gamma}^{\mathrm{r}}(C)=C\boldsymbol{\gamma}^{\mathrm{r}}(K^{-1})\)._
(ii) _Consequently, \(C=\ldots(\boldsymbol{\gamma}^{\mathrm{r}})^{2}(K)\boldsymbol{\gamma}^{ \mathrm{r}}(K)\)._
Proof.: (i) As explained above, any \(V\in\mathcal{N}_{+}\) can be factored as \(V=T(V)K(V)\). It follows from (4.27) that
\[\boldsymbol{\tilde{\gamma}}^{\mathrm{r}}(V)=\boldsymbol{\tilde{\gamma}}^{ \mathrm{r}}(T(V))=\boldsymbol{\gamma}^{\mathrm{r}}(T(V))=\boldsymbol{\gamma}^ {\mathrm{r}}(VK(V)^{-1}). \tag{4.29}\]
Define \(K_{0},K_{1},\cdots\in\mathcal{K}\) via \(K_{0}=K(V)\), \(K_{j}=K((\boldsymbol{\tilde{\gamma}}^{\mathrm{r}})^{j}(V))\) for \(j=1,\ldots\). Then (4.29) implies
\[(\boldsymbol{\tilde{\gamma}}^{\mathrm{r}})^{2}\left(T(V)\right)= \boldsymbol{\tilde{\gamma}}^{\mathrm{r}}\left(\boldsymbol{\gamma}^{\mathrm{r} }(T(V))\right)=\boldsymbol{\gamma}^{\mathrm{r}}\left(T(\boldsymbol{\tilde{ \gamma}}^{\mathrm{r}}(V))\right)=\boldsymbol{\gamma}^{\mathrm{r}}\left( \boldsymbol{\tilde{\gamma}}^{\mathrm{r}}(V)K_{1}^{-1}\right)\] \[=\boldsymbol{\gamma}^{\mathrm{r}}\left(\boldsymbol{\gamma}^{ \mathrm{r}}(T(V))K_{1}^{-1}\right)=(\boldsymbol{\gamma}^{\mathrm{r}})^{2}(T(V) )\boldsymbol{\gamma}^{\mathrm{r}}(K_{1}^{-1})=(\boldsymbol{\gamma}^{\mathrm{r} })^{2}(V)(\boldsymbol{\gamma}^{\mathrm{r}})^{2}(K_{0}^{-1})\boldsymbol{\gamma} ^{\mathrm{r}}(K_{1}^{-1}),\]
and more generally,
\[(\boldsymbol{\tilde{\gamma}}^{\mathrm{r}})^{j}\left(T(V)\right)=(\boldsymbol{ \gamma}^{\mathrm{r}})^{j}(V)(\boldsymbol{\gamma}^{\mathrm{r}})^{j}(K_{0}^{-1}) (\boldsymbol{\gamma}^{\mathrm{r}})^{j-1}(K_{1}^{-1})\ldots\boldsymbol{\gamma} ^{\mathrm{r}}(K_{j-1}^{-1}).\]
Consequently,
\[\boldsymbol{\gamma}^{\mathrm{r}}\left((\boldsymbol{\tilde{\gamma}}^{\mathrm{r} })^{j}(T(V))\right)=(\boldsymbol{\tilde{\gamma}}^{\mathrm{r}})^{j+1}(T(V)) \boldsymbol{\gamma}^{\mathrm{r}}(K_{j}). \tag{4.30}\]
Recall that \(\tilde{H}^{\rm r}(U)=\dots(\tilde{\boldsymbol{\gamma}}^{\rm r})^{2}(\bar{\bar{V}}^ {\rm r})\tilde{\boldsymbol{\gamma}}^{\rm r}(\bar{\bar{V}}^{\rm r})\) and \(\bar{\bar{V}}^{\rm r}=T(\bar{V}^{\rm r})\), hence \(\tilde{H}^{\rm r}(U)^{-1}=\tilde{\boldsymbol{\gamma}}^{\rm r}(T(V))(\tilde{ \boldsymbol{\gamma}}^{\rm r})^{2}(T(V))\dots\) for \(V=(\bar{V}^{\rm r})^{-1}\). Therefore, (4.30) yields
\[\boldsymbol{\gamma}^{\rm r}(\tilde{H}^{\rm r}(U)^{-1}) =(\tilde{\boldsymbol{\gamma}}^{\rm r})^{2}(T(V))\boldsymbol{ \gamma}^{\rm r}(K_{1})(\tilde{\boldsymbol{\gamma}}^{\rm r})^{3}(T(V)) \boldsymbol{\gamma}^{\rm r}(K_{2})\dots\] \[=\left((\tilde{\boldsymbol{\gamma}}^{\rm r})^{2}(T(V))(\tilde{ \boldsymbol{\gamma}}^{\rm r})^{3}(T(V))\dots\right)\boldsymbol{\gamma}^{\rm r }(K^{\prime})=\tilde{\boldsymbol{\gamma}}^{\rm r}(\bar{\bar{V}}^{\rm r}) \tilde{H}^{\rm r}(U)^{-1}\boldsymbol{\gamma}^{\rm r}(K^{\prime})\]
for some \(K^{\prime}\in\mathcal{K}\) due to commutation rules (4.27). Further, the definition of \(H^{\rm r}(U)\) in Section 3.1 immediately yields \(\boldsymbol{\gamma}^{\rm r}(H^{\rm r}(U))=H^{\rm r}(U)\boldsymbol{\gamma}^{\rm r}((\bar{V}^{\rm r})^{-1})\). So, finally,
\[\begin{split}\boldsymbol{\gamma}^{\rm r}(C) &=H^{\rm r}(U)\boldsymbol{\gamma}^{\rm r}((\bar{V}^{\rm r})^{-1})\tilde{\boldsymbol{\gamma}}^{\rm r}(\bar{\bar{V}}^{\rm r})\tilde{H}^{\rm r}(U)^{-1}\boldsymbol{\gamma}^{\rm r}(K^{\prime})\\ &=H^{\rm r}(U)\boldsymbol{\gamma}^{\rm r}\left((\bar{V}^{\rm r})^{-1}\bar{\bar{V}}^{\rm r}\right)\tilde{H}^{\rm r}(U)^{-1}\boldsymbol{\gamma}^{\rm r}(K^{\prime})\\ &=H^{\rm r}(U)\boldsymbol{\gamma}^{\rm r}(K^{\prime\prime})\tilde{H}^{\rm r}(U)^{-1}\boldsymbol{\gamma}^{\rm r}(K^{\prime})=H^{\rm r}(U)\tilde{H}^{\rm r}(U)^{-1}\boldsymbol{\gamma}^{\rm r}(K^{-1})=C\boldsymbol{\gamma}^{\rm r}(K^{-1})\end{split}\]
for some \(K\in\mathcal{K}\); here the equality in the second line follows from (4.29), the first equality in the third line follows from Lemma 4.19 for \(K^{\prime\prime}=(\bar{V}^{\rm r})^{-1}(\bar{\bar{V}}^{\rm r})\in\mathcal{K}\), and the second equality follows from commutation rules (4.27).
(ii) Indeed, by (i), \(\boldsymbol{\gamma}^{\rm r}(K)=\boldsymbol{\gamma}^{\rm r}(C)^{-1}C\) and hence
\[(\boldsymbol{\gamma}^{\rm r})^{j}(K)=(\boldsymbol{\gamma}^{\rm r})^{j}(C)^{-1 }(\boldsymbol{\gamma}^{\rm r})^{j-1}(C),\]
so that \(\dots(\boldsymbol{\gamma}^{\rm r})^{2}(K)\boldsymbol{\gamma}^{\rm r}(K)=C\).
To complete the proof of Theorem 4.18 we have to find an explicit expression for the matrix \(K\) in Lemma 4.20, that is, to compute parameters \(\xi_{1},\dots,\xi_{p-1}\) in (4.26). These parameters are determined uniquely via equations (4.23), (4.24) and the determinantal description of functions \(f_{ij}(Z)\) and \(f_{ij}(\tilde{Z})\) for a particular collection of \(p-1\) pairs \((i,j)\). There are four cases to consider depending on whether the deleted root \(\alpha\) is \(k\) or \(m-1\) and on whether or not \(\gamma^{\rm r}\) reverses the orientation of \([k,m-1]\).
_Case 1:_\(\alpha=k\), \(\gamma^{\rm r}\) preserves the orientation of \([k,m-1]\). Let \(K=\operatorname{diag}(\boldsymbol{1}_{k-1},\Xi,\boldsymbol{1}_{n-m})\) with \(\Xi=\begin{bmatrix}1&\xi\\ 0&\boldsymbol{1}_{p-1}\end{bmatrix}\). Denote \(q=\gamma^{\rm r}(k)\) and consider \(f_{qj}(Z)\) for \(j\in[n-p+2,n]\). Recall that \(f_{qj}(Z)\) is the determinant of the principal trailing submatrix \(M\) of \(\mathcal{L}(q,j)\) such that the entry in the upper left corner of \(M\) is \(z_{qj}\). It is easy to see that for \(j\) as above, the top left block of \(M\) is a \(Y\)-block, and since the orientation is preserved, the block immediately to the right of it is an \(X\)-block with the exit point \((k+n-j+1,1)\). By Remark 4.7, this determinant does not change if the first of the above blocks is replaced by the corresponding block of \(\boldsymbol{\gamma}^{\rm r}(N_{+})Z\), and the second one by the corresponding block of \(N_{+}Z\). We choose \(N_{+}=C^{-1}\), hence, as mentioned in the proof of Lemma 4.20(ii), \(\boldsymbol{\gamma}^{\rm r}(N_{+})=\boldsymbol{\gamma}^{\rm r}(K)C^{-1}\). Consequently, by (4.25), the matrix in the first block is \(\boldsymbol{\gamma}^{\rm r}(K)\tilde{Z}\), and the matrix in the second block is \(\tilde{Z}\).
Consider the Laplace expansion of the matrix \(M\) amended as explained above with respect to the first block column. In the obtained sum, apply Remark 4.10 with \(A=\tilde{Z}\) to all second factors and collect all the obtained expressions back to the unexpanded form. We thus obtain
\[f_{qj}(Z)=\det\left[\begin{array}{cc}\begin{array}{c}\Xi\tilde{Z}_{[q,q+p-1]}^{[j,n]}\\ \boldsymbol{0}\end{array}&(\tilde{Z}H^{\rm c}(U)^{-1})_{[k,n]}^{[1,j-k]}\end{array}\right]t_{k+n-j}(U), \tag{4.31}\]
where \(\mathbf{0}\) is a zero \((n-k-p+1)\times(n-j+1)\) matrix. In a similar way,
\[\tilde{f}_{qj}(\tilde{Z})=\det\left[\begin{array}{cc}\tilde{Z}_{[q,q+p-1]}^{[j,n]}&0\ \ldots\ 0\\ \mathbf{0}&(\tilde{Z}H^{c}(U)^{-1})_{[k+1,n]}^{[1,j-k]}\end{array}\right]\tilde{ t}_{k+n-j}(U); \tag{4.32}\]
note that in this case there is no need to amend the first two blocks of \(\tilde{M}\). Further, (4.1) and (4.23), (4.24) yield
\[t_{k+n-j}(U)=\begin{cases}\tilde{t}_{k+n-j}(U)f_{k+1,1}(\tilde{Z})&\text{if $(k+ 1,1)$ is subordinate to $(k+n-j+1,1)$}\\ \tilde{t}_{k+n-j}(U)&\text{otherwise.}\end{cases}\]
Consequently, (4.31), (4.32) and (4.23), (4.24) yield
for \(j\in[n-p+2,n]\). In other words, let \(P=(\tilde{Z}H^{c}(U)^{-1})_{[k+1,n]}^{[1,n-k]}\) be a \((n-k)\times(n-k)\) matrix and \(v=(\tilde{Z}H^{c}(U)^{-1})_{k}^{[1,n-k]}\) be an \((n-k)\)-vector, then all dense \((n-k+1)\times(n-k+1)\) minors of the \((n-k+1)\times(n-k+p-1)\) matrices
are equal, which gives \(p-1\) linear equations for \(\xi_{1},\ldots,\xi_{p-1}\). Define a unipotent upper triangular \((n-k+1)\times(n-k+1)\) matrix \(\Theta\) via \(\Theta=\begin{bmatrix}1&vP^{-1}\\ 0&\mathbf{1}_{n-k}\end{bmatrix}\), then multiplying the second matrix above on the left by \(\Theta\) we preserve all dense \((n-k+1)\times(n-k+1)\) minors and obtain
\[\left[\begin{array}{cc}\Theta_{[1,p]}^{[1,p]}\tilde{Z}_{[q,q+p-1]}^{[n-p+2, n]}&v\\ \mathbf{0}&P\end{array}\right].\]
Consequently, all the minors in question are equal for \(\Xi=\Theta_{[1,p]}^{[1,p]}\), which yields
\[\xi_{i}=\frac{(-1)^{i-1}\det P^{(i)}}{\det P},\qquad i\in[1,p-1],\]
where \(P^{(i)}\) is obtained from \(P\) via replacing the \(i\)th row by \(v\). Note that this solution remains valid if \(\tilde{Z}_{[q,q+p-1]}^{[n-p+2,n]}\) above is replaced by an arbitrary \(p\times(p-1)\) matrix \(A\). It follows from the lower semicontinuity of the rank function that \(\xi_{i}\) are defined uniquely, since for \(A=\begin{bmatrix}\star\\ \mathbf{1}_{p-1}\end{bmatrix}\) the system of equations for \(\xi_{i}\) is triangular and diagonal elements are minors of \(P\). Finally, we invoke once again Theorem 4.4 and Remark 4.10 to conclude that the ratio above is equal to
\[\frac{(-1)^{i-1}\det\tilde{\mathcal{L}}^{(i)}(k+1,1)(\tilde{Z})}{\det\tilde{ \mathcal{L}}(k+1,1)(\tilde{Z})},\]
where \(\tilde{\mathcal{L}}^{(i)}(k+1,1)(\tilde{Z})\) is obtained from \(\tilde{\mathcal{L}}(k+1,1)(\tilde{Z})\) via replacing the \(i\)th row of its upper leftmost block, which is a submatrix of \(\tilde{Z}_{[k+1,n]}\), by the corresponding segment of the row \(\tilde{Z}_{k}\) (cf. the construction preceding Lemma 4.15). Consequently, \(\xi_{i}\) are polynomials in \(\tilde{Z}\) divided by \(\tilde{f}_{k+1,1}(\tilde{Z})\), as required.
_Case 2: \(\alpha=m-1\), \(\gamma^{\mathrm{r}}\)_ preserves the orientation of \([k,m-1]\). Let \(K=\mathrm{diag}(\mathbf{1}_{k-1},\Xi,\mathbf{1}_{n-m})\) with \(\Xi=\begin{bmatrix}\mathbf{1}_{p-1}&\xi^{T}\\ 0&1\end{bmatrix}\). Put \(q=\gamma^{\mathrm{r}}(k)\) as before and consider \(f_{q+j,n-p+j+1}(Z)\) for \(j\in[1,p-1]\). Note that \(f_{q+j,n-p+j+1}(Z)\) is the determinant of the principal trailing submatrix \(M\) of \(\mathcal{L}(m,1)\) such that the entry in the upper left corner of \(M\) is \(z_{q+j,n-p+j+1}\). It is easy to see that for \(j\) as above, the top left block of \(M\) is a \(Y\)-block, same as in the previous case, and since the orientation is preserved, the block immediately to the right of it is an \(X\)-block with the exit point \((m,1)\). Arguing as in Case 1, we arrive at the equality of the first \(p-1\) leading minors of the \((n-k+1)\times(n-k+1)\) matrices
\[\left[\begin{array}{cc}\Xi\tilde{Z}_{[q,q+p-1]}^{[n-p+2,n]}&(\tilde{Z}H^{ \mathrm{c}}(U)^{-1})_{[k,n]}^{[1,n-m+1]}\\ \mathbf{0}&(\tilde{Z}H^{\mathrm{c}}(U)^{-1})_{[m,n]}^{[1,n-m+1]}\end{array} \right],\]
which yields a triangular system of linear equations on \(\xi_{1},\ldots,\xi_{p-1}\). Write
\[(\tilde{Z}H^{\mathrm{c}}(U)^{-1})_{[k,n]}^{[1,n-m+1]}=\begin{bmatrix}P^{\prime }\\ P\end{bmatrix}\]
where \(P^{\prime}\) consists of the upper \(p-1\) rows, and \(P\) is the remaining square submatrix, and define a unipotent upper triangular \((n-k+1)\times(n-k+1)\) matrix \(\Theta\) via \(\Theta=\begin{bmatrix}\mathbf{1}_{p-1}&P^{\prime}P^{-1}\\ \mathbf{0}&\mathbf{1}_{n-m+2}\end{bmatrix}\). Multiplication of the second matrix above on the left by \(\Theta\) preserves the leading minors and produces
\[\left[\begin{array}{cc}\Xi\tilde{Z}_{[q,q+p-1]}^{[n-p+2,n]}&(\tilde{Z}H^{ \mathrm{c}}(U)^{-1})_{[k,n]}^{[1,n-m+1]}\\ \mathbf{0}&\end{array}\right].\]
Consequently, all the minors in question are equal for \(\Xi=\Theta_{[1,p]}^{[1,p]}\), which yields
\[\xi_{i}=\frac{\det P^{((i))}}{\det P},\qquad i\in[1,p-1],\]
where \(P^{((i))}\) is obtained from \(P\) via replacing its first row by the \(i\)th row of \(P^{\prime}\). The same reasoning as in Case 1 shows that \(\xi_{i}\) are polynomials in \(\tilde{Z}\) divided by \(\tilde{f}_{m1}(\tilde{Z})\), as required.
_Cases 3 and 4._ These are the two cases when \(\alpha=k\) or \(\alpha=m-1\) and \(\gamma^{\mathrm{r}}\) reverses the orientation of \([k,m-1]\). The treatment of these cases is very similar to the treatment described above. In both cases the second block of the matrix \(M\) is an \(X^{\dagger}\)-block, so \(\tilde{Z}H^{\mathrm{c}}(U)^{-1}\) in all formulas should be replaced by \((\tilde{Z}H^{\mathrm{c}}(U)^{-1})^{\dagger}\). Further, finding \(\boldsymbol{\gamma}^{\mathrm{r}}(K)\) now involves the conjugation of the cofactor matrix by \(w_{0}\mathbf{J}\). Note that conjugation by \(w_{0}\mathbf{J}\) of the cofactor matrix of \(\Xi\) used in Case 1 gives \(\Xi\) used in Case 2. Consequently, the argument of Case 1 should be now used for \(\alpha=m-1\), and the argument used in Case 2 should be used for \(\alpha=k\).
Therefore, the proof of Theorem 4.17 is completed.
|
2309.10059 | On polynomial solutions of certain finite order ordinary differential
equations | Some properties and relations satisfied by the polynomial solutions of a
bispectral problem are studied. Given a finite order differential operator,
under certain restrictions, its polynomial eigenfunctions are explicitly
obtained, as well as the corresponding eigenvalues. Also, some linear
transformations are applied to sequences of eigenfunctions and a necessary
condition for this to be a sequence of eigenfunctions of a new differential
operator is obtained. These results are applied to the particular case of
classical Hermite polynomials. | L. M. Anguas, D. Barrios Rolanía | 2023-09-18T18:13:48Z | http://arxiv.org/abs/2309.10059v1 | # On polynomial solutions of certain finite order
###### Abstract
Some properties and relations satisfied by the polynomial solutions of a bispectral problem are studied. Given a finite order differential operator, under certain restrictions, its polynomial eigenfunctions are explicitly obtained, as well as the corresponding eigenvalues. Also, some linear transformations are applied to sequences of eigenfunctions and a necessary condition for this to be a sequence of eigenfunctions of a new differential operator is obtained. These results are applied to the particular case of classical Hermite polynomials.
keywords: Differential operator, bispectral problem, polynomial eigenfunctions. MSC: 15B99, 15A23, 34L10, 39A70, 33D45
## 1 Introduction and main results
We consider the ordinary differential operator of order \(N\)
\[L\equiv\sum_{i=0}^{N}a_{i}(x)\partial_{x}^{i} \tag{1}\]
where \(a_{i}(x)\) are polynomials in the variable \(x\) with \(\deg(a_{i})\leq i\), and \(\partial_{x}^{i}\), \(i=1,\ldots,N\), denotes the derivative of order \(i\) with respect to \(x\). We also consider a sequence \(\{\lambda_{n}\}\subset\mathbb{C}\) of eigenvalues and the corresponding sequence of eigenfunctions \(\{P_{n}\},\) which we assume to be monic polynomials with \(\deg(P_{n})=n\) for each \(n\in\mathbb{N}\). That is,
\[\sum_{i=1}^{N}a_{i}(x)\partial_{x}^{i}P_{n}(x)=\lambda_{n}P_{n}(x)\,,\quad \forall n\in\mathbb{N}. \tag{2}\]
The polynomials \(\{P_{n}\}\) satisfying (2) are called _eigenpolynomials_ in this work. We study the relations between the operator \(L\) and its eigenvalues and eigenpolynomials.
Although we study eigenpolynomials of (1) in general, we are especially interested in families of polynomials that are at the same time eigenfunctions of a certain difference operator \(J\). Such a sequence \(\{P_{n}\}\) of polynomials satisfies
\[J\left(\begin{array}{c}P_{0}(x)\\ P_{1}(x)\\ \vdots\end{array}\right)=x\left(\begin{array}{c}P_{0}(x)\\ P_{1}(x)\\ \vdots\end{array}\right), \tag{3}\]
where
\[J=\left(\begin{array}{ccccccccc}\alpha_{0,0}&1&0&\ldots&&&&\\ \vdots&\ddots&\ddots&\ddots&&&&\\ \alpha_{p,0}&\cdots&\alpha_{p,p}&1&0&\cdots&&&\\ 0&\alpha_{p+1,1}&\cdots&\alpha_{p+1,p+1}&1&0&\cdots&&&\\ &0&\ddots&\ddots&\ddots&\ddots&&&\\ &&0&\alpha_{n,n-p}&\cdots&\alpha_{n,n}&1&0&\cdots\\ &&&&\ddots&\ddots&\ddots&\ddots&\ddots&\ddots&\end{array}\right). \tag{4}\]
Equation (3) is satisfied if and only if those polynomials satisfy a \((p+2)\)-term recurrence relation
\[\sum_{k=n-p}^{n-1}\alpha_{n,k}P_{k}(x)+(\alpha_{n,n}-x)P_{n}(x)+P_{n+1}=0,\quad n =0,1,\ldots, \tag{5}\]
with initial conditions
\[P_{0}=1,\quad P_{-1}=\cdots=P_{-p}=0.\]
The difference operator is given by
\[J(n)P_{n}=\sum_{k=n-p}^{n}\alpha_{n,k}P_{k}+P_{n+1}\]
and we have
\[\left(J(n)P_{n}\right)(x)=xP_{n}(x). \tag{6}\]
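To make the interplay between (5) and (6) concrete, the following short Python/SymPy sketch (with arbitrarily chosen entries \(\alpha_{n,k}\), not taken from the paper) generates the monic polynomials from a \((p+2)\)-banded matrix as in (4) and checks relation (6) row by row.

```python
import sympy as sp

x = sp.symbols('x')
p, n_max = 2, 6

# arbitrary entries alpha_{n,k} of a (p+2)-banded Hessenberg matrix as in (4)
alpha = {(n, k): sp.Rational(n + k + 1, 2)
         for n in range(n_max + 1) for k in range(max(0, n - p), n + 1)}

# build P_0, ..., P_{n_max} from the recurrence (5), with P_{-1} = ... = P_{-p} = 0
P = [sp.Integer(1)]
for n in range(n_max):
    P.append(sp.expand(x*P[n] - sum(alpha[(n, k)]*P[k] for k in range(max(0, n - p), n + 1))))

# check (6): row n of J applied to (P_0, P_1, ...)^T equals x * P_n
for n in range(n_max):
    row = sum(alpha[(n, k)]*P[k] for k in range(max(0, n - p), n + 1)) + P[n + 1]
    assert sp.expand(row - x*P[n]) == 0
```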
If the polynomials \(P_{n}\) satisfy (2) and (6), we say that the sequence \(\{P_{n}\}\), \(n\in\mathbb{N}\), is a solution for the bispectral problem defined by \(L\) and \(J\).
S. Bochner [8] studied the above problem for the case where the order of the differential operator is \(N=2\) and determined the polynomial solutions of (2). He completely solved the problem, and his classification defined the nowadays well-known families of classical orthogonal polynomials which correspond to \(p=1\) in (5). Some years later, Krall [15], [16] studied the differential operator of order \(N=4\), giving a classification with seven families. This classification includes three new families of polynomial eigenfunctions that cannot be reduced to operators of order 2. Since the celebrated paper by Bochner and the relevant contributions by Krall, there have been several contributions about the bispectral problem, see, for example, [7], [17]. Some extensions of these works have been attempted, even for operators in differences with complex coefficients [13], [14]. However, the difficulty of the problem has made impossible to obtain conclusions as relevant as those already known for the operators orders \(N=2\) and \(N=4\). The goal of this work is to shed some light on this problem by providing some relationships between the differential operator (1) and its eigenpolynomials. We hope that these contributions open new possibilities for solving the problem with a general order \(N\).
In what follows, we assume that the differential operator (1) is given; we also assume that there exist both the sequence of eigenvalues \(\{\lambda_{n}\}\) and the corresponding sequence of eigenpolynomials \(\{P_{n}\}\) satisfying (2). We may assume \(a_{0}\equiv 0\) because, otherwise, we would substitute \(a_{0}\) by \(a_{0}-\lambda_{0}\). For the same reason, we take \(\lambda_{0}=0\). Also, we define
\[a_{n}(x)=0,\quad n>N. \tag{7}\]
Then, for each \(n\in\mathbb{N}\) we write the polynomials \(a_{n}(x)\) and \(P_{n}(x)\) as
\[a_{n}(x)=\sum_{i=0}^{n}a_{n,i}x^{i},\quad a_{n,i}\in\mathbb{C},\,i=0,1,\ldots,n, \tag{8}\]
and
\[P_{n}(x)=\sum_{i=0}^{n}b_{n,i}x^{i},\quad b_{n,n}=1,b_{n,i}\in\mathbb{C},\,i= 0,1,\ldots,n-1. \tag{9}\]
A relevant tool in this paper is the sequence \(\{\delta_{n}^{(k)}\}\) defined from the coefficients of the polynomials \(a_{i}(x),\,i=1,\ldots,N,\) as
\[\delta_{n}^{(k)}=\sum_{i=k}^{n}\binom{n}{i}i!a_{i,i-k},\quad k=0,1,\ldots,n\,. \tag{10}\]
The following theorem is the key to understand the connection between these sequences (10) and the coefficients of the eigenpolynomials. We remark that, in addition, this theorem provides a valuable method to obtain the coefficients \(b_{n,k},\,k=0,1,\ldots,n-1,\) from the polynomials \(P_{n}(x)\) (see (9)).
**Theorem 1**.: _For the sequence \(\{P_{n}(x)\}\) of eigenpolynomials and the sequence \(\{\lambda_{n}\}\) of eigenvalues of \(L\) we have_
1. _For each_ \(n\in\mathbb{N},\)__(2) _is equivalent to_ \[\sum_{k=0}^{N}\delta_{m+k}^{(k)}b_{n,m+k}=\lambda_{n}b_{n,m},\quad m=0,1, \ldots,n,\] (11) _where_ \(b_{n,m+k},\,k=0,\ldots,N,\) _are the coefficients of the polynomials_ \(\{P_{n}\}\) _as described in (_9_)._
2. _Let_ \(M\) _be the semi-infinite upper triangular matrix_ \[M=\left(\begin{array}{cccccccc}\delta_{0}^{(0)}&\delta_{1}^{(1)}&\cdots&\cdots&\delta_{N}^{(N)}&0&&\\ 0&\delta_{1}^{(0)}&\delta_{2}^{(1)}&\cdots&\cdots&\delta_{N+1}^{(N)}&0&\\ 0&0&\delta_{2}^{(0)}&\delta_{3}^{(1)}&\cdots&\cdots&\delta_{N+2}^{(N)}&0\\ &&\ddots&\ddots&\ddots&&\ddots&\ddots\end{array}\right)\] _and let_ \(M_{n+1}\) _be the truncation of_ \(M\) _formed by its first_ \(n+1\) _rows and columns for each fixed_ \(n\in\mathbb{N}.\) _Assume_ \[\lambda_{n}\neq 0,\lambda_{1},\,\lambda_{2},\ldots,\lambda_{n-1},\quad n=1,2,\ldots\] (12) _Then_ \(P_{n}(x),\,n=1,2,\ldots,\) _is the unique monic polynomial satisfying (_2_) whose coefficients_ \(b_{n,0},\cdots,b_{n,n-1}\) _in (_9_) determine an eigenvector of_ \(M_{n+1}\) _corresponding to the eigenvalue_ \(\lambda_{n}\)_. That is,_ \[(M_{n+1}-\lambda_{n}I_{n+1})b_{n}=0,\] (13) _where_ \(b_{n}=(b_{n,0},\cdots,b_{n,n-1},1)^{T}.\)__
(We understand, here and in the rest of the paper, that \(b_{n,s}=0\) when \(s>n.\))
As a consequence of (11), the eigenvalues of \(L\) are determined as
\[\lambda_{n}=\delta_{n}^{(0)},\quad n\in\mathbb{N}. \tag{14}\]
From (10) and (14), if (2) holds then the eigenvalues can be obtained as
\[\lambda_{n}=\sum_{i=1}^{\min\{n,N\}}\binom{n}{i}i!a_{i,i},\quad n=1,2,\ldots \tag{15}\]
(this is also a well-known fact in the literature, see [10], [16]). This expression shows that, given a differential operator \(L\), the sequence of eigenvalues \(\{\lambda_{n}\}\) is unique.
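As an illustrative sketch of how (10), (13) and (14) can be used in practice (not part of the paper's argument), the following Python/SymPy fragment solves the triangular system (13) by back substitution for the classical operator \(a_{1}(x)=-2x\), \(a_{2}(x)=1\), used as the running example in Section 2, and compares the result with the monic Hermite polynomials.

```python
import sympy as sp

x = sp.symbols('x')
N = 2
# a[(i, j)] = coefficient of x^j in a_i(x); here a_1(x) = -2x, a_2(x) = 1
a = {(1, 1): -2, (2, 0): 1}

def delta(n, k):
    # delta_n^{(k)} from (10)
    return sum(sp.binomial(n, i)*sp.factorial(i)*a.get((i, i - k), 0) for i in range(k, n + 1))

def eigenpolynomial(n):
    # back substitution in (13); row m reads (delta_m^{(0)} - lambda_n) b_m + sum_{k>=1} delta_{m+k}^{(k)} b_{m+k} = 0
    lam = delta(n, 0)                      # lambda_n = delta_n^{(0)}, see (14)
    b = [sp.Integer(0)]*(n + 1)
    b[n] = sp.Integer(1)
    for m in range(n - 1, -1, -1):
        rhs = sum(delta(m + k, k)*b[m + k] for k in range(1, N + 1) if m + k <= n)
        # condition (12) guarantees lambda_n != lambda_m for m < n, so the division is legitimate
        b[m] = rhs/(lam - delta(m, 0))
    return sum(b[i]*x**i for i in range(n + 1))

for n in range(1, 6):
    assert sp.expand(eigenpolynomial(n) - sp.hermite(n, x)/2**n) == 0   # monic Hermite
```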
In the proof of Theorem 1 we will see that condition (12) is not necessary for the existence of eigenpolynomials. However, under such a condition, the relation (13) provides an easy method to obtain the coefficients of these polynomials as coordinates of eigenvectors of a sequence of finite triangular matrices. Moreover, (12) is also a relevant condition elsewhere in this work. In particular, we highlight its importance in the expression of the coefficients of the eigenpolynomials in (16). Henceforth we assume that condition (12) is satisfied. That is, we assume that the eigenvalues \(\lambda_{n},\,n=0,1,\ldots,\) are all different from zero and also different from each other, which only depends on the leading coefficients \(a_{n,n}\) in (8).
In our next result, we provide the explicit expression of the coefficients of each eigenpolynomial in terms of the elements of the sequence \(\{\delta_{n}^{(k)}\}\). Due to (10), this result implies the uniqueness of the sequence \(\{P_{n}\}\) of eigenpolynomials, which is completely determined by the coefficients of polynomials \(a_{i}(x)\) that define the differential operator \(L\).
**Theorem 2**.: _Under the above conditions and using the notation from (9), for each \(n\in\mathbb{N}\) we have_
\[b_{n,i}=\sum_{E_{n}^{(i)}}\left(\prod_{s=1}^{k}\frac{\delta_{i+i_{1}+\cdots+i_{s}}^{(i_{s})}}{\lambda_{n}-\lambda_{i+i_{1}+\cdots+i_{s-1}}}\right),\quad i=0,1,\ldots,n-1, \tag{16}\]
_where the sum is extended to the set \(E_{n}^{(i)}=\{(i_{1},\ldots,i_{k})\in\mathbb{N}^{k}:\,k\in\mathbb{N},\,i_{1}+ \cdots+i_{k}=n-i\}\) and we understand \(i+i_{1}+\cdots+i_{s-1}=i\) when \(s=1\)._
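A direct way to see (16) at work is to enumerate the compositions in \(E_{n}^{(i)}\) and compare the resulting sum with a known closed form. The following Python/SymPy sketch (illustrative only, not part of the paper) does this for the Hermite operator of Section 2, whose coefficients are given later in (28).

```python
import sympy as sp

def compositions(total):
    # all tuples (i_1, ..., i_k) of positive integers with i_1 + ... + i_k = total
    if total == 0:
        yield ()
        return
    for first in range(1, total + 1):
        for rest in compositions(total - first):
            yield (first,) + rest

def delta(r, k):
    # delta_r^{(k)} for the Hermite operator a_1(x) = -2x, a_2(x) = 1
    return {0: -2*r, 2: r*(r - 1)}.get(k, 0)

def b(n, i):
    # coefficient b_{n,i} computed from formula (16)
    if i == n:
        return sp.Integer(1)
    total = sp.Integer(0)
    for comp in compositions(n - i):
        term, partial = sp.Integer(1), i
        for i_s in comp:
            term *= sp.Integer(delta(partial + i_s, i_s))/(delta(n, 0) - delta(partial, 0))
            partial += i_s
        total += term
    return total

# compare with the closed form (28): b_{n,n-2s} = (-1)^s n! / (2^{2s} (n-2s)! s!)
n = 6
for s in range(n//2 + 1):
    closed = sp.Integer(-1)**s*sp.factorial(n)/(sp.Integer(2)**(2*s)*sp.factorial(n - 2*s)*sp.factorial(s))
    assert b(n, n - 2*s) == closed
assert all(b(n, i) == 0 for i in range(n) if (n - i) % 2 == 1)
```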
The above concepts and results allow us to study the linear transformations
\[P_{n}^{(1)}=P_{n}+\gamma_{n}P_{n-1} \tag{17}\]
of the eigenpolynomials \(\{P_{n}\},\) where the sequence \(\{P_{n}^{(1)}\}\) is obtained from a given sequence \(\{\gamma_{n}\}\subset\mathbb{C}.\) We focus our interest in the case of new families \(\{P_{n}^{(1)}\}\) which are eigenpolynomials of some finite order differential operator and also satisfy a \((p+2)\)-term recurrence relation. In the following theorem, a necessary condition for (17) to provide a new family of eigenpolynomials for some finite order operator is given.
**Theorem 3**.: _Assume \(\{P_{n}^{(1)}\}\) and \(\{\lambda_{n}\}\) are eigenpolynomials and eigenvalues, respectively, of some finite order differential operator_
\[L^{(1)}=\sum_{i=0}^{\widetilde{N}}a_{i}^{(1)}(x)\partial_{x}^{i} \tag{18}\]
_of order \(\widetilde{N}\geq N\). Then_
\[\sum_{j=0}^{k-1}(-1)^{j}\left[\sum_{s=1}^{N}\binom{n-k+j}{s-1}s!a_{s,s}\right] b_{n-k+j,n-k}\sum_{r=1}^{k-j}\gamma_{n-k+j+1}\cdots\gamma_{n-k+j+r}E_{k,n,j+r-1}=0 \tag{19}\]
_for any \(n,k\in\mathbb{N}\) such that \(n\geq k>\widetilde{N},\) where_
\[E_{k,n,s}=\left|\begin{array}{ccccc}b_{n-k+s+2,n-k+s+1}&b_{n-k+s+3,n-k+s+1} &\cdots&\cdots&b_{n,n-k+s+1}\\ 1&b_{n-k+s+3,n-k+s+2}&\cdots&\cdots&b_{n,n-k+s+2}\\ 0&1&\cdots&\cdots&b_{n,n-k+s+3}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}\\ \end{array}\right|. \tag{20}\]
We point out the relevance of Theorem 3, since it makes it unnecessary to study the coefficients \(a_{i}^{(1)}(x)\) and the values of \(\widetilde{N},\) reducing the problem of non-existence to the verification of a condition that depends only on the sequence \(\{\gamma_{n}\},\) as well as the initial operator (1) and its eigenpolynomials \(\{P_{n}\}.\)
In the last part of this work, we will study a particular case of the linear transformation (17). More precisely, we will focus on the Geronimus transformations, a particular case of Darboux transformations (see [4]). Depending on the field of application, different versions of this kind of transformations have been used in the literature (see the introduction in [9] for more details). We will use the definitions introduced in [2], which have also been used in [1, 3, 5, 6]. For the convenience of the reader, Section 3 will include a brief summary of these concepts. In [11], [13], the application of Geronimus transformations, under certain conditions, to some classical polynomials was studied. More precisely, in [11] it was shown that the Krall polynomials can be obtained from some instances of the Laguerre and the Jacobi polynomials under these transformations. Moreover, in [13], the Laguerre polynomials were studied, and it was proved that several applications of the Geronimus transformation turn these polynomials into eigenfunctions of a finite order differential operator. The Hermite polynomials, in combination with the Bessel polynomials, were only partially studied in [12]. In our paper, we complement these works by showing that the Geronimus
transformation on the Hermite polynomials does not produce a new family of eigenfunctions for any finite order differential operator. This will be proved using Theorem 3, that is, we will prove that condition (19) is not satisfied.
The rest of the paper is organized as follows. In Section 2 we analyze the relationships between the sequence \(\{\delta_{n}^{(k)}\}\) given in (10) and the eigenvalues and eigenpolynomials of \(L\). In order to do that, we will prove Theorems 1-2 and, as an example, we will apply these results to the Hermite polynomials. Section 3 is devoted to the study of the linear transformations (17) of the sequence of eigenpolynomials. In this section, several auxiliary results and Theorem 3 are proved. Section 3 is applied to the Hermite polynomials in Section 4, where some further lemmas and theorems related to condition (19) are given. Finally, in Section 5 we summarize the main conclusions of this work.
## 2 Eigenvalues and eigenpolynomials of \(L\)
We start this section proving Theorems 1-2.
**Proof of Theorem 1:**
Firstly, we prove the equivalence between (2) and (11). We underline that (11) is a very important relationship between the sequence \(\{\delta_{n}^{(k)}\}\) and the sequences of eigenvalues and eigenpolynomials satisfying (2). We point out that (11) can be written as
\[\sum_{k=0}^{N}\left[\sum_{i=k}^{m+k}\binom{m+k}{i}i!a_{i,i-k}\right]b_{n,m+k} =\lambda_{n}b_{n,m},\quad n\geq 0\,,\quad m=0,1,\ldots,n\,, \tag{21}\]
which in the case \(N=2\) is a straightforward interpretation of (11) in [8]. We follow here the lines of this relevant paper of Bochner [8]. For each \(n\in\mathbb{N}\), (9) leads to
\[\begin{split}\partial_{x}^{i}P_{n}(x) &= \sum_{s=1}^{n-i+1}(i+s-1)(i+s-2)\cdots sb_{n,i+s-1}x^{s-1}\\ &= \sum_{s=1}^{n-i+1}\binom{i+s-1}{i}i!b_{n,i+s-1}x^{s-1},\quad i=1,\ldots,N\,,\end{split}\]
where we recall that \(b_{n,j}=0\) if \(j>n\) and we understand that the sum is equal to \(0\) if \(i>n\). Then, (2) is equivalent to
\[\sum_{i=1}^{N}\left(\sum_{k=0}^{i}a_{i,k}x^{k}\right)\left(\sum_{s=1}^{n-i+1} \binom{i+s-1}{i}i!b_{n,i+s-1}x^{s-1}\right)=\lambda_{n}\sum_{r=0}^{n}b_{n,r}x ^{r}. \tag{22}\]
Comparing the coefficients of \(x^{r},r=0,\ldots,\) on both sides of (22) we arrive at (21), as we wanted to prove.
In the second place, we prove (13). With this purpose, we consider (11), which is, for each fixed \(n\),
\[(\delta_{m}^{(0)}-\lambda_{n})b_{n,m}+\delta_{m+1}^{(1)}b_{n,m+1}+\cdots+ \delta_{m+N}^{(N)}b_{n,m+N}=0\,,\quad m=0,1,\ldots,n. \tag{23}\]
The relation (23) can be interpreted as an upper triangular linear system, with matrix of coefficients \(M_{n+1}\), whose unknowns are \(b_{n,0},\cdots,b_{n,n-1},b_{n,n}\). In matrix notation this is (13). Hence the coefficients of \(b_{n}\) define an eigenvector of \(M_{n+1}\) corresponding to the eigenvalue \(\lambda_{n}\).
The identity (14) together with the fact that \(\det(M_{n}-\lambda_{n}I_{n})\neq 0\) proves that the coefficients \(b_{n,0},\cdots,b_{n,n-1}\) are uniquely determined.
Next, we prove Theorem 2, which shows how the sequence \(\{\delta_{n}^{(k)}\}\) provides the explicit expression of the coefficients of eigenpolynomials.
**Proof of Theorem 2:**
Taking \(i=n-m\) in (23) we see
\[\delta_{m}^{(0)}b_{m+i,m}+\delta_{m+1}^{(1)}b_{m+i,m+1}+\cdots+\delta_{m+N}^{ (N)}b_{m+i,m+N}=\lambda_{m+i}b_{m+i,m},\quad i=0,1,\ldots,\]
where, as usual, \(b_{m+i,m+j}=0\) when \(j>i\). Then, for \(m\) fixed and \(i=1,2,\ldots\) we have
\[\begin{array}{ccccccccc}(\lambda_{m+1}-\lambda_{m})b_{m+1,m}&&&&=&\delta^{(1)}_{m +1}\\ (\lambda_{m+2}-\lambda_{m})b_{m+2,m}&-&\delta^{(1)}_{m+1}b_{m+2,m+1}&&&&=& \delta^{(2)}_{m+2}\\ \vdots&&\vdots&&&&\vdots\\ (\lambda_{m+N}-\lambda_{m})b_{m+N,m}&-&\delta^{(1)}_{m+1}b_{m+N,m+1}&-&\cdots &-&\delta^{(N-1)}_{m+N-1}b_{m+N,m+N-1}&=&\delta^{(N)}_{m+N}\\ \vdots&&\vdots&&&&\vdots&&\vdots\\ (\lambda_{s}-\lambda_{m})b_{s,m}&-&\delta^{(1)}_{m+1}b_{s,m+1}&-&\cdots&-&\delta ^{(N-1)}_{m+N-1}b_{s,m+N-1}&=&\delta^{(N)}_{m+N}b_{s,m+N}\\ \vdots&&\vdots&&&&\vdots&&\vdots\end{array} \tag{24}\]
Hence, the first relation in (24) leads to
\[b_{m+1,m}=\frac{\delta^{(1)}_{m+1}}{\lambda_{m+1}-\lambda_{m}},\]
which is (16) when \(n=m+1\) and \(i=m\).
From the second and consecutive relations of (24) we are going to obtain (16) for \(n=m+k,\,i=m,\) and \(k=2,3,\ldots\) In fact, assume that \(b_{m+k,m}\) satisfies (16) for \(k=2,3,\ldots,q\). Then, the \((q+1)\)-th relation of (24) implies
\[(\lambda_{m+q+1}-\lambda_{m})b_{m+q+1,m}=\delta^{(1)}_{m+1}b_{m+q+1,m+1}+ \delta^{(2)}_{m+2}b_{m+q+1,m+2}+\cdots+\delta^{(N)}_{m+N}b_{m+q+1,m+N}. \tag{25}\]
On the other hand, since \(m\) can be substituted in (24) by any \(m+p,\,p\in\mathbb{N}\), we know that
\[b_{m+q+1,m+j}=\sum_{E^{(m+j)}_{m+q+1}}\left(\prod_{s=1}^{k}\frac{\delta^{(i_{s })}_{m+j+i_{1}+\cdots+i_{s}}}{\lambda_{m+q+1}-\lambda_{m+j+i_{1}+\cdots+i_{s-1 }}}\right),\quad j=1,\ldots,N.\]
From this and (25),
\[b_{m+q+1,m}=\sum_{j=1}^{q+1}\frac{\delta^{(j)}_{m+j}}{\lambda_{m+q+1}-\lambda _{m}}\sum_{E^{(m+j)}_{m+q+1}}\left(\prod_{s=1}^{k}\frac{\delta^{(i_{s})}_{m+ j+i_{1}+\cdots+i_{s}}}{\lambda_{m+q+1}-\lambda_{m+j+i_{1}+\cdots+i_{s-1}}} \right),\quad j=1,\ldots,N. \tag{26}\]
Furthermore it is obvious that, if \(\sum_{s=1}^{k}i_{s}=q+1,\) then \(\sum_{i_{s}\neq j}i_{s}=q-j+1\) for each \(j\in\{i_{1},\ldots,i_{k}\}\). Thus, it is easy to see
\[E^{(m)}_{m+q+1}=\bigcup_{j=1}^{q+1}E^{(m+j)}_{m+q+1}.\]
Consequently, from (26) we arrive to
\[b_{m+q+1,m} = \sum_{j=1}^{q+1}\sum_{E^{(m+j)}_{m+q+1}}\frac{\delta^{(j)}_{m+j}} {\lambda_{m+q+1}-\lambda_{m}}\left(\prod_{s=1}^{k}\frac{\delta^{(i_{s})}_{m+j+ i_{1}+\cdots+i_{s}}}{\lambda_{m+q+1}-\lambda_{m+j+i_{1}+\cdots+i_{s-1}}}\right)\] \[= \sum_{E^{(m)}_{m+q+1}}\left(\prod_{s=1}^{k}\frac{\delta^{(i_{s})} _{m+i_{1}+\cdots+i_{s}}}{\lambda_{m+q+1}-\lambda_{m+i_{1}+\cdots+i_{s-1}}} \right),\]
which is (16) for \(n=m+q+1\) and \(i=m\).
**Remark 1**.: _Because \(\delta^{(k)}_{n},\,k=0,1,\ldots,n,\) are polynomials in \(n\), from (16) we recover the well-known fact that the coefficients \(b_{n,j}\) are rational functions of the variable \(n\)._
As an example, we consider the case of Hermite polynomials, which we denote as \(\{H_{n}\}\). The three-term recurrence relation satisfied by these polynomials is
\[\begin{cases}H_{n}=xH_{n-1}-\frac{1}{2}(n-1)H_{n-2},\ n\in\mathbb{N}\\ H_{0}=1,\quad H_{-1}=0,\end{cases}\]
and the banded matrix associated to this recurrence relation is the tridiagonal matrix
\[J=\left(\begin{array}{ccccc}0&1&0&\cdots&&\\ \frac{1}{2}&0&1&\ddots&&\\ 0&1&0&1&\ddots&\\ \vdots&0&\frac{3}{2}&\ddots&\ddots&\\ &&\ddots&\ddots&\end{array}\right). \tag{27}\]
It is well-known [8] that the Hermite polynomials are the classical eigenpolynomials of the differential operator
\[L\equiv a_{1}(x)\partial_{x}+a_{2}(x)\partial_{x}^{2}\]
where \(a_{1}(x)=-2x\) and \(a_{2}(x)=1\). With some straight computations, we obtain
\[\delta_{r}^{(k)}=\left\{\begin{array}{ccc}-2r&\text{if}&k=0\\ 0&\text{if}&k=1\\ r(r-1)&\text{if}&k=2\\ 0&\text{if}&k\geq 3\end{array}\right.,\qquad r=k,k+1,\ldots\]
The eigenvalues are \(\delta_{n}^{(0)}=\lambda_{n}=-2n\), \(n=1,2,\ldots\), and Theorem 2 is applicable because the restriction (12) is fulfilled.
Due to \(\delta_{r}^{(1)}=0\) and \(E_{n}^{(n-1)}=\{1\}\), in (16) we see \(b_{n,n-1}=0\). Also, when \(n-i\) is odd, there exists an odd index \(i_{s}\) in each \((i_{1},\ldots,i_{k})\in E_{n}^{(i)}\) and \(b_{n,i}=0\). Regarding the even coefficients \(b_{m+2s,m}\), \(m,s\in\mathbb{N}\), since \(E_{m+2s}^{(m)}\) can be substituted by \(\{(2,\ldots,2)\}\), we have
\[b_{m+2s,m}=\prod_{j=1}^{s}\frac{\delta_{m+2j}^{(2)}}{\lambda_{m+2s}-\lambda_{m +2j-2}}=\frac{(-1)^{s}}{2^{2s}}\frac{(m+2s)!}{m!s!}\]
or, what is the same,
\[b_{n,n-2s}=\frac{(-1)^{s}}{2^{2s}}\frac{n!}{(n-2s)!s!}. \tag{28}\]
Hence, we arrive to the known expression for the monic Hermite polynomials
\[H_{n}(x)=x^{n}+\sum_{s=1}^{[\frac{n}{2}]}b_{n,n-2s}x^{n-2s}=\frac{n!}{2^{n}} \sum_{s=0}^{[\frac{n}{2}]}\frac{(-1)^{s}}{(n-2s)!s!}(2x)^{n-2s}.\]
## 3 Linear transformations on eigenpolynomials
Given a banded matrix \(J\) as in (4), let \(C\in\mathbb{C}\) be such that the truncations of the infinite matrix \(CI-J\) formed by its first \(n\) rows and columns satisfy \(\det(CI_{n}-J_{n})\neq 0\) for each \(n\in\mathbb{N}\), where \(J_{n}\) denotes the corresponding truncation of \(J\). Under these conditions, it is well known [9] that there exist a lower triangular matrix \(L\) and an upper triangular matrix \(U\) such that
\[J-CI=UL.\]
In the case that concerns us, \(L\) is a \((p+1)\)-banded matrix whose diagonal entries we assume to be equal to 1. We know that there exist \(p\) bidiagonal matrices \(L^{(i)},\,i=1,2,\ldots,p,\) such that \(L=L^{(1)}L^{(2)}\cdots L^{(p)}\) (see [6]), where
\[U=\left(\begin{array}{ccccc}\gamma_{1}&1&&&\\ &\gamma_{p+1}&1&&\\ &&\gamma_{2p+1}&1&\\ &&&\ddots&\ddots\end{array}\right),\,L^{(i)}=\left(\begin{array}{ccccc}1&&&& \\ \gamma_{i+1}&1&&&\\ &\gamma_{p+i+1}&1&&\\ &&\ddots&\ddots\end{array}\right),\,i=1,\ldots,p.\]
The product \(UL^{(1)}L^{(2)}\cdots L^{(p)}\) is called the _bidiagonal Geronimus factorization_ of \(J\) and was introduced in [4]. These factorizations together with the ones introduced in [6] constitute the so-called _bidiagonal Darboux factorization_. We call a _Geronimus transformation_ of \(J\) each \((p+2)\)-banded matrix
\[J^{(s)}:=CI+L^{(p-s+1)}L^{(p-s+2)}\cdots L^{(p)}UL^{(1)}\cdots L^{(p-s)},\quad s =1,2,\ldots,p. \tag{29}\]
For each of these matrices \(J^{(s)}\) there exists an associated sequence \(\{P_{n}^{(s)}\}\) of polynomials verifying a \((p+2)\)-term recurrence relation. These polynomials are also called the _Geronimus transformed_ polynomials of \(\{P_{n}\}\) (see [4]).
In the following we analyze the relation between the Geronimus transformation and the linear transformation (17), where we assume \(\gamma_{n}\neq 0\) for each \(n\in\mathbb{N}\). We write (17) as \(v^{(1)}(x)=Tv(x),\) where \(v^{(1)}(x)=\left(P_{0}^{(1)}(x),P_{1}^{(1)}(x),\ldots\right)^{T},\,v(x)=(P_{0 }(x),P_{1}(x),\ldots)^{T}\) and
\[T=\left(\begin{array}{ccccc}1&&&\\ \gamma_{1}&1&&\\ 0&\gamma_{2}&\ddots&\\ &\ddots&\ddots&\end{array}\right). \tag{30}\]
Then \(T^{-1}\) is an infinite lower triangular matrix and \(v(x)=T^{-1}v^{(1)}(x).\) We underline at this point the formal sense of \(T^{-1}\). This matrix does not necessarily represent an operator; we just understand \(T^{-1}\) as a table of values such that the formal products of \(T\) and \(T^{-1}\) satisfy \(TT^{-1}=T^{-1}T=I\). Since \(T\) is a bidiagonal matrix, each entry of \(TT^{-1}\) and \(T^{-1}T\) is obtained from a sum with a finite number of terms.
Using (4) and (17),
\[xv^{(1)}(x)=TJv(x)=TJT^{-1}v^{(1)}(x)\]
or, equivalently,
\[\left(TJT^{-1}-xI\right)v^{(1)}(x)=0. \tag{31}\]
In general, it is possible that \(TJT^{-1}\) is not a banded Hessenberg matrix and, in that case, \(\{P_{n}^{(1)}\}\) does not satisfy a \((p+2)\)-term recurrence relation for any \(p\in\mathbb{N}\). However, if
\[J=CI+UL^{(1)}\cdots L^{(p)} \tag{32}\]
is a bidiagonal Darboux factorization of \(J\) then
\[TJT^{-1}=CI+TUL^{(1)}\cdots L^{(p)}T^{-1}.\]
Since \(TUL^{(1)}\cdots L^{(p-1)}\) is a \((p+2)\)-banded Hessenberg matrix, we have that the sequence of monic polynomials \(\{P_{n}^{(1)}\}\) satisfy a \((p+2)\)-term recurrence relation if and only if \(L^{(p)}T^{-1}\) is a diagonal matrix, that is, \(T=L^{(p)}\). Under these conditions, \(TJT^{-1}=CI+TUL^{(1)}\cdots L^{(p-1)}\) coincides with (29) when \(s=1\), that is, \(TJT^{-1}=J^{(1)}\). In other words, \(\{P_{n}^{(1)}\}\) would be the sequence of polynomials corresponding to the first Geronimus transformation of \(\{P_{n}\}.\) This is the case of the Hermite polynomials, as we will show in Theorem 4 of Section 4.
Given the sequence \(\{P_{n}\}\) of eigenpolynomials of the operator in (1), we are interested in studying whether the new polynomials \(\{P_{n}^{(1)}\}\) defined in (17) are eigenfunctions for some finite order differential operator like (1). We will prove Theorem 3 in this section, for which we first need some auxiliary results. Firstly, the following lemma gives a relation between the eigenvalues \(\{\lambda_{n}\}\) of \(L\).
**Lemma 1**.: _For \(i,\,j\in\mathbb{N}\) and \(i<j\) we have_
\[\lambda_{j}-\lambda_{i}=\sum_{s=1}^{j}\left[\binom{i}{s-1}+\binom{i+1}{s-1}+\cdots+\binom{j-1}{s-1}\right]s!a_{s,s}, \tag{33}\]
_where we understand \(\binom{m}{s-1}=0\) for \(m<s-1\)._
Proof.: Assume \(i\in\mathbb{N}\) and \(j=i+1\). Then, using (15) and \(\binom{i}{s}+\binom{i}{s-1}=\binom{i+1}{s}\), we have
\[\lambda_{i+1}-\lambda_{i} = \sum_{s=1}^{i+1}\binom{i+1}{s}s!a_{s,s}-\sum_{s=1}^{i}\binom{i}{s }s!a_{s,s} \tag{34}\] \[= (i+1)!a_{i+1,i+1}+\sum_{s=1}^{i}\left[\binom{i+1}{s}-\binom{i}{s} \right]s!a_{s,s}=\sum_{s=1}^{i+1}\binom{i}{s-1}s!a_{s,s}\]
Take now \(i,j\in\mathbb{N}\) and \(j>i+1\). If we write \(\lambda_{j}-\lambda_{i}=(\lambda_{j}-\lambda_{j-1})+\cdots+(\lambda_{i+1}- \lambda_{i})\), we can use (34) in each difference to reach (33).
Also in the study of the transformed polynomials \(\{P_{n}^{(1)}\},\) the sequence \(\{\delta_{n}^{(k)}\}\) defined in (10) plays an important role. Firstly, we show that it is possible to express such sequence explicitly in terms of the eigenvalues and the eigenpolynomials of \(L\).
**Lemma 2**.: _For \(n\in\mathbb{N}\) and \(k=1,\ldots,n,\) we have \((-1)^{k}\delta_{n}^{(k)}=\)_
\[\left|\begin{array}{ccccc}(\lambda_{n-k}-\lambda_{n-k+1})b_{n-k+1,n-k}&( \lambda_{n-k}-\lambda_{n-k+2})b_{n-k+2,n-k}&\cdots&\cdots&(\lambda_{n-k}- \lambda_{n})b_{n,n-k}\\ 1&b_{n-k+2,n-k+1}&\cdots&\cdots&b_{n,n-k+1}\\ 0&1&\cdots&\cdots&b_{n,n-k+2}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}\\ \end{array}\right|. \tag{35}\]
Proof.: The determinant on the right side of (35) is of order \(k\). We proceed by induction on \(n\). If \(k=1\), applying (23) (with \(m=n-1\)), we obtain
\[(\delta_{n-1}^{(0)}-\lambda_{n})b_{n,n-1}+\delta_{n}^{(1)}=0,\]
which is equivalent to
\[(\lambda_{n-1}-\lambda_{n})b_{n,n-1}=-\delta_{n}^{(1)},\]
that is exactly (35). We have just proved that (35) holds for all \(n\in\mathbb{N}\) and \(k=1\). In particular, (35) is true for \(n=1\).
For any \(n\in\mathbb{N}\) and \(k=1,2,\ldots,n\), let \(G_{n}^{(k)}\) be the determinant on the right hand side of (35). Take a fixed \(n\in\mathbb{N}\) and assume
\[\delta_{m}^{(k)}=(-1)^{k}G_{m}^{(k)},\quad k=1,2,\ldots,m, \tag{36}\]
for each \(m=1,2,\ldots,n-1\). We want to prove that (36) holds also for \(m=n\). In fact, expanding \(G_{n}^{(k)}\) when \(k\leq n\) along its last column and taking into account (36),
\[G_{n}^{(k)}=G_{n-1}^{(k-1)}b_{n,n-1}-G_{n-2}^{(k-2)}b_{n,n-2}+\cdots+(-1)^{k}G _{n-k+1}^{(1)}b_{n,n-k+1}+(-1)^{k+1}(\lambda_{n-k}-\lambda_{n})b_{n,n-k}.\]
Therefore
\[(-1)^{k}G_{n}^{(k)}=-\left[\delta_{n-1}^{(k-1)}b_{n,n-1}+\delta_{n-2}^{(k-2)}b _{n,n-2}+\cdots+\delta_{n-k+1}^{(1)}b_{n,n-k+1}+(\lambda_{n-k}-\lambda_{n})b_ {n,n-k}\right]\]
because \(k-i\leq n-i\) for \(i=1,2,\ldots,k-1\). From this and (23) we arrive to (36) for \(m=n\), as we wanted to prove.
Given any monic polynomial \(Q_{n}(x)=x^{n}+q_{n,n-1}x^{n-1}+\cdots+q_{n,1}x+q_{n,0}\) we define
\[\Delta_{k,n,s}(Q_{n})= \tag{37}\] \[\left|\begin{array}{ccccc}\binom{n-k}{s-1}q_{n-k+1,n-k}&\left[ \binom{n-k}{s-1}+\binom{n-k+1}{s-1}\right]q_{n-k+2,n-k}&\cdots&\cdots&\left[ \binom{n-k}{s-1}+\cdots+\binom{n-1}{s-1}\right]q_{n,n-k}\\ 1&q_{n-k+2,n-k+1}&\cdots&\cdots&q_{n,n-k+1}\\ 0&1&\cdots&\cdots&q_{n,n-k+2}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&q_{n,n-1}\end{array}\right|.\]
Furthermore, we denote
\[\Delta_{k,n,s}:=\Delta_{k,n,s}(P_{n}),\quad\Delta_{k,n,s}^{(1)}:=\Delta_{k,n,s }(P_{n}^{(1)}) \tag{38}\]
As an immediate consequence of Lemmas 1 and 2, we obtain the following.
**Lemma 3**.: _For \(n\in\mathbb{N}\) and \(k=1,2,\ldots,n\) we have_
\[(-1)^{k+1}\delta_{n}^{(k)}=\sum_{s=1}^{n}\Delta_{k,n,s}s!a_{s,s}.\]
Moreover, we prove in the following lemma that \(\Delta_{k,n,s}^{(1)}\) and \(\Delta_{k,n,s}\) in (38) are related.
**Lemma 4**.: _With the above notation,_
\[\Delta_{k,n,s}^{(1)}=\Delta_{k,n,s}+\sum_{j=0}^{k-1}(-1)^{j}\binom{n-k+j}{s-1} b_{n-k+j,n-k}\left[\sum_{r=1}^{k-j}\gamma_{n-k+j+1}\cdots\gamma_{n-k+j+r}E_{k,n,j+r-1 }\right],\]
_where \(E_{k,n,j}\) is defined in (20)._
Proof.: Let us start denoting \(D_{k,n,k-1}^{(1)}=b_{n,n-k}^{(1)}\) and
\[D_{k,n,j}^{(1)}=\left|\begin{array}{ccccc}b_{n-k+j+1,n-k}^{(1)}&b_{n-k+j+2, n-k}^{(1)}&\cdots&\cdots&b_{n,n-k}^{(1)}\\ 1&b_{n-k+j+2,n-k+j+1}^{(1)}&\cdots&\cdots&b_{n,n-k+j+1}^{(1)}\\ 0&1&\cdots&\cdots&b_{n,n-k+j+2}^{(1)}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}^{(1)}\end{array}\right|,\quad j=0,1,\ldots,k-2.\]
If we expand the determinant that define \(\Delta_{k,n,s}^{(1)}\) (see (37)-(38)) along its first row, it is easy to see that
\[\Delta_{k,n,s}^{(1)}=\sum_{j=0}^{k-1}(-1)^{j}\binom{n-k+j}{s-1}D_{k,n,j}^{(1)}. \tag{39}\]
Notice that each determinant \(D_{k,n,j}^{(1)}\) is of order \(k-j\). Furthermore, (17) implies that
\[b_{n,i}^{(1)}=b_{n,i}+\gamma_{n}b_{n-1,i},\quad n\in\mathbb{N},\quad i=0,1, \ldots,n\]
(where \(b_{n,n}^{(1)}=b_{n,n}=1,b_{n-1,n}=0\)). Using this equality in the first column of \(D_{k,n,j}^{(1)}\) and expanding this determinant as the sum of two determinants, we obtain
\[D_{k,n,j}^{(1)}=\left|\begin{array}{ccccc}b_{n-k+j+1,n-k}&b_{n-k+j+2,n-k}^{(1) }&\cdots&\cdots&b_{n,n-k}^{(1)}\\ 1&b_{n-k+j+2,n-k+j+1}^{(1)}&\cdots&\cdots&b_{n,n-k+j+1}^{(1)}\\ 0&1&\cdots&\cdots&b_{n,n-k+j+2}^{(1)}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}^{(1)}\end{array}\right|\]
\[+\gamma_{n-k+j+1}b_{n-k+j,n-k}^{(1)}\left|\begin{array}{ccccc}b_{n-k+j+2,n-k +j+1}^{(1)}&\cdots&\cdots&b_{n,n-k+j+1}^{(1)}\\ 1&\cdots&\cdots&b_{n,n-k+j+2}^{(1)}\\ 0&\ddots&&\vdots\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&1&b_{n,n-1}^{(1)}\end{array}\right| \tag{40}\]
Repeating this procedure in the second column of the first addend on the right hand side of (40),
\[D_{k,n,j}^{(1)}=\left|\begin{array}{ccccc}b_{n-k+j+1,n-k}&b_{n-k+j+2,n-k}& \cdots&\cdots&b_{n,n-k}^{(1)}\\ 1&b_{n-k+j+2,n-k+j+1}&\cdots&\cdots&b_{n,n-k+j+1}^{(1)}\\ 0&1&\cdots&\cdots&b_{n,n-k+j+2}^{(1)}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}^{(1)}\end{array}\right|\]
\[+\gamma_{n-k+j+1}b_{n-k+j,n-k}\left|\begin{array}{ccccc}b_{n-k+j+2,n-k+j+1}^ {(1)}&\cdots&\cdots&b_{n,n-k+j+1}^{(1)}\\ 1&\cdots&\cdots&b_{n,n-k+j+2}^{(1)}\\ 0&\ddots&&\vdots\\ \vdots&\ddots&\ddots&\vdots\\ 0&\cdots&1&b_{n,n-1}^{(1)}\end{array}\right|\]
because such addend can be expressed as the addition of two determinants of which the second one is zero. Iterating the procedure we see that the first addend on the right hand side of (40) is
\[D_{k,n,j}:=\left|\begin{array}{ccccc}b_{n-k+j+1,n-k}&b_{n-k+j+2,n-k}&\cdots &\cdots&b_{n,n-k}\\ 1&b_{n-k+j+2,n-k+j+1}&\cdots&\cdots&b_{n,n-k+j+1}\\ 0&1&\cdots&\cdots&b_{n,n-k+j+2}\\ \vdots&0&\ddots&&\vdots\\ \vdots&\vdots&\ddots&\ddots&\vdots\\ 0&0&\cdots&1&b_{n,n-1}\end{array}\right|,\]
(We denote this term by \(D_{k,n,j}\) because it has the same structure as \(D_{k,n,j}^{(1)}\) but with \(b_{ij}^{(1)}\) replaced by \(b_{ij}\).) Going back to (40), we denote the determinant of the second addend on the right hand side as \(E_{k,n,j}^{(1)}\), following the
similarity with the determinant \(E_{k,n,j}\) defined in (20). Then, we have proved
\[D_{k,n,j}^{(1)}=D_{k,n,j}+\gamma_{n-k+j+1}b_{n-k+j,n-k}E_{k,n,j}^{(1)}, \tag{41}\]
where \(E_{k,k,k-1}^{(1)}=1\). Now, we can repeat with the second addend in (41) the same idea that we used with \(D_{k,n,j}\). That is, at each step we select a column, express all the coefficients \(b_{ij}^{(1)}\) from that column as \(b_{ij}^{(1)}=b_{ij}+\gamma b_{i-1,j}\) and split the determinant in two following the addition given by the former expression. Applying this process to the first column and recalling that \(b_{n-k+j+1,n-k+j+1}=1\), we can express \(E_{k,n,j}^{(1)}\) as
\[E_{k,n,j}^{(1)}=E_{k,n,j}+\gamma_{n-k+j+2}E_{k,n,j+1}^{(1)},\quad j=0,1,\dots, k-1. \tag{42}\]
Applying (42) to \(E_{k,n,j+1}^{(1)}\) we can rewrite (42) itself as
\[E_{k,n,j}^{(1)} =E_{k,n,j}+\gamma_{n-k+j+2}E_{k,n,j+1}+\gamma_{n-k+j+2}\gamma_{n- k+j+3}E_{k,n,j+2}^{(1)}\] \[=\dots=\sum_{r=1}^{k-j}(\gamma_{n-k+j+2}\gamma_{n-k+j+3}\dots \gamma_{n-k+j+r})E_{k,n,j+r-1},\]
where \(\gamma_{n-k+j+2}\gamma_{n-k+j+3}\dots\gamma_{n-k+j+r}=1\) if \(r=1\), and \(E_{k,n,k-1}=1\). Using this expression in (41) we reach
\[D_{k,n,j}^{(1)}=D_{k,n,j}+b_{n-k+j,n-k}\sum_{r=1}^{k-j}(\gamma_{n-k+j+1}\gamma _{n-k+j+2}\gamma_{n-k+j+3}\dots\gamma_{n-k+j+r})E_{k,n,j+r-1}. \tag{43}\]
Finally, we use (43) to rewrite (39) as
\[\Delta_{k,n,s}^{(1)} =\sum_{j=0}^{k-1}(-1)^{j}\binom{n-k+j}{s-1}D_{k,n,j}\] \[\quad+\sum_{j=0}^{k}(-1)^{j}\binom{n-k+j}{s-1}b_{n-k+j,n-k}\left[ \sum_{r=1}^{k-j}(\gamma_{n-k+j+1}\dots\gamma_{n-k+j+r})E_{k,n,j+r-1}\right].\]
Since the first addend is exactly \(\Delta_{k,n,s}\), the proof concludes here.
**Proof of Theorem 3:**
Using (1) and (18), we recall (see (15))
\[\lambda_{n}=\sum_{i=1}^{n}\binom{n}{i}i!a_{i,i}=\sum_{i=1}^{n}\binom{n}{i}i!a _{i,i}^{(1)},\quad n\in\mathbb{N}.\]
Then, taking \(n=1,2,\dots,\) we check \(a_{i,i}=a_{i,i}^{(1)}\) for each \(i\in\mathbb{N}\).
On the other hand, defining
\[\widetilde{\delta}_{n}^{(k)}=\sum_{i=k}^{n}\binom{n}{i}i!a_{i,i-k}^{(1)}, \quad k=0,1,\dots,n,\]
(see (10)) and applying Lemma 3 to \(L^{(1)}\), we obtain
\[\widetilde{\delta}_{n}^{(k)}=(-1)^{k+1}\sum_{s=1}^{n}\Delta_{k,n,s}^{(1)}s!a _{s,s}.\]
Then, using Lemma 4 and again Lemma 3,
\[\begin{split}\widetilde{\delta}_{n}^{(k)} &=\delta_{n}^{(k)} + (-1)^{k+1}\sum_{s=1}^{n}\sum_{j=0}^{k-1}(-1)^{j}\binom{n-k+j}{s-1}b_{n-k+j,n-k}\left[\sum_{r=1}^{k-j}\gamma_{n-k+j+1}\dots\gamma_{n-k+j+r}E_{k,n,j+r-1}\right]s!a_{s,s}\\ &=\delta_{n}^{(k)} + (-1)^{k+1}\sum_{j=0}^{k-1}(-1)^{j}\left[\sum_{s=1}^{n}\binom{n-k+j}{s-1}s!a_{s,s}\right]b_{n-k+j,n-k}\sum_{r=1}^{k-j}\gamma_{n-k+j+1}\dots\gamma_{n-k+j+r}E_{k,n,j+r-1}.\end{split}\]
Since \(\{P_{n}^{(1)}\}\) is the sequence of eigenpolynomials for \(L^{(1)}\), we have \(\widetilde{\delta}_{n}^{(k)}=0\) for \(n\geq k>\widetilde{N}\). Also, \(\delta_{n}^{(k)}=0\) for \(n\geq k>N\) (see (10)). Therefore, we arrive at (19).
## 4 Darboux transformations on the sequence of Hermite polynomials
Theorem 3 gives a necessary condition for the Darboux transformation to produce a new sequence of eigenpolynomials for some finite order differential operator with the same sequence of eigenvalues as \(L\). In this section we show that (19) is not verified in the case of Hermite polynomials. This proves that Darboux transformations, even in the case of classical orthogonal polynomials, may not lead to new families of eigenfunctions.
With the purpose of analyzing Theorem 3 when \(\{P_{n}(x)\}=\{H_{n}(x)\}\) are the Hermite polynomials, we study the sequence \(\{\gamma_{i}\}\) that defines \(\{P_{n}^{(1)}(x)\}=\{H_{n}^{(1)}(x)\}\) in this case.
**Theorem 4**.: _Let \(\{H_{n}(x)\}\) and \(\{H_{n}^{(1)}(x)\}\) be sequences that satisfy (17), that is,_
\[H_{n}^{(1)}(x)=H_{n}(x)+\gamma_{n}H_{n-1}(x). \tag{44}\]
_Then \(\{H_{n}^{(1)}(x)\}\) satisfies a \((p+2)\)-term recurrence relation as (5) if and only if (44) is a Geronimus transformation of \(\{H_{n}(x)\}\). In that case, \(p=1\) and we have_
\[\gamma_{m}=\gamma_{2}+\frac{1}{2\gamma_{1}}-\frac{m-1}{2\gamma_{m-1}},\quad m=2,3,\ldots \tag{45}\]
Proof.: Let \(T\) and \(J\) be the matrices defined in (30) and (27), respectively. Then, \(TJT^{-1}\) is a Hessenberg matrix, that is,
\[TJT^{-1}=\left(\begin{array}{cccccccc}\delta_{0,0}&1&0&\cdots&&&\\ \delta_{1,0}&\delta_{1,1}&1&0&\cdots&&&\\ \vdots&\vdots&\ddots&\ddots&\ddots&&&\\ \delta_{s,0}&\delta_{s,1}&&\delta_{s,s}&1&0&\cdots\\ \vdots&\vdots&&\vdots&\ddots&\ddots&\ddots\\ \end{array}\right)\]
where \(\delta_{r+s,r}=\gamma_{r+s}\alpha_{r+s-1,r}+\alpha_{r+s,r}-\gamma_{r+1}\delta_{r+s,r+1}\) for \(r,s=0,1,\ldots,\) and we understand \(\delta_{r,r+1}=1\). Here \(\alpha_{i,j}\) represent the entries of \(J\), which are given in (27). Thus, the main diagonal of \(TJT^{-1}\) is given as \(\delta_{r,r}=\gamma_{r}-\gamma_{r+1}\), \(r=0,1,\ldots\) (assuming \(\gamma_{0}=0\)).
\[\delta_{r+1,r}=\frac{r+1}{2}-\gamma_{r+1}\delta_{r+1,r+1}=\frac{r+1}{2}- \gamma_{r+1}\left(\gamma_{r+1}-\gamma_{r+2}\right),\quad r=0,1,\ldots\]
The second subdiagonal is
\[\delta_{r+2,r} = \gamma_{r+2}\frac{r+1}{2}-\gamma_{r+1}\frac{r+2}{2}+\gamma_{r+1} \gamma_{r+2}\left(\gamma_{r+2}-\gamma_{r+3}\right),\quad r=0,1,\ldots,\]
and the following subdiagonals are
\[\delta_{r+s,r}=-\gamma_{r+1}\delta_{r+s,r+1},\quad r=0,1,\ldots,\quad s\geq 3. \tag{46}\]
As in (31), we see that
\[\left(TJT^{-1}-xI\right)\left(\begin{array}{c}H_{0}^{(1)}(x)\\ H_{1}^{(1)}(x)\\ \vdots\\ \end{array}\right)=0.\]
If \(TJT^{-1}\) is the banded matrix corresponding to a \((p+2)\)-term recurrence relation for some \(p\in\mathbb{N}\) then
\[\sum_{k=n-p}^{n-1}\delta_{n,k}H_{k}^{(1)}(x)+\left(\delta_{n,n}-x\right)H_{n} ^{(1)}(x)+H_{n+1}^{(1)}(x)=0,\quad n=0,1,\ldots, \tag{47}\]
with \(\delta_{n,n-p}\neq 0\) for \(n=p,p+1,\ldots\) That is, \(\delta_{n,n-p-1}=0\) for \(n=p,p+1,\ldots,\) and also
\[\delta_{n,n-p-s}=0,\mbox{ for }n=p,p+1,\ldots,s\geq 1. \tag{48}\]
Under these conditions \(p\geq 1\), because \(p=0\) implies \(\delta_{n,n-1}=0\), that is, \(\frac{n}{2}=\gamma_{n}(\gamma_{n}-\gamma_{n+1}),\,n=1,2,\ldots,\) and then we would have \(\delta_{n,n-2}\neq 0\) in (46), which contradicts (48).
On the other hand, (46) indicates that it is not possible to have \(\delta_{n,n-p-1}=0\), if \(\delta_{n,n-p}\neq 0\) for \(n=p+1,p+2,\ldots,\,p\geq 2\). Consequently, \(p=1\) and (47) is a three-term recurrence relation.
Let \(J=CI+UL\) be a Geronimus factorization of \(J\) (see (32)). Since \(TJT^{-1}\) and \(TU\) are tridiagonal matrices, \(LT^{-1}\) must be a diagonal matrix. In addition, the entries in the diagonals of \(L\) and \(T^{-1}\) coincide, so we have \(L=T\). Therefore, \(TJT^{-1}=CI+LU\) is a Darboux (Geronimus) transformation of \(J\).
Reciprocally, if (44) is a Geronimus transformation of \(\{H_{n}(x)\}\) obviously \(TJT^{-1}=CI+TU\) is a tridiagonal matrix and the polynomials \(\{H_{n}^{(1)}(x)\}\) satisfy a three-term recurrence relation.
Finally, we assume that (44) holds. Then (46) provides
\[0=\gamma_{r+2}\frac{r+1}{2}-\gamma_{r+1}\frac{r+2}{2}+\gamma_{r+1}\gamma_{r+2 }\left(\gamma_{r+2}-\gamma_{r+3}\right),\quad r=0,1,\ldots,\]
that is,
\[\gamma_{r+3}=\gamma_{r+2}+\frac{r+1}{2\gamma_{r+1}}-\frac{r+2}{2\gamma_{r+2}} \quad r=0,1,\ldots \tag{49}\]
Note that (49) is equivalent to
\[\gamma_{m}+\frac{m-1}{2\gamma_{m-1}}=\gamma_{2}+\frac{1}{2\gamma_{1}},\quad m =3,4,\ldots,\]
as we wanted to prove.
We recall that in a Geronimus factorization we need to fix two parameters. In fact, firstly we choose \(C\in\mathbb{C}\) such that there exists the \(UL\) factorization of \(J-CI\). Then, \(UL\) depends on a second parameter. This implies that in (49) the sequence \(\{\gamma_{i}\}\) is determined in terms of \(\gamma_{1},\,\gamma_{2}\). In the sequel we assume
\[\gamma_{2}+\frac{1}{2\gamma_{1}}=0.\]
Thus, in view of (45), if (44) is a Geronimus transformation of \(\{H_{n}\}\) we have
\[\gamma_{m}\gamma_{m+1}=-\frac{m}{2},\quad m=1,2,\ldots, \tag{50}\]
and it is easy to check
\[\left.\begin{aligned} \gamma_{2m}&=-\frac{(2m-1)(2m-3)\cdots 5\cdot 3}{(2m-2)(2m-4)\cdots 4\cdot 2}\,\frac{1}{2\gamma_{1}}\\ \gamma_{2m+1}&=\frac{2m(2m-2)\cdots 4\cdot 2}{(2m-1)(2m-3)\cdots 5\cdot 3}\,\gamma_{1}\end{aligned}\right\},\quad m=1,2,\ldots \tag{51}\]
(writing \((2m-2)(2m-4)\cdots 4\cdot 2=1,\,(2m-1)(2m-3)\cdots 5\cdot 3=1,\) when \(m=1\)).
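As a concrete check, the recurrence (49), the relation (50) and the closed forms (51) can be verified with exact rational arithmetic. The following minimal Python sketch does so for the arbitrary choice \(\gamma_{1}=3\) (any nonzero value works), under the normalization \(\gamma_{2}+\frac{1}{2\gamma_{1}}=0\) assumed above.

```python
from fractions import Fraction as F
from math import prod

def gamma_seq(gamma1, nmax=14):
    # build {gamma_m} from gamma_m * gamma_{m+1} = -m/2, i.e. relation (50)
    g = {1: F(gamma1)}
    for m in range(1, nmax):
        g[m + 1] = F(-m, 2) / g[m]
    return g

g1 = F(3)          # arbitrary nonzero choice of gamma_1
g = gamma_seq(g1)

# recurrence (49): gamma_{r+3} = gamma_{r+2} + (r+1)/(2 gamma_{r+1}) - (r+2)/(2 gamma_{r+2})
for r in range(0, 10):
    assert g[r + 3] == g[r + 2] + F(r + 1) / (2 * g[r + 1]) - F(r + 2) / (2 * g[r + 2])

# closed forms (51), with empty products equal to 1 when m = 1
for m in range(1, 6):
    odd = prod(range(3, 2 * m, 2))        # (2m-1)(2m-3)...5*3
    even = prod(range(2, 2 * m - 1, 2))   # (2m-2)(2m-4)...4*2
    assert g[2 * m] == -F(odd, even) / (2 * g1)
    assert g[2 * m + 1] == F(prod(range(2, 2 * m + 1, 2)), odd) * g1
```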
In the case of Hermite polynomials, \(a_{11}=-2,a_{ii}=0,\forall i\geq 2\). Moreover, the coefficients of the polynomials were given in (28). Then (19) becomes \(-2\Sigma_{H}(n,k)\), where
\[\Sigma_{H}(n,k):=\sum_{\begin{subarray}{c}j=0\\ j\mbox{ even}\end{subarray}}^{k-1}\frac{(-1)^{j/2}}{2^{j}}\frac{(n-k+j)!}{(n -k)!(j/2)!}\sum_{r=1}^{k-j}\gamma_{n-k+j+1}\ldots\gamma_{n-k+j+r}E_{k,n,j+r-1}. \tag{52}\]
For the rest of this section, our goal will be to prove that (19) is not satisfied. In order to do this, we will show that \(\Sigma_{H}(n,k)\neq 0\) for \(n,\,k\) under the conditions of Theorem 3.
To compute the right hand side of (52), we will use the following result.
**Lemma 5**.: _Let us define_
\[S_{M}(m)=\sum_{r=0}^{M}\left(-\frac{1}{4}\right)^{r}\frac{(2m+2r-1)!}{r!(m+r-1)!} \binom{m-1+M}{m-1+r},\quad m,M\in\mathbb{N}.\]
_Then_
\[S_{M}(1) = \frac{(2\cdot 1-3)(2\cdot 2-3)\cdots(2M-3)}{M!2^{M}} \tag{53}\] \[S_{M}(m) = \frac{(2m-1)!}{(m-1)!}S_{M}(1) \tag{54}\]
_for \(m,M\in\mathbb{N}\). In particular we have \(S_{M}(m)\neq 0\) for all \(m,M\in\mathbb{N}\)._
Proof.: Using
\[\binom{m-1+M}{m-1+r}=\binom{m-2+M}{m-1+r}+\binom{m-2+M}{m-2+r}\]
it is easy to check
\[S_{M}(m)=2(2m-1)S_{M}(m-1),\quad m=1,2,\ldots,M=0,1,\ldots \tag{55}\]
Now, for each \(m,M\) we write
\[S_{M}(m)=\frac{(2m-1)!}{(m-1)!}f(m,M) \tag{56}\]
and since (55) we see \(f(m-1,M)=f(m,M)\). This means that \(f(M):=f(m,M)\) does not depend on \(m\). Furthermore, from (56) it is \(S_{M}(1)=f(M)\) and (56) becomes (54).
On the other hand, writing
\[\binom{M}{r}=\binom{M-1}{r-1}+\binom{M-1}{r},\quad r=0,1,\ldots,M \tag{57}\]
(where we assume \(\binom{M-1}{-1}=\binom{M-1}{M}=0\) ) and substituting this expression in
\[S_{M}(1)=\sum_{r=0}^{M}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r!)^{2}} \binom{M}{r},\]
we have
\[\begin{split}S_{M}(1)&=\sum_{r=1}^{M}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r!)^{2}}\binom{M-1}{r-1}+\sum_{r=0}^{M-1}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r!)^{2}}\binom{M-1}{r}\\ &=\sum_{r=0}^{M-1}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r!)^{2}}\left(-\frac{1}{2(r+1)}\right)\binom{M-1}{r}=-\frac{1}{2}\sum_{r=0}^{M-1}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r+1)!r!}\binom{M-1}{r}.\end{split} \tag{58}\]
We will show
\[S_{M}(1)=\frac{(2\cdot 1-3)(2\cdot 2-3)\cdots(2\cdot q-3)}{2^{q}}\sum_{r=0}^{M -q}\left(-\frac{1}{4}\right)^{r}\frac{(2r+1)!}{(r+q)!r!}\binom{M-q}{r},q=1, \ldots,M. \tag{59}\]
In fact, (58) is (59) for \(q=1\). Proceeding by induction, and considering
\[\binom{M-q}{r}=\binom{M-q-1}{r-1}+\binom{M-q-1}{r}\]
as in (57), we obtain (59) for \(q+1\) when we assume that (59) is satisfied for \(q\). In particular, for \(q=M\) in (59) we arrive at (53).
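As a quick numerical confirmation of Lemma 5, the definition of \(S_{M}(m)\) and the identities (53) and (54) can be checked with exact rational arithmetic for small values of \(m\) and \(M\); a minimal Python sketch follows.

```python
from fractions import Fraction as F
from math import comb, factorial, prod

def S(M, m):
    # S_M(m) as defined in Lemma 5
    return sum(F(-1, 4) ** r
               * F(factorial(2 * m + 2 * r - 1), factorial(r) * factorial(m + r - 1))
               * comb(m - 1 + M, m - 1 + r)
               for r in range(M + 1))

for M in range(0, 7):
    closed = F(prod(2 * q - 3 for q in range(1, M + 1)), factorial(M) * 2 ** M)  # eq. (53)
    assert S(M, 1) == closed
    for m in range(1, 7):
        assert S(M, m) == F(factorial(2 * m - 1), factorial(m - 1)) * S(M, 1)    # eq. (54)
        assert S(M, m) != 0
```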
Also with the aim to study (52), in the following lemma we analyze the determinant \(E_{k,n,j}\) defined in (20).
**Lemma 6**.: _If \(E_{k,n,j+r-1}\) is considered over the Hermite polynomials, then_
\[E_{k,n,j+r-1}=\begin{cases}0,&\text{if $k-j-r$ is odd,}\\ \dfrac{n!}{(n-k+j+r)!((k-j-r)/2)!2^{k-j-r}},&\text{if $k-j-r$ is even.}\end{cases}\]
Proof.: The order of the determinant \(E_{k,n,j+r-1}\) is \(k-j-r\). In the first place, if \(k-j-r\) is odd then \(E_{k,n,j+r-1}\) has an odd number of columns. Taking into account (28), \(E_{k,n,j+r-1}=\)
\[=\begin{vmatrix}0&-\frac{1}{2^{2}}\frac{(n-k+j+r+2)!}{(n-k+j+r+1)!}&0&\frac{1}{ 2^{4}}\frac{(n-k+j+r+4)!}{(n-k+j+r)!2!}&\dots&0\\ 1&0&-\frac{1}{2^{2}}\frac{(n-k+j+r+3)!}{(n-k+j+r+1)!}&0&\dots&\frac{(-1)^{\frac {k-j-r-1}{2}}}{2^{k-j-r-1}}\frac{n!}{(n-k+j+r+1)!((k-j-r-1)/2)!}\\ 0&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&&0&1&0&-\frac{1}{2^{2}}\frac{(n-1)!}{(n-3)!}\\ 0&\dots&\dots&0&1&0\end{vmatrix}\]
Expanding this determinant each time along their first \((k-j-r-1)/2\) odd columns we obtain a determinant of order \((k-j-r+1)/2\) whose entries in the last column are zero. Then \(E_{k,n,j+r-1}=0\), which proves the case \(k-j-r\) odd.
In the second place, if \(k-j-r\) is even then \(E_{k,n,j+r-1}=\)
\[=\begin{vmatrix}0&\frac{-1}{2^{2}}\frac{(n-k+j+r+2)!}{(n-k+j+r)!}&0&\frac{1}{ 2^{4}}\frac{(n-k+j+r+4)!}{(n-k+j+r)!2!}&\dots&\frac{(-1)^{\frac{k-j-r}{2}}}{2 ^{k-j-r}}\frac{n!}{(n-k+j+r)!((k-j-r)/2)!}\\ 1&0&\frac{-1}{2^{2}}\frac{(n-k+j+r+3)!}{(n-k+j+r+1)!}&0&\dots&0\\ 0&\ddots&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ \vdots&&&&1&0&-\frac{1}{2^{2}}\frac{n!}{(n-2)!}\\ 0&\dots&\dots&\dots&1&0\end{vmatrix}\]
Expanding each time along the odd columns, \(E_{k,n,j+r-1}=\)
\[=(-1)^{\frac{k-j-r}{2}}\begin{vmatrix}\frac{1}{2^{2}}\frac{(n-k+j+r+2)!}{(n-k+j+r)!}&\frac{1}{2^{4}}\frac{(n-k+j+r+4)!}{(n-k+j+r)!2!}&\dots&\frac{(-1)^{\frac{k-j-r}{2}}}{2^{k-j-r}}\frac{n!}{(n-k+j+r)!((k-j-r)/2)!}\\ 1&\frac{-1}{2^{2}}\frac{(n-k+j+r+4)!}{(n-k+j+r+2)!}&\dots&\frac{(-1)^{\frac{k-j-r-2}{2}}}{2^{k-j-r-2}}\frac{n!}{(n-k+j+r+2)!((k-j-r-2)/2)!}\\ &1&\ddots&\vdots\\ &&\ddots&\vdots\\ &&&-\frac{1}{2^{2}}\frac{n!}{(n-2)!}\end{vmatrix}.\]
Taking firstly a common denominator out of each row and then a common numerator out of each column,
we have, after simplifying
\[E_{k,n,j+r-1}=\frac{(-1)^{\frac{k-j-r}{2}}n!}{2^{k-j-r}(n-k+j+r)!}\left|\begin{array} []{ccccccc}-1&\frac{1}{2!}&-\frac{1}{3!}&\cdots&\cdots&\frac{(-1)^{\frac{k-j-r} {2}}}{((k-j-r)/2)!}\\ 1&-1&\frac{1}{2!}&\cdots&\cdots&\frac{(-1)^{\frac{k-j-r-2}{2}}}{((k-j-r+2)/2)!} \\ 0&1&-1&\ddots&&\frac{(-1)^{\frac{k-j-r-4}{2}}}{((k-j-r+4)/2)!}\\ &0&1&-1&\ddots&\frac{(-1)^{\frac{k-j-r-6}{2}}}{((k-j-r+6)/2)!}\\ &&\ddots&\ddots&\ddots&\vdots\\ &&&0&1&-1\end{array}\right|.\]
Furthermore, it is easy to prove that the value of the determinant on the right hand side is \(\frac{(-1)^{\frac{k-j-r}{2}}}{\left(\frac{k-j-r}{2}\right)!}\) (expanding the determinant along its last column). With this, the case \(k-j-r\) even is also proved.
Using the above lemmas, the expression of \(\Sigma_{H}(n,k)\) in (52) is transformed into
\[\Sigma_{H}(n,k)=\sum_{\begin{subarray}{c}j=0\\ j\text{ even}\end{subarray}}^{k-1}\frac{(-1)^{j/2}}{2^{j}}\frac{(n-k+j)!}{(n-k )!(j/2)!}\sum_{\begin{subarray}{c}s=0\\ s\text{ even}\end{subarray}}^{k-j-1}\gamma_{n-k+j+1}\cdots\gamma_{n-s}\frac{n!}{(n-s)!(s/2)!2^{s}}. \tag{60}\]
If \(\{H_{n}^{(1)}\}\) were a sequence of eigenpolynomials for some differential operator \(L^{(1)}\) with the same sequence \(\{-2n\}\) of eigenvalues corresponding to \(\{H_{n}\}\) then Theorem 3 would imply the existence of some \(\widetilde{N}\in\mathbb{N}\), \(\widetilde{N}\geq 2\), such that \(\Sigma_{H}(n,k)=0\) for \(n\geq k>\widetilde{N}\) (see (18)). However, in the following theorem we prove that the former necessary condition does not hold.
**Theorem 5**.: _Let \(n\) be even and \(k\) be odd, \(n\geq k\geq 3\). Then \(\Sigma_{H}(n,k)\neq 0\)._
Proof.: Let \(n\) and \(k\) be fixed numbers that satisfy the hypothesis. Then, (60) can be rewritten as
\[\Sigma_{H}(n,k)=\frac{n!}{(n-k)!}\sum_{\begin{subarray}{c}j=0\\ j\text{ even}\end{subarray}}^{k-1}\frac{(-1)^{j/2}}{2^{j}(j/2)!}\sum_{ \begin{subarray}{c}s=0\\ s\text{ even}\end{subarray}}^{k-j-1}\left(\frac{\gamma_{n-k+j+1}}{n-k+j+1} \right)\left(\frac{\gamma_{n-k+j+2}}{n-k+j+2}\right)\cdots\left(\frac{\gamma_ {n-s}}{n-s}\right)\frac{1}{2^{s}(s/2)!} \tag{61}\]
where \(\left(\frac{\gamma_{n-k+j+1}}{n-k+j+1}\right)\left(\frac{\gamma_{n-k+j+2}}{n- k+j+2}\right)\cdots\left(\frac{\gamma_{n-s}}{n-s}\right)\) has an odd number \(k-j-s\) of factors that can be grouped as
\[\frac{\gamma_{n-k+j+1}}{n-k+j+1}\left(\frac{\gamma_{n-k+j+2}\gamma_{n-k+j+3}}{ (n-k+j+2)(n-k+j+3)}\right)\cdots\left(\frac{\gamma_{n-s-1}\gamma_{n-s}}{(n-s- 1)(n-s)}\right).\]
Due to (50), the above product is equal to
\[\frac{\gamma_{n-k+j+1}}{(n-k+j+1)F(j,s)}\left(-\frac{1}{2}\right)^{\frac{k-j- s-1}{2}},\]
where \(F(j,s):=(n-k+j+3)(n-k+j+5)\cdots(n-s)\). Substituting in (61) and using this notation,
\[\Sigma_{H}(n,k) = \frac{n!}{(n-k)!}\left(-\frac{1}{2}\right)^{\frac{k-1}{2}}\sum_{ \begin{subarray}{c}j=0\\ j\text{ even}\end{subarray}}^{k-1}\frac{1}{2^{j/2}(j/2)!}\left(\frac{\gamma_ {n-k+j+1}}{n-k+j+1}\right)\sum_{\begin{subarray}{c}s=0\\ s\text{ even}\end{subarray}}^{k-j-1}\frac{(-1)^{s/2}}{2^{s/2}(s/2)!F(j,s)}\] \[= \frac{n!}{(n-k)!}\left(-\frac{1}{2}\right)^{\frac{k-1}{2}}\sum_{ r=0}^{k-1}\frac{\gamma_{n-k+2r+1}}{2^{r}r!}\sum_{q=0}^{k-1}\frac{(-1)^{q}}{2^{q}q!F(2r -2,2q)}.\]
Since \(n\) is even and \(k\) is odd, necessarily \(n\geq k+1\). Hence, applying (51),
\[\gamma_{n-k+2r+1}=-\frac{(n-k+2r)(n-k+2r-2)\cdots 5\cdot 3}{(n-k+2r-1)(n-k+2r-3) \cdots 4\cdot 2}\left(\frac{1}{2\gamma_{1}}\right)=-\frac{F(k-n,k-2r)}{2\gamma_{ 1}F(k-n-1,k-2r+1)}.\]
Moreover,
\[F(k-n-1,k-2r+1)F(2r-2,2q)=F(k-n-1,2q)=2^{\frac{n}{2}-q}\left(\frac{n}{2}-q \right)!\]
and
\[F(k-n,k-2r)=\frac{(n-k+2r)!}{2^{\frac{n-k-1}{2}+r}\left(\frac{n-k-1}{2}+r \right)!}.\]
Therefore,
\[\Sigma_{H}(n,k)=\frac{n!}{\gamma_{1}(n-k)!2^{n}}\sum_{r=0}^{\frac{k-1}{2}} \frac{(n-k+2r)!}{2^{2r}\left(\frac{n-k-1}{2}+r\right)!r!}\sum_{q=0}^{\frac{k-1 }{2}-r}\frac{(-1)^{q+1}}{q!\left(\frac{n}{2}-q\right)!}. \tag{62}\]
On the other hand, it is easy to see, using induction on \(s\), that
\[\sum_{i=0}^{m-s}(-1)^{i}\binom{m}{i}=(-1)^{m-s}\binom{m-1}{s-1},\quad 1\leq s \leq m.\]
Considering this and
\[\sum_{q=0}^{\frac{k-1}{2}-r}\frac{(-1)^{q}}{q!\left(\frac{n}{2}-q\right)!}= \frac{1}{(n/2)!}\sum_{q=0}^{\frac{k-1}{2}-r}(-1)^{q}\binom{n/2}{q},\]
and using the notation of (53)-(54), in (62) we have
\[\begin{split}\Sigma_{H}(n,k)&=\frac{n!(-1)^{\frac{k+1}{2}}}{\gamma_{1}(n/2)!(n-k)!2^{n}}\sum_{r=0}^{\frac{k-1}{2}}\frac{(-1)^{r}(n-k+2r)!}{2^{2r}\left(\frac{n-k-1}{2}+r\right)!r!}\binom{\frac{n}{2}-1}{\frac{n-k-1}{2}+r}\\ &=\frac{n!(-1)^{\frac{k+1}{2}}}{\gamma_{1}(n/2)!(n-k)!2^{n}}S_{\frac{k-1}{2}}\left(\frac{n-k+1}{2}\right)\neq 0\end{split}\]
as we wanted to prove.
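The final step of the proof, namely that (62) agrees with the closed form above and does not vanish, can also be confirmed numerically. The sketch below uses exact rationals, the arbitrary choice \(\gamma_{1}=3\), and re-implements \(S_{M}(m)\) from Lemma 5.

```python
from fractions import Fraction as F
from math import comb, factorial

def S(M, m):                      # S_M(m) from Lemma 5
    return sum(F(-1, 4) ** r
               * F(factorial(2 * m + 2 * r - 1), factorial(r) * factorial(m + r - 1))
               * comb(m - 1 + M, m - 1 + r) for r in range(M + 1))

def sigma_62(n, k, g1):           # right-hand side of (62)
    total = F(0)
    for r in range((k - 1) // 2 + 1):
        inner = sum(F((-1) ** (q + 1), factorial(q) * factorial(n // 2 - q))
                    for q in range((k - 1) // 2 - r + 1))
        total += F(factorial(n - k + 2 * r),
                   2 ** (2 * r) * factorial((n - k - 1) // 2 + r) * factorial(r)) * inner
    return F(factorial(n)) / (g1 * factorial(n - k) * 2 ** n) * total

def sigma_closed(n, k, g1):       # closed form obtained at the end of the proof
    pref = (-1) ** ((k + 1) // 2) * F(factorial(n)) / (g1 * factorial(n // 2) * factorial(n - k) * 2 ** n)
    return pref * S((k - 1) // 2, (n - k + 1) // 2)

g1 = F(3)
for n in range(4, 13, 2):               # n even
    for k in range(3, n + 1, 2):        # k odd, 3 <= k <= n
        assert sigma_62(n, k, g1) == sigma_closed(n, k, g1) != 0
```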
## 5 Conclusions
In this paper, the sequences \(\{\delta_{n}^{(k)}\}\), \(n\in\mathbb{N}\), \(k=1,2,\ldots,n\), are introduced, which provide an important connection between the coefficients of the polynomials \(\{P_{n}\}\) and the coefficients \(\{a_{n}\}\) defining a differential operator \(L\) as in (1). In the first place, given a differential operator \(L\), this connection allows us, under certain conditions, to guarantee the existence and uniqueness of its eigenvalues and eigenfunctions, and at the same time it leads to explicit expressions for them. In the second place, we have derived a necessary condition for a linear transformation to provide a new family of eigenpolynomials for some finite order operator. Finally, we have tested this necessary condition in the particular case of Geronimus transformations applied to Hermite polynomials, which led us to the conclusion that the Geronimus transforms of the Hermite polynomials are not eigenfunctions of any new differential operator.
**Declaration of competing interest**
There is no competing interest. |
2309.07525 | SingFake: Singing Voice Deepfake Detection | The rise of singing voice synthesis presents critical challenges to artists
and industry stakeholders over unauthorized voice usage. Unlike synthesized
speech, synthesized singing voices are typically released in songs containing
strong background music that may hide synthesis artifacts. Additionally,
singing voices present different acoustic and linguistic characteristics from
speech utterances. These unique properties make singing voice deepfake
detection a relevant but significantly different problem from synthetic speech
detection. In this work, we propose the singing voice deepfake detection task.
We first present SingFake, the first curated in-the-wild dataset consisting of
28.93 hours of bonafide and 29.40 hours of deepfake song clips in five
languages from 40 singers. We provide a train/validation/test split where the
test sets include various scenarios. We then use SingFake to evaluate four
state-of-the-art speech countermeasure systems trained on speech utterances. We
find these systems lag significantly behind their performance on speech test
data. When trained on SingFake, either using separated vocal tracks or song
mixtures, these systems show substantial improvement. However, our evaluations
also identify challenges associated with unseen singers, communication codecs,
languages, and musical contexts, calling for dedicated research into singing
voice deepfake detection. The SingFake dataset and related resources are
available at https://www.singfake.org/. | Yongyi Zang, You Zhang, Mojtaba Heydari, Zhiyao Duan | 2023-09-14T08:49:05Z | http://arxiv.org/abs/2309.07525v2 | # SingleFake: Singing Voice Deepfake Detection
###### Abstract
The rise of singing voice synthesis presents critical challenges to artists and industry stakeholders over unauthorized voice usage. Unlike synthesized speech, synthesized singing voices are typically released in songs containing strong background music that may hide synthesis artifacts. Additionally, singing voices present different acoustic and linguistic characteristics from speech utterances. These unique properties make singing voice deepfake detection a relevant but significantly different problem from synthetic speech detection. In this work, we propose the singing voice deepfake detection task. We first present SingFake, the first curated in-the-wild dataset consisting of 28.93 hours of bonafide and 29.40 hours of deepfake song clips in five languages from 40 singers. We provide a train/val/test split where the test sets include various scenarios. We then use SingFake to evaluate four state-of-the-art speech countermeasure systems trained on speech utterances. We find these systems lag significantly behind their performance on speech test data. When trained on SingFake, either using separated vocal tracks or song mixtures, these systems show substantial improvement. However, our evaluations also identify challenges associated with unseen singers, communication codecs, languages, and musical contexts, calling for dedicated research into singing voice deepfake detection. The SingFake dataset and related resources are available online1.
Footnote 1: Dataset list, audio demo, pre-trained models, etc. will be available upon camera-ready.
Yongyi Zang*, You Zhang*, Mojtaba Heydari, Zhiyao Duan Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
singing voice deepfake detection, anti-spoofing, dataset, singing voice separation
## 1 Introduction
_"I mean really, how do you fight with someone who is putting out new albums in the time span of minutes."_
-- Stefanie Sun [1]
Quoted from a prominent Singaporean singer, this remark underscores a rapidly emerging challenge in the modern music industry and cultural landscape: the surge of AI-generated singing voices. With the development of singing voice synthesis techniques, AI-generated singing voices sound increasingly natural, align well with the music scores, and can clone any singer's voice with a small amount of training data. Such techniques have been made more accessible with open-source singing voice synthesis and voice conversion projects, such as VISinger [2] and DiffSinger [3]. These techniques and their wide accessibility have raised concerns for artists, record labels, and publishing houses. For example, unauthorized synthetic productions mimicking a singer could potentially undermine the singer's commercial value, leading to potential copyright and licensing disputes. The ever-increasing societal apprehensions accentuate the urgency for developing methods to accurately detect deepfake singing voices.
As singing voice is a type of human vocalization, it is intuitive to explore solutions from an analogous research domain, namely, speech deepfake detection, often referred to as voice spoofing countermeasures (CM). Existing research has been investigating different methods to discern speech spoofing attacks from bonafide human speech. Significant progress has been made in recent years. Contemporary state-of-the-art systems have showcased commendable performance, with some [4, 5, 6] achieving Equal Error Rate (EER) below 1% on ASVspoof2019 [7] test partitions. However, CM systems still suffer from generalization issues to unseen attacks and diverse acoustic environments, having shown strong degradation when evaluated on in-the-wild data [8, 9].
Singing voice deepfake detection, on the other hand, poses a distinct set of challenges not presented in speech. First, different from speech, singing voices typically follow a specific melody or rhythmic pattern, which significantly affects the pitch and duration of different phonemes. Second, singers tend to use more artistic voicing traits and a wider range of timbre in singing than in speaking; the diversity of voicing traits and timbre is further increased due to the large variety of musical genres. Last but not least, while speech signals are often straightforward recordings, singing voices often undergo extensive digital signal processing and are mixed with musical instrumental accompaniments. Recognizing these unique challenges of detecting spoofed singing voices, we question whether countermeasures developed for speech can be directly applied to singing voice spoof detection.
In this paper, we propose the Singing Voice Deepfake Detection (SVDD) task. As the first step, we curate the first in-the-wild dataset named SingFake to support this task. The SingFake dataset contains 28.93 hours of bonafide and 29.40 hours of deepfake song clips gathered from popular user-generated content platforms. Spanning five languages, we collect clips from 40 distinct singers and their AI counterparts. Additionally, we use a source separation model (Demucs [10]) to extract singing vocals from song mixtures, allowing us to examine the effects of singing vocals and song mixtures for SVDD systems separately. We also provide a train/val/test split, where the test set contains a diverse set of scenarios, including unseen singers, languages, communication codecs, and musical contexts. With SingFake, we evaluate four types of leading speech countermeasure systems. We first use their models pretrained on speech utterances, and test them on the test split of SingFake. Results show a notable performance degradation compared to their performance on the ASVspoof2019 benchmark, on both song mixtures and separated vocals. We then retrain these systems on the training split of SingFake in two conditions, on separated vocal tracks and song mixtures, and test them on the test split. Results show significant improvement over the models trained on speech data. More detailed analyses of the results reveal challenges associated with un
seen singers, communication codecs, languages, and musical contexts, underscoring the need for more focused research on crafting robust singing voice deepfake detection systems.
During the process of writing this manuscript, we discovered another very recent study [11] that assesses the performance of speech CMs on clean singing voices and mixtures of them with instrumental music for Chinese songs. The study offers insight into speech CMs' capacity to learn deepfake cues under controlled ideal conditions, while our work focuses on more challenging in-the-wild scenarios.
## 2 SingFake dataset curation
### Data collection and annotation
We source deepfake singing samples from publicly available popular user-generated content websites where users upload both bonafide and deepfake samples of singing. For every deepfake song sample, we manually annotate its metadata: AI singer, language, website, and label it as a "deepfake" song. The deepfake generation models, if disclosed, are also annotated. We then collect bonafide samples from the corresponding real singer and annotate them as "bonafide". During the annotation process, we observe that most people use SoftVC-VITS2 with different versions. The same uploader usually uses the same model to generate AI singer(s). As manual annotation is a tedious and error-prone process, to ensure the accuracy of metadata labels and correct potential inaccuracies, we employ GPT4 [12] to verify the annotations against song titles and descriptions. We then manually reviewed any discrepancy found by GPT4. This process yields our SingFake dataset, capturing realistic deepfake nuances prevalent in online communities. SingFake contains 40 genuine singers and their corresponding AI singers, with 634 bonafide songs and 671 deepfake songs across multiple languages.
Footnote 2: [https://github.com/svc-develop-team/so-vits-svc](https://github.com/svc-develop-team/so-vits-svc)
### Dataset splits
Our primary guideline was to ensure that the singers were distinct across each section; therefore, we partitioned the data into train/val/test splits. Also, to form a comprehensive evaluation on the robustness of SVDD systems, we employ test subsets T01-T04 with increasing difficulties. Notably, there were many samples from the singer "Stefanie Sun", so we set aside a portion of them, creating the T01 testing scenario to evaluate the system's performance on a seen-in-training singer. T02 testing set contains six unseen-in-training singers, while the T03 testing condition simulates the effect of lossy communication channels by transmitting T02 through 4 compression codecs: MP3 128 Kbps, AAC 64 Kbps, OPUS 64 Kbps, and Vorbis 64 Kbps. The Persian singers stood out as they contain mostly Persian language while showing different musical styles. To investigate the effects of potential disparities in language and musical style, we allocate the Persian singers to a separate T04 test set. As T04 is collected from different platforms from other testing conditions, we believe T04 also contains unseen codecs.
The rest of the dataset is split between training and validation, maintaining a rough song ratio of 6:1:1:2 between training, validation, T01, and T02 subsets. Together, these subsets offer a comprehensive evaluation of the singing voice deepfake detection systems. The final partitioning is illustrated in different colors in Figure 1.
### Data processing
Much of the data we collected comprises pop songs, which typically feature both instrumental sections, such as Intro, Outro and Interlude; and sections containing vocals, such as Verse and Chorus. For the purpose of singing voice deepfake detection, our primary interest lies in segments containing vocals, as only the singing voices are synthesized, rendering pure instrumental sections irrelevant for the SVDD task. Consequently, we narrowed our focus to regions with active singing, treating each as a distinct song clip.
To extract active regions, we first employed the state-of-the-art music source separation model, Demucs [10], which is built on a U-Net convolutional architecture. This model is adept at separating drums, bass, and vocals from other accompaniments. We utilized its open-source implementation3 and specifically employed the checkpoint trained on the MusDB [13] dataset with extra training data.
Figure 1: SingFake dataset partition. Each color represents a subset, and each slice denotes an AI singer. T03 is excluded here since it contains the same song clips as T02 but is repeated 4 times through 4 different codecs.
\begin{table}
\begin{tabular}{c|l c c c} \hline \hline Splits & Description & \# Singers & Languages (Sorted by percentages in the splits) & \# Clips (Real / Fake) \\ \hline Train & Training set & 12 & Mandarin, Cantonese, Japanese, English, Others & 5251 / 4519 \\ Val & Validation set (unseen singers) & 4 & Mandarin, Cantonese, English, Spanish, Japanese & 1089 / 543 \\ T01 & Test set for seen singer Stefanie Sun & 1 & Mandarin, Cantonese, Japanese, English, Others & 370 / 1208 \\ T02 & Test set for unseen singers & 6 & Cantonese, Mandarin, Japanese & 1685 / 1006 \\ T03 & T02 over 4 communication codecs & 6 & Cantonese, Mandarin, Japanese & 6740 / 4024 \\ T04 & Test set for Persian musical context & 17 & Persian, English & 353 / 166 \\ \hline \hline \end{tabular}
\end{table}
Table 1: SingFake statistics for each split.
Notably, this particular checkpoint secured the 2nd position on track B of the MDX challenge [14]. We use the separated vocals from Demucs as the source of our separated song clips.
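For readers wishing to reproduce this separation step, a call to the Demucs command-line interface along the following lines can be used. This is only an illustrative sketch and not the exact command used for SingFake; the model name `mdx_extra` and the output path layout are assumptions based on the public Demucs release and should be checked against its documentation.

```python
import subprocess
from pathlib import Path

def separate_vocals(song_path: str, out_dir: str = "separated") -> Path:
    """Call the Demucs CLI to split a song into 'vocals' and 'no_vocals' stems."""
    subprocess.run(
        ["demucs", "--two-stems", "vocals", "-n", "mdx_extra", "-o", out_dir, song_path],
        check=True,
    )
    # Demucs writes <out_dir>/<model>/<song_name>/vocals.wav (path layout may vary by version)
    return Path(out_dir) / "mdx_extra" / Path(song_path).stem / "vocals.wav"

vocals = separate_vocals("example_song.mp3")  # hypothetical input file
```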
Next, the separated vocals are processed through the Voice Activity Detection (VAD) pipeline from PyAnnote [15], which provides us with the timecodes for segmentation. These timecodes are subsequently used to segment both mixtures and vocals into individual song clips. All clips are resampled to 16 kHz during training and inference. For those songs originally in stereo, we maintained the stereo quality, but chose a random channel for each clip during training. The average length for clips in the dataset is 13.75 seconds.
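A rough sketch of this segmentation step is given below for illustration. It assumes the publicly released `pyannote/voice-activity-detection` pretrained pipeline (recent pyannote.audio versions may require an access token) together with librosa and soundfile for audio I/O, and it is not the exact pipeline used to build SingFake.

```python
import librosa
import soundfile as sf
from pyannote.audio import Pipeline

vad = Pipeline.from_pretrained("pyannote/voice-activity-detection")  # may need use_auth_token=...

def segment_clips(vocals_path, mixture_path, out_prefix, sr=16000):
    """Run VAD on the separated vocals, then cut both vocals and mixture with the same timecodes."""
    speech_regions = vad(vocals_path).get_timeline().support()
    vocals, _ = librosa.load(vocals_path, sr=sr, mono=True)
    mixture, _ = librosa.load(mixture_path, sr=sr, mono=True)
    for i, region in enumerate(speech_regions):
        a, b = int(region.start * sr), int(region.end * sr)
        sf.write(f"{out_prefix}_vocals_{i:03d}.wav", vocals[a:b], sr)
        sf.write(f"{out_prefix}_mixture_{i:03d}.wav", mixture[a:b], sr)
```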
The final statistics for all subsets including the splits at clip-level are shown in Table 1. We open-source the datasheet including original user-uploaded media links and our metadata annotations, dataset split generation and data processing code4.
Footnote 4: www.singfake.org
## 3 Experiments
In this section, we first evaluate existing speech spoofing countermeasure systems using SingFake. Subsequently, we retrain these systems from scratch using the SingFake training set and assess their performance across various test scenarios with our dataset splits.
### Experimental setup
We construct four state-of-the-art systems that have demonstrated remarkable performance on speech datasets, representing different levels of input feature abstraction. This allows us to assess the expressiveness of these features on both speech and singing spoof detection tasks.
**Model architectures**: **AASIST**[4] uses raw-waveform as feature, leverages graph neural networks and incorporates spectro-temporal attention. **Spectrogram+ResNet** uses a linear spectrogram extracted with 512-point FFT, with a hop size of 10 ms. We feed the extracted spectrogram into the ResNet18 [16] architecture. **LFCC+ResNet**[17] uses Linear-Frequency Cepstral Coefficients (LFCC) as speech features, then feeds the LFCC into the ResNet18 model. The 60-dim LFCCs are extracted from each frame of the utterances, with frame length set to 20ms and hop size 10ms. **Wav2vec2+AASIST**[18] is a model leveraging Wav2Vec2 [19], a self-supervised front-end trained on large-scale external speech datasets. Note that we removed the RawBoost data augmentation module from the original paper [18] for fair comparisons between methods, since no other method has such augmentation.
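As an illustration of the simplest of these baselines, the Spectrogram+ResNet configuration (512-point FFT, 10 ms hop at 16 kHz) can be sketched with torchaudio and torchvision as follows. This is a minimal sketch rather than our exact implementation; in particular, the log compression of the spectrogram is an assumption.

```python
import torch
import torchaudio
from torchvision.models import resnet18

# linear spectrogram: 512-point FFT, 10 ms hop at 16 kHz (hop_length = 160 samples)
spectrogram = torchaudio.transforms.Spectrogram(n_fft=512, hop_length=160, power=2.0)

backbone = resnet18(num_classes=2)                      # bonafide vs. deepfake
backbone.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7,  # accept a single spectrogram channel
                                 stride=2, padding=3, bias=False)

wave = torch.randn(8, 4 * 16000)                          # a dummy batch of 4-second clips
feats = torch.log(spectrogram(wave) + 1e-6).unsqueeze(1)  # (batch, 1, freq, time)
scores = backbone(feats)                                  # two logits per clip
```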
**Evaluation metric**: Each system produces a score for each utterance, indicating the confidence that the given utterance is bonafide. The Equal Error Rate (EER) is determined by setting a threshold on the produced scores, ensuring that the false acceptance rate matches the false rejection rate. EER is a widely used metric for biometric verification systems, and we think it is a good metric for SVDD as well.
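For reference, the EER can be computed from per-utterance scores as in the following minimal sketch, which treats bonafide as the positive class with higher scores (an assumption of the sketch rather than a requirement of the metric).

```python
import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    """labels: 1 = bonafide, 0 = spoof/deepfake; scores: larger means 'more bonafide'."""
    fpr, tpr, _ = roc_curve(labels, scores, pos_label=1)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))   # operating point where the two error rates cross
    return 0.5 * (fpr[idx] + fnr[idx])

# toy example
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.3, 0.4, 0.2, 0.1])
print(f"EER = {100 * compute_eer(labels, scores):.2f}%")
```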
### Speech CM heavily degrades on SVDD task
We train and validate all speech CM systems on the speech dataset ASVspoof 2019 logical access (LA) [7] for 100 epochs. The model checkpoint with the best validation performance is selected for evaluation. We use the same train/dev/eval splits as ASVspoof 2019 LA. To form batches, we use 4 seconds of audio. We use repeat padding for shorter trials, and we randomly choose consecutive 4 seconds for longer trials. All of the CM systems achieve good performance on ASVspoof 2019 LA evaluation data, as shown in Table 2.
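The 4-second batching described above amounts to a simple pad-or-crop operation per trial, sketched below; the deterministic crop at evaluation time is an assumption, since only the training-time strategy is specified here.

```python
import numpy as np

def fix_duration(wave, sr=16000, seconds=4, training=True):
    """Repeat-pad short trials; take a random consecutive window from long ones."""
    target = sr * seconds
    if len(wave) < target:
        reps = int(np.ceil(target / len(wave)))
        return np.tile(wave, reps)[:target]
    if training:
        start = np.random.randint(0, len(wave) - target + 1)
        return wave[start:start + target]
    return wave[:target]     # assumed deterministic crop at evaluation time
```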
We then test them on the T02 condition of SingFake to evaluate their performance on singing data. All systems show heavy degradation as shown in Table 2. The EERs are near 50% on song mixtures, indicating that the speech deepfake detection systems are not able to distinguish real singers and their corresponding AI singers in the presence of accompanying music. Interestingly, both spectrogram-based and raw-waveform-based systems achieved around 38% EER on the separated singing vocals, much better than the results on song mixtures. This might be due to the fact that singing vocals are more similar to speech compared to song mixtures since there would be nearly no music accompaniment present after separation. However, the LFCC and Wav2Vec2-based systems are still performing near 50% EER, indicating that these speech features tend to overfit more to the speech data and cannot generalize to singing voices.
### Training on singing voices improves SVDD performance
To investigate whether training on our curated SingFake dataset improves singing voice deepfake detection (SVDD) performance, we trained models under two conditions: using full song mixtures versus separated singing vocals. Training on mixtures provides raw information, while training on separated vocals reduces instrumental distraction but may introduce separation artifacts that mask deepfake cues.
As shown in Table 3, SVDD performance declined from the training set (all seen) to T01 (seen singers, unseen songs) to T02 (unseen singers, unseen songs), indicating increasing task difficulty. All systems achieved good training set performance, showing that SingFake is helpful in learning the SVDD task. We also observed that the LFCC+ResNet system achieved the lowest training set performance on mixtures and the second-best performance on separated vocals, suggesting that the instrumental interference may heavily hurt the spectral envelope. However, the noticeable T02 performance decline highlights the challenge of generalizing SVDD to new singers. T01 performance fell between training and T02, suggesting that deepfakes of seen singers are easier to detect in new songs than those of unseen singers.
Compared to CM systems trained on speech, those trained on SingFake have better performance in terms of EER on T02, suggesting that the systems trained on SingFake are better at detecting singing deepfakes. The systems trained on separated vocals in general achieve better performance than those trained on mixtures except Wav2Vec2+AASIST. This suggests that separated singing voices could highlight artifacts for detecting singing deepfakes.
Our results indicate that the Wav2Vec2+AASIST model excels in learning directly from song mixtures, delivering the most superior performance and robustness among all tested systems, similar to results reported for other tasks [18, 20]
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Method**} & **ASVspoof2019** & **SingFake-T02** \\ & **LA - Eval** & **Mixture** & **Vocals** \\ \hline AASIST & 0.83 & 58.12 & 37.91 \\ Spectrogram+ResNet & 4.57 & 51.87 & 37.65 \\ LFCC+ResNet & 2.41 & 45.12 & 54.88 \\ Wav2Vec2+AASIST & 7.03 & 56.75 & 57.26 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test results on speech and singing voice with CM systems trained on speech utterance from ASVspoof2019LA (EER (%)).
### SVDD systems show limited robustness to unseen scenarios
While the training set, T01 and T02 represent increasingly out-of-distribution sets at the singer/song-clip level, the T03 and T04 sets are designed to evaluate performance in two challenging real-world situations: unseen communication codecs and unseen languages/musical contexts. Significant performance degradation under varying transmission and telecommunication codecs has been observed and well studied for speech CM systems [21, 9]. However, when testing our systems on the T03 condition, the performance drop was not as large as anticipated. As social media platforms typically employ a diverse set of audio compression codecs to more efficiently stream and deliver user-uploaded content, we believe the SingFake data we collected has already been processed by such codecs. Thus, when training the SVDD system, the model inherently learns to form a more robust representation that generalizes well across lossy audio compression algorithms.
At the same time, we observe significant performance degradation across all SVDD systems on T04, which is noticeably more pronounced than on both T02 and T03. T04 and T03 vary by unseen language and musical context, hinting that challenges posed by these attributes are still prominent for SVDD systems.
## 4 Discussions
The ability of AI to synthesize highly realistic singing voices demonstrates major technological progress. However, this realism also understandably creates public distrust, sometimes prompting calls to ban such technologies entirely. But stopping advancement is rarely the answer. We believe transparency around content origins is key for establishing public trust, and more research into SVDD systems will allow users to make informed decisions about synthesized content. In this section, we summarize our findings on the strengths and weaknesses of SVDD systems.
**Unseen communication codecs.** Speech deepfakes are often created to spoof someone's identity or spread misinformation. In contrast, deepfake singing voices may aim more for entertainment or novelty rather than explicit deception. The difference in aim of creation would cause speech CM systems and SVDD systems to behave differently, as evidenced in the case of T03, where possibly due to exposure to compression codecs during training, SVDD systems behave robustly against unseen compression codecs. This strength of SVDD systems will shine, most notably with diverse real-world scenarios where various audio compression algorithms are employed.
**Interference from backing tracks.** SVDD systems need to work on mixtures containing vocals and instrumental tracks, where the prominence of the instrumental tracks can make it challenging to detect deepfake vocals, since they may mask deepfake artifacts and introduce new artifacts that cause the systems to fail. While using source separation might mitigate this problem, as discussed in Section 3.3, any less-than-perfect separation result could inadvertently introduce new artifacts or mask deepfake cues, which may then confound the deepfake detection algorithms. As an example, in our use of the Demucs model for vocal separation, we noticed it frequently misinterpreted string instruments as singing voices. The VAD pipeline from PyAnnote also seems to have a tendency to classify string regions as active voice regions. This misclassification may have contributed to performance degradation on T04, as Persian music is rich in string instrumentation. This vulnerability to interference can be addressed by developing interference-resilient SVDD systems and identifying more robust representations for this task.
**Diverse musical genres.** Singing voices in different genres follow significantly different musical contexts, exhibiting vastly different patterns of pitch, timbre and rhythm. As such, SVDD systems may fail to generalize to unseen musical genres. By manual inspection, we discovered that the T04 subset contains many songs with heavy Hip-Hop influence, while most of the songs in other sets are rock and ballads. We believe this also contributes to the performance degradation seen on T04. Since music reflects diverse cultural backgrounds, varying musical genres are likely to be present in real-world SVDD situations. To address this vulnerability, further research is needed to disentangle musical genre effects from deepfake cues to enable more genre-agnostic SVDD systems.
In summary, singing deepfakes face the same challenges as speech deepfakes: diverse real-world scenarios and novel attacks from rapid generative AI advancement, while contending with additional musical complexities. We call for more research efforts to improve the robustness of SVDD systems. As SVDD systems advance, we anticipate them to help enhance confidence in AI technologies, especially within the music industry, restoring trust that have been eroded by the proliferation of deepfakes.
## 5 Conclusions
In this paper, we proposed the Singing Voice Deepfake Detection (SVDD) task and presented the Singfake dataset, containing a substantial collection of in-the-wild bonafide and deepfake song clips in various languages and singers. We demonstrated that state-of-the-art speech CM systems trained on speech show strong degradation when evaluated on singing voice, while re-training on singing voice leads to substantial improvements, highlighting the necessity of specialized SVDD systems. Additionally, we assessed the strengths and weaknesses associated with unseen singers, communication codecs, different languages and musical contexts, underscoring the need for robust SVDD systems. Through releasing the SingFake dataset and benchmarking systems on the SVDD task, we aim to catalyze more research focused on developing specialized techniques for detecting deepfakes in singing voices.
\begin{table}
\begin{tabular}{c c|c c c c|c} \hline \hline
**Method** & **Setting** & **Train** & **T01** & **T02** & **T03** & **T04** \\ \hline \multirow{2}{*}{AASIST} & Mixture & 4.10 & 7.29 & 11.54 & 17.29 & **38.54** \\ & Vocals & 3.39 & 8.37 & 10.65 & 13.07 & 43.94 \\ \multirow{2}{*}{Spectrogram+ResNet} & Mixture & 4.97 & 14.88 & 22.59 & 24.15 & 48.76 \\ & Vocals & 5.31 & 11.86 & 19.69 & 21.54 & 43.94 \\ \multirow{2}{*}{LFCC+ResNet} & Mixture & 10.55 & 21.35 & 32.40 & 31.85 & 50.07 \\ & Vocals & 2.90 & 15.88 & 22.56 & 23.62 & 39.27 \\ \multirow{2}{*}{Wav2Vec2+AASIST (Joint-finetune)} & Mixture & **1.57** & **4.62** & **8.23** & 13.62 & 42.77 \\ & Vocals & 1.70 & 5.39 & 9.10 & **10.03** & 42.19 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Evaluation results for SVDD systems on all testing conditions in our SingFake dataset (EER (%)) |
2309.08951 | Phase transitions in one-dimensional Riesz gases with long-range
interaction | We provide numerical evidence for the existence of phase transitions with
respect to the temperature in the one-dimensional Riesz gases with non-singular
pair interaction, that is particles on the line interacting via the potential
$-|r|^{-s}$, where $s \in (-1, 0)$. Our numerics hint for the existence of two
distinct phase transitions whose critical temperatures depend on $s$, namely a
first transition which separates between a fluid and a quasisolid phase
reminiscent of the Berezinski-Kosterlitz-Thouless (BKT) transition, and a
second transition below which freezing occurs and the system is in a solid
phase. We determine the phase diagram with respect to $s$ and the temperature
$T$, which we find to be consistent with the known (or expected) results on the
1D Coulomb gas ($s = -1$), known to be a solid at all temperature, and the
Dyson log--gas ($s = 0$) which exhibits a BKT transition at $T = 1/2$ and which
is believed to be a fluid at all positive temperature. | Rodrigue Lelotte | 2023-09-16T11:02:01Z | http://arxiv.org/abs/2309.08951v1 | # Phase transitions in one-dimensional Riesz gases with long-range interaction
###### Abstract
We provide numerical evidence for the existence of phase transitions with respect to the temperature in the one-dimensional Riesz gases with non-singular pair interaction, that is particles on the line interacting _via_ the potential \(-|\boldsymbol{r}|^{-s}\), where \(\boldsymbol{s\in(-1,0)}\). Our numerics hint for the existence of two distinct phase transitions whose critical temperatures depend on \(\boldsymbol{s}\), namely a first transition which separates between a fluid and a quasisolid phase reminiscent of the Berezinski-Kosterlitz-Thouless (BKT) transition, and a second transition below which freezing occurs and the system is in a solid phase. We determine the phase diagram with respect to \(\boldsymbol{s}\) and the temperature \(\boldsymbol{T}\), which we find to be consistent with the known (or expected) results on the 1D Coulomb gas (\(\boldsymbol{s=-1}\)), known to be a solid at all temperature, and the Dyson log-gas (\(\boldsymbol{s=0}\)) which exhibits a BKT transition at \(\boldsymbol{T=1/2}\) and which is believed to be a fluid at all positive temperature.
**Keywords: 1D classical gases, Riesz gases, phase transitions, Berezinski-Kosterlitz-Thouless transition**
## 1 Introduction
It is well-known from the celebrated theorem of Hohenberg-Mermin-Wagner [1, 2, 3, 4], as well as from an earlier result due to Van Hove [5], that in one and two space-dimensions continuous symmetries can never be spontaneously broken at finite temperature, because long-range correlations are destroyed by thermal fluctuations. Nevertheless, a crucial assumption to these results is the short-range nature of the pair interaction. When the interaction remains strong at long distance, long-range fluctuations can
persist in the thermodynamic limit, thus allowing for the existence of phase transitions. A seminal example was given by Dyson [6] who proved the existence of a phase transition in the 1D _Ising Ferromagnet_ with low-decaying interactions between spins. Overall, many examples of 1D statistical lattice models where some breaking of symmetry occurs at finite temperature are known [7, 8, 9, 10, 11, 12]. However, similar examples in the continuum seem much more scarcer in the literature. The 1D _Coulomb gas_, which is a remarkable and completely integrable model, is an example of a one-dimensional system of classical particles in the continuum for which the translational-symmetry is broken at all temperature [13, 14] -- see also [15, 16] in the quantum case. One can also mention the works [17, 18] -- although the nature of the transition is different than that of the one here sought.
In this paper, we consider 1D _Riesz gases_, that is particles on the line interacting through the pair potential \(v_{s}(r)=\pm|r|^{-s}\), together with a uniform neutralizing background in the spirit of Jellium [19]. We focus on the non-singular case, that is where the exponent \(s\) ranges within \((-1,0)\), in which case the sign of the interaction is chosen negative so as to make \(v_{s}\) a repulsive potential. We remark that in this case, the potential does not decay at infinity. We provide numerical evidence which presumably rules in favor of the existence of a phase transition with respect to the temperature occurring at finite temperature. At high temperature, we found the pair correlation \(g(r)\) to converge monotonically to the average density \(\rho\) at large distance as in a fluid, whereas \(g(r)\) displays long-lasting oscillations in the low temperature regime, accounting for the existence of a long-range order. At low enough temperature, we found the system to display crystalline features. From a closer investigation, we were led to suspect the existence of two distinct critical temperatures, hereafter denoted \(\widetilde{T}_{s}\) and \(T_{s}\). The first one separates between a fluid and a quasisolid phase reminiscent of the _Berezinski-Kosterlitz-Thouless_ (BKT) transition [20, 21, 22], and another one below which the system is a true solid -- see Figure 1.
One-dimensional systems, despite their apparently oversimplified physical traits, have been a continuously renewed source of exciting physics [23, 24]. Such models are usually more accessible to analytical calculations while being able to describe to a certain extent many problems of actual physical relevance. As for the 1D Riesz gases, they are interesting as they can be seen as the most natural interpolation family between two important integrable models, both of which have received great shares of interest in physics and mathematics. Indeed, in the Coulomb case \(s=-1\), one recovers as mentioned above the _1D Jellium_, which is also called _Coulomb gas_ or 1D _One-Component Plasma_ (1dOCP). This is a beautiful and solvable statistical model which has been rather thoroughly investigated in the literature [13, 14, 25, 26, 27]. In particular, the 1D Jellium is known to be crystallized at all temperatures [13, 14]. On the other hand, in the limit \(s\to 0^{-}\), by considering the first-order term of the pair interaction \(v_{s}(r)\), we recover the _Dyson log-gas_[28, 29, 30, 31], that is particles on the line interacting _via_ the logarithmic interaction \(-\ln|r|\). This model is of particular importance and regularly occurs in different areas of physics and mathematics. In the context of _random matrix theory_, it is referred to as the \(\beta-\)_ensemble_ or _sine-\(\beta\) process_[32, 33]. For the special values \(\beta=1,2\) and \(4\), one recovers respectively the GOE (Gaussian Orthogonal Ensemble), GUE (Unitary, _mutatis mutandis_), and GSE
(Symplectic, _idem_) ensembles [32]. The log-gas is interesting from a statistical physics standpoint as an integrable toy model of particles interacting through a long-range and singular potential. We refer to the rather extensive [19, Sec. V.C] and the references therein, as well as the monograph [32] for a very detailed account on the matter. As for its expected phase diagram, the Dyson log-gas is known to be crystallized at zero temperature [34], and it is believed that translation-invariance can never be broken at finite temperature [35]. A rigorous proof of this statement is given in [36] in the case of stationary point processes [37], thus accounting for the case of the thermodynamic limit of the log-gas on the circle, that is with periodic boundary conditions. Our work is then motivated by a question asked in the recent review [19], where it is wondered whether or not there exists a smooth transition curve between those two limiting cases, namely the Coulomb gas \(s=-1\) and the Dyson log-gas \(s=0\). Our findings confirm this prediction.
**Remark 1** (BKT transition for \(s>0\)).: _In this paper, we only investigate the case of negative exponents \(s\in(-1,0)\). For \(s>0\), it is expected that the translational symmetry will never be broken [19]. This is known rigorously for \(s>2\)[38]. Nevertheless, it might be that the BKT transition -- which is not associated to a broken symmetry -- that appears for the Dyson log-gas at \(T=1/2\) (see Section 2.3 below) and which,
Figure 1: Schematic phase diagram of 1D Riesz gases with respect to the temperature \(T\) and the exponent \(s\) of the interaction, following the intuition of [19]. The 1D Riesz gas exhibits a phase transition at finite temperature \(\tilde{T}_{s}>0\) separating between a fluid phase (blue area) and an ordered phase (red and hatched areas). At low enough temperature \(T\ll\tilde{T}_{s}\), the system displays crystalline order (red area), whereas in the regime \(0\ll T<\tilde{T}_{s}\) we suspect a quasi-ordered phase of a BKT type. Therefore, we believe in the existence of two sets of critical temperatures, namely \(\tilde{T}_{s}\) which separates between a fluid and a quasi-solid phase, and \(T_{s}\) below which the Riesz gas is a solid — _i.e._ crystal. The limiting behaviour of the critical temperatures with respect to \(s\) are consistent with the phase diagrams of the Coulomb gas and the Dyson log-gas, corresponding to \(s=-1\) and \(s=0\) respectively.
according to our results, also exists for \(s<0\) (see Figure 1) does not cease to exist for \(s>0\), at least up to some threshold value of \(s\). It would be interesting to investigate this question._
## 2 Riesz, Coulomb and Dyson gases
In this section, we define the periodic Riesz gases in one space-dimension. We discuss the special cases of the Coulomb gas and of the Dyson log-gas. As a sanity-check of our algorithm, which we will use later to study the Riesz gas with general exponent \(-1<s<0\), we present numerics on the log-gas which are seen to be consistent with known -- or at least suspected -- theoretical results. These numerics might be of independent interest to some readers.
### Definition of the periodic 1D Riesz gases
In the long-range case \(s<1\), the periodic 1D Riesz gas is defined as follows. We consider \(N\) particles constrained to the segment \(\ell_{L}=[0,L]\) and we impose periodic boundary conditions to suppress possible boundary effects in our numerical experiments. In the spirit of Jellium [19], we add a compensating uniform background of opposite charge with density \(\rho=\nicefrac{{N}}{{L}}\) to ensure charge neutrality whence summability. A key parameter in the study of phase transitions is \(\Gamma^{-1}:=\rho^{-s}T\), where \(T\) is the effective temperature of the system. By scaling we will suppose without loss of generality that \(\rho=1\), otherwise stated that \(N=L\), so that our parameter of interest is the sole effective temperature \(T\). The associated periodic Riesz potential \(\widetilde{v}_{s,L}\) can be analytically expressed using special functions [19] (see Remark 2 below) contrary to higher dimensions where one needs to resort to some numerical computations often relying on _Ewald summation_. This potential is defined by its Fourier transform [19, Sec. IV.A.2], up to an unimportant multiplicative constant, as
\[\widehat{\widetilde{v}_{s,L}}(k)=\sum_{\begin{subarray}{c}k\in 2\pi \mathbb{Z}/L\\ k\neq 0\end{subarray}}\frac{\delta_{k}}{|k|^{1-s}}. \tag{1}\]
The 1D Riesz gas is then formally defined as the system obtained in the thermodynamic limit, that is by considering the large \(N\) limit of the canonical ensemble \(Q_{N}\) defined as the Gibbs measure with density
\[Q_{N}(r_{1},\ldots,r_{N})=\frac{1}{Z(s,\beta,N)}\exp\left(-\beta\sum_{1\leq i< j\leq N}\widetilde{v}_{s,N}(r_{i}-r_{j})\right), \tag{2}\]
where \(Z(s,\beta,N)\) is the usual _partition function_, that is the normalizing constant such that \(Q_{N}\) is a probability measure on the \(N\)-torus. Here \(\beta\) is the inverse temperature, that is \(\beta=\nicefrac{{1}}{{T}}\). We can define the _canonical free energy_ of the 1D Riesz gas as the
thermodynamic limit
\[f(s,\beta):=\lim_{N\to\infty}\frac{-\beta^{-1}\log Z(s,\beta,N)}{N}. \tag{3}\]
The existence of this limit at all temperature was proved in [19] extending an argument of [39]. At zero temperature the energy per unit length is exactly known, and the system is crystallized [40]. We also mention the works of Serfaty _et al._[19, 41] where the cases \(s<0\) are not treated _sic_ but are covered by the theory to some extent1. To the best of our knowledge, no other theoretical results are rigorously known except for those mentioned above. In particular, the convergence of the correlation functions in the thermodynamic limit seems to be unknown for \(-1<s<0\) at the present time. We recall that the _\(k\)-point correlation function_\(\rho^{(k)}(r_{1},\ldots,r_{k})\) is defined as
Footnote 1: S. Serfaty. _Personal communication_
\[\rho^{(k)}(r_{1},\ldots,r_{k})=\frac{N!}{(N-k)!}\int_{\mathbb{R}^{N-k}}Q_{N}(r_{1},\ldots,r_{k},r_{k+1}^{\prime},\ldots,r_{N}^{\prime})\,\mathrm{d}r_{k+1}^{\prime}\ldots\mathrm{d}r_{N}^{\prime} \tag{4}\]
While the correlation functions are very important in the study of phase transitions, they are also very useful from a mathematical standpoint as they completely characterise the limiting object obtained from the canonical ensemble \(Q_{N}\) as one considers the thermodynamic limit \(N\to\infty\). This limiting object is a (_Gibbs_) _point process_[19, 37]. We emphasize that the question of its existence and _casu quo_ of its uniqueness -- related to the (non-)existence of phase transitions -- while well-studied in the short-range case \(s>d\) in any dimension \(d\), see [42, 43, 44, 45], is a complicated and subtle problem which remains mainly open in the long-range case \(s<d\), see [19, Sec. III]. In dimension \(d=1\), it was only very recently studied by Dereudre and Vasseur [46] and Boursier [47] in the case \(0<s<1\). In the logarithmic case \(s=0\), it is studied in [48]. To the best of our knowledge, the case of negative exponents \(s<0\) seems to have been eluded in the literature so far, at the exception of the Coulomb case \(s=-1\), which has been extensively studied, see _e.g._[13, 14, 25, 26, 27, 49].
In this work, we will focus our attention on the two-point correlation function \(\rho^{(2)}(r,r^{\prime})\), which we called the _pair correlation_. Here, the correlation between two particles only depends on their distance from one another, so that the pair correlation can be written as a function of a single variable hereafter denoted
\[g(r):=\rho^{(2)}(0,r).\]
The function \(g(r)\) describes how the density of particles varies as a function of distance from a given reference particle. In the case of a crystal, \(g(r)\) is a periodic function with sharp maxima at the lattice site. On the other hand, in the case of a perfect fluid such as an ideal gas, the particles are independent of each other, so that \(g(r)\) is constant. More generally, in the absence of long-range order, the density fluctuations between two particles should decrease rapidly at large distances, that is \(g(r)\) should converge rapidly to the average density \(\rho\), whereas in the presence of long-range order, \(g(r)\) should display a slower decay and/or oscillations at large \(r\).
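In practice, \(g(r)\) can be estimated from sampled configurations by histogramming periodic pairwise distances and normalizing by the corresponding ideal-gas counts; the following is a minimal sketch for configurations on a circle of length \(L\).

```python
import numpy as np

def pair_correlation(configs, L, nbins=200):
    """Estimate g(r) on [0, L/2] from an array of configurations of shape (nconf, N)."""
    nconf, N = configs.shape
    edges = np.linspace(0.0, L / 2, nbins + 1)
    hist = np.zeros(nbins)
    for x in configs:
        d = np.abs(x[:, None] - x[None, :])
        d = np.minimum(d, L - d)                 # periodic (minimum-image) distance
        hist += np.histogram(d[np.triu_indices(N, k=1)], bins=edges)[0]
    dr = edges[1] - edges[0]
    ideal = nconf * N * (N - 1) * dr / L         # expected pair counts for an ideal gas (g = 1)
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
```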
We will also investigate the (_static_) _structure factor_\(S(k)\)[50, Chap. 4], see also [51, 52, 53, 54]. The structure factor is defined in the thermodynamic limit \(N\to\infty\) as the Fourier transform of the truncated pair correlation \(g(r)-1\), namely
\[S(k):=1+\frac{1}{2\pi}\int_{\mathbb{R}}e^{-irk}(g(r)-1)\mathrm{d}r. \tag{5}\]
In the finite length \(N<\infty\), the above definition should be modified accordingly by considering the (discrete) Fourier transform on the circle \(L^{2}(\mathbb{R}/N\mathbb{Z})\), in which case \(S(k)\) is only defined on \(k\in\mathbb{Z}/N\). If the pair correlation \(g(r)\) is oscillating, the structure factor should have a peak at \(k=1\) or more generally at any multiple of the period of \(g(r)\) -- the so-called _Bragg peaks_ of condensed matter physics and crystallography. On the other hand, if the pair correlation rapidly converges to the average density of our system, as expected in a fluid phase, the structure factor should be a smooth function of the wavenumber \(k\).
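Numerically, \(S(k)\) is conveniently estimated directly from the particle positions at the allowed wavenumbers \(k=2\pi m/L\), using the standard estimator \(S(k)=\langle|\sum_{j}e^{-\mathrm{i}kr_{j}}|^{2}\rangle/N\) for \(k\neq 0\), which may differ from (5) by the choice of Fourier normalization. A minimal sketch reads as follows.

```python
import numpy as np

def structure_factor(configs, L, mmax=64):
    """S(k) at k = 2*pi*m/L, m = 1..mmax, from configurations of shape (nconf, N)."""
    nconf, N = configs.shape
    ks = 2 * np.pi * np.arange(1, mmax + 1) / L
    rho_k = np.exp(-1j * configs[:, :, None] * ks[None, None, :]).sum(axis=1)  # (nconf, mmax)
    return ks, (np.abs(rho_k) ** 2).mean(axis=0) / N
```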
**Remark 2** (Periodic Riesz potentials).: _The periodic Riesz potential \(\widetilde{v}_{s,L}\) on the line is defined, as for any periodization of a general potential, as the sum of the interactions between a particle located at \(r\in\ell_{L}\) and its periodic images in the segments \(r_{k}\in\ell_{L,k}\) that is \(r_{k}:=r+kL\) for all \(k\in\mathbb{Z}\) (see Figure 2). Therefore, we have_
\[\widetilde{v}_{s,L}(r)=\sum_{k\in\mathbb{Z}}v_{s}(r+Lk). \tag{6}\]
_We remark that in the short-range case \(s>1\), this infinite sum is convergent and is closely related to the Hurwitz zeta function \(\zeta(s,r)\), as it can be expressed as_
\[\widetilde{v}_{s,L}(r)=\zeta(s,r)+\zeta(s,1-r)\quad\text{when }s>1. \tag{7}\]
_In the long-range case \(s<1\) the resulting series is evidently divergent. To ensure summability -- at least when \(s>-1\) -- one may add in the spirit of Jellium a uniform background of opposite charge over each \(\ell_{L,k}\) in such a way as to ensure charge neutrality of the overall system. The periodic potential is then expressed for all \(s>-1\)
Figure 2: The periodic Riesz potential \(\widetilde{v}_{s,L}\) is obtained by considering that each particles located at \(r\in\ell_{L}\) interacts with its periodic images in the \(\ell_{L,k}\) and the uniform background.
_as_
\[\widetilde{v}_{s,L}(r)=\lim_{q\to\infty}\left(\sum_{|k|\leq q}v_{s}(r+kL)-\rho \int_{\cup_{|k|\leq q}\ell_{L,k}}v_{s}(r-r^{\prime})\,dr^{\prime}\right). \tag{8}\]
_It turns out rather beautifully that this normalization, which we emphasize to be very natural from the viewpoint of physics, exactly corresponds to the meromorphic extension to the half complex plane \(\{\Re(s)>-1\}\) of the periodic Riesz potential in the short-range case \(s>1\) with a pole at \(s=1\)[19, 55, 56, 57, 58]. Therefore, \(\widetilde{v}_{s,L}\) rewrites as in (7) if one agrees to use the meromorphic continuation of the Hurwitz zeta function to the punctured complex plane \(\mathbb{C}\setminus\{1\}\) on the right-hand side. We note that, although this entails that the periodic potential \(\widetilde{v}_{s,L}\) can actually be continued over the entire complex plane -- with the exception of \(s=1\) -- one should be aware that the above formula (8) is a priori only valid when \(\Re(s)>-1\). Pushing \(s\) below this threshold usually requires another kind of normalization. We refer to [19, Section IV. A] for further details on this question and more generally on the analytic continuation of the periodic Riesz potential in arbitrary dimension \(d\geq 1\)._
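As an aside on implementation, the expression (7) lends itself directly to numerical evaluation through the Hurwitz zeta function. The snippet below is a small Julia sketch along these lines, assuming the Hurwitz zeta `zeta(s, q)` provided by the SpecialFunctions.jl package (which implements the analytic continuation in \(s\)) and the normalization \(L=1\) implicit in (7); in practice, the computations of this paper tabulate and interpolate the potential instead, as described in Appendix A.

```julia
using SpecialFunctions  # zeta(s, q) is the Hurwitz zeta function

# Periodic Riesz potential on a cell of unit length (L = 1), following (7);
# for exponents below 1 the same expression is understood through the
# meromorphic continuation of the Hurwitz zeta function, cf. (8).
periodic_riesz(s, r) = zeta(s, r) + zeta(s, 1 - r)
```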
### The 1D Coulomb Gas
In this section, we review important results regarding the _1D Coulomb gas_, also known as the _1D Jellium_ or 1D _One-Component Plasma_ (1dOCP). This remarkable model, which corresponds to the choice \(s=-1\), was extensively studied by Kunz in [13], where the thermodynamic limit of the free energy (3) and the correlation functions \(\rho^{(k)}(r_{1},\ldots,r_{k})\) -- see (4) -- are computed through transfer matrix techniques [59]. It is to be noted that the Coulomb case is very special, as the force between two distinct particles does not depend on their mutual distance, so that ordering the particles on the line somehow leads to a form of "conditional independence". This fortuitous property was leveraged by Aizenman and Martin in [14], where results similar to those of [13] are proved using the electric field as the key variable and appealing to ergodic arguments to conclude. Directly extending these methods to other values of \(s\) seems complicated, if not impossible. Altogether, the authors of [13, 14] managed to prove that the correlation functions were proper periodic functions at all temperatures, and thus that the Coulomb gas is crystallized at all temperatures. We shall now briefly explain the strategy of both papers, and verify the results numerically.
The strategy of Kunz in [13] essentially boils down to the celebrated _transfer-matrix method_ in statistical physics [59]. The strategy is used by Kunz both with free and periodic boundary conditions. It should be noted that the argument heavily relies on both the one-dimensional nature of the system and on the very peculiar form of the Coulomb potential in dimension \(d=1\), namely \(-|r|\). Indeed, the Jellium energy -- that is, when the particles interact with each other as well as with the uniform compensating background, see (9) below -- is a quadratic function once restricted to the set of ordered configurations. More precisely, if we suppose that \(-N/2\leq r_{1}\leq\cdots\leq r_{N}\leq N/2\), the Jellium energy in the non-periodic setting rewrites as
\[-\sum_{1\leq i<j\leq N}|r_{i}-r_{j}|+\sum_{i=1}^{N}\int_{-\frac{N}{2} }^{\frac{N}{2}}|r_{i}-r|\mathrm{d}r-\frac{1}{2}\int_{-\frac{N}{2}}^{\frac{N}{2} }\int_{-\frac{N}{2}}^{\frac{N}{2}}|r-r^{\prime}|\mathrm{d}r\mathrm{d}r^{\prime}\] \[=\sum_{i=1}^{N}\left(r_{i}-i+\frac{N+1}{2}\right)^{2}+\frac{N}{2}. \tag{9}\]
In particular, the canonical Gibbs measure \(Q_{N}\) associated to the above energy is a Gaussian once restricted to the set of ordered configurations. Using this property, Kunz was able to rewrite the free energy \(f_{N}(\beta)\) in the finite length \(N\) and at inverse temperature \(\beta\) as
\[f_{N}(\beta)=\left\langle g_{\beta},K^{N}g_{\beta}\right\rangle_{L^{2}(\mathbb{ R}_{+})}. \tag{10}\]
Here, \(K\) is a compact operator with positive kernel over the Hilbert space \(L^{2}(\mathbb{R}_{+})\) which serves as an infinite-dimensional analogue of the so-called _transfer matrix_ and \(g_{\beta}\) is an explicit function in \(L^{2}(\mathbb{R}_{+})\). We emphasize that \(K\) depends on the inverse temperature \(\beta\) but does _not_ depend on the number of particles \(N\). It is given by the operator
\[Kf(r):=\int_{r-1}^{\infty}e^{-\beta u^{2}}f(u)\mathrm{d}u, \tag{11}\]
which is an integrable operator with kernel \(K(r,r^{\prime})=e^{-\beta r^{\prime 2}}\mathds{1}(r^{\prime}\geq r-1)\). Appealing to the _Perron-Frobenius theorem_ [13, Lem. 1 in Appendix], it follows from positivity and compactness of \(K\) that it has a simple largest eigenvalue \(\lambda(\beta)\) associated to a unique positive normalized eigenfunction \(\psi_{\beta}\in L^{2}(\mathbb{R}_{+})\). By discretizing the operator \(K\) as defined in (11) above, we can compute numerically the eigenvector \(\psi_{\beta}\), see
Figure 3: We compute the Perron–Frobenius eigenvector \(\psi_{\beta}\) of the operator \(K\) defined in (11) by a straightforward discretization for several temperatures. We then compute the density \(\rho^{(1)}(r)\) using Equation (12). We observe that the density is a proper periodic function of period \(\rho=1\) as proved by Kunz. We also retrieve the properties that \(\psi_{\beta}\) converges to \(1\) as \(r\to-\infty\) and to \(0\) as \(r\to\infty\), and that \(\psi_{\beta}\) converges to the Heaviside function centered at \(r=1\) in the vanishing temperature limit, see [13, Appendix & p. 315]. All figures in this work were made using Julia and the Plots.jl package.
Figure 3. The thermodynamic limit of the free energy can then be readily expressed using those quantities [13, Eq. (17)]. The correlation functions can be dealt with in a very similar manner. For instance, Kunz found that the one-point correlation function \(\rho^{(1)}(r)\) converges to the periodic function given by
\[\rho^{(1)}_{\tau}(r)=\sum_{k\in\mathbb{Z}}\psi_{\beta}(-r-k-\tau)\psi_{\beta}(r +k+\tau) \tag{12}\]
for some \(\tau\in\mathbb{R}\) [13, Eq. (40-41)]. In fact, the scalar \(\tau\) depends on the sequence of the number of particles \(N\)'s considered in the thermodynamic limit, which is a very clear manifestation of the breaking of symmetry. Furthermore, Kunz managed to prove that all the correlation functions were periodic. Nevertheless, to ensure that crystallization really happens, one still needs to prove that those functions are properly periodic, that is that they are not constant functions. By appealing to analyticity [13, at p. 314], he managed to prove this fact at low enough temperature \(\beta\gg 1\). This was eventually generalized to all temperatures \(\beta>0\) by Aizenman and Martin [14]. In Figure 3, we show \(\rho^{(1)}_{\tau}(r)\) computed for several temperatures. Finally, Kunz studied the problem with periodic boundary conditions, in which case he found that the correlation functions \(\rho^{(k)}_{\text{per}}\) were all obtained by averaging their counterparts \(\rho^{(k)}_{\tau}\) in the non-periodic setting over their period, that is
\[\rho^{(k)}_{\text{per}}(r_{1},\ldots,r_{k})=\int_{0}^{1}\rho^{(k)}_{\tau}(r_{ 1},\ldots,r_{k})\text{d}\tau. \tag{13}\]
It follows from (13) that crystallization cannot be detected on \(\rho^{(1)}(r)\) anymore in the periodic setting, and that one should look at the pair correlation \(g(r)\), which will then be a proper periodic function of the distance \(r\). This is a clear manifestation of the breaking of symmetry. In what will follow regarding the Riesz gas with general exponent \(-1<s<0\), we will look at the pair correlation for the same reasons.
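Coming back to the numerical computation of \(\psi_{\beta}\) mentioned above (see Figure 3), the discretization of the operator \(K\) from (11) is elementary. The following Julia sketch illustrates one possible implementation, with a simple Riemann-sum quadrature on a truncated half-line and a power iteration; the truncation `rmax`, the grid size `n` and the number of iterations are illustrative choices, not the ones used for the figures.

```julia
using LinearAlgebra

# Discretize the transfer operator K of (11) on a uniform grid of [0, rmax]
# and compute its Perron-Frobenius eigenpair by the power method.
function perron_frobenius(β; rmax = 10.0, n = 2000)
    u  = range(0, rmax; length = n)
    Δu = step(u)
    # kernel K(r, r') = exp(-β r'^2) 1(r' ≥ r - 1), discretized by a Riemann sum
    K = [exp(-β * u[j]^2) * (u[j] ≥ u[i] - 1) * Δu for i in 1:n, j in 1:n]
    ψ = ones(n)
    for _ in 1:500
        ψ = K * ψ
        ψ ./= norm(ψ)
    end
    λ = dot(ψ, K * ψ)          # simple largest eigenvalue λ(β)
    return λ, collect(u), ψ
end
```

The density in (12) is then obtained by summing products of translates of \(\psi_{\beta}\), as displayed in Figure 3.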
Aizenman and Martin [14], building on previous works of Lenard [49, 60] and Edwards-Lenard [61], took a different road than that of Kunz to study the 1D Coulomb gas. Their key idea is to work with the _electric field_ \(E(r)\) rather than the particles themselves. Indeed, there is a one-to-one correspondence between the set of configurations and the set of possible electric fields, as proved in [14, Lem. 4]. This can be seen from the fact that, given a configuration of particles \(X=(r_{1},\ldots,r_{N})\), the electric field \(E_{X}(r)\) generated by this configuration has a very simple structure, as it is a piecewise linear function of unit slope with a jump of unit size located at each particle \(r_{j}\), see Figure 4. This allows one to view the electric field as a random jump process whose semigroup can be readily expressed [14, Eq. (49) sqq.]. The most important thing to stress here is the _Markovian_ nature of this process, as can be intuited from Figure 4. This allows one, once more, to appeal to Perron-Frobenius to cope with the thermodynamic limit, see [14, Eq. (4.11)]. Finally, using an ergodic theorem, Aizenman and Martin proved the periodicity of the correlation functions for all \(\beta>0\), thus extending the result of [13] mentioned earlier. They were able to show that the associated limiting point process obtained in the
thermodynamic limit \(N\to\infty\) can be defined by the usual set of characterisations such as _Dobrushin-Lanford-Ruelle_ (DLR), _Bogoliubov-Born-Green-Kirkwood-Yvon_ (BBGKY) and _Kubo-Martin-Schwinger_ (KMS) equations -- we refer the reader to [19, 37] on this matter.
We note that the electric field is a very convenient variable in the case of the Coulomb potential because the energy (9) can be expressed as a positive quadratic form in the variable \(E_{X}\) using the _carre du champ_ operation [14, Eq. (2.7) and (2.9)]. For arbitrary exponent \(s\), this very useful _carre du champ_ is no longer available _as-is_. Nevertheless, it is still possible -- although much more involved -- to work with the electric field rather than the particles themselves. We refer to the long line of work initiated by Serfaty and collaborators -- see [41] for a self-contained reference, or the references in [19].
### The Dyson log-gas
The _Dyson log-gas_[28, 29, 30, 31], which corresponds to the choice \(s=0\) and for which we recover the logarithmic interaction \(v_{0}(r)=-\ln(r)\), is also accessible to analytical computations for specific values of the inverse temperature \(\beta\), using the tenacious analogy between log-gases and random matrix models [32]. We recall that, in the context of random matrices, the Dyson log-gas is called the \(\beta\)-ensemble. Using a fairly general theory of one-dimensional quantum fluids of Haldane [62], it was conjectured by Forrester in [63] that the leading term in the expansion of the pair correlation \(g(r)\)
Figure 4: The electric field generated by a configuration of particles \(r_{1},\ldots,r_{N}\) has a very nice structure in the case of the Coulomb potential. It is a piecewise linear function with unit slope and jump of unit size located at each particle \(r_{j}\). The position of the particles can therefore be retrieved from the positions of the jumps.
at large \(r\) would be given by
\[g(r)\underset{r\to\infty}{\sim}\begin{cases}1-\frac{1}{\pi^{2}\beta r^{2}}&\text {for}\ \ \beta<2,\\ 1+\frac{\cos(2\pi r)}{2\pi^{2}r^{2}}-\frac{1}{2\pi^{2}r^{2}}&\text {for}\ \ \beta=2,\\ 1+c\frac{\cos 2\pi r}{r^{4/\beta}}-\frac{1}{2\pi^{2}r^{2}}&\text {for}\ \ \beta>2\end{cases} \tag{14}\]
for some universal constant \(c>0\). The expansion is rigorous for \(\beta=1,2\) and \(4\) as can be shown using the analogy between Dyson log-gas and standard Gaussian ensembles [32]. It is also valid in the case of even or rational \(\beta\)'s as proved in [64] and [32, Chap. 13]. We see that the decay of \(g(r)\) in the large \(r\) limit exhibits a transition at \(\beta=2\) from a universal monotone power-law decay \(r^{-2}\) to an oscillating and non-universal decay whose power depends on the temperature. This is a celebrated example of a _Berezinskii-Kosterlitz-Thouless_ (BKT) transition [20], see also [21, 22]. In the vanishing temperature limit \(\beta\to\infty\), as the oscillations become predominant, \(g(r)\) converges to a periodic function and the system is crystallized onto a (floating) _Wigner crystal_ [34, 65].
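For reference, at \(\beta=2\) and unit density the pair correlation admits the classical sine-kernel closed form \(g(r)=1-\big(\sin(\pi r)/(\pi r)\big)^{2}\), which reproduces the second line of (14); this is the curve used as a benchmark in the right panel of Figure 5 below. A one-line Julia helper for this reference curve could be:

```julia
# Exact pair correlation of the Dyson log-gas at β = 2 (sine kernel), at unit
# density and for r ≠ 0; used here only as a benchmark curve.
g_exact_beta2(r) = 1 - (sinpi(r) / (pi * r))^2
```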
It follows from the expansion (14) that the behaviour of the structure factor \(S(k)\) (5) in the small wavenumber limit \(k\to 0\) is to be given by that of the \(-1/r^{2}\) term to the leading order. Indeed, although the leading term of the (truncated) pair correlation is of order \(1/r^{4/\beta}\) as soon as \(\beta>2\), the cosine term shifts its contribution to the Fourier transform at \(k\sim 1\). More generally, all the oscillating terms \(\cos(2\pi nr)/r^{4n/\beta}\) in the expansion of the pair correlation of the log-gas when \(\beta>2\) -- which we did not write in (14), see [63] -- only contribute to the structure factor at \(k\sim n\). Altogether, the term \(-1/r^{2}\) is the only one which contributes to the behaviour of \(S(k)\) near \(k=0\), so that
\[S(k)\sim 2\beta^{-1}|k|\quad\text{ as }k\to 0. \tag{15}\]
It also follows from the expansion (14) that \(S(k)\) should feature a singularity at \(k=1\) as soon as \(\beta\geq 4\). This singularity will be logarithmic at the threshold \(\beta=4\) and should diverge as an inverse power-law when \(\beta>4\), that is (up to multiplicative constant)
\[S(k)\sim\frac{1}{|1-k|^{1-4\beta^{-1}}}\quad\text{ as }k\to 1^{-}. \tag{16}\]
In any case, we emphasize that this singularity is of an integrable type. This is in clear contrast with what one would expect in a crystal. Indeed, as explained earlier, in the case of a crystal the structure factor \(S(k)\) should have a sharp peak at \(k=1\) corresponding to that of a Dirac mass, as expected from the periodic nature of the pair correlation \(g(r)\).
Although our main goal in this paper is to investigate the long-range situation \(-1<s<0\), we show that the previous claims on the behaviour of \(S(k)\) near \(k=0\) (15) and near \(k=1\) (16) are confirmed numerically. While we do so as a sanity check for our algorithm, which is presented in Appendix A, these results may be
Figure 5: On the left, we display the pair correlation \(g(r)\) for the Dyson log-gas at several temperatures obtained by our algorithm. Below the critical value \(\beta=2\), the pair correlation converges monotonically to the average density — here set to \(\rho=1\) — whereas above the critical value we observe oscillations of \(g(r)\) which eventually vanish as \(g(r)\) converges to \(\rho\) in the large \(r\) limit. On the right, the approximation of \(g(r)\) at \(\beta=2\) obtained numerically is seen to be consistent with the exact formula for the pair correlation [32]. We used \(N=100\) particles and built the pair correlation by binning.
Figure 6: On the left, we display the structure factor \(S(k)\) for the Dyson log-gas at several temperatures, using the pair correlations previously computed and (5). Above the critical value \(\beta=2\), the structure factor displays a peak at \(k=1\), as expected from (14). On the right, the approximation of \(S(k)\) at \(\beta=2\) obtained numerically is seen to be consistent with the exact formula for the structure factor [32]. We thinned the number of displayed wavenumbers for visual convenience.
of independent interest for some readers. In Figure 5, we display an approximation of the pair correlation \(g(r)\) obtained numerically for several inverse temperatures \(\beta\). We observe that below the critical value \(\beta=2\), the correlation converges monotonically to the average density \(\rho=1\). From the critical value \(\beta=2\) onwards, the pair correlation \(g(r)\) displays damped oscillations whose amplitude strengthens as the temperature is further decreased, and eventually \(g(r)\) converges to the average density in the large \(r\) limit. Our approximation fits perfectly with the exact formula for \(g(r)\) at \(\beta=2\) [32]. In Figure 6, we display the associated structure factor \(S(k)\), which is obtained by computing the (discrete) Fourier transform of the pair correlation \(g(r)\) as in (5). We then regress \(S(k)\) near \(k=0\) to obtain the universal behaviour of the slope at which \(S(k)\) approaches \(k=0\), and we regress at \(k=1\) to obtain the exponent at which \(S(k)\) diverges. The results, displayed in Figure 7, are in clear agreement with (15) and (16) -- and therefore are numerical confirmations of the validity of the expansion (14) conjectured by Forrester.
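The two regressions just mentioned are ordinary least-squares fits. As an illustration, a minimal Julia version could look as follows, assuming that `k` and `S` are vectors of wavenumbers and structure-factor values already restricted to the relevant window (small \(k\) for the slope, \(k\) close to \(1\) for the exponent); this is a sketch of the procedure, not the code behind Figure 7.

```julia
# Slope near k = 0: fit S(k) ≈ C|k| by least squares through the origin.
fit_slope(k, S) = sum(abs.(k) .* S) / sum(k .^ 2)

# Behaviour near k = 1: fit S(k) ≈ c|1 - k|^α in log-log coordinates,
# with the ordinary least-squares formulas written out explicitly.
function fit_exponent(k, S)
    x, y = log.(abs.(1 .- k)), log.(S)
    n = length(x)
    α = (n * sum(x .* y) - sum(x) * sum(y)) / (n * sum(x .^ 2) - sum(x)^2)
    c = exp((sum(y) - α * sum(x)) / n)
    return α, c
end
```

The same log-log fit is what is used further below (Figures 12 and 13) to extract the growth exponent of the peak height \(S(1)\) as a function of the number of particles \(N\).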
## 3 Evidence for the existence of a phase transition
In this section, we investigate numerically the 1D Riesz gases for general exponent \(-1<s<0\). In Section 3.1, we show evidence which strongly advocates for the existence of a phase transition with respect to the temperature \(T\) depending on \(s\). In Section 3.2, we further study the behaviour of the pair correlation \(g(r)\) and most importantly of the structure factor \(S(k)\) to make our claim that there coexist two
Figure 7: On the left, we regress the structure factor \(S(k)\) of the Dyson log-gas near \(k=0\) on \(C|k|\). The slope coefficient \(C\) is found to be close to \(2\beta^{-1}\), which is consistent with (15) — and therefore consistent with Forrester's expansion (14). On the right, we regress the structure factor \(S(k)\) near \(k=1\) as \(k\to 1^{+}\) on \(c|1-k|^{\alpha}\) for some unimportant constant \(c\). We found \(\alpha\) to be close to \(-(1-4\beta^{-1})\) for \(\beta>4\), which is consistent with (16). When \(\beta=4\), the structure factor behaves logarithmically, \(S(k)\sim-\log|1-k|\), as expected.
separate phase transitions, namely a fluid-quasisolid transition of a BKT-type similar to that of the Dyson log-gas, and a freezing point below which the system is crystallized. A phase diagram is then determined numerically according to a set of criteria summarized in Table 1.
### Long-range order at low enough temperature and existence of a critical temperature
We compute the pair correlation \(g(r)\) at several temperatures for various exponents \(-1<s<0\). As a first evidence for the (non-)existence of some long-range order, we wonder whether or not \(g(r)\) features persistent oscillations at low enough temperature, or equivalently if the structure factor \(S(k)\) has a sharp peak at \(k=1\). From our experiments, we see that for all \(s\) in the range \(-1\leq s\leq 0\), at high enough temperature \(g(r)\) rapidly and monotonically converges to the average density as \(r\to\infty\), as one would expect in a fluid phase, whereas for small enough temperatures it exhibits long-lasting oscillations which are ever more amplified as the temperature is further lowered, consistent with the fact that the system is crystallized at zero temperature [40]. This is clearly seen in Figure 8, where we display \(g(r)\) and \(S(k)\) for the 1D Riesz gas for \(s=-0.5\) at varying temperature. Furthermore, we found this conspicuous qualitative change of behaviour to occur within a range of temperatures depending on the exponent \(s\). In Figure 9, we fix two temperatures and we vary the exponent \(s\). We observe that the oscillations appear sooner, that is at higher temperature, as the exponent \(s\) gets closer to \(s=-1\), and conversely that they appear later as \(s\) gets closer to \(s=0\). This is consistent with the fact that the Coulomb gas is crystallized at all temperature and with the fact that the Dyson log-gas is expected to be a fluid at all positive temperature.
From what precedes, we are brought to believe in the existence of a critical temperature \(\mathfrak{T}_{s}\) depending on the exponent \(-1<s<0\) which separates between a fluid phase in the high temperature regime \(T>\mathfrak{T}_{s}\) and an ordered phase in the low temperature regime \(T<\mathfrak{T}_{s}\). The critical temperature should interpolate between the Coulomb gas, that is \(\mathfrak{T}_{s}\to\infty\) as \(s\to-1\), and the Dyson log-gas, that is \(\mathfrak{T}_{s}\to 0\) as \(s\to 0\). Nevertheless, a clear determination of \(\mathfrak{T}_{s}\) is evidently complicated, pertaining to both the underlying limitations of numerics and the absence of an absolute criterion to either rule in favor of or against the appearance of long-range order. Furthermore, it is unclear whether or not the oscillations which appear in the pair correlation \(g(r)\) eventually vanish in the large \(r\) limit, as in the Berezinskii-Kosterlitz-Thouless paradigm. The rest of our paper is dedicated to obtaining a better understanding of \(\mathfrak{T}_{s}\).
### Determination of the critical temperature and nature of the transition
It remains to determine the nature of the transition which was brought to light in the previous section, as well as the behaviour of the transition curve with respect to \(s\), which was loosely denoted \(\mathfrak{T}_{s}\) above. In this section, we give several criteria to determine whether the system is in a fluid, quasisolid or solid phase. In fact, we make the claim that there actually coexist two distinct sets of critical temperatures,
denoted hereafter \(\widetilde{T}_{s}\) and \(T_{s}\). The first one separates between a fluid and a quasisolid phase reminiscent of the BKT transition and similar to that of the Dyson log-gas discussed earlier, while the second one corresponds to the point at which the system is frozen onto a true solid. We use these criteria to give a -- at least schematic -- phase diagram of the Riesz gas with respect to the effective temperature \(T\) and the exponent \(-1<s<0\).
#### 3.2.1 Behavior of \(S(k)\) in the limit \(k\to 0\)
In the preceding section, we were attentive to whether or not the pair correlation \(g(r)\) converges to a periodic function in the large \(r\) limit, as this is a clear manifestation of the crystallization. This is evidently related to the appearance in the Fourier space of a peak at \(k=1\), or for that matter at any multiple of the period of \(g(r)\). Nevertheless, it turns out that the breaking of symmetry can be seen in the behaviour of the structure factor in the limit \(k\to 0\). We stress that this is a non-trivial fact. Indeed, Aizenman, Goldstein and Lebowitz gave in [66] a sufficient condition for translational symmetry to be broken in one-dimensional systems. This result, which is related to the notion of _hyperuniformity_ [67, 68], essentially says that if the structure factor \(S(k)\) behaves like \(|k|^{\eta}\) in the small wavenumber vicinity \(k\to 0\) for some \(\eta>1\), then translational symmetry must be broken in the thermodynamic limit. We note that this is _not_ in contradiction with the expected phase diagram of the Dyson log-gas, for which \(\eta=1\) as shown previously.
Figure 8: On the left (resp. right) we display the pair correlation \(g(r)\) (resp. the structure factor \(S(k)\)) at various temperatures for the Riesz gas of exponent \(s=-0.5\). At high enough temperature, \(g(r)\) converges rapidly and monotonically to the average density, whereas in the low temperature regime it features persistent oscillations whose amplitude increases as the temperature is further lowered. In this regime, we observe that the structure factor \(S(k)\) has a sharp peak at \(k=1\) whose width (resp. height) decreases (resp. increases) as the temperature is lowered, hinting at the presence of a Dirac mass accounting for the periodicity of \(g(r)\) in the large \(r\) limit. We used \(N=150\) particles.
On the other hand, it follows from the extension of a heuristic argument of Forrester [32] -- see also [69] -- that, if the Riesz gas at exponent \(s\) is crystallized, then \(S(k)\) must behave like \(S(k)\simeq C|k|^{1-s}\) in the small wavenumber limit \(|k|\to 0\) for some constant \(C>0\). The exponent \(\eta=1-s\) is very natural, as it fits with the Dyson log-gas, for which \(\eta=1\), and the Coulomb gas for which \(\eta=2\). It also fits with the results obtained by Boursier [47] in the case where \(0<s<1\). The argument of Forrester, which can be found in [32, Chap. 11] in the case of the log-gas, can be extended as follows for any \(s\). If we perturb our system at equilibrium by a fluctuating charge density \(\epsilon e^{-ikr}\), and if we denote by \(\rho_{\epsilon}(r)\) the density of the perturbed system, then it must be that
\[\rho_{\epsilon}(r)-\rho(r)\sim_{k\to 0}-\epsilon e^{ikr} \tag{17}\]
where \(\rho(r)\) is the density of the original system. We emphasize that this equivalence is only formal and _a priori_ not rigorous. It says that the system responds in an appropriate manner to the perturbation, that is in such a way as to cancel the perturbation and remain in equilibrium. In fact, this can be viewed as characteristic of a crystalline order. Indeed, the crystal should be able to remain stable under perturbation of large enough wavelength \(\lambda\gg a\) -- or small enough wavenumber \(k\ll 1/a\), as in (17) -- where \(a\) is the crystal constant.
Figure 9: We fix two temperatures \(T=1\) (left) and \(T=2\) (right) and we vary the exponent \(s\). We observe that the oscillations of \(g(r)\) appear already at higher temperatures as the exponent \(s\) gets closer to the Coulomb gas \(s=-1\), which is consistent with the fact that the Coulomb gas is crystallized at all temperatures. On the other hand, the oscillations appear only at much lower temperatures as \(s\) gets closer to the Dyson log-gas \(s=0\), which is consistent with the fact that the Dyson log-gas is expected to be a fluid at all positive temperatures. From this, we are led to claim that the critical temperature \(\mathfrak{T}_{s}\) depends on the exponent \(s\) in such a way as to interpolate between the phase diagrams of the 1D Coulomb gas and the Dyson log-gas, as conjectured in [19].
Now, by letting \(\epsilon\to 0\) in (17), and using the well-known relations which link the functional derivatives of the free energy and the correlation functions [50], the left-hand side of (17) can be written as
\[\rho_{\epsilon}(r)-\rho(r)\sim-\epsilon\beta\int_{\mathbb{R}}W(r^{\prime})\rho^{(2)}(r,r^{\prime})\,\mathrm{d}r^{\prime} \tag{18}\]
where \(W(r):=-|r|^{-s}*e^{ikr}\) is the potential associated to the charge density which perturbs the system, and \(\rho^{(2)}(r,r^{\prime})\) is the two-point correlation as defined earlier (4). By using the invariance by translation, we get
\[\rho_{\epsilon}(r)-\rho(r)\sim-\epsilon\beta\int_{\mathbb{R}}W(r^{\prime})\rho^{(2)}(r,r^{\prime})\,\mathrm{d}r^{\prime}\sim\epsilon\beta\frac{1}{|k|^{1-s}}S(k). \tag{19}\]
Combining (17) and (19) yields that \(\eta=1-s\). Furthermore, the coefficient \(C\) should be linear in the temperature, and in fact it should be given by \(C=2\beta^{-1}\) similarly to the Dyson log-gas, as seen in (15). In Figure 10, we observe that at high enough temperature, the structure factor \(S(k)\) converges linearly to \(0\) as \(k\to 0\). This is seen to be consistent with the truncated pair correlation \(g(r)-1\) decaying as \(-1/r^{2}\) in the large \(r\) limit. On the contrary, as the temperature is decreased, the structure factor \(S(k)\) flattens near the origin, and at low enough temperature it is seen to vanish faster than linearly as \(k\to 0\), in fact as \(|k|^{1-s}\), as expected from the above heuristic.
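The criterion used in Figure 10 — namely, which of the two candidate behaviours near the origin yields the smaller regression residue — can be made concrete in a few lines of Julia. The sketch below assumes that `k` and `S` are restricted to small wavenumbers and that `s` is the Riesz exponent; it is an illustration of the criterion, not the exact code used for the figure.

```julia
# Compare the least-squares residuals of S(k) ≈ C|k|^η for η = 1 (fluid-like)
# and η = 1 - s (crystal-like) on the small-wavenumber window.
function small_k_residuals(k, S, s)
    resid(η) = begin
        C = sum(k .^ η .* S) / sum(k .^ (2η))   # least-squares amplitude
        sum((S .- C .* k .^ η) .^ 2)
    end
    (fluid = resid(1.0), crystal = resid(1.0 - s))
end
```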
Figure 10: On the left, we regress the structure factor \(S(k)\) near \(k=0\) on \(C|k|^{\eta}\), for the same values as in Figure 8. We allow \(\eta\in\{1,1-s\}\), where \(\eta=1-s\) is the natural exponent for the crystallisation regime, whereas \(\eta=1\) corresponds to that of a liquid. In the high temperature regime, we see that \(S(k)\) behaves linearly as \(k\to 0\). This is coherent with the behaviour of the pair correlation at large \(r\), which decays as \(-1/r^{2}\), as seen on the second figure on the right, where we display \(-\ln(1-g(r))/\ln(r)\), which is seen to approach the value \(2\) as \(r\) gets large. At low enough temperature the choice \(\eta=1-s\) is found to yield a smaller regression residue, which is strong evidence that the Riesz gas is crystallized according to Aizenman _et al._ [66].
#### 3.2.2 Behaviour of \(S(k)\) in the limit \(k\to 1\)
In the case of a crystal, the structure factor \(S(k)\) should have a sharp peak at \(k=1\) corresponding to a Dirac mass. We recall that in the case of the Dyson log-gas, the structure factor also has a peak at \(k=1\) for \(\beta>4\), but it is an integrable function and not a Dirac mass -- and, in the case \(2<\beta<4\), the function \(S(k)\) is non-monotonous near \(k=1\). In Figure 10, we see that the appearance of the peak at \(k=1\) precedes that of the flattening of the structure factor near the origin, and that there exists a range of temperatures for which the peak exists but the structure factor seems to behave linearly near the origin. This is clearly seen for \(s=-0.5\) in Figure 11.
We are therefore brought to believe in the existence of two distinct phase transitions. At high enough temperature, the Riesz gas is a fluid. As the temperature is decreased down to a certain threshold, there is a BKT transition similar to that of the Dyson log-gas as discussed earlier, corresponding to the formation of a quasisolid. Eventually, as the temperature is further decreased, there is another threshold at which the system is frozen into a true crystal. This is clearly depicted in Figure 11.
To determine whether or not the peak at \(k=1\) is of an integrable type or a Dirac mass, we may look at its behaviour as one varies the number of particles \(N\). Indeed, in the case of a Dirac mass, the height of the peak \(S(1)\) should grow as \(N\). On the other hand, if the structure factor diverges as \(|1-k|^{-\alpha}\) as \(k\to 1\) for some \(\alpha<1\), as in a quasisolid, then the height of the peak should grow as \(N^{\alpha}\). An example is given in Figure 12 and Figure 13, in which we fix \(s=-0.5\) and two different temperatures, namely \(T=0.1\) and \(T=0.6\). We then vary the number of particles and determine how the height of the peak of the structure factor grows with \(N\). When \(T=0.6\), we
Figure 11: From a closer investigation of the structure factor (right), we are led to believe in the existence of three distinct regimes for \(s=-0.5\). At high temperature, the structure factor behaves linearly near the origin and it has no peak at \(k=1\), hinting at a fluid phase. As the temperature is decreased, a peak appears at \(k=1\), but the structure factor remains linear in the limit \(k\to 0\). This is characteristic of a quasisolid. Finally, at low enough temperature, \(S(k)\) flattens out near the origin and behaves as \(|k|^{1-s}\): it is a crystal.
find that the peak grows as \(N^{\alpha}\) for \(\alpha=0.655\), which seems to indicate a quasisolid phase. When \(T=0.1\), we find \(\alpha=0.963\), which is closer to indicating a Dirac mass and therefore a solid phase.
### Phase diagram with respect to the temperature
Using the different criteria as summarized in Table 1, we may draw a schematic phase diagram of the 1D Riesz gas with respect to the effective temperature \(T\) and the exponent \(-1\leq s\leq 0\). The diagram is depicted in Figure 14.
We should emphasize that a precise determination of the transition curves \(\widetilde{T}_{s}\) and \(T_{s}\) is evidently complicated. The transition curve \(\widetilde{T}_{s}\), corresponding to what we believe to be a BKT transition separating between a fluid and a quasisolid phase, can be determined as the threshold temperature at which \(S(k)\) starts having a peak at \(k=1\) -- thus becoming non-monotonous near \(k=1\). We see in Figure 14 that the behaviour of \(\widetilde{T}_{s}\) is consistent with the Dyson log-gas for which the BKT transition occurs at \(T=1/2\).
As for the transition curve \(T_{s}\), corresponding to the fluid-solid transition, its values are somewhat harder to determine. According to our criterion, it corresponds to the temperature at which \(S(k)\) flattens at the origin and behaves as \(|k|^{1-s}\) and at which \(S(k)\) has a Dirac mass at \(k=1\). Although a precise determination of this threshold is evidently complicated in the finite length \(N<\infty\), we are confident that the phase diagram depicted in Figure 14 is qualitatively sound.
## 4 Conclusion
We provided numerical evidence for the existence of two distinct phase transitions with respect to the temperature in 1D Riesz gases. The first transition corresponds to
Figure 12: We set \(s=-0.5\) and \(T=0.1\), and we vary the number of particles \(N\). We then regress the successive height of the peaks \(S(1)\) on \(N^{\alpha}\). Here, we found \(\alpha=0.963\), which is therefore rather close to that of a Dirac mass.
\begin{table}
\begin{tabular}{l l l l} \hline \hline & \(S(k)\) near \(k=0\) & \(S(k)\) near \(k=1\) & \(g(r)\) far away \\ \hline Solid & **If**\(S(k)\sim C|k|^{\eta}\) for \(\eta>1\) and some constant \(C>0\), then translational symmetry is broken according to Aizenman _et al._[66]. Conversely, **if** the symmetry is broken then according to a heuristic of Forrester [32], see also [69], it must be that \(\eta=1-s\). & \(S(k)\) should be non-monotonic near \(k=1\). It can be singular but must remain integrable, that is \(S(k)\sim\frac{1}{|k-1|^{\alpha}}\) for \(\alpha<1\). Numerically, the peak should grow as \(N^{\alpha}\). This is reminiscent of the Dyson log-gas, see (16). & \(g(r)\rightarrow\rho=1\) as \(r\rightarrow\infty\) but has oscillations which slowly vanish in the large \(r\) limit. The power of leading order term should depend on the temperature. \\ \hline Fluid & \(S(k)\) behaves linearly in the limit \(k\to 0\). & \(S(k)\) is monotonic near \(k=1\) & \(g(r)\sim 1-1/r^{2}\) in the large \(r\) limit. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of criteria
Figure 13: Same as in Figure 12, but for \(T=0.6\). For this temperature, we have \(\alpha=0.655\), which would say that at this temperature, the Riesz gas is a quasisolid. We emphasize that it is not evident that the oscillations of the pair correlation \(g(r)\) eventually vanish in the large \(r\) limit, as expected for a quasisolid. This is a well-known problem in the literature, which makes the BKT-type transitions very delicate to detect numerically.
a BKT transition similar to that of the Dyson log-gas. The second one corresponds to the critical temperature below which the system is crystallized. The transition curves interpolate between the known -- or at least expected -- phase diagrams of the 1D Coulomb gas and the Dyson log-gas. Although a precise quantitative determination of the critical temperatures is delicate, using a set of different criteria we were able to draw a schematic phase diagram of the 1D Riesz gases with respect to the temperature \(T\) and the exponent \(s\).
The author is thankful to Mathieu Lewin (CNRS & Ceremade, Universite Paris-Dauphine PSL) as well as David Dereudre (Universite de Lille) for useful discussions. This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement MDFT No. 725528).
Figure 14: Phase diagram of the one-dimensional Riesz gas with respect to the effective temperature \(T\) and the exponent \(-1\leq s\leq 0\). The blue dots correspond to the couples \((s,T)\) for which we found the system to be in a fluid phase. The orange dots correspond to that of the quasi-solid phase, for which the structure factor \(S(k)\) has an (integrable) peak at \(k=1\) and \(S(k)\) behaves linearly in the limit \(k\to 0\). Finally, the red dots correspond to the solid phase, for which \(S(k)\) has a Dirac mass at \(k=1\) and \(S(k)\) behaves as \(|k|^{1-s}\). We then draw schematically the transition curve \(\widetilde{T}_{s}\), corresponding to the BKT transition, and the transition curve \(T_{s}\), corresponding to the freezing transition. Those curves are seen to be consistent with the phase diagrams of the Coulomb gas and the Dyson log-gas, for which \(s=-1\) and \(s=0\) respectively.
## Appendix A Implementation
We very briefly comment on the algorithm used in this paper. Our code was written in Julia. The 1D Riesz gases were simulated using a _random walk Metropolis-Hastings_ with appropriate tuning of the size of the jump proposal to achieve a good acceptance rate. A cluster architecture was used to produce many samples in a parallel fashion. The periodic Riesz potential \(\widetilde{v}_{s,L}\) was pre-computed by tabulation and interpolation so as not to use special functions whose evaluation is rather time-consuming. The pair correlation \(g(r)\) is obtained by binning. That is, for each sample generated by the chain, the mutual distances between the particles in the configurations are computed, and those distances are binned into a histogram which is then properly normalized. The number of bins should evidently depend on the number of samples used and the accuracy needed. As a general rule, we chose to consider ten bins _per_ unit length. We actually found this to be usually consistent with the _Freedman-Diaconis rule_ [70]. To the best of our knowledge, binning the pair correlation seems to be the most commonly used method, with the exception of the work [71]. Once the pair correlation has been properly binned into a histogram, we compute the structure factor \(S(k)\) by considering the (discrete) Fourier transform of this histogram, which is then only defined on the values \(\mathbb{Z}/N\). Once again, this seems to be the usual procedure in the literature to compute \(S(k)\) [71, 72]. Computations were carried out using a number of particles up to \(N\sim 500\), and we found no significant differences beyond intrinsic noise with the less greedy choice of \(N\sim 100\), so that most computations in this paper were carried out for a number of particles \(N\) of this order.
Footnote 3: Our code is available at [https://github.com/rodriguel/PTRiesz](https://github.com/rodriguel/PTRiesz).
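To make the preceding description concrete, the following is a minimal Julia sketch of a random-walk Metropolis sweep and of the binning of \(g(r)\), in the spirit of what is described above; it is an illustration, not the code of footnote 3. Here `pairpot` stands for the (tabulated) periodic Riesz potential, the particles live on a circle of length \(N\) at density \(\rho=1\), and the step size and sweep count are illustrative values; the full-energy recomputation at each move is kept for readability, although an \(O(N)\) update of the energy difference is what one would use in practice.

```julia
perdist(a, b, N) = (d = abs(a - b); min(d, N - d))      # periodic distance

energy(x, N, pairpot) =
    sum(pairpot(perdist(x[i], x[j], N)) for i in eachindex(x), j in eachindex(x) if i < j)

# One run of random-walk Metropolis at inverse temperature β.
function metropolis!(x, β, N, pairpot; σ = 0.1, sweeps = 5_000)
    E = energy(x, N, pairpot)
    for _ in 1:sweeps, i in eachindex(x)
        old = x[i]
        x[i] = mod(old + σ * randn(), N)
        Enew = energy(x, N, pairpot)
        if rand() < exp(-β * (Enew - E))
            E = Enew                                     # accept the move
        else
            x[i] = old                                   # reject the move
        end
    end
    return x
end

# Bin the mutual distances of a list of sampled configurations into a histogram
# approximating g(r), with ten bins per unit length as in the text (ρ = 1).
function bin_pair_correlation(samples, N; bins_per_unit = 10)
    rmax = N / 2
    nb = round(Int, bins_per_unit * rmax)
    hist = zeros(nb)
    for x in samples, i in eachindex(x), j in eachindex(x)
        i < j || continue
        d = perdist(x[i], x[j], N)
        hist[clamp(ceil(Int, d / rmax * nb), 1, nb)] += 1
    end
    Δr = rmax / nb
    r = [(b - 0.5) * Δr for b in 1:nb]
    g = hist ./ (length(samples) * length(first(samples)) * Δr)
    return r, g
end
```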
|
2309.12909 | A Kümmer construction for Chern-Ricci flat balanced manifolds | Given a non-K\"ahler Calabi-Yau orbifold with a finite family of isolated
singularities endowed with a Chern-Ricci flat balanced metric, we show, via a
gluing construction, that all its crepant resolutions admit Chern-Ricci flat
balanced metrics, and discuss applications to the search of solutions for the
Hull-Strominger system. We also describe the scenario of singular threefolds
with ordinary double points, and see that similarly is possible to obtain
balanced approximately Chern-Ricci flat metrics. | Federico Giusti, Cristiano Spotti | 2023-09-22T14:52:59Z | http://arxiv.org/abs/2309.12909v3 | # A Kummer construction for Chern-Ricci flat balanced manifolds
###### Abstract.
Given a non-Kahler Calabi-Yau orbifold with a finite family of isolated singularities endowed with a Chern-Ricci flat balanced metric, we show, via a gluing construction, that all its crepant resolutions admit Chern-Ricci flat balanced metrics, and discuss applications to the search of solutions for the Hull-Strominger system. We also describe the scenario of singular threefolds with ordinary double points, and see that it is similarly possible to obtain balanced approximately Chern-Ricci flat metrics.
Key words and phrases: complex non-Kahler manifolds, balanced metrics, Chern-Ricci flat metrics, Calabi-Yau manifolds, Hull-Strominger system. 2010 Mathematics Subject Classification: 53C55, 53C25, 53C07
## 1. Introduction
With the ultimate aim of geometrizing and classifying, one of the most studied problems in complex geometry is the existence of hermitian metrics that can be regarded as _special_. Through the years, the Kahler case has been the one studied and understood the most; however, in the last decades the interest towards the non-Kahler world has been steadily increasing, leading to the search for special metrics also in this context. While in the Kahler case special metrics arise naturally, the non-Kahler scenario is too wild to guide us directly towards some central notion of special metric. Nevertheless, one can find indications on the path to follow by looking at the Kahler world; more specifically, given an \(n\)-dimensional complex manifold \((M,J)\), if it is Kahler the obvious class of special (on a first level) metrics is given exactly by Kahler metrics - which we recall are hermitian metrics \(h\) whose fundamental form \(\omega:=h(J\cdot,\cdot)\) is \(d\)-closed. In addition, this condition can also be combined with the notion of _Einstein metric_ (thanks to the properties of Kahler metrics) from the general riemannian case, giving rise to the notion of Kahler-Einstein metrics, which are universally regarded as the "most special" in the Kahler world. Likewise, other notions of special Kahler metrics have been introduced and studied (some of them are still central in the study of Kahler geometry), like _constant scalar curvature Kahler_ (cscK) metrics, or the more general class of _extremal Kahler_ metrics (introduced by Calabi in [C]); however, they all impose a curvature condition on the metric. This suggests that, when searching for special metrics in the non-Kahler case, we shall ask for these metrics to be special under two aspects: the cohomological one (satisfying a condition possibly generalizing the Kahler one) and the curvature one.
Regarding the cohomological aspect, several conditions have been introduced that generalize the Kahler one, and one of the most studied is given by \(d\omega^{n-1}=0\), identifying the class of _balanced_ metrics.
Our interest in Chern-Ricci flat balanced metrics actually comes from the realm of Calabi-Yau geometry. Indeed, for a not necessarily Kahler Calabi-Yau manifold (i.e. a complex manifold endowed with a holomorphic volume form), Hull and Strominger introduced (respectively in [Hu] and [S]) a system of four equations coming from superstring theory known as the _Hull-Strominger system_, whose solutions have proved to be extremely hard to construct (see [GF] for a full presentation of the system and some known solutions, together with several other references such as [AGF], [FuY], [LY3], [P], [TY] and the very recent [CPY2], [FeY] for the invariant case, [PPZ] for a flow approach, and the recent moment map picture from [GFGM]). The problem of solving this system, apart from its physical meaning, carries great geometric interest, since it generalizes the Calabi-Yau condition to the non-Kahler framework, and it holds a central role in the geometrization conjecture for compact Calabi-Yau threefolds known as _Reid's Fantasy_ (see [R]). This last conjecture, in particular, states that all compact Kahler Calabi-Yau threefolds can be connected through a finite number of _conifold transitions_ (introduced by Clemens and Friedman, see [F]), i.e. a procedure consisting of the contraction of a finite family of disjoint \((-1,-1)\)-curves in a compact Calabi-Yau threefold, followed by the smoothing of the ordinary double points obtained from the previous step. This framework further motivates our interest in Chern-Ricci flat balanced metrics, since it is directly related to one of the equations of the Hull-Strominger system, namely the _conformally balanced equation_, which on a compact Calabi-Yau manifold \((X,\Omega)\) - where \(\Omega\) is the holomorphic volume form - is an equation for hermitian metrics (actually their fundamental forms) \(\omega\) given by \(d(||\Omega||_{\omega}\omega^{n-1})=0\), which is clearly satisfied by balanced Chern-Ricci flat hermitian metrics. Moreover, our result takes a first step towards solving the problem proposed by Becker, Tseng and Yau in (Section 6 of) [BTY].
A natural question that arises from this construction, in the setting of the Hull-Strominger system and Reid's Fantasy, is whether this strategy can be adapted to the case of singular threefolds with a finite family of ordinary double points, aiming (in some sense) at "reversing the arrow" in the construction done by Fu, Li and Yau in [FLY] and Collins, Picard and Yau in [CPY1]. Our strategy in this scenario unfortunately carries a complication that is hidden in the asymptotic behaviour of the standard Calabi-Yau metric \(\omega_{co,a}\) (introduced by Candelas and de la Ossa, see [CO]) on the small resolution of the standard conifold, thus in the last section we shall discuss in more detail the difficulties of this case and some possible paths towards a solution of the problem in this other scenario. We are nevertheless able to obtain a partial result, namely the following.
**Proposition 1.2**.: _Let \((\tilde{M},\tilde{\omega})\) be a smoothable projective Kahler Calabi-Yau nodal threefold (with \(\tilde{\omega}\) a singular Calabi-Yau metric), and let \(M\) be a compact (not necessarily Kahler) small resolution of \(\tilde{M}.\) Then \(M\) admits a balanced approximately Chern-Ricci flat metric \(\omega\) such that_
\[[\omega^{2}]=[\tilde{\omega}^{2}]+\varepsilon^{4}[\mathbb{P}^{1}].\]
The paper is structured as follows. In Section 2, after giving examples, we present the first step of our work, consisting of the construction of a balanced metric on the crepant resolution made with the objects previously introduced, together with the construction of a global holomorphic volume, in order to express the Chern-Ricci potential for our new balanced metric and obtain estimates for it. In Section 3, we apply a deformation argument to obtain a genuine Chern-Ricci flat balanced metric and we discuss its possible applications to the search of solutions for the Hull-Strominger
system. In the last section, i.e. Section 4, we take a look at the case of Ordinary Double Points on threefolds, walk again through the gluing process from Section 2 to produce again a balanced approximately Chern-Ricci flat metric, and discuss the difficulties that arise if we try to repeat the deformation argument in this case.
**Acknowledgements.** Both the authors are supported by Villum Young Investigator 0019098.
The authors would like to thank Mario Garcia-Fernandez for useful conversations and remarks.
## 2. The pre-gluing metric
Following several known gluing constructions from the literature (such as [AP], [BM], [J] and many others), our gluing process will be made of two main parts: the construction of a pre-gluing metric (which will be done in this section) obtained from a rough cut-off procedure providing an approximate solution to the problem, and a perturbative argument to obtain a genuine solution.
The goal of this section will be to prove the following:
**Proposition 2.1**.: _Let \(\tilde{M}\) be a Calabi-Yau orbifold with a finite family of isolated singularities, endowed with a Chern-Ricci flat balanced singular metric \(\tilde{\omega}\), and suppose that it admits \(M\) a crepant resolution. Then \(M\) admits \(\omega\) an approximately Chern-Ricci flat balanced metric._
### Chern-Ricci flat balanced orbifolds and their crepant resolutions
Before discussing the construction, we shall establish some notation for the remainder of the paper, and also use the occasion to briefly recall some known results from the literature, in order to better understand the framework we will be working in.
Throughout the paper we will denote with \(\tilde{M}\) an \(n\)-dimensional non-Kahler Calabi-Yau orbifold, i.e. a complex orbifold endowed with a holomorphic volume form \(\tilde{\Omega}\), with a finite family of isolated singularities, such that it admits a crepant resolution \(M\).
_Remark 2.2_.: A necessary condition for an orbifold to admit crepant resolutions is that the isotropy groups corresponding to the singularities are subgroups of \(SL(n,\mathbb{C})\), and for \(n=3\) it is also sufficient (see [J]), making it a useful criterion to search for examples.
_Remark 2.3_.: The exceptional set of a crepant resolution of an orbifold singularity is always divisorial, i.e. in codimension 1. Indeed, it is known that orbifold singularities are "mild", meaning that (see for example [KM]) every orbifold is normal and \(\mathbb{Q}\)-factorial. But the existence of a (quasi-projective) small resolution would imply that the orbifold is not \(\mathbb{Q}\)-factorial, i.e. a contradiction.
We will also assume that \(\tilde{M}\) is equipped with a singular balanced Chern-Ricci flat metric \(\tilde{\omega}\), and thus it is worth giving examples of spaces that satisfy our assumptions, in order to ensure that we are working on an actually existing class of spaces.
_Example 2.4_.: A first, trivial example is the one of quotients of tori with isolated orbifold singularities of the form \(\mathbb{C}^{3}/\mathbb{Z}_{3}\). In these cases, we know that the quotient is equipped with a singular Kahler Calabi-Yau metric, and D. Joyce (in [J], for example) has shown that also their crepant resolutions admit Kahler Calabi-Yau metrics, which can be obtained via gluing construction in the same fashion as the one we are about to present. However, since every Kahler Ricci-flat metric is
also balanced Chern-Ricci flat, we can still consider these spaces in our class, and - as we will see ahead - our construction does not ensure that the Chern-Ricci flat balanced metric obtained needs to coincide with the Kahler Calabi-Yau one, since the cohomology class preserved is going to be the balanced one, on which there are no known uniqueness results.
A possible variation on this argument could be to apply the (orbifold version of) the result of Tosatti and Weinkove in [14], which ensures us that we can find a Chern-Ricci flat balanced metric on the singular quotient of the torus, and thus provides a suitable metric for our construction.
_Example 2.5_.: A more interesting example can be obtained on torus bundles on some algebraic K3 surfaces. Indeed, Goldstein and Prokushkin produced in [13] a family of \(T^{2}\) bundles on \(K3\) surfaces that do not admit Kahler metrics; and they showed that these threefolds can be endowed with a balanced Chern-Ricci flat metric of the form
\[\eta=\pi^{*}\eta_{K3}+\frac{i}{2}\theta\wedge\overline{\theta},\]
where \(\eta_{K3}\) is the Calabi-Yau metric on the \(K3\), and \(\theta\) is a \((1,0)\)-form arising from the duals of the horizontal lift of the coordinate vector fields on the \(K3.\) These bundles \(X\) inherit also a non-Kahler Calabi-Yau structure, i.e. a holomorphic volume form given by
\[\Omega=\Omega_{K3}\wedge\theta.\]
Now, while these are the building blocks of the Fu and Yau solutions for the Hull-Strominger system (see [12]), Becker, Tseng and Yau constructed (in [1], Section 6) a \(\mathbb{Z}_{3}\) action on a subclass of the aforementioned torus bundles for some special choices of algebraic \(K3\)'s, of the form
\[\rho:(z_{0},z_{1},z_{2},z_{3},z_{4},z)\longrightarrow(\zeta^{2}z_{0},\zeta^{2 }z_{1},\zeta z_{2},z_{3},z_{4},\zeta^{2}z),\]
with \(\zeta\) a cube root of unity different from \(1\), and where the \(z_{i}\)s are the homogeneous coordinates of the \(\mathbb{P}^{3}\) in which the \(K3\) lies, and \(z\) is the fiber coordinate. This action, despite not preserving the Calabi-Yau structures of the base and the fibres, preserves \(\Omega\), together with the Chern-Ricci flat balanced metric \(\eta\), producing an orbifold with 9 isolated singularities of the form \(\mathbb{C}^{3}/\mathbb{Z}_{3}\), i.e. exactly of the type of orbifolds we are interested in working with.
_Example 2.6_.: A further example comes from an action of \(\mathbb{Z}_{4}\) on the Iwasawa manifold, constructed by Sferruzza and Tomassini in [12]. In said paper they showed that the action of \(Z_{4}=\langle\sigma\rangle\) on \(\mathbb{C}^{3}\), where
\[\sigma(z_{1},z_{2},z_{3}):=(iz_{1},iz_{2},-z_{3}),\]
descends to the quotient corresponding to the (standard) Iwasawa manifold, producing 16 isolated singular points. Moreover, if we recall the standard coframe of invariant (with respect to the Heisenberg group operation) 1-forms
\[\varphi_{1}:=dz_{1},\quad\varphi_{2}:=dz_{2},\quad\varphi_{3}:=dz_{3}-z_{2}dz_ {1},\]
this can be used to construct a balanced metric
\[\omega:=\frac{i}{2}(\varphi_{1}\wedge\varphi_{\bar{1}}+\varphi_{2}\wedge \varphi_{\bar{2}}+\varphi_{3}\wedge\varphi_{\bar{3}}),\]
which descends to a Chern-Ricci flat balanced metric on the Iwasawa manifold, and is clearly invariant through \(\sigma\), as well as the standard holomorphic volume of \(\mathbb{C}^{3}\). Thus the quotient of the Iwasawa manifold through this action gives again an orbifold satisfying our hypotheses.
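For completeness, let us sketch why the metric \(\omega\) above is balanced; this is a standard check. Using the structure equations \(d\varphi_{1}=d\varphi_{2}=0\) and \(d\varphi_{3}=\varphi_{1}\wedge\varphi_{2}\) (which follow from the convention \(\varphi_{3}=dz_{3}-z_{2}dz_{1}\) chosen above; the sign plays no role in what follows), one computes
\[d\omega=\frac{i}{2}\left(d\varphi_{3}\wedge\varphi_{\bar{3}}-\varphi_{3}\wedge d\varphi_{\bar{3}}\right)=\frac{i}{2}\left(\varphi_{1}\wedge\varphi_{2}\wedge\varphi_{\bar{3}}-\varphi_{3}\wedge\varphi_{\bar{1}}\wedge\varphi_{\bar{2}}\right)\neq 0,\]
so that \(\omega\) is not Kahler, while every summand of \(d\omega\wedge\omega\) contains a repeated \(1\)-form, hence \(d(\omega^{2})=2\,d\omega\wedge\omega=0\) and \(\omega\) is indeed balanced.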
Our aim is to work on the crepant resolution \(M\), and obtain via a gluing construction (using Joyce's ALE metrics on the bubble, see [J]) a family of Chern-Ricci flat balanced metrics from \((\tilde{M},\tilde{\omega})\). In the following we will focus on the construction of the pre-gluing metric on \(M\), that will be an _approximately_ Chern-Ricci flat balanced metric. To make the presentation more clear, we will divide the process into three natural steps, and for simplicity assume that \(\tilde{M}\) has just one singularity (the process obviously applies analogously to the case in which the singularities are any finite number). We are also going to compute explicitly a holomorphic volume form for \(M\) (starting from the one on \(\tilde{M}\)), since such form is a crucial ingredient for the deformation argument in the following section, as it can be used to obtain a global expression for the Chern-Ricci potential.
### Pre-gluing - Step 1
We first glue together the metric \(\tilde{\omega}\) with the flat metric \(\omega_{o}\) centered at the singularity so that the resulting metric is balanced. This actually follows from the following lemma, which holds for any balanced manifold and recovers a weaker version of the strategy used with normal coordinates in the Kahler case.
**Lemma 2.7**.: _Given \((X,\eta)\) an \(n\)-dimensional balanced orbifold with isolated singularities, for every \(x\in X\) it exists a sufficiently small \(\varepsilon>0\), coordinates \(z\) centered at \(x\) and a balanced metric \(\eta_{\varepsilon}\) such that_
\[\eta_{\varepsilon}=\begin{cases}\omega_{o}&\text{if }\,|z|<\varepsilon\\ \eta&\text{if }\,|z|>2\varepsilon\end{cases},\]
_where \(\omega_{o}\) is the flat metric around \(x\), and such that \(|\eta_{\varepsilon}-\omega_{o}|_{\omega_{o}}<c\varepsilon\) on \(\{\varepsilon\leq|z|\leq 2\varepsilon\}\)._
Proof.: If \((X,\eta)\) is an \(n\)-dimensional balanced orbifold and we fix any point \(x\in X\), we can choose coordinates \(z\) around \(x\) such that, in a sufficiently small neighborhood of the point, it holds
\[\eta=\omega_{o}+O(|z|),\]
where \(\omega_{o}\) is the flat metric in a neighborhood of \(x\) in the coordinates \(z\). But now this means that if we take the \(n-1\) power we obtain
\[\eta^{n-1}=\omega_{o}^{n-1}+\alpha,\]
where \(\alpha\) is a closed \((n-1,n-1)\)-form (thanks to the facts that \(\eta\) is balanced and \(\omega_{o}\) is Kahler) such that \(\alpha=O(|z|)\). Thus if we restrict to a simply connected neighborhood of \(x\), there exists a form \(\beta\) such that
\[\alpha=d\beta,\]
and it can be chosen to be such that \(\beta=O(|z|^{2})\), since if we decompose \(\beta=\beta_{l}+\beta_{q}\), where \(\beta_{l}\) is the component depending at most linearly on \(|z|\) and \(\beta_{q}\) is the quadratic one, the fact that \(\alpha=O(|z|)\) forces \(d\beta_{l}=O(|z|)\) which holds if and only if \(d\beta_{l}=0\), thus we can always choose
\(\beta=\beta_{q}\).
Hence, if we introduce a cut-off function
\[\chi(y):=\begin{cases}0&\text{if }y\leq 1\\ \text{non decreasing}&\text{if }1<y<2\\ 1&\text{if }y\geq 2\end{cases}\]
and call \(r(z):=|z|\) the (flat) distance from \(x\), we can take \(\chi_{\varepsilon}(y):=\chi(y/\varepsilon)\) and define
\[\eta_{\varepsilon}^{n-1}:=\omega_{o}^{n-1}+d(\chi_{\varepsilon}(r)\beta).\]
Here, the notation \(\eta_{\varepsilon}^{n-1}\) makes sense thanks to [M], since on the gluing region \(\{\varepsilon\leq r\leq 2\varepsilon\}\) we have \(|d\chi_{\varepsilon}|\leq C\varepsilon^{-1}\), \(|\beta|=O(\varepsilon^{2})\) and \(|d\beta|=|\alpha|=O(\varepsilon)\), so that
\[|d(\chi_{\varepsilon}(r)\beta)|\leq|d\chi_{\varepsilon}||\beta|+|\chi_{ \varepsilon}||d\beta|\leq c\varepsilon,\]
ensuring that \(\eta_{\varepsilon}^{n-1}>0\). Thus we have obtained a balanced metric \(\eta_{\varepsilon}\) on \(X\setminus\{x\}\) which is exactly flat in a neighborhood of \(x\). The same argument applies to the orbifold points after taking a cover chart.
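For concreteness, we note that a standard smooth choice for the cut-off profile \(\chi\) used above is given, for instance, by
\[\chi(y):=\frac{f(y-1)}{f(y-1)+f(2-y)},\qquad f(t):=\begin{cases}e^{-1/t}&\text{if }t>0,\\ 0&\text{if }t\leq 0,\end{cases}\]
which is smooth, non decreasing, identically \(0\) for \(y\leq 1\) and identically \(1\) for \(y\geq 2\).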
This also shows that, for example, there is a _canonical_ choice of balanced metric on the blow-up at a point of a balanced manifold, since thanks to this construction, any balanced metric can be glued to the Burns-Simanca metric preserving the balanced condition.
Thus we can start from our Chern-Ricci flat balanced metric \(\tilde{\omega}\) on \(\tilde{M}\) and obtain the corresponding cut-off metric \(\tilde{\omega}_{\varepsilon}\) in a neighborhood of the orbifold singularity \(x\) by choosing coordinates \(z\) on the orbifold cover chart. For our construction, it will however be more convenient to slightly vary the cut-off function and, for \(p>0\), choose
\[\chi_{\varepsilon,p}(y):=\chi(y/\varepsilon^{p})\]
so that the gluing region for \(\tilde{\omega}_{\varepsilon}\) becomes \(\{\varepsilon^{p}<r<2\varepsilon^{p}\}\). Also, using again the results in [M], we can notice that, even though we are cutting at the level of \((n-1,n-1)\)-forms, on the gluing region the metric remains close to the flat metric; indeed:
_Remark 2.8_.: Notice that we can choose a basis \(\{e_{j}\}\) of \(1\)-forms diagonalizing simultaneously \(\omega_{o}\) (we can actually assume it to be the identity) and \(\tilde{\omega}_{\varepsilon}\); this means that \(\omega_{o}^{n-1}\) and \(\tilde{\omega}_{\varepsilon}^{n-1}\) are also diagonal (in the sense of \((n-1,n-1)\)-forms), implying that the term \(O(r)\) is also necessarily diagonal with respect to this basis. Thus we can write
\[\tilde{\omega}_{\varepsilon}^{n-1}=\sum_{j=1}^{n}(1+O(r))\widehat{e_{j} \wedge Je_{j}}\]
and applying Michelsohn's result with \(\Lambda_{j}=1+O(r)\), we obtain \(\tilde{\omega}_{\varepsilon}=\sum_{j=1}^{n}\lambda_{j}e_{j}\wedge Je_{j}\), with
\[\lambda_{j}=\frac{((1+O(r))\cdots(1+O(r)))^{\frac{1}{n-1}}}{1+O(r)}=1+O(r),\]
which implies, again thanks to Michelsohn's theorem,
\[\tilde{\omega}_{\varepsilon}=\sum_{j=1}^{n}\left(1+O(r)\right)e_{j}\wedge Je_{j}=\omega_{o}+O(r),\]
showing also that \(d\tilde{\omega}_{\varepsilon}\) has uniformly bounded norm.
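For the reader's convenience, we recall the elementary inversion behind the last two displays (with the dimensional constant coming from the multinomial expansion absorbed into the \(\Lambda_{j}\)'s): writing \(\tilde{\omega}_{\varepsilon}=\sum_{j}\lambda_{j}\,e_{j}\wedge Je_{j}\), one has
\[\tilde{\omega}_{\varepsilon}^{\,n-1}=\sum_{j=1}^{n}\Big(\prod_{i\neq j}\lambda_{i}\Big)\widehat{e_{j}\wedge Je_{j}},\qquad\text{hence}\qquad\Lambda_{j}=\prod_{i\neq j}\lambda_{i}\quad\text{and}\quad\lambda_{j}=\frac{(\Lambda_{1}\cdots\Lambda_{n})^{\frac{1}{n-1}}}{\Lambda_{j}},\]
which is exactly the expression used above with \(\Lambda_{j}=1+O(r)\).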
### Pre-gluing - Step 2
In this second step we instead perform the gluing between Joyce's Kahler-Ricci flat ALE metric \(\omega_{ALE}\), recalled in Section 2, and the flat metric \(\omega_{o}\) of \(\mathbb{C}^{n}\), on the crepant resolution \(\hat{X}\) of the singular model \(\mathbb{C}^{n}/G\); here we will actually be able to do it without losing the Kahler condition. To do this we recall that away from the singularity it holds
\[\omega_{ALE}=\omega_{o}+Ai\partial\overline{\partial}(r^{2-2n}+o(r^{2-2n})),\]
where \(A>0\) is a constant and \(r\) is the (flat) distance from the singularity. This suggests introducing a large parameter \(R\) and a smooth cut-off function \(\chi_{R}(x):=\chi_{2}(x/R)\) on \([0,+\infty)\) such that
\[\chi_{2}(y):=\begin{cases}1&\text{if }y\leq\frac{1}{4},\\ \text{non increasing}&\text{if }\frac{1}{4}<y<\frac{1}{2},\\ 0&\text{if }y\geq\frac{1}{2},\end{cases}\]
from which we introduce the family of closed \((1,1)\)-forms
\[\omega_{R}=\omega_{o}+A\,i\partial\overline{\partial}\left(\chi_{R}(r)(r^{2-2n}+o(r^{2-2n}))\right).\]
Once again, on the gluing region \(G_{R}:=\{\frac{R}{4}\leq r\leq\frac{R}{2}\}\) we have
\[|\omega_{R}-\omega_{o}|_{\omega_{o}}\leq|A\,i\partial\overline{\partial}\left(\chi_{R}(r)(r^{2-2n}+o(r^{2-2n}))\right)|_{\omega_{o}}\leq cR^{-2n}\leq cr^{-2n},\]
which clearly implies the positivity of \(\omega_{R}\) also on \(G_{R}\) (as long as \(R\) is chosen to be sufficiently large) ensuring that \(\omega_{R}\) is a Kahler metric on \(\hat{X}\) which is exactly flat outside of a compact set.
### Pre-gluing - Step 3
In this third and last step we want to glue together the metric \(\tilde{\omega}_{\varepsilon}\) from Step 1 with the metric \(\omega_{R}\) from Step 2, by matching isometrically the exactly conical regions. In order to do this we are going to need to rescale the metric on \(\hat{X}\) by a constant \(\lambda>0\), and we will now see that this is a geometric constant, dictated by the geometries of the two metrics we are gluing together.
In what follows we will denote with \(z\) the coordinates on \(M_{reg}\) nearby the singularity and with \(\zeta\) the coordinates on \(\hat{X}\), both given by the identification with the singularity model \(\mathbb{C}^{n}/G\). We then consider the regions
\[C_{R}:=\{R/4\leq r(\zeta)\leq 2R\}\subseteq\hat{X}\qquad\text{and}\qquad C_{ \varepsilon}:=\{\varepsilon^{p}/4\leq r(z)\leq 2\varepsilon^{p}\}\subseteq M_{reg}\]
and define a biholomorphism between them by imposing
\[\zeta=\left(\frac{R}{\varepsilon^{p}}\right)z.\]
From this expression we have that on the identified region the following identity holds
\[r(\zeta)=r\left(\left(\frac{R}{\varepsilon^{p}}\right)z\right)=\frac{R}{ \varepsilon^{p}}r(z)\]
which yields \(\lambda=\lambda(\varepsilon,R):=\left(\frac{\varepsilon^{p}}{R}\right)^{2}\). From this follows \(\lambda r^{2}(\zeta)=r^{2}(z)\), and thus on the identified regions \(C^{\prime}_{R}:=\{R/2\leq r(\zeta)\leq R\}\simeq\{\varepsilon^{p}/2\leq r(z)\leq\varepsilon^{p}\}=:C^{\prime}_{\varepsilon}\), where both metrics are exactly flat, it holds
\[\lambda\omega_{o}(\zeta)=\omega_{o}(z),\qquad\text{and consequently}\qquad\lambda\omega_{R}=\tilde{\omega}_{\varepsilon}.\]
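Concretely, with the normalization \(\omega_{o}=\frac{i}{2}\sum_{j}dz_{j}\wedge d\overline{z}_{j}\) (any other constant normalization works equally well), the identification \(\zeta=(R/\varepsilon^{p})z\) gives
\[\omega_{o}(\zeta)=\frac{i}{2}\sum_{j}d\zeta_{j}\wedge d\overline{\zeta}_{j}=\left(\frac{R}{\varepsilon^{p}}\right)^{2}\frac{i}{2}\sum_{j}dz_{j}\wedge d\overline{z}_{j}=\lambda^{-1}\omega_{o}(z),\]
which is precisely the first identity above.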
Hence, \(\lambda\) is the needed rescaling factor, which allows us to define the glued family of balanced metrics on the crepant resolution \(M\) as
\[\omega_{\varepsilon,R}:=\begin{cases}\lambda\omega_{R}&\text{on }r(\zeta)\leq R,\\ \omega_{o}&\text{on }\frac{1}{2}\varepsilon^{p}\leq r(z)\leq\varepsilon^{p},\\ \tilde{\omega}_{\varepsilon}&\text{on }r(z)\geq\varepsilon^{p}.\end{cases}\]
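Note that the three cases are compatible: under the identification, the first region is \(\{r(z)\leq\varepsilon^{p}\}\), and on the overlap \(\{\frac{1}{2}\varepsilon^{p}\leq r(z)\leq\varepsilon^{p}\}\) both pieces are exactly flat, since there
\[\lambda\omega_{R}=\lambda\omega_{o}(\zeta)=\omega_{o}(z)=\tilde{\omega}_{\varepsilon},\]
so that \(\omega_{\varepsilon,R}\) is a well defined balanced metric on \(M\).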
_Remark 2.9_.: Notice that this first construction implies an Alessandrini-Bassanelli type result (see [1]) since it shows that any compact complex manifold bimeromorphic to a balanced orbifold with isolated singularities is also balanced.
In order to understand better the geometry of this new family of metrics, we shall again estimate its distance from the flat metric on the gluing region. Since inside said region there is also an exactly flat part - whose geometry is understood - separating the two gluing annuli coming from the first two steps, we can estimate the distance separately on those two annuli and then take the maximum.
Clearly, the metric is unaltered on the gluing region from Step 1, thus we still have on \(G_{\varepsilon}\) that
\[|\nabla^{k}_{\omega_{o}}(\omega-\omega_{o})|_{\omega_{o}}\leq cr^{1-k},\]
for all \(k\geq 0\).
On the other hand, since in this step we had to rescale the metric on \(\hat{X}\), we have to check how this has affected the distance from the cone. To have clearer estimates, we will express also this one in terms of the small coordinates \(z\), and we will relate the parameters \(R\) and \(\varepsilon\) by choosing \(R=\varepsilon^{-q}\), with \(q>0\). We first notice that on \(G_{R}\) (more precisely, on the corresponding region through the biholomorphism) it holds
\[\langle\omega_{\varepsilon,R}-\omega_{o},\omega_{\varepsilon,R}- \omega_{o}\rangle_{\omega_{o}}(z) =\lambda^{-2}\langle\lambda(\omega_{R}-\omega_{o}),\lambda(\omega _{R}-\omega_{o})\rangle_{\omega_{o}}(\zeta)\] \[=\langle\omega_{R}-\omega_{o},\omega_{R}-\omega_{o}\rangle_{ \omega_{o}}(\zeta)\]
implying that \(|\omega_{\varepsilon,R}-\omega_{o}|_{\omega_{o}}(z)=|\omega_{R}-\omega_{o}|_{ \omega_{o}}(\zeta)\). From here, we can recall the estimate done in Step 2 and obtain
\[|\omega_{\varepsilon,R}-\omega_{o}|_{\omega_{o}}(z)\leq|\omega_{R}-\omega_{o}|_{\omega_{o}}(\zeta)\leq cr^{-2n}(\zeta)=c\varepsilon^{2nq}\leq cr^{2nq/p}(z),\]
which implies that on the whole gluing region, for all \(k\geq 0\), it holds
\[|\nabla^{k}_{\omega_{o}}(\omega_{\varepsilon,R}-\omega_{o})|_{\omega_{o}} \leq cr^{m-k},\]
where \(m=\min\{1,2nq/p\}\).
### The Chern-Ricci potential
In order to use this description of the metrics to estimate the Chern-Ricci potential on the gluing region we are also going to need to understand how the holomorphic volume form of the resolution is related to the holomorphic volume of our background Calabi-Yau orbifold.
Before doing it we start by fixing some notation. Denote
* with \(\tilde{\Omega}\) the holomorphic volume of \(M_{reg}\) such that \[\tilde{\omega}^{3}=i\tilde{\Omega}\wedge\overline{\tilde{\Omega}};\]
* with \(\hat{\Omega}\) the rescaled holomorphic volume of the singularity model \(\mathbb{C}^{n}/G\) (and its crepant resolution \(\hat{X}\)) in order to match the metric rescaling, i.e. \(\hat{\Omega}:=\lambda^{3/2}\Omega_{o}\) where \[(\omega_{ALE})^{3}=i\Omega_{o}\wedge\overline{\Omega}_{o}.\]
Now, in a neighborhood of the singularity there exists a holomorphic function \(h\) such that
\[\tilde{\Omega}=h\Omega_{0}.\]
On the other hand, under the rescaling biholomorphism that glues \(\hat{X}\) to \(M_{reg}\), we identify \(\Omega_{o}\) around the singularity with \(\hat{\Omega}\); thus we can read \(h\) as a holomorphic function on the singularity model, extend it holomorphically to the whole \(\hat{X}\), and finally glue \(h\hat{\Omega}\) with \(\tilde{\Omega}\) to obtain a holomorphic volume \(\Omega\) for \(M\).
We can also obtain information on \(h\) by noticing that, since \(\tilde{\omega}\) is asymptotic to \(\omega_{o}\) around the singularity, we obtain that around \(x\) it holds
\[(1+O(|z|))\omega_{o}^{3}=\tilde{\omega}^{3}=i\tilde{\Omega}\wedge\overline{ \tilde{\Omega}}=|h|^{2}i\Omega_{o}\wedge\overline{\Omega}_{o}=|h|^{2}\omega_{o }^{3}\]
from which follows
\[|h|=1+O(r),\]
from which, by continuity, we have that \(|h|^{2}\equiv 1\) on the exceptional part.
Thus we can define a global Chern-Ricci potential as
\[f=f_{p,q,\varepsilon}:=\log\left(\frac{i\,\Omega\wedge\overline{\Omega}}{\omega^{3}}\right),\]
and conclude this section by describing the behaviour of \(f\) in all the regions of \(M\), showing that it is suitable for a deformation argument similar to the one in [BM]. We have
* on \(\{r(z)>2\varepsilon^{p}\}\) we have \(\omega=\tilde{\omega}\) and \(\Omega=\tilde{\Omega}\), thus \(f\equiv 0\);
* on \(\{\varepsilon^{p}\leq r(z)\leq 2\varepsilon^{p}\}\) we have \(\omega=\omega_{o}+O(r)\) and \(i\,\Omega\wedge\overline{\Omega}=i\,\Omega_{o}\wedge\overline{\Omega}_{o}+O(r)\), from which we have \[f=\log\left(\frac{i\,\Omega_{o}\wedge\overline{\Omega}_{o}+O(r)}{\omega_{o}^{3}+O(r)}\right)=\log(1+O(r))=O(r);\]
* on \(\{\frac{1}{2}\varepsilon^{p}\leq r(z)\leq\varepsilon^{p}\}\) we have \(\omega=\omega_{o}\) and \(i\,\Omega\wedge\overline{\Omega}=(1+O(r))\,i\,\Omega_{o}\wedge\overline{\Omega}_{o}\), from which it follows \(f=O(r)\);
* on \(\{\frac{1}{4}\varepsilon^{p}\leq r(z)\leq\frac{1}{2}\varepsilon^{p}\}\) we have \(\omega=\omega_{o}+O(r^{2nq/p})\) and \(i\,\Omega\wedge\overline{\Omega}=i\,\Omega_{o}\wedge\overline{\Omega}_{o}+O(r)\), implying \(f=O(r^{m})\);
* on \(\{r(z)<\frac{1}{4}\varepsilon^{p}\}\) we have \(\omega^{3}=i\hat{\Omega}\wedge\overline{\hat{\Omega}}\) and \(i\,\Omega\wedge\overline{\Omega}=(1+O(r))\,i\,\hat{\Omega}\wedge\overline{\hat{\Omega}}\), giving once again \(f=O(r)\).
Thus we can write globally (on \(M\)) that
\[|f|\leq cr^{m},\]
ensuring that the metric \(\omega\) is an _approximately Chern-Ricci flat_ balanced metric (as wanted in Proposition 2.1), hence a suitable starting point for the deformation argument of the next section.
## 3. The deformation argument
In this section we will see that the objects built in the previous section are exactly the ingredients needed to set up a deformation argument in the same fashion as [BM], in order to obtain a balanced Chern-Ricci flat metric on our crepant resolution \(M\). We will also analyze the cohomology class of the metric obtained and see why said metric is of interest in the framework of the Hull-Strominger system.
### The strategy
We will now set up the problem for this section. First of all we recall the deformation of the metric that preserves the balanced condition introduced in [FWW] (here taken with a particular ansatz):
\[\omega_{\psi}^{n-1}:=\omega^{n-1}+i\partial\overline{\partial}(\psi\omega^{n- 2}),\quad\psi\in C^{\infty}(M,\mathbb{R})\;\;\text{such that}\;\;\omega_{ \psi}^{n-1}>0.\]
Thus the problem we are interested in solving, following what was done in [BM], is the _balanced Monge-Ampere type_ equation
\[\omega_{\psi}^{n}=e^{f}\omega^{n} \tag{1}\]
for \(\psi\in C^{\infty}(M,\mathbb{R})\) such that \(\omega_{\psi}^{n-1}>0\).
_Remark 3.1_.: The equation introduced above makes sense because, as we have seen, \(f=O(r^{m})\), thus \(e^{f}=1+O(r^{m})\); this means that \(e^{f}\omega^{n}\) is close to \(\omega^{n}\), hence it makes sense to try to obtain it as a small deformation of \(\omega\).
For practicality, it is useful to reformulate our equation as an operator on the space of smooth functions, thus we introduce \(F:C^{\infty}(M,\mathbb{R})\to C^{\infty}(M,\mathbb{R})\) as
\[F(\psi)=F_{\varepsilon}(\psi):=\frac{\omega_{\psi}^{n}}{\omega^{n}}-e^{f}.\]
Our aim is then to solve the equation \(F(\psi)=0\) - which is equivalent to (1) - through a fixed point argument, hence the first step to take towards this argument is to compute the linearization at \(0\) of the operator \(F\). To do this we shall introduce the notation \(\omega_{0}^{\prime}:=\frac{d}{dt}_{|_{t=0}}\omega_{tu}\), where \(\omega_{tu}\) is the curve corresponding to the tangent vector \(u\in C^{\infty}(M,\mathbb{R})\), and compute the derivative at zero of \(\omega_{tu}^{n}\) in two different ways:
\[\frac{d}{dt}_{|_{t=0}}\omega_{tu}^{n}=n\omega^{n-1}\wedge\omega_{0}^{\prime};\]
\[\frac{d}{dt}_{|_{t=0}}\omega_{tu}^{n}=i\partial\overline{\partial}(u\omega^{n-2}) \wedge\omega+\omega^{n-1}\wedge\omega_{0}^{\prime}.\]
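Explicitly, equating the two expressions and solving for the term containing \(\omega_{0}^{\prime}\) gives
\[(n-1)\,\omega^{n-1}\wedge\omega_{0}^{\prime}=i\partial\overline{\partial}(u\omega^{n-2})\wedge\omega,\qquad\text{i.e.}\qquad n\,\omega^{n-1}\wedge\omega_{0}^{\prime}=\frac{n}{n-1}\,i\partial\overline{\partial}(u\omega^{n-2})\wedge\omega.\]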
Hence, even though neither of the two expressions above is explicit by itself, together they yield an explicit formula for the linearization, namely
\[Lu:=L_{\varepsilon}u=d_{0}F(u)=\frac{n}{n-1}\frac{i\partial\overline{\partial} (u\omega^{n-2})\wedge\omega}{\omega^{n}}.\]
Here we can work through a few computations to get a clearer (and much more usable) expression for the operator.
**Lemma 3.2**.: _The linearized operator \(L\) can be written as_
\[Lu=\frac{1}{n-1}\left(\Delta_{\omega}u-\frac{1}{(n-1)!}|d\omega|_{\omega}^{2}u\right) \tag{2}\]
_for all \(u\in C^{\infty}(M)\)._
Proof.: For all \(n\geq 3\) it holds
\[i\partial\overline{\partial}(u\omega^{n-2})= i\partial(\overline{\partial}u\wedge\omega^{n-2}+(n-2)u\overline{ \partial}\omega\wedge\omega^{n-3})\] \[= i\partial\overline{\partial}u\wedge\omega^{n-2}-(n-2)i \overline{\partial}u\wedge\partial\omega\wedge\omega^{n-3}+(n-2)i\partial u \wedge\overline{\partial}\omega\wedge\omega^{n-3}\] \[+(n-2)ui\partial\overline{\partial}\omega\wedge\omega^{n-3}-(n-2 )(n-3)ui\overline{\partial}\omega\wedge\partial\omega\wedge\omega^{n-4},\]
and since the balanced condition \(d\omega^{n-1}=0\) implies \(\partial\omega\wedge\omega^{n-2}=0\), we get
\[i\partial\overline{\partial}(u\omega^{n-2})\wedge\omega\] \[=i\partial\overline{\partial}u\wedge\omega^{n-1}+(n-2)ui\partial \overline{\partial}\omega\wedge\omega^{n-2}-(n-2)(n-3)ui\overline{\partial} \omega\wedge\partial\omega\wedge\omega^{n-3}\] \[=\frac{1}{n}(\Delta_{\omega}u)\omega^{n}+(n-2)u(i\partial \overline{\partial}\omega\wedge\omega^{n-2}-(n-3)i\overline{\partial}\omega \wedge\partial\omega\wedge\omega^{n-3}).\]
Now, applying the operator \(\partial\) to the identity \(\overline{\partial}\omega\wedge\omega^{n-2}=0\), we get
\[0=i\partial\overline{\partial}\omega\wedge\omega^{n-2}-(n-2)i\overline{ \partial}\omega\wedge\partial\omega\wedge\omega^{n-3},\]
that is
\[i\partial\overline{\partial}\omega\wedge\omega^{n-2}=(n-2)i\overline{\partial }\omega\wedge\partial\omega\wedge\omega^{n-3},\]
giving us
\[i\partial\overline{\partial}(u\omega^{n-2})\wedge\omega=\frac{1}{n}(\Delta_{\omega}u)\omega^{n}+ui\partial\overline{\partial}\omega\wedge\omega^{n-2}.\]
On the other hand, by definition of \(d^{c}\) it holds \(dd^{c}\omega=2i\partial\overline{\partial}\omega\), and applying formula (2.13) from [AI] in the balanced case, together with the definition of the Hodge-\(*\) operator, we get
\[-2|d\omega|_{\omega}^{2}\omega^{n}=\langle dd^{c}\omega,\omega^{2}\rangle_{ \omega}\omega^{n}=dd^{c}\omega\wedge*_{\omega}\omega^{2}=n!2i\partial\overline {\partial}\omega\wedge\omega^{n-2},\]
from which we finally obtain the _linearized balanced Monge-Ampere type_ operator
\[Lu=\frac{1}{n-1}\left(\Delta_{\omega}u-\frac{1}{(n-1)!}|d\omega|_{\omega}^{2} u\right),\]
and we can clearly notice that it is bounded (using Remark 2.8) and \(L^{2}\)-self-adjoint.
**Proposition 3.3**.: _The linear operator \(L\) introduced above has vanishing kernel on any \(n\)-dimensional (\(n\geq 3\)) compact balanced manifold \((X,\eta)\), with \(\eta\) not Kahler._
Proof.: If \(u\in\text{Ker }L\), then also \(uLu=0\), which integrated over \(X\) gives us (using the balanced condition)
\[0=\int_{X}\left(-u\Delta_{\eta}u+\frac{1}{(n-1)!}|d\eta|_{\eta}^{2}u^{2}\right)\eta^{n}=\int_{X}\left(|\nabla_{\eta}u|^{2}+\frac{1}{(n-1)!}|d\eta|_{\eta}^{2}u^{2}\right)\eta^{n},\]
from which necessarily
\[\begin{cases}|\nabla_{\eta}u|\equiv 0\\ |d\eta|_{\eta}^{2}u^{2}\equiv 0\end{cases}\quad\Leftrightarrow\begin{cases}u \equiv c\in\mathbb{R}\\ c^{2}|d\eta|_{\eta}^{2}\equiv 0,\end{cases}\]
which implies, thanks to \(d\eta\neq 0\), that \(c=0\), and hence \(u\equiv 0\), i.e. \(L\) has vanishing kernel.
Notice that the fact that the metric is not Kahler is crucial for the proof, since the non-vanishing of \(d\eta\) ensures that the constants do not lie in the kernel of the operator.
### Weighted analysis
Our aim is now to study the invertibility of the linear operator \(L\), and we wish to do this in suitable weighted functional spaces. To define these spaces we start with a weight function adapted to our situation; for simplicity, we may assume that the neighborhood of \(x\) on which the \(z\) coordinates are defined contains the region \(\{r(z)\leq 1\}\) (this is true up to a rescaling). Define then
\[\rho=\rho_{\varepsilon}(z):=\begin{cases}\varepsilon^{p+q}&\text{on }r(z)\leq \varepsilon^{p+q},\\ \text{non decreasing}&\text{on }\varepsilon^{p+q}\leq r(z)\leq 2\varepsilon^{p+q}, \\ r(z)&\text{on }2\varepsilon^{p+q}\leq r(z)\leq 1/2,\\ \text{non decreasing}&\text{on }1/2\leq r(z)\leq 1,\\ 1&\text{on }r(z)\geq 1,\end{cases}\]
Using this weight function we can introduce the weighted Holder norm and its corresponding weighted Holder spaces \(C^{k,\alpha}_{\varepsilon,b}(M)\), where \(k\geq 0\), \(\alpha\in(0,1)\) is the Holder constant, \(b\in\mathbb{R}\) is the weight and \(\varepsilon\) indicates the dependence on the metric \(\omega\) obtained by the gluing construction done above. We define
\[||u||_{C^{k,\alpha}_{\varepsilon,b}(M)}:= \sum_{i=0}^{k}\sup_{M}|\rho^{b+i}\nabla_{\varepsilon}^{i}u|_{\omega}\] \[+\sup_{d_{\varepsilon}(x,y)<inj_{\varepsilon}}\left|\min\left( \rho^{b+k+\alpha}(x),\rho^{b+k+\alpha}(y)\right)\frac{\nabla_{\varepsilon}^ {k}u(x)-\nabla_{\varepsilon}^{k}u(y)}{d_{\varepsilon}(x,y)^{\alpha}}\right|_ {\omega},\]
where \(inj_{\varepsilon}\) is the injectivity radius of the metric \(\omega\), and thus interpret \(F\) (and \(L\)) as operators defined as \(F:C^{2,\alpha}_{\varepsilon,b}(M)\to C^{0,\alpha}_{\varepsilon,b+2}(M)\).
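As a quick illustration of the convention: a function behaving like \(\rho^{-b}\) has unit weighted \(C^{0}\)-norm, since \(\sup_{M}|\rho^{b}\cdot\rho^{-b}|=1\); in other words, the weight \(b\) allows a growth of order \(\rho^{-b}\) towards the exceptional region, uniformly in \(\varepsilon\), with each derivative losing one further power of \(\rho\).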
Following then the literature, we first wish to prove the following estimate.
**Lemma 3.4**.: _With the same notations as above, for every \(b\in(0,n-1)\) there exists \(c>0\) (independent of \(\varepsilon\)) such that for sufficiently small \(\varepsilon\) it holds_
\[||u||_{C^{2,\alpha}_{\varepsilon,b}}\leq c||Lu||_{C^{0,\alpha}_{\varepsilon,b+ 2}},\]
_for all \(u\in C^{2,\alpha}_{\varepsilon,b}\)._
Proof.: Suppose by contradiction that the above inequality does not hold. This means that for all \(k\in\mathbb{N}\) we can find \(\varepsilon_{k}>0\) and \(u_{k}\in C^{2,\alpha}_{\varepsilon_{k},b}\) such that \(\varepsilon_{k}\to 0\) as \(k\to\infty\), \(||u_{k}||_{C^{2,\alpha}_{\varepsilon_{k},b}}=1\) and
\[||Lu_{k}||_{C^{0,\alpha}_{\varepsilon_{k},b+2}}<\frac{1}{k}. \tag{3}\]
In the first place we analyze what happens on \(M_{reg}\), i.e. away from the exceptional part. The properties of the sequence \(\{u_{k}\}_{k\in\mathbb{N}}\) guarantee us that we can apply Arzela-Ascoli's Theorem, and hence up to subsequences we may assume \(u_{k}\to u_{\infty}\) uniformly on compact subsets of \(M_{reg}\) in the sense of \(C^{0,\alpha}_{b}\), with respect to \(\tilde{\omega}\). Moreover, since for any compact set \(K\subseteq M_{reg}\) there exists \(n_{K}\in\mathbb{N}\) such that for all \(k\geq n_{K}\) on \(K\) it holds \(\omega=\tilde{\omega}\), and hence \(\nabla_{\omega}=\nabla_{\tilde{\omega}}\), we actually have \(C^{2,\alpha}_{b}\)-convergence (again uniformly on compact subsets of \(M_{reg}\)). We shall then prove that \(u_{\infty}\) is necessarily identically zero on the whole \(M_{reg}\). Indeed, take \(\delta>0\) and \(B_{\delta}\) a ball of radius \(\delta\) around the singularity, and notice that, calling \(M_{\delta}:=M\setminus B_{\delta}\), we get
\[0=-\int_{M_{\delta}}u_{\infty}L_{\infty}u_{\infty}\tilde{\omega}^{n}=\int_{M_ {\delta}}\left(-u_{\infty}\Delta_{\tilde{\omega}}u_{\infty}+\frac{1}{n!}|d \tilde{\omega}|^{2}_{\tilde{\omega}}u_{\infty}^{2}\right)\tilde{\omega}^{n}, \tag{4}\]
and since \(\tilde{\omega}\) is balanced it holds
\[d(i\overline{\partial}u_{\infty}\wedge(u_{\infty}\tilde{\omega}^{n-1}))=u_{ \infty}i\partial\overline{\partial}u_{\infty}\wedge\tilde{\omega}^{n-1}+i \partial u_{\infty}\wedge\overline{\partial}u_{\infty}\wedge\tilde{\omega}^{n -1},\]
which combined with (4) gives
\[0=\int_{\partial B_{\delta}}u_{\infty}i\overline{\partial}u_{\infty}\wedge \tilde{\omega}^{n-1}+\int_{M_{\delta}}\left(|\nabla_{\tilde{\omega}}u_{\infty }|^{2}+\frac{1}{n!}|d\tilde{\omega}|^{2}_{\tilde{\omega}}u_{\infty}^{2}\right) \tilde{\omega}^{n}. \tag{5}\]
But if we call \(d\hat{V}\) the volume form induced by the flat metric, we get
\[\left|\int_{\partial B_{\delta}}u_{\infty}i\overline{\partial}u_{\infty}\wedge\tilde{\omega}^{n-1}\right|\leq c\int_{\partial B_{\delta}}|u_{\infty}|\,|\overline{\partial}u_{\infty}|_{\tilde{\omega}}\,d\hat{V}\leq c\delta^{2(n-1-b)},\]
thus choosing \(b<n-1\) and taking the limit for \(\delta\to 0\) in (5), we get \(u_{\infty}\equiv 0\) on \(M_{reg}\) by repeating the argument of Proposition 3.3.
Let now \(M_{c}:=\{r(z)\geq 1/2\}\subseteq M_{reg}\) be a compact set on which we know that \(u_{k}\to 0\) uniformly in \(C^{2,\alpha}_{b}\). To obtain a contradiction we want to prove that \(\{u_{k}\}_{k\in\mathbb{N}}\) admits a subsequence uniformly convergent to zero in \(C^{2,\alpha}_{b}\) also on \(A:=\{r(z)<1/2\}\).
In order to work in this region, it is simpler to shift to the "large" coordinates \(\zeta\), i.e. the coordinates on the crepant resolution \(\hat{X}\) away from the exceptional part. It is then useful to recall the relations
\[\zeta=\varepsilon^{-(p+q)}z\quad\text{and}\quad r(z)=\varepsilon^{p+q}r(\zeta),\]
from which we can write down the explicit identification
\[\left\{r(z)<\frac{1}{2}\right\}=A\simeq\tilde{A}=\tilde{A}_{\varepsilon}=\left\{ r(\zeta)<\frac{1}{2}\varepsilon^{-(p+q)}\right\}\subseteq\hat{X};\]
this last set \(\tilde{A}\) is the one we will be working on.
The first thing to do is to rewrite the weight function in terms of these coordinates on \(\tilde{A}\), resulting in
\[\rho=\begin{cases}\varepsilon^{p+q}&\text{on }r(\zeta)\leq 1,\\ \text{non decreasing}&\text{on }1\leq r(\zeta)\leq 2,\\ \varepsilon^{p+q}r(\zeta)&\text{on }2\leq r(\zeta)\leq 1/2\varepsilon^{-(p+q)}. \end{cases}\]
Notice that the gluing region of the metric (from the previous step) is entirely contained inside the third region, i.e. \(\{2\leq r(\zeta)\leq 1/2\varepsilon^{-(p+q)}\}\).
We now go back to our sequence \(\{u_{k}\}_{k\in\mathbb{N}}\). Since \(||u_{k}||_{C^{2,\alpha}_{\varepsilon_{k},b}}=1\) for all \(k\in\mathbb{N}\), we have in particular that on all \(\tilde{A}_{k}:=\tilde{A}_{\varepsilon_{k}}\) holds
\[|\rho^{b}u_{k}|\leq c.\]
Introducing then the new sequence
\[U_{k}:=\varepsilon_{k}^{b(p+q)}u_{k},\]
the above weighted estimates for \(u_{k}\) imply the following ones for this new sequence:
\[\begin{cases}|U_{k}|\leq c&\text{on }r(\zeta)\leq 1,\\ |U_{k}|\leq c&\text{on }1\leq r(\zeta)\leq 2,\\ |U_{k}|\leq cr^{-b}(\zeta)&\text{on }2\leq r(\zeta)\leq 1/2\varepsilon_{k}^{-(p+ q)}.\end{cases}\]
These estimates for \(U_{k}\) suggest introducing a new weight function \(\tilde{\rho}=\tilde{\rho}_{k}\) on \(\tilde{A}_{k}\) given by
\[\tilde{\rho}(\zeta)=\begin{cases}1&\text{on }r(\zeta)\leq 1,\\ \text{non decreasing}&\text{on }1\leq r(\zeta)\leq 2,\\ r(\zeta)&\text{on }2\leq r(\zeta)\leq 1/2\varepsilon_{k}^{-(p+q)},\end{cases}\]
with which we get that
\[|\tilde{\rho}^{b}U_{k}|\leq c, \tag{6}\]
and analogous weighted estimates also for \(\nabla U_{k}\) and \(\nabla^{2}U_{k}\), hence again by Ascoli-Arzela theorem we have that \(U_{k}\to U_{\infty}\) uniformly on compact sets of \(\hat{X}\) (since \(\tilde{A}_{k}\to\hat{X}\)) in the sense of \(\tilde{C}^{2,\alpha}_{b}=C^{2,\alpha}_{b}(\tilde{\rho})\), where this last space is the weighted Holder space on \(\hat{X}\) identified by the weight \(\tilde{\rho}\) and the metric \(\omega_{ALE}\).
On the other hand, on any compact subset of \(\hat{X}\), for sufficiently large \(k\) it holds
\[\rho^{b+2}Lu_{k}=\tilde{\rho}^{b+2}\Delta_{\omega_{ALE}}U_{k}, \tag{7}\]
and since \(\frac{1}{k}>||Lu_{k}||_{C^{0,\alpha}_{\varepsilon_{k},b+2}}\), taking the limit in (7) we obtain that \(U_{\infty}\) is harmonic with respect to the ALE metric \(\omega_{ALE}\). Moreover, taking the limit in (6) ensures that \(U_{\infty}\) decays at infinity, from which it follows that \(U_{\infty}\equiv 0\) on the whole \(\hat{X}\), and thus \(U_{k}\overset{\tilde{C}^{2,\alpha}_{b}}{\longrightarrow}0\) uniformly on compact sets of \(\hat{X}\).
If we are now able to prove that \(U_{k}\) admits a subsequence converging uniformly to zero on the whole \(\hat{X}\) in the sense \(\tilde{C}_{b}^{0}\) we get our contradiction, and we are done. Indeed, if \(U_{k}\overset{\tilde{C}_{b}^{0}}{\to}0\) uniformly (up to subsequences) on \(\hat{X}\), then scaled Schauder estimates imply that also \(U_{k}\overset{\tilde{C}_{b}^{2,\alpha}}{\to}0\) uniformly, which is the same as saying \(u_{k}\overset{C_{\varepsilon_{k},b}^{2,\alpha}}{\to}0\) uniformly on \(\{r(z)<1/2\}\). Thus \(\{u_{k}\}_{k\in\mathbb{N}}\) up to subsequences is uniformly convergent to zero on the whole manifold \(M\), which is a contradiction with the fact that \(||u_{k}||_{C_{\varepsilon_{k},b}^{2,\alpha}}=1\) for all \(k\in\mathbb{N}\).
Now we will prove that such a uniformly convergent subsequence exists. If by contradiction this were not the case, then, since we do have uniform convergence on compact sets, we could find \(\delta>0\) and \(\{x_{k}\}_{k\in\mathbb{N}}\subseteq\hat{X}\), \(x_{k}\in\tilde{A}_{k}\), such that \(R_{k}:=r(\zeta(x_{k}))\to+\infty\) and \(R_{k}^{b}U_{k}(x_{k})\geq\delta\) for all \(k\in\mathbb{N}\); since \(R_{k}\to+\infty\), we can moreover assume \(\tilde{\rho}\equiv r\) at the points of the sequence, so that for all \(k\in\mathbb{N}\) it holds
\[R_{k}^{b}|U_{k}(x_{k})|\geq\delta. \tag{8}\]
Naming then \(r_{k}:=r(z(x_{k}))\) and recalling the relation between the two coordinates, we have \(\frac{1}{2}\geq r_{k}=\varepsilon_{k}^{p+q}R_{k}\); thus, up to subsequences, we end up in one of two cases:
1. if \(r_{k}\to l>0\), then \(x_{k}\to x_{\infty}\), and since \(u_{k}\) converges uniformly to zero on compact subsets of \(M_{reg}\), we get that \(u_{k}(x_{k})\to 0\), giving \[0<\delta\leq R_{k}^{b}U_{k}(x_{k})=(R_{k}\varepsilon_{k}^{p+q})^{b}u_{k}(x_{k})=r_{k}^{b}u_{k}(x_{k})\underset{k\to\infty}{\longrightarrow}0,\] which is a contradiction;
2. if \(r_{k}\to 0\), let \(X^{*}:=\hat{X}\setminus E\) be the singularity model and \(X^{\prime}\) a copy of \(X^{*}\), and consider the biholomorphisms \(\sigma_{k}:B_{k}\to A\setminus\{0\}\), given by \[\sigma_{k}(z^{\prime}):=r_{k}z^{\prime},\] where \(B_{k}:=\{0<r(z^{\prime})<\frac{r_{k}^{-1}}{2}\}\subseteq X^{\prime}\). Then, if we endow \(B_{k}\) with the metric \[\theta_{k}:=r_{k}^{-2}\sigma_{k}^{*}\omega,\] it is easy to notice that the couple \((B_{k},\theta_{k})\) converges to \((X^{\prime},\omega_{flat})\), i.e. the standard singularity model. If we then introduce the functions \[w_{k}:=r_{k}^{b}\sigma_{k}^{*}u_{k}\] on \(B_{k}\), we notice that the pullback of the weight function \(\rho\) gives \[\rho^{\prime}(z^{\prime})=\sigma_{k}^{*}\rho(z^{\prime})=\begin{cases}\varepsilon_{k}^{p+q}&\text{on }r(z^{\prime})<R_{k}^{-1},\\ \text{non decreasing}&\text{on }R_{k}^{-1}\leq r(z^{\prime})\leq 2R_{k}^{-1},\\ r_{k}r(z^{\prime})&\text{on }2R_{k}^{-1}\leq r(z^{\prime})<\frac{r_{k}^{-1}}{2},\end{cases}\] from which we get (pulling back the inequality \(\rho^{b}|u_{k}|\leq 1\)) (9) \[r^{b}(z^{\prime})|w_{k}(z^{\prime})|\leq 1\]
on each \(z^{\prime}\in X^{\prime}\) (assuming \(k\) to be sufficiently large). Hence, this shows that for any compact \(K\subseteq X^{\prime}\), we can choose \(k\in\mathbb{N}\) sufficiently large in order to have \(K\subseteq B_{k}\) and \(\rho^{\prime}(z^{\prime})=r_{k}r(z^{\prime})\) on the whole \(K\), and get that \(w_{k}\) is uniformly bounded on \(K\); and since this works for any compact \(K\subseteq X^{\prime}\), we obtain that - up to subsequences - \(\{w_{k}\}_{k\in\mathbb{N}}\) converges uniformly on compact sets of \(X^{\prime}\) to a function \(w_{\infty}\), and from (9) we get that \(w_{\infty}\) decays at infinity. Moreover, recalling that \(R_{k}^{b}U_{k}(x_{k})\geq\delta\) for all \(k\in\mathbb{N}\), if we introduce the sequence \(y_{k}:=\sigma_{k}^{-1}(x_{k})\), it is straightforward to notice that from its definition it follows that \(w_{k}(y_{k})\geq\delta\) and \(r(y_{k})=1\) for all \(k\in\mathbb{N}\), thus implying that - up to subsequences - \(y_{k}\to y_{\infty}\in X^{\prime}\), and hence (10) \[w_{\infty}(y_{\infty})>0.\] Now, if we recall the definition of the operator \(L\) and take the pullback with respect to \(\sigma_{k}\) of \(\rho^{b+2}Lu_{k}\), it is immediate to see that on every compact \(K\subseteq X^{\prime}\) we get (11) \[\begin{split}\sigma_{k}^{*}\left(\rho^{b+2}Lu_{k}\right)&=\frac{n}{n-1}r^{b+2}(z^{\prime})\left(\frac{i\partial\overline{\partial}w_{k}\wedge\theta_{k}^{n-1}}{\theta_{k}^{n}}-|d\theta_{k}|_{\theta_{k}}^{2}w_{k}\right)\\ &=\frac{n}{n-1}r^{b+2}(z^{\prime})\left(\Delta_{\theta_{k}}w_{k}-|d\theta_{k}|_{\theta_{k}}^{2}w_{k}\right),\end{split}\] from which we have, taking the limit as \(k\to+\infty\), that \[\Delta_{\omega_{flat}}w_{\infty}\equiv 0\quad\text{on }X^{\prime},\] i.e., \(w_{\infty}\) is harmonic on \(X^{\prime}\) with respect to the flat metric. Thus, since it decays at infinity, we obtain \(w_{\infty}\equiv 0\), which is a contradiction as (10) holds.
Thus the proof is complete.
As a direct consequence we get
**Lemma 3.5**.: _The operator \(L:C_{\varepsilon,b}^{2,\alpha}(M)\to C_{\varepsilon,b+2}^{0,\alpha}(M)\) defined above is a linear isomorphism for every \(b\in(0,n-1)\)._
Proof.: Notice that \(L\) is elliptic and shares its index with the Laplacian, which is zero. Moreover, by Proposition 3.3 we have that \(L\) is injective, thus it is automatically also surjective and - by Lemma 3.4 - has bounded inverse; hence \(L\) is an isomorphism.
With this result we can now show how to reformulate the original equation as a fixed point problem.
In order to do this we shall consider the expansion
\[F(\psi)=F(0)+L(\psi)+Q(\psi),\]
and thus rewrite the balanced Monge-Ampere type equation as
\[F(0)+L(\psi)+Q(\psi)=0,\]
and using now Lemma 3.5, we get that our equation is therefore equivalent to
\[\psi=L^{-1}(-F(0)-Q(\psi))=:N(\psi), \tag{12}\]
i.e. the search for a fixed point for the operator \(N:C^{2,\alpha}_{\varepsilon,b}(M)\to C^{2,\alpha}_{\varepsilon,b}(M)\). To do this, we will have to identify the open set on which we wish to apply Banach's Lemma, and show that on said open set, the operator \(N\) can be restricted and gives rise to a contraction.
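Schematically, once we know that \(N\) maps a suitable closed ball of \(C^{2,\alpha}_{\varepsilon,b}(M)\) into itself and is a contraction there, the solution will be obtained as the limit of the Picard iteration
\[\psi_{0}:=0,\qquad\psi_{j+1}:=N(\psi_{j})=L^{-1}\left(-F(0)-Q(\psi_{j})\right),\qquad j\geq 0;\]
these two properties are exactly what we verify in what follows.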
We begin with the following remark.
_Remark 3.6_.: If \(C,\tau>0\), and \(\varphi\) is a function on \(M\) such that \(||\varphi||_{C^{2,\alpha}_{\varepsilon,-2}}\leq C\varepsilon^{\tau}\), thanks to Remark 3.2 it is straightforward to see that
\[||i\partial\overline{\partial}(\varphi\omega)||_{C^{0,\alpha}_{\varepsilon,0 }}\leq||\varphi||_{C^{2,\alpha}_{\varepsilon,-2}}\leq C\varepsilon^{\tau},\]
thus we are guaranteed that, choosing \(\varepsilon\) to be sufficiently small, \(\omega_{\varphi}^{n-1}>0\), and thus its \((n-1)\) root \(\omega_{\varphi}\) exists and is a balanced metric. Moreover, we can apply again the argument used in Remark 2.8, and obtain that if \(||\varphi||_{C^{2,\alpha}_{\varepsilon,-2}}\leq C\varepsilon^{\tau}\), then
\[|\omega_{\varphi}-\omega|_{\omega}\leq c||\varphi||_{C^{2,\alpha}_{\varepsilon,-2}}\leq c\varepsilon^{\tau},\]
which also implies that \(\omega_{\varphi}\to\omega\), as \(\varepsilon\to 0\).
Thanks to this remark, we have a suggestion on how to choose the open set on which to apply Banach's Lemma; hence we introduce
\[U_{\tau}:=\{\varphi\in C^{2,\alpha}_{\varepsilon,b}\,|\,||\varphi||_{C^{2, \alpha}_{\varepsilon,b}}<\tilde{c}\varepsilon^{(p+q)(b+2)+\tau}\}\subseteq C ^{2,\alpha}_{\varepsilon,b},\]
and we notice that for every \(\varphi\in U_{\tau}\) it holds \(||\varphi||_{C^{2,\alpha}_{\varepsilon,-2}}\leq C\varepsilon^{\tau}\), with \(C\) independent of \(\varphi\) and \(\varepsilon\).
We will now prove that on \(U_{\tau}\), the operator \(N\) is a contraction. In particular, given \(\varphi_{1},\varphi_{2}\in U_{\tau}\), we want to estimate
\[N(\varphi_{1})-N(\varphi_{2})=L^{-1}\left(Q(\varphi_{2})-Q(\varphi_{1})\right).\]
To do so, we notice that by the Mean Value Theorem we can find \(t\in[0,1]\) such that
\[Q(\varphi_{1})-Q(\varphi_{2})=dQ_{\nu}(\varphi_{1}-\varphi_{2})=(L_{\nu}-L)( \varphi_{1}-\varphi_{2}),\]
where \(\nu=t\varphi_{1}+(1-t)\varphi_{2}\in U_{\tau}\), and \(L_{\nu}\) is the linearization of \(F\) at \(\nu\). With the same strategy used to compute \(L\) we can easily obtain an expression for \(L_{\nu}\), and thus get
\[(L_{\nu}-L)(\varphi_{1}-\varphi_{2})=\frac{n}{n-1}\frac{(\omega_{\nu}-\omega )\wedge i\partial\overline{\partial}((\varphi_{1}-\varphi_{2})\omega^{n-2})}{ \omega^{n}}.\]
From here, taking the norms with respect to \(\omega\), we can use the fact that \(\nu\in U_{\tau}\) together with Remark 3.6, to obtain
\[|(L_{\nu}-L)(\varphi_{1}-\varphi_{2})|\leq c|\omega_{\nu}-\omega|_{\omega}|i \partial\overline{\partial}((\varphi_{1}-\varphi_{2})\omega^{n-2})|_{\omega} \leq c\varepsilon^{\tau}|i\partial\overline{\partial}((\varphi_{1}-\varphi_{2 })\omega^{n-2})|_{\omega},\]
and thus, by multiplying the inequality with \(\rho^{b+2}\), get
\[||Q(\varphi_{1})-Q(\varphi_{2})||_{C^{0,\alpha}_{b+2,\varepsilon}}\leq c \varepsilon^{\tau}||\varphi_{1}-\varphi_{2}||_{C^{2,\alpha}_{\varepsilon,b}}, \tag{13}\]
hence, choosing \(\varepsilon\) sufficiently small ensures us that \(N\) is a contraction on \(U_{\tau}\).
We are left with proving that \(N(U_{\tau})\subseteq U_{\tau}\). To do this we shall assume that \(pm-q(b+2)>\tau>0\)
(which can easily be done), and see that for every \(\varphi\in U_{\tau}\), thanks to estimate (13) and Lemma 3.4, we have
\[||N(\varphi)||_{C^{2,\alpha}_{\varepsilon,b}}\leq ||N(\varphi)-N(0)||_{C^{2,\alpha}_{\varepsilon,b}}+||N(0)||_{C^{2, \alpha}_{\varepsilon,b}}\] \[\leq c\varepsilon^{\tau}||\varphi||_{C^{2,\alpha}_{\varepsilon,b}}+||L^ {-1}(1-e^{f})||_{C^{2,\alpha}_{\varepsilon,b}}\] \[\leq c\varepsilon^{\tau}||\varphi||_{C^{2,\alpha}_{\varepsilon,b}}+||f ||_{C^{0,\alpha}_{\varepsilon,b+2}}\] \[\leq c(\varepsilon^{(p+q)(b+2)+2\tau}+\varepsilon^{p(b+2)+pm})\] \[\leq c\varepsilon^{\min\{\tau,pm-q(b+2)-\tau\}}\varepsilon^{(p+q)(b+2) +\tau}\] \[\leq \tilde{c}\varepsilon^{(p+q)(b+2)+\tau},\]
implying that \(N(U_{\tau})\subseteq U_{\tau}\).
This shows that everything is in place to apply Banach's Lemma on the open set \(U_{\tau}\) and obtain a Chern-Ricci flat balanced metric \(\hat{\omega}\) on \(M\), thus proving Theorem 1.1.
Remark 3.6 also implies:
**Corollary 3.7**.: _As \(\varepsilon\to 0\), the couple \((M,\hat{\omega})\) Gromov-Hausdorff converges to the singular Calabi-Yau metric on \(M_{reg}\) and, up to rescaling, to Joyce's ALE metrics nearby the exceptional set._
We conclude this part with a few remarks.
_Remark 3.8_.: In light of Remark 2.3, Stokes' Theorem shows that - with the deformation given by the balanced Monge-Ampere type equation - the volume of the exceptional divisors remains the same as the one of the pre-gluing metric, i.e. the (scaled) volume of the ALE metric.
_Remark 3.9_.: Thanks to what is known about Joyce's ALE metrics, if we have \(k\in\mathbb{N}\) orbifold singularities and we call \(E^{i}_{j}\), \(i=1,...,k_{j}\) the exceptional divisors corresponding to the resolution of the \(j\)-th singularity, for \(j=1,...,k\), from our construction we can conclude (in the same way as in [BM]) that
\[[\omega^{n-1}]=[\hat{\omega}^{n-1}]=[\tilde{\omega}^{n-1}]+(-1)^{n-1}\varepsilon^{2n-2}\Big(\sum_{j=1}^{k}\sum_{i=1}^{k_{j}}a_{j}^{i}PD[E^{i}_{j}]\Big)^{n-1},\]
where \(PD[E^{i}_{j}]\) denotes the Poincare dual of the class \([E^{i}_{j}]\).
This completes the proof of Theorem 1.1.
_Remark 3.10_.: It is known that for a manifold which is Calabi-Yau with holomorphic volume \(\Omega\), the existence of a Chern-Ricci flat balanced metric implies that \(\Omega\) is parallel with respect to the Bismut connection associated to said metric. Among other things, this implies that the restricted holonomy of the Bismut connection of Chern-Ricci flat balanced metrics is contained in \(SU(n)\).
_Remark 3.11_.: Even though this construction is designed to address a non-Kahler situation, it can also be applied when \(\tilde{M}\) is instead Kahler (Ricci flat). In this case we know from Joyce's theorem that \(M\) admits a Kahler Calabi-Yau metric \(\omega_{1}\), hence together with the balanced class induced by our Chern-Ricci flat balanced metric \(\hat{\omega}\) we also have the one induced by \(\omega_{1}\). These two balanced classes need not be the same; moreover, even if they were to coincide, there is no uniqueness result guaranteeing that the two metrics have to be the same. In addition, the deformation we used in our construction does not cover the whole balanced class, hence in this case we are not even guaranteed that the two metrics are linked by our chosen deformation.
### Relation to the Hull-Strominger system
To conclude this part, we would like to briefly relate our construction to the Hull-Strominger system and explain how we intend to develop our research in this direction; hence we shall first quickly recall the definition of said system (for more details we refer to the notes [GF]).
The framework is given by \((X,\Omega)\) a (not necessarily Kahler) Calabi-Yau manifold, and the first equation of the system, for \(\omega\) a hermitian metric, is known as the _dilatino equation_ and is given by
\[d^{*}\omega=d^{c}\log||\Omega||_{\omega},\]
which is easily seen to be equivalent to the _conformally balanced equation_
\[d\left(||\Omega||_{\omega}\omega^{n-1}\right)=0.\]
Hence, this last equation tells us that we need to work with balanced manifolds, and thus we see a first relation to our scenario.
To complete the system we need to pair the dilatino equation with two Hermite-Einstein equations for holomorphic vector bundles; thus, in the same fashion as with the dilatino equation, the presence of the Hermite-Einstein equations in the Hull-Strominger system limits us to considering only polystable bundles. Finally, adding one last equation, known as the _Bianchi identity_, we can introduce the system.
**Definition 3.12**.: Given a Calabi-Yau manifold \((Y,\Omega)\) and a holomorphic vector bundle \(E\) on \(Y\), we say that the triple \((\omega,\;h,\;\overline{\partial}_{T})\) is a solution of the _Hull-Strominger system_ if it satisfies
\[\Lambda_{\omega}F_{h} =0,\] \[\Lambda_{\omega}R =0,\] \[d^{*}\omega-d^{c}\log||\Omega||_{\omega} =0,\] \[dd^{c}\omega-\alpha(\text{tr}R\wedge R-\text{tr}F_{h}\wedge F_{h}) =0;\]
where \(\alpha\) is a non-zero constant, \(\omega\) is a hermitian metric on \(Y\), \(h\) is a hermitian metric along the fibers of \(E\), \(\overline{\partial}_{T}\) is a holomorphic structure on the tangent bundle of \(Y\), and \(R\) is the Chern curvature tensor of \(\omega\), read as a hermitian metric on the holomorphic vector bundle \((TY,J,\overline{\partial}_{T})\).
The _Bianchi identity_ (also known as _anomaly cancellation equation_), is the hardest and least understood equation of the system and also the one we wish to address in the development of our research.
It is significant to notice that if we choose \((E,h)\) to be the holomorphic tangent bundle with the metric \(\omega\), and take \(\omega\) to be a Kahler Ricci-flat metric, then this choice satisfies the system; thus being a solution of the Hull-Strominger system is a condition that generalizes being Kahler Calabi-Yau, hence a very promising candidate class of _special metrics_.
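Indeed, in this case \(F_{h}=R\), so the anomaly term \(\text{tr}R\wedge R-\text{tr}F_{h}\wedge F_{h}\) vanishes identically, \(dd^{c}\omega=0\) and \(d^{*}\omega=0\) by Kahlerness, Ricci-flatness (on a compact \(Y\)) forces \(||\Omega||_{\omega}\) to be constant so that the dilatino equation holds, and \(\Lambda_{\omega}F_{h}=\Lambda_{\omega}R=0\) is precisely the Kahler Ricci-flat condition.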
Also, thanks to the equivalence between Hermite-Einstein metrics and Hermite-Yang-Mills connections it is possible to rewrite the system from a gauge-theoretical point of view.
**Definition 3.13**.: Given a Calabi-Yau manifold \((Y,\Omega)\) and a hermitian vector bundle \((E,h)\) (with a fixed holomorphic structure) on \(Y\), the triple \((\omega,\,A,\,\nabla)\) is a solution of the _Hull-Strominger system_ if it satisfies
\[\Lambda_{\omega}F_{A}=0,\quad F_{A}^{0,2}=0,\]
\[\Lambda_{\omega}R_{\nabla}=0,\quad R_{\nabla}^{0,2}=0,\]
\[d\left(||\Omega||_{\omega}\omega^{n-1}\right)=0,\]
\[dd^{c}\omega-\alpha(\text{tr}R_{\nabla}\wedge R_{\nabla}-\text{tr}F_{A}\wedge F_{A})=0;\]
where \(\alpha\) is a non-vanishing constant, \(\omega\) is a hermitian metric on \(Y\), \(A\) is a unitary connection on \((E,h)\) and \(\nabla\) is a unitary connection on \((TY,J,g)\).
This second description is useful to notice a series of necessary conditions when \(Y\) is compact. Indeed, as already observed, \(Y\) has to be balanced, and given the natural balanced class \(\tau\) provided by the dilatino equation, both \(E\) and \(TY\) have to be \(\tau\)-polystable. Moreover, we have that \(c_{1}(Y)=0\), and also
\[\begin{split} c_{1}(E)\cdot\tau&=0\\ ch_{2}(E)&=ch_{2}(Y)\in H_{BC}^{2,2}(Y,\mathbb{R}), \end{split} \tag{14}\]
where \(ch_{2}\) denotes the second Chern character.
Although several examples of solutions have been studied over time (several can be found, for example, in [GF]), the existence of solutions is still very poorly understood, even on threefolds, where however Yau conjectured in [Y] that conditions (14) are not only necessary but also sufficient for the existence of solutions.
If we then view our construction in the scenario of the system, we can make the following final remark, in which we explain our ideas on how to expand our construction in this direction.
_Remark 3.14_.: Given \(\hat{\beta}\) a Chern-Ricci flat balanced metric on a Calabi-Yau threefold \((Y,\Psi)\), it holds
\[||\Psi||_{\hat{\beta}}\equiv const.,\]
showing that our metric \(\hat{\omega}\) gives a solution of the conformally balanced/dilatino equation on our crepant resolutions \((M,\Omega)\). Thus our construction gives us two solutions of the dilatino equation on \((M,\Omega)\), namely \(\hat{\omega}\) and \(\omega^{\prime}:=||\Omega||_{\omega}^{-2}\omega\), where this last one is the dilatino equation solution associated to the balanced metric \(\omega\) obtained in the first part of the gluing construction. From here, thanks to the fact that these metrics are nearby a Kahler Ricci-flat metric, an idea could be to try and adapt strategies as in [CPY1] or [DS] to construct a Hermite-Einstein metric on the tangent bundle with respect to the above metrics, and eventually from there try and extend it to a whole solution of the Hull-Strominger system, using - for example - some version of the approach of [AGF].
Other possible paths could instead be related to the orbifold examples from [BTY] and [ST] recalled in Section 2, for which it could be interesting to see whether, again through a gluing process, it is possible to construct new non-Kahler solutions to the Hull-Strominger system.
## 4. The conifold singularity case
As anticipated in the introduction, it is natural to ask whether or not the construction can be adapted to the case of ordinary double points on threefolds, in order to fit our result in the conifold transition framework. Unfortunately issues show up; hence in the following we shall - after recalling the necessary background on Ordinary Double Points on threefolds - walk through our construction, see what continues to hold and what fails, and discuss ideas on how to eventually solve the issues.
### Ordinary Double Points and their small resolutions
The type of singularity addressed in this case is that of Ordinary Double Points on threefolds (which are the most common kind of singularities); they are described by the model
\[X:=\{z_{1}^{2}+z_{2}^{2}+z_{3}^{2}+z_{4}^{2}=0\}\subseteq\mathbb{C}^{4},\]
which is known as the \(3\)-dimensional _standard conifold_, whose only singular point is the origin. Then we have:
**Definition 4.1**.: A singular point \(p\) in a singular threefold \(Y\) is called _ordinary double point_ (ODP) if we can find a neighborhood \(p\in U\subseteq Y\) and a neighborhood \(0\in V\subseteq X\) such that \(U\) and \(V\) are biholomorphic through a map that sends \(p\) to \(0\).
These singularities arise naturally on threefolds when collapsing \((-1,-1)\)-curves, i.e. rational curves biholomorphic to \(\mathbb{P}^{1}\) whose normal bundle is isomorphic to \(\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\), and actually this procedure to obtain ODPs covers all the possibilities on threefolds. Indeed, the standard conifold can be constructed in several ways, one of which is the following: consider the rank \(2\) bundle \(\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\) on \(\mathbb{P}^{1}\) and notice that the map
\[([X_{1}:X_{2}],(w_{1},w_{2}))\mapsto(w_{1}X_{1},w_{1}X_{2},w_{2}X_{1},w_{2}X_{2})\]
maps \(\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\) onto \(X\) - since \(X\), through a change of coordinates, is biholomorphic to the set \(\{W_{1}W_{4}-W_{2}W_{3}=0\}\), and indeed \((w_{1}X_{1})(w_{2}X_{2})-(w_{1}X_{2})(w_{2}X_{1})=0\) - sending the zero section onto the origin. Moreover this map restricted to \(\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\setminus\mathbb{P}^{1}\) (where \(\mathbb{P}^{1}\) is meant as the zero section) gives a biholomorphism with \(X\setminus\{0\}\), proving our previous statement. This shows us that these singularities always admit small resolutions (with \(\mathbb{P}^{1}\) as the exceptional curve) biholomorphic to \(\hat{X}:=\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\), and it can be shown that a singular threefold with \(n\) ordinary double points admits exactly \(2^{n}\) small resolutions of this type (every singularity can be resolved with a curve in two distinct bimeromorphic ways).
Regarding instead the metric aspect of these singularities, the standard conifold \(X\) is naturally endowed with a conical structure. Indeed, we can introduce the function on \(\mathbb{C}^{4}\)
\[r(z):=||z||^{\frac{2}{3}},\]
which restricted to \(X\) yields the conical distance to the singularity, and can be used to define the metric
\[\omega_{co,0}:=\frac{3}{2}i\partial\overline{\partial}r^{2},\]
on the smooth part of \(X\), which is clearly Kahler. Moreover, it can be seen that \(\omega_{co,0}\) is actually also Ricci flat, as well as a cone metric over the link \(L:=\{r=1\}\subseteq X\) which can be written as
\[g_{co,0}=\frac{3}{2}(dr^{2}+r^{2}g_{L}),\]
with \(g_{L}\) a Sasaki-Einstein metric on the link \(L\).
This metric structure of the standard conifold, with some further work, yields also a Kahler Calabi-Yau structure on the small resolution. In fact, Candelas and de la Ossa (see [CO]) constructed a family of metrics, depending on the parameter \(a>0\), of the form
\[\omega_{co,a}:=i\partial\overline{\partial}f_{a}(r^{3})+4a^{2}\pi_{\mathbb{P} ^{1}}^{*}\omega_{FS},\]
where \(\omega_{FS}\) is the Fubini-Study metric on \(\mathbb{P}^{1}\), and \(f_{a}\) is a smooth function satisfying the ODE
\[(xf_{a}^{\prime}(x))^{3}+6a^{2}(xf_{a}^{\prime}(x))^{2}=x^{2},\qquad f_{a}(x) \geq 0,\]
on \([0,+\infty)\), which immediately gives \(f_{a}(x)=a^{2}f_{1}(x/a^{3})\). Here the function \(r\) is simply the conical distance from the singularity re-read on the resolution, hence portraying the conical distance from the exceptional curve. Moreover, this family of metrics is such that, as \(a\to 0\), the metrics \(\omega_{co,a}\) converge, away from the exceptional curve, to the standard cone metric \(\omega_{co,0}\); each \(\omega_{co,a}\) is also asymptotic (at infinity) to the cone metric \(\omega_{co,0}\), and these facts can be seen explicitly with the following expansion from [CPY1].
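For completeness, the scaling relation can be checked directly: setting \(u:=x/a^{3}\) one has \(xf_{a}^{\prime}(x)=a^{2}\,uf_{1}^{\prime}(u)\), so
\[(xf_{a}^{\prime}(x))^{3}+6a^{2}(xf_{a}^{\prime}(x))^{2}=a^{6}\left((uf_{1}^{\prime}(u))^{3}+6(uf_{1}^{\prime}(u))^{2}\right)=a^{6}u^{2}=x^{2},\]
so that \(f_{a}(x)=a^{2}f_{1}(x/a^{3})\) solves the ODE for the parameter \(a\) whenever \(f_{1}\) solves it for \(a=1\).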
**Lemma 4.2**.: _For \(x\gg 1\), the function \(f_{1}(x)\) has a convergent expansion_
\[f_{1}(x)=\frac{3}{2}x^{\frac{2}{3}}-2\log(x)+\sum_{n=0}^{+\infty}c_{n}x^{- \frac{2n}{3}}.\]
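As a consistency check, the first two terms of the expansion follow directly from the ODE: writing \(xf_{1}^{\prime}(x)=x^{\frac{2}{3}}(1+\delta(x))\) with \(\delta\to 0\), the equation \((xf_{1}^{\prime})^{3}+6(xf_{1}^{\prime})^{2}=x^{2}\) gives \((1+\delta)^{3}=1-6x^{-\frac{2}{3}}(1+\delta)^{2}\), hence \(\delta=-2x^{-\frac{2}{3}}+O(x^{-\frac{4}{3}})\); integrating \(f_{1}^{\prime}(x)=x^{-\frac{1}{3}}-2x^{-1}+O(x^{-\frac{5}{3}})\) recovers
\[f_{1}(x)=\frac{3}{2}x^{\frac{2}{3}}-2\log(x)+O(x^{-\frac{2}{3}}),\]
in agreement with the first terms of the expansion above (up to the integration constant \(c_{0}\)).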
We shall now move on towards the gluing attempt.
### Gluing attempt and possible solutions
First of all, we lay out the details of the setting and take \(\tilde{M}\) a _smoothable_ Kahler Calabi-Yau singular threefold obtained from the contraction of a finite family of disjoint \((-1,-1)\)-curves in a compact complex threefold (thus the singular set of \(\tilde{M}\) is made of a finite number of Ordinary Double Points) - hence with the regular part \(M_{reg}\) of \(\tilde{M}\) equipped with \(\tilde{\omega}\) a Kahler Calabi-Yau metric - and \(M\) a compact small resolution of \(\tilde{M}\).
_Remark 4.3_.: The reason why we have much stronger assumptions with respect to the orbifold case is that for this type of singularities we are not aware of a version of Lemma 2.7; thus we need a condition ensuring that we can smoothly cut off the singular metric at the standard model around the singularity (given in this case by the standard cone metric \(\omega_{co,0}\)). Such a condition is given exactly by the smoothability assumption, which allows us to apply the following result of Hein and Sun (Theorem 1.4 and Lemma A.1 from [HS]), here simplified for our purpose and stated only for threefolds:
**Theorem 4.4** (Hein-Sun).: _Let \(\tilde{M}\) be a smoothable singular threefold whose singular set is a finite family of ODPs endowed with a Kahler Calabi-Yau metric \(\tilde{\omega}\) on its smooth part \(M_{reg}\). Then for
every singular point \(p\in\tilde{M}\setminus M_{reg}\) there exist a constant \(\lambda_{0}>0\), neighborhoods \(p\in U_{p}\subseteq\tilde{M}\) and \(0\in V_{p}\subseteq X\), and a biholomorphism \(P:V_{p}\setminus\{0\}\to U_{p}\setminus\{p\}\) such that_
\[P^{*}\tilde{\omega}-\omega_{co,0}=i\partial\overline{\partial}\varphi,\qquad \text{for some }\varphi\in C^{\infty}_{2+\lambda_{0}},\]
_where \(r\) is the conical distance from the singularities and \(C^{\infty}_{2+\lambda_{0}}\) is the space of smooth functions with decay rate \(2+\lambda_{0}\) at zero (i.e. an \(f\in C^{\infty}_{2+\lambda_{0}}\) is a smooth function such that near zero it holds \(|\nabla^{k}f|\leq cr^{2+\lambda_{0}-k}\) for all \(k\geq 0\))._
In any case, what follows actually works if we replace the assumption above with the following: \(\tilde{\omega}\) is a singular Chern-Ricci flat balanced metric which, in a neighborhood of each singularity, is asymptotic to the standard cone metric \(\omega_{co,0}\).
Now, since our work aims to address the case of compact non-Kahler small resolutions of said \(\tilde{M}\), before describing the gluing attempt it is significant to show that resolutions of this kind actually form a very common class of examples.
_Remark 4.5_.: Thanks to a result from Cheltsov (see [Ch]) we know that a hypersurface \(\tilde{M}\) in \(\mathbb{P}^{4}\) of degree \(d\) with only isolated ODPs is factorial when \(\tilde{M}\) has at most \((d-1)^{2}-1\) singularities, thus is in particular \(\mathbb{Q}\)-factorial. We can then apply the work from Namikawa and Steenbrink (see [NS]) to obtain that \(\tilde{M}\) is smoothable, and hence, thanks to the results from Friedman (see [F]) we have that any small resolution \(M\) of \(\tilde{M}\) with exceptional curves \(C_{1},...,C_{k}\), \(C_{i}\simeq\mathbb{P}^{1}\), satisfies necessarily a condition
\[\sum_{i=1}^{k}\lambda_{i}[C_{i}]=0\quad\text{in }\;H_{2}(M,\mathbb{R}),\quad \text{where each }\;\lambda_{i}\neq 0,\]
which immediately implies that if \(\tilde{M}\) has only one ODP, then \(M\) cannot be Kahler, because it contains a homologically trivial curve (note that a generic nodal quintic threefold has exactly one node, and is smoothable since it is a hypersurface in projective space, hence it falls within this situation).
Moreover, Werner proved in [W] that \(M\) is projective if and only if all the \(C_{i}\)'s are homologically non-trivial, and since \(M\) is Moishezon, projectivity is equivalent to Kahlerness. Thus the class of examples above lies in a larger one, since every small resolution with at least one homologically trivial exceptional curve is non-Kahler.
Before discussing the construction, if we momentarily drop the curvature condition, it is straightforward to deduce from the literature the existence of balanced metrics on the small resolution. Indeed:
_Remark 4.6_.: Thanks to the results of Hironaka and Alessandrini-Bassanelli ([Hi] and [AB2]), we already know that such small resolutions admit balanced metrics, since blowing up the singularities produces a smooth Kahler threefold which is birational to the small resolution. This fact also shows that for the non-Kahler small resolutions we are considering, the Fino-Vezzoni conjecture (see [FV], Problem 3) holds true: since \(M\) is Moishezon, we can apply Theorems B and C from [CRS] to obtain that \(M\) does not admit SKT metrics.
We will now present the gluing attempt. Since the proofs are essentially the same as the ones performed in Sections 2 and 3, we will omit them and only state the results. Again, for simplicity, we will work with just one singularity.
The first thing to do is to produce a pre-gluing metric and, in the same fashion as in Section 2, we do this in three steps.
1. First, we glue the background singular metric \(\tilde{\omega}\) to the standard cone metric around the singularity. To do so, we take a cut-off function \(\chi_{\varepsilon}\) as in Step 1 above and use Theorem 4.4. Indeed, taking \(p>0\) and \(\varepsilon>0\) sufficiently small, on the region \(\{0<r\leq 2\varepsilon^{p}\}\subseteq X\) there exist a constant \(\lambda_{0}>0\) and a function \(\varphi\in C^{\infty}_{2+\lambda_{0}}\) such that \[\tilde{\omega}=\omega_{co,0}+i\partial\overline{\partial}\varphi,\] so we can define the smooth real \((1,1)\)-form \[\tilde{\omega}_{\varepsilon}:=\omega_{co,0}+i\partial\overline{\partial}(\chi_{\varepsilon}(r)\varphi),\] which for \(\varepsilon\) sufficiently small defines a Kahler metric on \(M_{reg}\), exactly conical around the singularity.
2. Now, we work on the small resolution \(\hat{X}\) of the conifold and glue the Candelas-de la Ossa metric \(\omega_{co,a}\) to the standard cone metric, away from the exceptional curve; since it is not possible to do this preserving the Kahler condition, we will do it maintaining the balanced one. This can be done thanks to the fact that the Candelas-de la Ossa metric is not exact at infinity, but its square is, as it holds \[\omega_{co,a}^{2}=\left(i\partial\overline{\partial}\left(\frac{3}{2}r^{2}+a^{2}\psi_{a}(r)\right)\right)^{2}+2a^{2}i\partial\overline{\partial}\left(f_{a}(r^{3})\right)\wedge\pi^{*}\omega_{FS}.\] Thus if we introduce a cut-off function \(\chi_{R}\) as in Step 2 above, we can define the family of closed \((2,2)\)-forms \[\omega_{a,R}^{2}=\left(i\partial\overline{\partial}\left(\frac{3}{2}r^{2}+a^{2}\chi_{R}(r)\psi_{a}(r)\right)\right)^{2}+2a^{2}i\partial\overline{\partial}\left(\chi_{R}(r)f_{a}(r^{3})\right)\wedge\pi^{*}\omega_{FS},\] which correspond to balanced metrics for sufficiently large \(R>0\).
3. As in Step 3 above, we suitably rescale the metrics \(\omega_{a,R}\) on the bubble with a geometric parameter \(\lambda\) and match the two pieces on their exactly conical regions, hence defining \[\omega=\omega_{\varepsilon,R}:=\begin{cases}\lambda\omega_{a,R}&\quad\text{ on }r(\zeta)\leq R,\\ \omega_{co,0}&\quad\text{ on }\varepsilon^{p}\leq r(z)\leq 2\varepsilon^{p},\\ \tilde{\omega}_{\varepsilon}&\quad\text{ on }r(z)\geq 2\varepsilon^{p}.\end{cases}\] At this stage, as done above, we can unify the parameters \(\varepsilon\) and \(R\) by choosing \(R:=\varepsilon^{-q}\), with \(q>0\), and using Remark 2.8 we can see that on the gluing region \(\{\frac{1}{2}\varepsilon^{p}<r\leq 2\varepsilon^{p}\}\) it holds \[\omega=\omega_{co,0}+O(r^{\lambda_{0}})+O(r^{2q/p}\log r).\] Moreover, we can also here match the holomorphic volumes of the singular threefold and of the small resolution to obtain an (almost) explicit holomorphic volume \(\Omega\) for \(M\), which can be used again to define the global Chern-Ricci potential \[f=f_{p,q,\varepsilon}:=\log\left(\frac{i\Omega\wedge\overline{\Omega}}{\omega^{3}}\right),\]
and obtain that, globally on \(M\),
\[|f|=O(r^{\lambda_{0}})+O(r^{2q/p}\log r),\]
i.e. the Chern-Ricci potential is small.
_Remark 4.7_.: As in Remark 3.14, the existence of this metric immediately gives us a solution to the dilatino equation, namely the metric \(\omega^{\prime}:=||\hat{\Omega}||_{\omega}^{-2}\omega\), which is still quite explicit, and thus again a potentially interesting starting point for the construction of a solution to the Hull-Strominger system.
Let us now analyze the cohomology class naturally associated to the metric \(\omega\) just obtained, i.e. the \((2,2)\)-class
\[[\omega^{2}]\in H^{2,2}_{dR}(M).\]
We introduce two cut-off functions \(\theta_{1},\theta_{2}:[0,+\infty)\to[0,1]\) defined as follows:
\[\theta_{1}(x):=\begin{cases}1&\text{if}\,\,\,x\leq\frac{1}{8}\varepsilon^{-q }\\ \text{non increasing}&\text{if}\,\,\,\frac{1}{8}\varepsilon^{-q}\leq x\leq \frac{1}{4}\varepsilon^{-q}\\ 0&\text{if}\,\,\,x\geq\frac{1}{4}\varepsilon^{-q}\end{cases}\]
and
\[\theta_{2}(x):=\begin{cases}0&\text{if}\,\,\,x\leq 8\varepsilon^{-q}\\ \text{non decreasing}&\text{if}\,\,\,8\varepsilon^{-q}\leq x\leq 16\varepsilon^{-q} \\ 1&\text{if}\,\,\,x\geq 16\varepsilon^{-q};\end{cases}\]
Since for sufficiently small \(\varepsilon\) we have that \(\omega\) is exact on \(K:=\{\frac{1}{8}\varepsilon^{-q}\leq r(\zeta)\leq 8\varepsilon^{-q}\}\), there exists a \(3\)-form \(\beta\) such that
\[\omega^{2}=d\beta\quad\text{on}\,\,K.\]
We then introduce the form
\[\Omega_{c}:=d\big((\theta_{1}(r(\zeta))+\theta_{2}(r(\zeta)))\beta\big),\]
which is a smooth compactly supported form. Moreover, the form
\[\beta-(\theta_{1}(r)+\theta_{2}(r))\beta\]
can be extended by zero to the whole of \(M\), thanks to the definition of the cut-offs, and thus we obtain that
\[[\omega^{2}]=[\Omega_{c}],\]
i.e. the class \([\omega^{2}]\) admits a compactly supported representative. In addition, the two cut-offs introduced also allow us to decompose \(\Omega_{c}=\Omega_{c}^{\prime}+\Omega_{c}^{\prime\prime}\), such that on \(K\) we have
\[\Omega_{c}^{\prime}=d(\theta_{1}(r)\beta)\quad\text{and}\quad\Omega_{c}^{ \prime\prime}=d(\theta_{2}(r)\beta),\]
and both \(\Omega_{c}^{\prime}\) and \(\Omega_{c}^{\prime\prime}\) are compactly supported and closed; in particular, their supports are contained in \(\hat{X}\) and \(M_{reg}\), respectively (via the obvious identifications), and from their definition it is straightforward to see that
\[[\Omega_{c}^{\prime}]=\varepsilon^{4(p+q)}[\omega_{co,a}^{2}]\in H^{4}_{c}( \hat{X})\]
and
\[[\Omega_{c}^{\prime\prime}]=[\tilde{\omega}^{2}]\in H_{c}^{4}(M_{reg}),\]
where \(H_{c}\) denotes the compactly supported cohomology group. Also, recalling that \(\hat{X}\simeq\mathcal{O}_{\mathbb{P}^{1}}(-1)^{\oplus 2}\), it is clear that \(\hat{X}\) is homotopy equivalent to \(\mathbb{P}^{1}\); hence, applying Poincare duality we get
\[H_{c}^{4}(\hat{X})\simeq H_{2}^{c}(\hat{X})\simeq H_{2}^{c}(\mathbb{P}^{1})=H_ {2}(\mathbb{P}^{1})=\langle[\mathbb{P}^{1}]\rangle,\]
which means that the non-zero class \([\omega_{co,a}^{2}]\) is, up to multiplicative constants, the Poincare dual of the generator of \(H_{2}(\mathbb{P}^{1})\) (so that we may identify the two), and thus we can write
\[[\omega^{2}]=[\tilde{\omega}^{2}]+\varepsilon^{4(p+q)}[\mathbb{P}^{1}]\quad \text{in}\;\;H_{dR}^{2,2}(M).\]
Finally, we also notice that
\[\int_{\mathbb{P}^{1}}\omega=\varepsilon^{2(p+q)}\int_{\mathbb{P}^{1}}\omega_ {co,a}\underset{\varepsilon\to 0}{\longrightarrow}0,\]
hence the balanced class \([\omega^{2}]\), as \(\varepsilon\to 0\), converges to a _nef class_, i.e. to the boundary of the balanced cone. This completes the proof of Proposition 1.2.
From what was proven above, the pre-gluing metric \(\omega\) appears suitable for a deformation argument, but unfortunately it is exactly here that the issue arises, stemming from the asymptotic behaviour of the Candelas-de la Ossa metrics.
Indeed, we can again consider the balanced Monge-Ampere type equation (1), obtained with our ansatz for the Fu-Wang-Wu balanced deformation, and obtain the corresponding operator \(F\) and its linearization at zero \(L\) (we keep the same names as above for these operators since their expressions are unchanged). At this point, considering weighted Holder spaces analogous to the ones used in Section 3, and a variation of \(F\) (following an argument of [Sz]) given by \(\tilde{F}(\psi):=\frac{\omega_{\psi}^{n}}{\omega^{n}}-e^{f+ev_{x}\psi}\), we obtain, with essentially the same proof, the invertibility of the corresponding linearization \(\tilde{L}\) and an estimate for its inverse (as in Lemma 3.4), i.e.
**Lemma 4.8**.: _For every \(b\in(0,2)\) there exists \(c>0\) (independent of \(\varepsilon\)) such that for sufficiently small \(\varepsilon\) the operator \(\tilde{L}\) is invertible and it holds_
\[||u||_{C^{2,\alpha}_{\varepsilon,b}}\leq c||\tilde{L}u||_{C^{0,\alpha}_{ \varepsilon,b+2}},\]
_for all \(u\in C^{2,\alpha}_{\varepsilon,b}\)._
From here, we see that we can again turn the equation \(\tilde{F}(\psi)=0\) (which still produces Chern-Ricci flat balanced metrics) into a fixed point problem. In order to do this we shall introduce the operators \(\hat{F},E,G:C^{2,\alpha}_{\varepsilon,b}(M)\to C^{0,\alpha}_{\varepsilon,b+2} (M)\) defined as
\[\hat{F}(\psi):=\frac{\omega_{\psi}^{3}}{\omega^{3}},\quad E(\psi):=e^{f+ev_{x} (\psi)}\quad\text{and}\quad G(\psi)=e^{f}ev_{x}(\psi),\]
from which we can write
\[\tilde{F}=\hat{F}-E.\]
Now, we can consider the expansion
\[\hat{F}(\psi)=\hat{F}(0)+L(\psi)+\hat{Q}(\psi),\]
and thus rewrite \(\tilde{F}(0)=0\) as
\[\hat{F}(0)+L(\psi)+\hat{Q}(\psi)-E(\psi)=0.\]
Here, we notice that \(\tilde{L}=L-G\), thus we can rewrite \(\tilde{F}(0)=0\) once more and get
\[\hat{F}(0)+\tilde{L}(\psi)+\hat{Q}(\psi)+G(\psi)-E(\psi)=0,\]
and using the above Lemma, we get that the balanced Monge-Ampere type equation is therefore equivalent to
\[\psi=\tilde{L}^{-1}(E(\psi)-G(\psi)-\hat{F}(0)-\hat{Q}(\psi))=:N(\psi), \tag{15}\]
i.e. the search for a fixed point for the operator \(N:C^{2,\alpha}_{\varepsilon,b}(M)\to C^{2,\alpha}_{\varepsilon,b}(M)\).
At this stage, analogously to the above, it is easy to check that on a suitable open set \(U_{\tau}\), with \(\tau>0\), given by
\[U_{\tau}:=\{\varphi\in C^{2,\alpha}_{\varepsilon,b}\mid||\varphi||_{C^{2, \alpha}_{\varepsilon,b}}<\tilde{c}\varepsilon^{(p+q)(b+2)+\tau}\}\subseteq C^ {2,\alpha}_{\varepsilon,b},\]
it holds that \(N\) is a contraction operator. Unfortunately, it is impossible to consistently choose \(p\), \(q\) and \(\tau\) to repeat the above proof and obtain that
\[N(U_{\tau})\subseteq U_{\tau},\]
and this is caused by the asymptotic quadratic decay to the cone of the Candelas-de la Ossa metrics (unusual for Calabi-Yau metrics). In fact, this quadratic decay is exactly the threshold for this argument to work: if said decay were (arbitrarily) faster than quadratic, the argument would go through without issues.
Analyzing the Candelas-de la Ossa metrics further, one can see that if we just consider the cut-off metrics \(\omega_{a,R}\) on the small resolution \(\hat{X}\), these are exactly conical at infinity, so a deformation argument like the one performed above could lead to Chern-Ricci flat balanced metrics with faster decay to the cone. Unfortunately, the metric \(\omega_{a,R}\) cannot be used to do this, as the "initial error" given by the Chern-Ricci potential of said metric turns out to blow up with respect to the weighted Holder norm, suggesting that there might not be any Chern-Ricci flat balanced metrics in a neighborhood of the Candelas-de la Ossa metrics.
Hence, a possibility that we wish to explore in order to solve this issue is to understand whether it is possible to obtain Chern-Ricci flat balanced metrics on \(\hat{X}\) which have fast decay but are not necessarily close to the Candelas-de la Ossa metrics. The approach we think might be interesting is to try to obtain a balanced version of Conlon-Hein's result (see [CH]) starting from the metric \(\omega_{a,R}\), which would immediately produce the missing ingredient to complete the above failed gluing construction. Obviously such a problem comes with several challenges on the analytic side, as the balanced setting and the definition of the balanced Monge-Ampere type equation do not allow many of the tools typically used to obtain Yau's estimates, such as the Moser iteration technique, and the non-compact (albeit weighted) setting also makes it hard to apply other inequalities that are typically used in non-Kahler settings, such as the Cherrier inequality (see [TW1]). Another possibly interesting path would be to try to understand whether the balanced class induced by the metric \(\omega\) could be a polystable class for the holomorphic tangent bundle. This, thanks to the Hitchin-Kobayashi correspondence, would lead us to the existence of Hermite-Einstein metrics on said bundle, and thus add a building block to the construction of a solution to the Hull-Strominger system.
|
2309.15900 | Simulating ionization feedback from young massive stars: impact of
numerical resolution | Modelling galaxy formation in hydrodynamic simulations has increasingly
adopted various radiative transfer methods to account for photoionization
feedback from young massive stars. However, the evolution of HII regions around
stars begins in dense star-forming clouds and spans large dynamical ranges in
both space and time, posing severe challenges for numerical simulations in
terms of both spatial and temporal resolution that depends strongly on gas
density ($\propto n^{-1}$). In this work, we perform a series of idealized HII
region simulations using the moving-mesh radiation-hydrodynamic code Arepo-RT
to study the effects of numerical resolution. The simulated results match the
analytical solutions and the ionization feedback converges only if the
Str\"omgren sphere is resolved by at least $10$--$100$ resolution elements and
the size of each time integration step is smaller than $0.1$ times the
recombination timescale. Insufficient spatial resolution leads to reduced
ionization fraction but enhanced ionized gas mass and momentum feedback from
the HII regions, as well as degrading the multi-phase interstellar medium into
a diffuse, partially ionized, warm ($\sim8000$ K) gas. On the other hand,
insufficient temporal resolution strongly suppresses the effects of ionizing
feedback. This is because longer timesteps are not able to resolve the rapid
variation of the thermochemistry properties of the gas cells around massive
stars, especially when the photon injection and thermochemistry are performed
with different cadences. Finally, we provide novel numerical implementations to
overcome the above issues when strict resolution requirements are not
achievable in practice. | Yunwei Deng, Hui Li, Rahul Kannan, Aaron Smith, Mark Vogelsberger, Greg L. Bryan | 2023-09-27T18:00:00Z | http://arxiv.org/abs/2309.15900v2 | # Simulating ionization feedback from young massive stars: impact of numerical resolution
###### Abstract
Modelling galaxy formation in hydrodynamic simulations has increasingly adopted various radiative transfer methods to account for photoionization feedback from young massive stars. However, the evolution of H ii regions around stars begins in dense star-forming clouds and spans large dynamical ranges in both space and time, posing severe challenges for numerical simulations in terms of both spatial and temporal resolution that depends strongly on gas density (\(\propto n^{-1}\)). In this work, we perform a series of idealized H ii region simulations using the moving-mesh radiation-hydrodynamic code arepo-rt to study the effects of numerical resolution. The simulated results match the analytical solutions and the ionization feedback converges only if the Stromgren sphere is resolved by at least 10-100 resolution elements and the size of each time integration step is smaller than 0.1 times the recombination timescale. Insufficient spatial resolution leads to reduced ionization fraction but enhanced ionized gas mass and momentum feedback from the H ii regions, as well as degrading the multi-phase interstellar medium into a diffuse, partially ionized, warm (\(\sim 8000\,\mathrm{K}\)) gas. On the other hand, insufficient temporal resolution strongly suppresses the effects of ionizing feedback. This is because longer timesteps are not able to resolve the rapid variation of the thermochemistry properties of the gas cells around massive stars, especially when the photon injection and thermochemistry are performed with different cadences. Finally, we provide novel numerical implementations to overcome the above issues when strict resolution requirements are not achievable in practice.
keywords: H ii regions - methods: numerical - radiative transfer - hydrodynamics - galaxies: evolution
## 1 Introduction
Feedback from young massive stars plays a key role in the evolution of giant molecular clouds (GMCs), galaxies, and the intergalactic medium throughout the Universe. Born in their natal clouds, massive stars provide feedback to the ambient interstellar medium (ISM) with ultra-violet (UV) radiation and stellar winds, and die as core-collapse supernovae (SNe). Ionizing radiation, together with stellar winds, from these massive stars alters the ionization state of the ambient gas, and injects comparable thermal energy and momentum as SNe (e.g. Agertz et al., 2013; Geen et al., 2015; Jeffreson et al., 2021). Such an energetic early feedback mechanism can disperse GMCs on a short time scale of \(\sim 1.5\,\mathrm{Myr}\)(Kruijssen et al., 2019) prior to the SN explosion (e.g. Fall et al., 2010; Dale et al., 2012, 2014; Raskutti et al., 2016; Li et al., 2019) and thus is crucial to regulate the star formation in galaxies (e.g. Hopkins et al., 2012; Emerick et al., 2018; Hopkins et al., 2020; Chevance et al., 2022). It is also a key ingredient of galaxy formation simulations and significantly affects predicted galactic properties (e.g. Shapiro, 1986; Whalen et al., 2004; Kannan et al., 2014, 2020, 2022, 2022).
Recently, with the surging interest in the role of radiative feedback in the formation and evolution of galaxies, several radiation-hydrodynamics implementations have been developed (e.g. Aubert and Teyssier, 2008; Petkova and Springel, 2009; Wise and Abel, 2011; Rosdahl et al., 2013; Jaura et al., 2018; Kannan et al., 2019; Smith et al., 2020; Chan et al., 2021; Peter et al., 2023). Sophisticated numerical algorithms have significantly reduced the cost of solving the radiative transfer (RT) equations, making it more affordable for cosmological simulations. However, radiative feedback from individual massive stars originates from very small \(\sim 10^{4}\,\mathrm{K}\) fully ionized regions, namely _initial_ Stromgren spheres (Stromgren, 1939). Typically, the mass of the initial Stromgren sphere (initial Stromgren mass) is only on the order of several solar masses and its formation occurs within a short timespan of hundreds of years. Driven by these ionized cores, compact H ii regions expand outward and deposit momentum and kinetic energy to the turbulent ISM until the massive
stars die in a few million years. Capturing such small-scale physics requires high spatial and temporal resolution and thus tremendous computational resources, which is often impractical to treat directly in galaxy formation simulations. Therefore, modelling the impact of individual H ii regions (photo-heating and expansion) from massive stars is numerically challenging.
The tiny mass and rapid formation of the initial Stromgren spheres make them extremely challenging to model in galaxy formation simulations. The largest-volume cosmological simulations of galaxy formation can only afford mass resolutions larger than \(10^{6}\,\mathrm{M}_{\odot}\) for baryonic cells (e.g. Schaye et al., 2015; Pillepich et al., 2018; Kannan et al., 2022, 2023; Hernandez-Aguayo et al., 2022, and see Vogelsberger et al., 2020 for a recent review). Even zoom-in simulations of single Milky Way-like galaxies (e.g. Rosdahl et al., 2015; Hopkins et al., 2018; Marinacci et al., 2019) usually have mass resolutions spanning from \(10^{3}-10^{5}\,\mathrm{M}_{\odot}\) per gas resolution element, which is far larger than the initial Stromgren mass. Although recent dwarf galaxy simulations (e.g. Emerick et al., 2019; Agertz et al., 2020; Lahen et al., 2020; Gutcke et al., 2021) have begun to treat feedback from individual stars in dwarf galaxies with a much higher mass resolution of several solar masses, directly resolving the individual H ii regions still remains a numerical issue. Scaling inversely with density, the Stromgren spheres are normally unresolved in gas with \(n_{\rm H}\gtrsim 10\,\mathrm{cm^{-3}}\) for high-resolution galaxy radiation-hydrodynamic simulations (e.g. Rosdahl et al., 2015, 2022). Even for GMC simulations, resolving H ii regions can still be challenging in high-density regimes as both the mass and formation timescale of Stromgren spheres decrease significantly at high density (\(\propto n^{-1}\)).
Inadequate resolution often leads to problematic results in galaxy formation simulations. Historically, the inability to resolve the Sedov-Taylor phase of SN explosions led to artificially rapid radiative cooling and to the formation of too many stars too early in cosmological simulations (Katz, 1992). This so-called _over-cooling_ problem has been addressed by employing a variety of sub-grid prescriptions (see Naab and Ostriker, 2017 for a recent review). In terms of ionization feedback, several works have explored the consequences of insufficient resolution when simulating idealized H ii regions, including the impact on radial profiles and momentum feedback (e.g. Wise and Abel, 2011; Bisbas et al., 2015; Petkova et al., 2021; Pittard et al., 2022; see also Ivkovic, 2023). However, the physics behind these numerical issues needs to be further clarified, especially in the context of specific RT methods and applications. Furthermore, most studies focus solely on spatial resolution and disregard the influence of temporal resolution. Therefore, there is an urgent need for well-understood solutions to address these resolution problems to enhance the convergence of current RT implementations in a more physical manner.
In this article, we perform a suite of radiation-hydrodynamic simulations of idealized H ii regions with the moment-based M1 closure RT implementation (Kannan et al., 2019) in the moving-mesh code arepo (Springel, 2010). We study the effects of both spatial (mass) and temporal (time-stepping) resolution on the behaviour of simulated H ii regions. For simplicity, our tests are all performed in uniform pure hydrogen media, but the discussions and conclusions based on the hydrogen-only case are also valid for the other species once the coefficients are replaced accordingly. We address three issues arising from insufficient spatial (mass) and temporal resolution in stellar ionization feedback:
* **Insufficient spatial (mass) resolution**: (i) _Over-ionization_: overestimation of the total ionized gas mass due to gas cells that cannot be fully ionized, shifting the ionization-recombination balance to the ionization side; (ii) _Over-heating_: overestimation of energy and momentum deposition due to the artificial heating of a substantial amount of partially ionized gas;
* **Insufficient temporal resolution**: (i) _Missing photons_: underestimation of the overall feedback due to excess photons being absorbed without any effect on the ionization state of the gas.
We also emphasize that _missing photons_ can be a very serious problem when photon injection and thermochemistry have different cadences. We explain the physical reasons for these numerical issues and describe novel numerical implementations to solve these issues at an acceptable computational cost.
This paper is organized as follows: We review the physics and analytic results of H ii region expansion in different phases in Section 2. In Section 3, we describe our simulation setup and RT implementations. In Sections 4 and 5, we present the results of our tests for spatial and temporal resolution dependence, respectively. We then introduce several solutions in Section 6. Lastly, we discuss and summarize our results in Section 7.
## 2 Analytical theory of H ii regions
In this section, we review the physics of idealized H ii regions in a uniform medium. Because of the differences in the dominant physical processes and timescales, the evolution of an idealized H ii region around a massive star can be divided into two stages, namely the _formation_ and _expansion_ phase (e.g. Yorke, 1986). Realistic H ii regions expanding in a turbulent environment with a roughly power-law density profile (e.g. Shu, 1977; McKee and Tan, 2003; Lee and Hennebelle, 2018) can expand preferentially toward the rarefied regions and launch a "champagne flow" that rapidly ionizes and heats the surrounding cloud (Franco et al., 1990; Zamora-Aviles et al., 2019; Geen et al., 2020, 2021). Detailed discussions of H ii regions in realistic environments are outside the scope of this work on the impact of numerical resolution; we refer the reader to the aforementioned literature.
### Formation phase
Once a hot massive star ignites in neutral gas, a so-called R(rarefied)-type ionization front (I-front), characterized by a rapid expansion and a negligible density change across the I-front, is driven through the gas. This I-front leaves the gas behind the front hot (\(\sim 10^{4}\,\mathrm{K}\)) and ionized but otherwise almost undisturbed. The speed of this I-front is initially supersonic. It slows down gradually until the radius of the H ii region reaches the "initial" Stromgren radius.
This idealized problem of a fully ionized, spherical region of uniform density with the ionization maintained by constantly emitted ionizing photons from a central massive star is known as a Stromgren sphere (Stromgren, 1939). Assuming the gas is pure hydrogen, a steady solution for the radius of the Stromgren sphere (Stromgren radius) can be found by requiring ionization equilibrium,
\[R_{\mathrm{S}}=\left(\frac{3Q}{4\pi n_{\mathrm{H}}^{2}\alpha_{\mathrm{B}}} \right)^{1/3}\approx 0.315\,Q_{48}^{1/3}n_{3}^{-2/3}\alpha_{0}{}^{-1/3}\,\mathrm{pc}\,, \tag{1}\]
where \(\alpha_{\mathrm{B}}\) is the effective hydrogen radiative recombination rate for Case B approximation (Baker and Menzel, 1938), \(\alpha_{0}\equiv\alpha_{\mathrm{B}}/(2.59\times 10^{-13}\,\mathrm{cm^{3}s^{-1}})\), \(Q\) is the rate of emission of ionizing photons with \(Q_{48}\equiv Q/(10^{48}\,\mathrm{s^{-1}})\), and \(n_{\mathrm{H}}\) is the hydrogen density with
\(n_{3}\equiv n_{\rm H}/(10^{3}\,{\rm cm}^{-3})\). Since the initial Stromgren sphere is almost fully ionized, the initial mass of ionized gas, the _initial Stromgren mass_, is
\[M_{\rm S}=\frac{4}{3}\pi R_{\rm S}^{3}n_{\rm H}m_{\rm H}=\frac{m_{\rm H}}{\alpha_{\rm B}}\frac{Q}{n_{\rm H}}\approx 3.25\,Q_{48}n_{3}^{-1}\alpha_{0}^{-1}\,{\rm M}_{\odot}\,. \tag{2}\]
Assuming the I-front is infinitely thin (\(l_{\rm mfp}\lesssim 10^{-4}n_{3}^{-1}\,{\rm pc}\)), in the non-relativistic limit, the evolution of the initial Stromgren sphere in the formation phase follows (Spitzer, 1978)
\[r_{i}=R_{\rm S}\left(1-e^{-t/t_{\rm rec}}\right)^{1/3}\,, \tag{3}\]
where \(r_{i}\) is the radius of the I-front and \(t_{\rm rec}\) is the recombination timescale given by
\[t_{\rm rec}=\frac{1}{\alpha_{\rm B}n_{\rm H}}\approx 122.3\,n_{3}^{-1}\alpha_{ 0}^{-1}\,{\rm yr}\,. \tag{4}\]
Equations (1), (2), and (4) show that the typical initial Stromgren spheres have sizes under a few parsecs, masses of several solar masses, and form within several hundreds of years. Such spatial and time scales are much smaller and shorter than the scales of the subsequent expansion phase in most astrophysical environments.
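As a quick numerical check of these scalings, the short Python sketch below evaluates equations (1), (2), and (4) directly; the constant values and the function name are our own illustrative choices rather than part of any simulation code.

```python
import math

# Physical constants (cgs)
M_H = 1.6726e-24      # hydrogen mass [g]
PC = 3.086e18         # parsec [cm]
MSUN = 1.989e33       # solar mass [g]
YR = 3.156e7          # year [s]

def stromgren_scales(Q, n_H, alpha_B=2.59e-13):
    """Initial Stromgren radius [pc], mass [Msun], and recombination time [yr]
    for a source of ionizing-photon rate Q [1/s] in a uniform medium of
    hydrogen number density n_H [1/cm^3] (equations 1, 2, and 4)."""
    R_S = (3.0 * Q / (4.0 * math.pi * n_H**2 * alpha_B))**(1.0 / 3.0)
    M_S = M_H * Q / (alpha_B * n_H)
    t_rec = 1.0 / (alpha_B * n_H)
    return R_S / PC, M_S / MSUN, t_rec / YR

# Reference values quoted in the text: Q = 1e48 1/s, n_H = 1e3 1/cm^3
print(stromgren_scales(1e48, 1e3))   # ~ (0.315 pc, 3.25 Msun, 122 yr)
```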
### Expansion phase
Once the I-front reaches the initial Stromgren radius (\(R_{\rm S}\)), the high pressure of the ionized gas drives the expansion of the H ii region. It undergoes a rapid transition from an R-type to a D(dense)-type front: the ionized gas in photoionization equilibrium expands, driven by the pressure gradient relative to the neutral gas, and causes a shock to separate from the I-front and proceed into the surrounding neutral gas. The neutral gas thus begins to accumulate between the shock front and the I-front. This expansion phase itself can also be divided into two phases: the _early_ phase, during which the pressure contrast between the ionized gas and the surrounding neutral gas is large; and a _later_ phase, when the internal pressure asymptotically balances with the thermal pressure of the surrounding gas.
Assuming a thin swept-up shell (\(v_{I}\sim v_{\rm shock}\)) and applying the conservation of momentum flux across the I-front and shock front, Spitzer (1978) gave the classic solution for the time evolution of the expanding I-front. However, this solution deviates from the expansion at very early times because it ignores the inertia of the shocked gas. Incorporating the inertia, Hosokawa & Inutsuka (2006) obtained a solution which shows better agreement with numerical results (Bisbas et al., 2015):
\[r_{i}=R_{\rm S}\left(1+\frac{7}{4}\sqrt{\frac{4}{3}}\frac{c_{i}t}{R_{\rm S}} \right)^{4/7}\,. \tag{5}\]
As the Stromgren sphere expands into the uniform medium, its ionized mass continues to increase following a simple scaling relation (Bisbas et al., 2015):
\[M_{i}(t)=M_{\rm S}\left(\frac{r_{i}(t)}{R_{\rm S}}\right)^{3/2}\,. \tag{6}\]
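For concreteness, the following sketch evaluates equations (5) and (6); the Strömgren radius and mass are computed from equations (1)-(2), while the ionized-gas sound speed \(c_{i}\approx 12.9\,{\rm km\,s^{-1}}\) is an assumed value appropriate for fully ionized hydrogen at \(\sim 10^{4}\) K rather than a number taken from the text.

```python
import math

M_H, PC, MSUN, MYR = 1.6726e-24, 3.086e18, 1.989e33, 3.156e13

def early_expansion(t, Q, n_H, c_i=1.285e6, alpha_B=2.59e-13):
    """I-front radius (equation 5) and ionized mass (equation 6) at time t [s]
    for a source Q [1/s] in a uniform medium of density n_H [1/cm^3].
    c_i is the assumed ionized-gas sound speed [cm/s]."""
    R_S = (3.0 * Q / (4.0 * math.pi * n_H**2 * alpha_B))**(1.0 / 3.0)
    M_S = M_H * Q / (alpha_B * n_H)
    r_i = R_S * (1.0 + 1.75 * math.sqrt(4.0 / 3.0) * c_i * t / R_S)**(4.0 / 7.0)
    M_i = M_S * (r_i / R_S)**1.5
    return r_i / PC, M_i / MSUN

# Conditions similar to Test 2 below (Q = 1e48 1/s, n_H = 100 1/cm^3):
for t_myr in (0.1, 0.5, 1.0):
    print(t_myr, early_expansion(t_myr * MYR, 1e48, 100.0))
```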
H ii regions embedded in very dense (\(\sim 10^{6}\,{\rm cm}^{-3}\)) or warm (\(>100\,{\rm K}\)) gas can enter their later phase of expansion (Raga et al., 2012). In the later phase, the expansion of the H ii region is stagnant while the pressure is equilibrated with the ambient gas so the classic analytic formulas are no longer valid. In hydrodynamic simulations, there is no clear boundary between the early and later phases because we follow the gas dynamics self-consistently. Since the momentum feedback from H ii regions is injected mostly in the early phase, we will therefore focus on the early phase expansion in this work.
## 3 Methods
In this section, we briefly describe the setup of the initial conditions and the terminology used to define the mass and temporal resolution. We also recap the parts of the arepo-rt implementation that are relevant for later discussions.
### Initial conditions
To test the dependence of simulation results on the spatial and temporal resolution, we simulate idealized H ii regions formed by an individual massive star in uniform pure neutral hydrogen gas. This is a purely radiation-hydrodynamical test, i.e. gravity and magnetic fields are not included. For each set of tests, a stellar particle with a steady ionizing photon emission rate \(Q\) is placed at the centre of a box with size \(L\) at least 3.6 times larger than the radius of the H ii region at the end of the simulations. The gas cells are initially arranged as a regular staggered mesh, which is constructed by overlapping two Cartesian meshes with a displacement of \(0.45\Delta x\) along each axis from each other. Such a mesh configuration is adopted to stabilize the construction of the Voronoi tessellation and minimize density fluctuations in arepo.
We initialize the gas cells as pure hydrogen to directly compare with the analytical results introduced in Section 2. The initial temperature and density differ among our three tests; we introduce each of them at the beginning of the corresponding section.
#### 3.1.1 Definitions of mass and temporal resolution
Since the main goal of this paper is to explore the effects of numerical resolution on the ionization feedback of H ii regions, here we define explicitly the meaning of the mass and temporal resolution in our numerical experiments.
In quasi-Lagrangian codes like arepo, where the mass of each gas cell is close to the target gas mass \(M_{\rm cell}\), we define the _mass resolution_ \({\cal R}_{i}\) of the initial Stromgren sphere as the ratio between the initial Stromgren mass and the target gas mass:
\[{\cal R}_{i}=\frac{M_{\rm S}}{M_{\rm cell}}=\frac{m_{\rm H}}{\alpha_{\rm B}} \frac{Q}{n_{\rm H}M_{\rm cell}}\,. \tag{7}\]
Since the expansion of H ii regions is driven by the ionized gas inside the initial Stromgren sphere, \({\cal R}_{i}\) is critical for determining whether the Stromgren sphere and its feedback are resolved.
The mass resolution has a direct connection to the physics of the evolution of H ii regions so it is much more useful and convenient than the spatial resolution in this work. Still, we define the conversion between mass and spatial resolution via
\[\Delta x=\left(\frac{Q}{\alpha_{\rm B}n_{\rm H}^{2}{\cal R}_{i}}\right)^{1/3}\,. \tag{8}\]
For convenience, we use the terms'spatial resolution' and'mass resolution' interchangeably.
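The two definitions can be evaluated directly; the sketch below (with our own function names) reproduces the Test 1 numbers quoted in Section 4.1 from equations (7) and (8).

```python
M_H, MSUN, PC = 1.6726e-24, 1.989e33, 3.086e18

def mass_resolution(Q, n_H, M_cell, alpha_B=2.59e-13):
    """Mass resolution R_i of the initial Stromgren sphere (equation 7)."""
    return M_H * Q / (alpha_B * n_H * M_cell)

def cell_size(Q, n_H, R_i, alpha_B=2.59e-13):
    """Equivalent cell size Delta x (equation 8), in cm."""
    return (Q / (alpha_B * n_H**2 * R_i))**(1.0 / 3.0)

# Test 1 parameters: Q = 3.17e46 1/s, n_H = 1 1/cm^3, M_cell = 1e-3 Msun
R_i = mass_resolution(3.17e46, 1.0, 1e-3 * MSUN)
print(R_i, cell_size(3.17e46, 1.0, R_i) / PC)   # ~1e5 and ~0.34 pc
```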
The _temporal resolution_ is determined by the size of the time integration steps \(\Delta t\). To examine how the results depend on temporal resolution, in the main tests we fix the time step to a given value in each run, for all cells and for both the RT and hydro solvers. In realistic simulations, gas cells and operations (e.g. hydrodynamics and RT) are allowed to have different time steps, enabling subcycling of processes with short characteristic timescales. For simplicity, we assume a uniform and fixed time step \(\Delta t\) for all cells and operations throughout the simulation, except in Section 5.1.2 and related sections. In these sections, we will illustrate how the temporal resolution issues
can be notably amplified due to the discrepancy between the sizes of the photon injection steps (\(\Delta t_{\star}\)) and the RT/thermochemistry steps (\(\Delta t_{\rm RT}\)).
### Radiative transfer implementation
We use arepo-rt (Kannan et al., 2019), the radiation-hydrodynamic extension of the moving-mesh hydrodynamic code arepo (Springel, 2010), to handle the propagation of radiation and the calculation of thermochemistry. arepo-rt adopts a moment-based RT scheme with the M1 closure relation (Levermore, 1984) to solve the set of hyperbolic conservation equations for photon density and flux. The UV radiation continuum is usually divided into three separate bins spanning the energy intervals \([13.6,24.6)\), \([24.6,54.4)\), and \([54.4,\infty)\) eV, mirroring the hydrogen and helium ionizing photon groups. With these three bins, \(4\times 3\) additional variables are stored in each cell to describe the volume density of photons \(n_{\gamma}^{i}\) and the flux vector \(\mathbf{F}_{i}\). The index \(i\) here denotes the energy band corresponding to the ionic species H i, He i, and He ii. For simplicity, unless specified otherwise, we reduce these frequency-dependent coefficients to their effective values in a single \(>13.6\) eV bin in our discussion. Moreover, the ionization state of each species in each cell is also stored as \(x_{\rm H\,\textsc{ii}}\), \(x_{\rm He\,\textsc{ii}}\), and \(x_{\rm He\,\textsc{iii}}\), with \(x_{\rm H\,\textsc{i}}\) and \(x_{\rm He\,\textsc{i}}\) set by detailed balancing. We adopt the on-the-spot approximation (OTSA), which assumes that all recombination photons are re-absorbed immediately in the surrounding medium (case B, Osterbrock & Ferland, 2006). We note that although the OTSA may not capture the ionization profiles accurately in the optically thin regime near the source (Raicevic et al., 2014), it is a reasonable approximation in the high-density mostly neutral regions where the resolution issues occur (see Section 7.1).
The RT equations are solved with an operator-splitting strategy over a pre-determined time-step \(\Delta t\). Each RT step is decomposed into three sub-operations executed successively. Firstly, inject photons from the star into the neighbouring cells. Secondly, solve the RT equations and propagate photons in space. Lastly, solve the thermochemistry equations to determine the ionization states and conduct the heating and cooling processes in each cell. Since arepo uses a two-stage second-order Runge-Kutta integration scheme (i.e. Heun's method, Pakmor et al., 2016), the propagation and thermochemistry are in fact split into two halves within an RT step.
#### 3.2.1 Photon injection
In this work, we focus on the ionization feedback from an individual star. Each test or simulation contains a single _stellar particle_ with a fixed ionizing photon emission rate \(Q\) and ionization spectrum, which corresponds to an idealized massive star. For a steady stellar source, its ionizing photon emission rate \(Q\) is divided into three frequency bins weighted by a given spectrum and \(\sum_{i}Q_{i}=Q\). At time-step \(\Delta t\), the number of injected photons from a stellar particle in the UV bin \(i\) is \(\Delta N_{\gamma}^{i}=Q_{i}\Delta t\). These \(\Delta N_{\gamma}^{i}\) photons from the stellar particle are then weighted among a given number of its nearest neighbouring gas cells (\(N_{\rm nb}=32\) by default). The weight factor \(w_{k}\) for cell \(k\) is calculated from the solid angle that the cell subtends as seen from the stellar particle. Therefore, the number of ionizing photons dumped into a cell \(k\) close to a star at time-step \(\Delta t\) is \(\Delta N_{\gamma,k}^{i}=w_{k}\Delta N_{\gamma}^{i}\) where \(\sum_{k}w_{k}=1\).
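A schematic of this weighting scheme is sketched below; the weight formula used here (cell cross-section over \(4\pi d^{2}\)) is a simplified stand-in for the exact solid-angle calculation in arepo-rt, and all function names are our own.

```python
import numpy as np

def injection_weights(star_pos, nb_pos, nb_vol):
    """Schematic solid-angle weighting for photon injection: each neighbour
    receives a weight proportional to the solid angle it roughly subtends as
    seen from the star, approximated here by (cell cross-section)/(4 pi d^2)
    and normalized to unity."""
    d2 = np.sum((nb_pos - star_pos)**2, axis=1)
    w = nb_vol**(2.0 / 3.0) / (4.0 * np.pi * d2)   # ~ cell area / sphere area
    return w / w.sum()

def inject(Q_i, dt, weights):
    """Photons dumped into each neighbouring cell during a time-step dt:
    Delta N_{gamma,k}^i = w_k * Q_i * dt."""
    return weights * Q_i * dt

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, size=(32, 3))   # 32 nearest neighbours (arbitrary units)
vol = np.full(32, 0.05)                      # equal cell volumes
w = injection_weights(np.zeros(3), pos, vol)
print(w.sum(), inject(1e48, 1e9, w)[:3])     # weights sum to 1
```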
#### 3.2.2 Radiation transport
arepo-rt simulates the propagation of radiation by solving the set of hyperbolic conservation equations consisting of the zeroth- and first-order moments, which takes the form of
\[\frac{\partial n_{\gamma}^{i}}{\partial t}+\nabla\cdot\mathbf{F}_{i}=0\,,\] \[\frac{\partial\mathbf{F}_{i}}{\partial t}+\mathcal{E}^{2} \nabla\cdot\mathbb{P}_{i}=0\,, \tag{9}\]
where \(\mathbb{P}_{i}\) is the pressure tensor related to \(n_{\gamma}^{i}\) by the Eddington tensor. The true speed of light is reduced to \(\bar{c}\) to prevent prohibitively small time-steps, which is known as the reduced speed of light approximation (Gnedin & Abel, 2001). These equations are closed with the M1 closure relation. There are two frequently-used schemes to evaluate the interface fluxes: the Harten-Lax-van Leer (HLL, Harten et al., 1983) flux function and the global Lax-Friedrichs (GLF, Rusanov, 1961) flux function. The GLF function is more straightforward but makes the RT more diffusive, while the HLL function has inherent directionality but makes isotropic radiation from stars asymmetric (Rosdahl et al., 2013). For the sake of simplicity and symmetry, we adopt the GLF implementation in this work.
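The following minimal sketch illustrates, in one dimension, how a GLF interface flux for the M1 system can be assembled from the photon density and flux on the two sides of a cell interface. It assumes the standard Levermore closure for the Eddington factor and uses the reduced speed of light as the wave-speed bound; it is a sketch, not code extracted from arepo-rt.

```python
import numpy as np

def eddington_factor(f):
    """M1 closure: chi(f) = (3 + 4 f^2) / (5 + 2 sqrt(4 - 3 f^2))."""
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

def glf_flux_1d(nL, FL, nR, FR, c_red):
    """Global Lax-Friedrichs interface flux for the 1D M1 system (n_gamma, F),
    using the reduced speed of light c_red as the maximum wave speed."""
    def pressure(n, F):
        f = 0.0 if n <= 0.0 else min(abs(F) / (c_red * n), 1.0)
        return eddington_factor(f) * n           # P = chi * n_gamma in 1D
    # physical fluxes of (n_gamma, F) are (F, c_red^2 * P)
    flux_n = 0.5 * (FL + FR) - 0.5 * c_red * (nR - nL)
    flux_F = 0.5 * c_red**2 * (pressure(nL, FL) + pressure(nR, FR)) \
             - 0.5 * c_red * (FR - FL)
    return flux_n, flux_F

print(glf_flux_1d(nL=1.0, FL=0.5, nR=0.2, FR=0.0, c_red=1.0))
```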
#### 3.2.3 Thermochemistry, cooling, and heating
In arepo-rt, to calculate the number density of the different ionic species in each cell, a series of equations is solved with a semi-implicit time integration approach based on the method outlined in Petkova & Springel (2009), after the photon injection and transport are finished in each step. Only if the internal energy changes by more than 10 per cent during a time-step are the equations instead solved implicitly by calling the functions from the SUNDIALS CVODE package (Hindmarsh et al., 2005).
For the pure hydrogen case, as the simplest example, the thermochemistry equation is
\[\frac{\mathrm{d}n_{\rm H\,\textsc{ii}}}{\mathrm{d}t}=-\alpha_{\rm H\,\textsc{ii}}n_{\rm H\,\textsc{ii}}n_{e}+\sigma_{\rm eH\,\textsc{i}}n_{e}n_{\rm H\,\textsc{i}}+\bar{c}\,n_{\rm H\,\textsc{i}}\sum_{i}\sigma_{\rm H\,\textsc{i}}^{i}n_{\gamma}^{i}\,, \tag{10}\]
where \(n_{\rm H\,\textsc{i}}\), \(n_{\rm H\,\textsc{ii}}\), and \(n_{\gamma}^{i}\) are the volume densities of neutral hydrogen, ionized hydrogen, and photons in bin \(i\), respectively, with \(n_{\rm H\,\textsc{i}}=(1-x_{\rm H\,\textsc{ii}})n_{\rm H}\), \(n_{\rm H\,\textsc{ii}}=x_{\rm H\,\textsc{ii}}n_{\rm H}\), and \(n_{\gamma}^{i}=N_{\gamma}^{i}/V\). \(\sigma_{\rm H\,\textsc{i}}^{i}\) is the mean photoionization cross-section for bin \(i\), while \(\sigma_{\rm eH\,\textsc{i}}\) and \(\alpha_{\rm H\,\textsc{ii}}\) are the collisional ionization cross-section and the recombination rate taken from Katz et al. (1996). The three terms on the RHS correspond to the number density rates for H ii recombinations, H i collisional ionizations, and H i photoionizations, respectively.
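As an illustration of equation (10) for a single \(>13.6\) eV bin, the sketch below advances the ionization fraction of one cell with a simple explicit-Euler update (the actual solver is semi-implicit, as described above). The collisional term is dropped since, as noted later in Section 4.1.1, it is negligible in the UV-ionization context; the photon density, time-step, and reduced speed of light are illustrative assumed values.

```python
def dnHII_dt(x_HII, n_H, n_gamma, c_red, sigma_HI=3.8e-18, alpha_B=2.59e-13):
    """Right-hand side of equation (10) for a single >13.6 eV bin, with the
    collisional-ionization term neglected."""
    n_HII = x_HII * n_H
    n_HI = (1.0 - x_HII) * n_H
    n_e = n_HII
    return -alpha_B * n_HII * n_e + c_red * sigma_HI * n_gamma * n_HI

# Explicit-Euler sketch: a cell with n_H = 100 cm^-3 exposed to an assumed
# photon density of ~13 cm^-3 (roughly the value near the Stromgren radius
# for a Q = 1e48 1/s source with a reduced speed of light of 0.01 c).
x, n_H, c_red, dt = 0.0, 100.0, 0.01 * 2.998e10, 1e7
for _ in range(1000):
    x = min(1.0, x + dnHII_dt(x, n_H, 13.0, c_red) * dt / n_H)
print(x)   # approaches the photoionization-recombination equilibrium
```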
When conducting the heating and cooling, the internal energy per unit mass \(u\) is stored in each cell. arepo-rt calculates the heating (\(\Gamma\)) and cooling (\(\Lambda\)) rate per unit volume with parameters stored in each cell. The variation of thermal energy \(\Delta u\) for a cell in a time-step \(\Delta t\) is
\[\Delta u=\frac{1}{\rho}(\Gamma-\Lambda)\Delta t\,. \tag{11}\]
The main heating process in the context of idealized H ii regions is photoionization heating by the deposited energy of the ionizing photons. Written for a finite number of frequency bins, the photoheating rate can be given as
\[\Gamma_{j}=n_{j}\sum_{i}\int_{\nu_{i1}}^{\nu_{i2}}\frac{4\pi J_{\nu}}{h\nu} \sigma_{j\nu}(h\nu-h\nu_{j})\,\mathrm{d}\nu\,, \tag{12}\]
where \(h\) is the Planck constant and \(h\nu_{j}\) is the ionization potential of the ionic species \(j\). The total photoheating rate can therefore be given as \(\Gamma=\sum_{j}\Gamma_{j}\).
We mainly consider the following cooling processes in the pure hydrogen H ii regions: recombination (\(\Lambda_{\rm rec}\)), collisional excitation
(bound-bound, \(\Lambda_{\rm bb}\)), collisional ionization (bound-free, \(\Lambda_{\rm bf}\)), and Bremsstrahlung (free-free, \(\Lambda_{\rm ff}\)). The cooling equations we use follow Cen (1992) and are listed in Appendix A. The total cooling rate is therefore calculated by
\[\Lambda=\Lambda_{\rm rec}+\Lambda_{\rm bb}+\Lambda_{\rm bf}+\Lambda_{\rm ff}\,. \tag{13}\]
## 4 Effects of different mass (spatial) resolution
In this section, we study the resolution effects on the evolution of idealized H ii regions in their formation and expansion phases, respectively.
### Test 1 - Formation phase and over-ionization
In the first test, we investigate the impact of resolution on the formation of the initial Stromgren sphere. To do this, we perform static Stromgren sphere tests by placing a steady source of hydrogen ionizing photons with a rate of \(Q=10^{63}\,{\rm Gyr}^{-1}=3.17\times 10^{46}\,{\rm s}^{-1}\) (unity rate in arepo-rt) in a uniform pure hydrogen neutral medium of number density \(n_{\rm H}=1\,{\rm cm}^{-3}\). The effective H i ionization cross-section is \(\sigma_{\rm H\,\textsc{i}}=3.0\times 10^{-18}\,{\rm cm}^{2}\). For this test, we turn off the cooling and heating of gas and the temperature is fixed at \(T=10^{4}\) K. The speed of light is reduced to \(\bar{c}=0.001c\). The chosen initial condition prioritizes simplicity, while the results can be scaled to any arbitrary initial conditions using the dimensionless resolution \({\cal R}_{i}\). To test the effect of mass resolution \({\cal R}_{i}\), we set a series of initial conditions with cell mass from about \(10^{-3}\,{\rm M}_{\sun}\) to \(10^{3}\,{\rm M}_{\sun}\), equivalent to \({\cal R}_{i}\) from \(10^{5}\) to \(0.1\) or \(\Delta x\) from \(0.34\,{\rm pc}\) to \(34\,{\rm pc}\). All the cells are initially at rest and arranged in a regular staggered mesh.
In Figure 1, we present the projected maps of the H ii fractions for simulations with different resolutions, from \({\cal R}_{i}=10^{5}\) to \({\cal R}_{i}=0.1\) in steps of a factor of \(10^{2}\). The \({\cal R}_{i}=10^{5}\) and \({\cal R}_{i}=1000\) Stromgren spheres exhibit an ideal spherical morphology, indicating the ability of our code to capture the characteristic features of H ii regions. Compared with the analytic Stromgren radius (\(R_{\rm S}\), blue dotted circles), the high-resolution Stromgren spheres are almost fully ionized and concentrated inside \(r\lesssim R_{\rm S}\) with a sharp boundary. On the contrary, the low-resolution Stromgren spheres are much larger than \(R_{\rm S}\), filled with partly ionized gas with much lower H ii fractions.
In the top (middle) panels of Figure 2 we show the steady neutral (ionized) fraction profiles at \(t\approx 15\,t_{\rm rec}\). The profile of the ionization fraction can be described by (Osterbrock & Ferland, 2006)
\[\frac{x_{\rm H\,\textsc{i}}(r)}{4\pi r^{2}}\,Q\,e^{-\tau(r)}\,\sigma_{\rm H\,\textsc{i}}=x_{\rm H\,\textsc{ii}}^{2}(r)\,n_{\rm H}\,\alpha_{\rm B}\,, \tag{14}\]
where \(x_{\rm H\,\textsc{i}}(r)=n_{\rm H\,\textsc{i}}(r)/n_{\rm H}\), \(x_{\rm H\,\textsc{ii}}(r)=1-x_{\rm H\,\textsc{i}}(r)\), the optical depth \(\tau(r)=n_{\rm H}\sigma_{\rm H\,\textsc{i}}\int_{0}^{r}x_{\rm H\,\textsc{i}}(r^{\prime})\,{\rm d}r^{\prime}\), and the cross-section \(\sigma_{\rm H\,\textsc{i}}\) here is \(3\times 10^{-18}\,{\rm cm}^{2}\). This analytic solution is plotted in Figure 2 with the grey dashed curves as a benchmark. The analytic profile is well reproduced only in the \({\cal R}_{i}\geq 10^{5}\) simulation. With decreasing resolution, the profile becomes smoother: the partially ionized region becomes more extended while the fully ionized core region shrinks. In high-resolution cases, the ionization fraction \(x_{\rm H\,\textsc{ii}}\) reaches unity in the fully ionized core and there is no significant difference until about \({\cal R}_{i}=10\). However, once the H ii region transitions to being marginally resolved and then unresolved, the situation is very different. The effective size of one cell is larger than the initial Stromgren sphere, so the total ionized mass is only a small portion of the mass of one cell. Consequently, the ionization fraction is reduced to \(\sim M_{\rm S}/M_{\rm cell}={\cal R}_{i}\). As we present in Figure 2, \(x_{\rm H\,\textsc{ii}}\) falls quickly to \(\lesssim 0.1\) in the \({\cal R}_{i}=0.1\) case.
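Equation (14) can be integrated outward from the source to obtain this analytic profile; a minimal sketch of such an integration, with our own choices of grid and step size, is given below.

```python
import numpy as np

def ionization_profile(Q, n_H, sigma=3.0e-18, alpha_B=2.59e-13,
                       r_max_pc=15.0, n_r=3000):
    """Integrate the ionization-balance profile of equation (14) outward:
    at each radius solve the quadratic (1 - x) A = x^2, with
    A = Q e^{-tau} sigma / (4 pi r^2 n_H alpha_B), then accumulate the
    optical depth d tau = n_H sigma (1 - x) dr."""
    pc = 3.086e18
    r = np.linspace(1e-4, r_max_pc, n_r) * pc
    dr = r[1] - r[0]
    x = np.zeros(n_r)
    tau = 0.0
    for k, rk in enumerate(r):
        A = Q * np.exp(-tau) * sigma / (4.0 * np.pi * rk**2 * n_H * alpha_B)
        x[k] = 0.5 * (-A + np.sqrt(A * A + 4.0 * A))   # positive root of x^2 + A x - A = 0
        tau += n_H * sigma * (1.0 - x[k]) * dr
    return r / pc, x

# Test 1 parameters: Q = 3.17e46 1/s, n_H = 1 1/cm^3
r_pc, x = ionization_profile(3.17e46, 1.0)
print(r_pc[np.argmax(x < 0.5)])   # radius of the half-ionized shell, ~R_S
```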
Although the ionization fractions are averaged down and ionization profiles are smoothed away, the total mass of ionized gas \(M_{i}\) is not conserved during the resolution smoothing. Assuming the Stromgren sphere is fully ionized, the time evolution of the I-front follows equation (3) and the total ionized mass can be described as
\[M_{i}(t)=M_{\rm S}\left(1-e^{-t/t_{\rm rec}}\right)\,. \tag{15}\]
In the bottom panel of Figure 2, we present the evolution of \(M_{i}\) for different resolutions \({\cal R}_{i}\), comparing with the analytic result of equation (15). There is no significant difference among the \({\cal R}_{i}=10^{5}\), \(10^{4}\), and \(10^{3}\) simulations, and the initial Stromgren sphere can be regarded as well-resolved when the mass resolution is larger than 100, at least from the perspective of the ionized mass. However, the total mass of ionized gas is enhanced in lower-resolution cases. In marginally resolved cases like \({\cal R}_{i}=10\) or 1, the equilibrium mass of ionized gas is slightly larger than the actual mass, while in the unresolved case of \({\cal R}_{i}=0.1\) it is more than ten times larger. We refer to such overestimation of total ionized gas mass in low-resolution cases as _over-ionization_. The time for the Stromgren sphere to reach a steady state also becomes longer with decreasing resolution, especially for the \({\cal R}_{i}\leq 1\) simulations.
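For reference, equation (15) can be evaluated directly to see how quickly the ionized mass saturates in the formation phase; the sketch below uses the approximate Test 1 Strömgren mass implied by equation (2).

```python
import math

def ionized_mass_formation(t_over_trec, M_S):
    """Ionized mass during the formation phase (equation 15), with time
    expressed in units of the recombination timescale t_rec."""
    return M_S * (1.0 - math.exp(-t_over_trec))

M_S = 103.0   # ~initial Stromgren mass [Msun] implied by equation (2) for Test 1
for n in (1, 2, 5, 15):
    print(n, ionized_mass_formation(n, M_S) / M_S)   # 0.63, 0.86, 0.99, ~1
```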
#### 4.1.1 Over-ionization
The time lag to reach a steady state is not very important to the subsequent evolution of H ii regions, since most processes we are interested in occur on time-scales far longer than the formation of the initial Stromgren sphere. However, the _over-ionization_ issue we mentioned before can significantly increase the total mass of ionized gas in low-resolution galactic and cosmological simulations if we adopt a direct RT feedback implementation.
Equation (10) regulates the thermochemistry in the pure hydrogen medium, and in the context of UV ionization, the second term on the RHS, corresponding to the collisional process, can be neglected, while the third term can be simplified to a single H i ionization photon band. The balance equation of the ionization state of the gas can thus be written as
\[\alpha_{\rm B}n_{\rm H\,\textsc{ii}}n_{e}\approx\bar{c}\,n_{\rm H\,\textsc{i}}\,\sigma_{\rm H\,\textsc{i}}\,n_{\gamma}\,, \tag{16}\]
where \(n_{\rm H\,\textsc{ii}}=n_{\rm H}-n_{\rm H\,\textsc{i}}=n_{e}=x_{\rm H\,\textsc{ii}}n_{\rm H}\). Substituting the number densities with ionization fractions, we obtain
\[\alpha_{\rm B}n_{\rm H}^{2}x_{\rm H\,\textsc{ii}}^{2}\approx\bar{c}\,(1-x_{\rm H\,\textsc{ii}})\,n_{\rm H}\,\sigma_{\rm H\,\textsc{i}}\,n_{\gamma}\,, \tag{17}\]
where the LHS corresponds to the recombination rate \(R_{\rm rec}\) and the RHS corresponds to the photoionization rate \(R_{\rm ion}\).
In high-resolution cases, we suppose that the initial Stromgren sphere can be resolved by a series of thin shells. For a cell in the \(k\)th shell with radius \(r_{k}\), the equilibrium solution for the photon density is \(n_{\gamma}=Qe^{-\tau(r_{k})}/(4\pi r_{k}^{2}\bar{c})\), where \(\tau(r_{k})=n_{\rm H}\sigma_{\rm H\,\textsc{i}}\sum_{k^{\prime}\leq k}x_{\rm H\,\textsc{i}}(r_{k^{\prime}})\,\Delta r\) and \(\Delta r\sim\Delta x\). With sufficiently high resolution, equation (17) will approach the analytical solution given by equation (14). However, in the low-resolution cases with mass resolution \({\cal R}_{i}<1\), the cell size \(\Delta x\) will be larger than \(R_{\rm S}\). Assuming the entire initial Stromgren sphere is embedded in a single cell, the equilibrium balance equation becomes
\[\alpha_{\rm B}n_{\rm H}^{2}x_{\rm H\,\textsc{ii}}^{2}\approx Q/V_{\rm cell}\,, \tag{18}\]
and substituting \(V_{\rm cell}=M_{\rm cell}/(n_{\rm H}m_{\rm H})\) together with the definition of the mass resolution \(\mathcal{R}_{i}\) (equation 7), we have
\[x_{\rm H\,{\textsc{ii}}}\approx\mathcal{R}_{i}^{1/2}. \tag{19}\]
Thus, in fully unresolved cases (\(\mathcal{R}_{i}\ll 1\)) the equilibrium ionization fraction is proportional to \(\mathcal{R}_{i}^{1/2}\) and the mass of ionized gas is \(\sim M_{\rm cell}\mathcal{R}_{i}^{1/2}=M_{S}\mathcal{R}_{i}^{-1/2}\), overestimated by a factor of \(\mathcal{R}_{i}^{-1/2}\). Therefore, _over-ionization_ is a direct consequence of recombinations and ionizations attempting to become balanced in a Stromgren sphere diluted by the impact of insufficient spatial resolution.
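This scaling is easy to evaluate; the sketch below computes the equilibrium ionization fraction and the resulting overestimate of the ionized mass in the fully unresolved limit, following equations (18)-(19).

```python
import math

def equilibrium_x(R_i):
    """Equilibrium ionization fraction of a single under-resolved cell that
    swallows the whole initial Stromgren sphere (equations 18-19):
    x ~ min(1, sqrt(R_i))."""
    return min(1.0, math.sqrt(R_i))

for R_i in (1e-3, 1e-2, 1e-1, 1.0):
    x = equilibrium_x(R_i)
    M_ion_over_MS = x / R_i          # (x * M_cell) / M_S = x / R_i
    print(R_i, x, M_ion_over_MS)     # ionized mass overestimated by ~R_i^(-1/2)
```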
In other words, in low-resolution cases, the ionization fraction is supposed to saturate at \(x_{\rm H\,\textsc{ii}}\sim\mathcal{R}_{i}\) to obtain the correct ionized mass \(M_{\rm S}\). However, with this ionization fraction, the ionization rate \(R_{\rm ion}\) is still much larger than the recombination rate \(R_{\rm rec}\). More gas will therefore be ionized before \(R_{\rm ion}\) and \(R_{\rm rec}\) finally balance at \(x_{\rm H\,\textsc{ii}}\sim\mathcal{R}_{i}^{1/2}\), resulting in _over-ionization_. In Section 6.1.2, we will introduce a method to prevent this issue by enforcing the correct balance between recombination and ionization rates. In Section 7.1.1, we will discuss the conditions under which this correction is needed.
Similar resolution issues make a significant difference in the expansion phase of the H ii region as well; we now turn to the early-time expansion and discuss these numerical issues in the next section.
### Test 2 - Early time expansion and over-heating
In the next series of tests, we simulate the expansion of an H ii region in a uniform density medium. The simulation box is initialized with pure hydrogen gas of density \(n_{\rm H}=100\,{\rm cm}^{-3}\) and temperature \(T=100\) K. A steady source is placed at the centre of the box that emits a blackbody spectrum with \(T_{\rm eff}=35\,000\) K at a rate of \(Q=10^{48}\,{\rm s}^{-1}\), which is similar to a \(\sim 30\,{\rm M}_{\odot}\) massive star, and the effective H i ionization cross-section is \(3.8\times 10^{-18}\,{\rm cm}^{2}\). We use a reduced speed of light with \(\bar{c}=0.01\,c\) and a multifrequency RT scheme as described in Section 3.2. We vary the mass resolution \({\cal R}_{i}\) across a range of 0.01 to 1000, corresponding to \(M_{\rm cell}\) (\(\Delta x\)) from \(4800\,{\rm M}_{\odot}\) (\(12\,{\rm pc}\)) to \(0.048\,{\rm M}_{\odot}\) (\(0.27\,{\rm pc}\)). To temporally resolve the formation of the initial Stromgren sphere, we force the hydrodynamic time-step to be shorter than \(0.1\,t_{\rm rec}\) (0.12 kyr).
In Figure 3, we show the profiles of ionization fraction, temperature, density, pressure, and velocity at 1 Myr for all the simulations in this test. At this time, the Stromgren sphere has expanded to \(r_{i}\approx 5R_{\rm S}\). In high-resolution simulations like the \(\mathcal{R}_{i}=1000\) run, the profiles show characteristics of a typical D-type front. After the D-type expansion begins, the I-front is always preceded by a hydrodynamical shock front sweeping outward. The interior of the H ii region is evacuated by the expansion and becomes much more rarefied than the surrounding neutral gas. The \(\mathcal{R}_{i}=10\) simulation retains these features with visually acceptable flattening, while lower resolutions gradually flatten these away. In the \(\mathcal{R}_{i}=0.1\) simulation, the gas density is almost uniform, implying that the shock is not captured.
Some delicate features can be captured in the highest-resolution simulation, like a secondary pressure peak behind the shock front and a knee in the temperature in the transition region. However, only the \(\mathcal{R}_{i}\geq 100\) simulations can reproduce these secondary structures, as the \(\mathcal{R}_{i}=10\) simulation has already smoothed them out. Nonetheless, the profiles can be roughly reproduced in marginally-resolved cases (\(\mathcal{R}_{i}=10\) and 1). In simulations conducted at lower resolutions, the initial Stromgren sphere is completely unresolved. The insufficient resolution causes all the profiles to appear flatter and results in incomplete ionization, mirroring the scenario discussed in Section 4.1. Moreover, a substantial fraction of the ionized gas also fails to be heated to \(10^{4}\) K, the density inside the I-front stays within the same order of magnitude as the background density, and the pressure becomes much higher.
In Figure 4, we present the evolution of the ionization radius, ionized mass, injected momentum, and injected energy. The ionization radius \(r_{i}\) (top left panel) is defined as the radius of the shell where \(x_{\rm H\,\textsc{ii}}=x_{\rm H\,\textsc{i}}\) since the profile of the I-front is flattened. The \(\mathcal{R}_{i}\leq 1\) runs have ionization fractions that never exceed 0.5 so they cannot be plotted in this panel. The grey dashed curves in the top-left and top-right panels indicate the Hosokawa-Inutsuka solution (equation 5). The ionization radius \(r_{i}\) matches well with the analytic solution even for the marginally-resolved case (\(\mathcal{R}_{i}=1\)), and the ionized mass tends to converge to this solution as the H ii region expands. However, the _over-ionization_ issue makes a significant difference in the ionized mass at very early times, as discussed in Section 4.1. A surge of the ionized mass can be found in the \(\mathcal{R}_{i}=10\) to 0.01 results, and it takes an increasing amount of time to relax to the analytic solution with decreasing \(\mathcal{R}_{i}\). This suggests that the expansion of H ii regions can counteract the overestimation of ionized mass caused by the _over-ionization_ to some extent. Even so, the expansion only increases the ionized mass slowly and it can take a considerable time, even longer than the lifetime of the massive star, to counteract the initial surge if the _over-ionization_ is too severe.
Figure 1: Structure of the Strömgren spheres (projected H ii fractions) at different mass resolutions, from the highest (\(\mathcal{R}_{i}=10^{5}\)) to the lowest (\(\mathcal{R}_{i}=0.1\)) resolution simulations (Test 1, Section 4.1). The high-resolution Strömgren spheres are nearly spherical, demonstrating the ability of our code to obtain well-defined H ii regions with a correct Strömgren radius that is consistent with the analytical result, as shown by the blue dotted circles. Notice that the spatial axes are rescaled for each map, while the H ii fractions are shown at the same scale.
In the bottom panels of Figure 4 we present the cumulative radial momentum and injected energy summed over the entire simulation box. In the low-resolution cases, the momentum and energy injections are also enhanced. Unlike the overestimated ionized mass which tends to asymptote to the analytic solution, the momentum and energy remain consistently larger than the high-resolution solution. This is because once momentum and energy are over-injected, no mechanism can eliminate the excess momentum or energy, unlike the ionization state of the gas, which can regulate itself by balancing the photoionization and recombination process.
#### 4.2.1 Over-heating
In this section, we illustrate that the enhancement of momentum and energy injected in low-resolution cases is a result of the artificial heating of a substantial amount of partially ionized gas, raising its temperature to several thousand Kelvin. We refer to this phenomenon as _over-heating_.
The hot gas in the initial Stromgren sphere is heated by photoionization and reaches equilibrium with cooling processes at \(T\gtrsim 10^{4}\) K. The expansion of the H ii region is then driven by the large pressure gradient between the \(10^{4}\) K hot ionized gas and the surrounding background neutral gas. As the expansion is a hydrodynamic response to the initial ionization, the _over-heating_ issue is a secondary consequence of the _over-ionization_ issue we discussed in Section 4.1.1.
When the star ignites, the neighbouring gas cells will begin to be ionized and the deposited energy of ionization photons is converted to thermal energy in each step, following equation 12. On the other hand, the hot gas will lose internal energy efficiently through various cooling processes. The difference between photoheating and radiative cooling results in a net increase of the internal energy (equation 11).
In low-resolution cases, the _over-ionization_ issue leads to an overestimation of the mass of ionized gas but this gas can only be partially ionized because ionization balance cannot be established properly (Section 4.1). Similarly, we now discuss the balance between cooling and heating in the initial Stromgren sphere. In equilibrium, the heating and cooling rates satisfy
\[\Gamma=\Lambda=\Lambda_{\rm rec}+\Lambda_{\rm bb}+\Lambda_{\rm bf}+\Lambda_{\rm ff }\,, \tag{20}\]
where each rate has the following dependence on the ionization fraction (equations 19, 20, 21 and 22):
\[\Gamma\propto(1-x_{\rm H\,\textsc{ii}})\,,\ \Lambda_{\rm rec/ff}\propto x_{\rm H\,\textsc{ii}}^{2}\,,\ \mathrm{and}\ \Lambda_{\rm bb/bf}\propto x_{\rm H\,\textsc{ii}}(1-x_{\rm H\,\textsc{ii}})\,. \tag{21}\]
In low-resolution cases (\(\mathcal{R}_{i}\ll 1\)), \(x_{\rm H{\textsc{ii}}}\sim\mathcal{R}_{i}\) and the heating and cooling rates can be approximated as follows:
\[\Gamma\propto 1\,,\ \Lambda_{\rm rec/ff}\propto\mathcal{R}_{i}^{2}\,,\ \mathrm{and}\ \Lambda_{\rm bb/bf}\propto\mathcal{R}_{i}\,. \tag{22}\]
The heating rate thus stays high in these gas cells, even though it would drop in high-resolution cases where the cells are fully ionized. On the other hand, the cooling rates are all reduced by the insufficient ionization, especially those for recombination and Bremsstrahlung.
This scenario bears resemblance to the _over-ionization_ issue, yet an additional numerical concern exacerbates matters. Besides the ionization fraction, all cooling functions involved are sensitive to the temperature (equations 19, 20, 21, and 22), while the temperature of a gas cell is calculated based on the internal energy \(u\) and mean molecular weight \(\mu\) by
\[T=\frac{2u}{3k_{\rm B}\mu m_{\rm H}}. \tag{23}\]
In our pure hydrogen case, the mean molecular weight is
\[\mu=1/(1+x_{\rm H{\textsc{ii}}})\sim 1-\mathcal{R}_{i}\,,\ \mathrm{when}\, \mathcal{R}_{i}\ll 1\,. \tag{24}\]
If the initial ionization cannot be resolved, the _over-ionization_ will lead to a larger ionized mass but lower ionization fraction, thus higher molecular weight \(\mu\) and lower temperature \(T\). In the \(\mathcal{R}_{i}\ll 1\) unresolved cases the ionization fraction \(x_{\rm H{\textsc{ii}}}\) falls from unity to \(\sim 0\) so that \(T\) can be reduced by half for the same internal energy. For example, in the \(\mathcal{R}_{i}=0.1\) case, the molecular weight is \(0.91\) and the temperature of the ionized gas only reaches \(0.55\) of that found in the high-resolution case, assuming the same net energy injection. As we presented in Figure 3, in the \(\mathcal{R}_{i}=10\) to \(1000\) simulations, the central temperature reaches \(\gtrsim 1.5\times 10^{4}\) K, while in unresolved cases it falls to less than \(8000\) K.
Figure 2: Results of the formation test (Test 1, Section 4.1). _Top (middle)_ panels: profiles of neutral (ionized) fraction as a function of radius for different mass resolutions at \(t\approx 15\,t_{\rm rec}\); _Bottom_ panel: evolution of ionized mass as a function of time for different mass resolutions.
The most efficient cooling mechanism in the \(10^{4}\) K primordial ISM is bound-bound collisional excitation (equation A4), which has the largest rate coefficient (\(7.5\times 10^{-19}\,\rm erg\,cm^{3}\,\rm s^{-1}\)). However, it is also the most sensitive to temperature,
\[\Lambda_{\rm bb}\propto\frac{e^{-118348/T}}{1+T_{5}^{1/2}}\,. \tag{25}\]
Therefore, \(\Lambda_{\rm bb}\) will be drastically reduced if the temperature cannot reach its proper value. For instance, if the temperature inside the H ii region is \(T=8000\,\rm K\), the bound-bound collision cooling rate will be reduced by a factor of \(5.5\times 10^{-4}\) compared to the \(T=16\,000\,\rm K\) case.
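As a rough check of this sensitivity, the short sketch below evaluates only the explicit temperature dependence of equation (25); since the prefactor and the ionization-fraction dependence are omitted, the resulting ratio is indicative of, rather than identical to, the suppression factor quoted above:

```python
import numpy as np

# Rough check of the temperature sensitivity of the bound-bound cooling term:
# only the exponential and (1 + T5^0.5) factors of equation (25) are included,
# so the ratio is indicative rather than an exact reproduction of the factor
# quoted in the text (which also involves the ionization fractions).
def lambda_bb_shape(T):
    T5 = T / 1.0e5
    return np.exp(-118348.0 / T) / (1.0 + np.sqrt(T5))

ratio = lambda_bb_shape(8.0e3) / lambda_bb_shape(1.6e4)
print(f"Lambda_bb(8000 K) / Lambda_bb(16000 K) ~ {ratio:.1e}")  # of order 1e-3 to 1e-4
```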
Consequently, all four primordial cooling mechanisms fail to cool the gas under the combined effect of the above two factors. The total cooling rate is thus drastically underestimated while the heating rate is enhanced. Therefore, the net injected thermal energy will be significantly overestimated, leading to an _over-heating_ problem.
It is crucial to avoid _over-ionization_ and _over-heating_ problems in simulations because they both lead to divergent feedback when H ii regions are unresolved, i.e. unresolved H ii regions will provide more feedback rather than less feedback, which hinders the convergence of simulations with different resolutions. In Section 7.1.1, we will discuss the specific physical conditions in which _over-ionization_ and _over-heating_ problems become particularly relevant.
#### 4.2.2 Multi-phase gas
In Figure 5, we present the mass-weighted phase diagram for the \({\cal R}_{i}=1000\), \(10\), and \(0.1\) simulations at \(t=1\,\rm Myr\). The gas shows a continuous distribution over the phase space in the high- and medium-resolution simulations, which separates into several "islands" at low resolution. The gas density in the high-resolution case spans a range of two orders of magnitude. The inflection at \(\sim 10^{2.5}\,\rm cm^{-3}\) corresponds to the shock-compressed gas that has accumulated in front of the D-type I-front. Lowering the resolution flattens this inflection, and the gas in the \({\cal R}_{i}=0.1\) run is distributed close to the vertical line \(n_{\rm H}=100\,\rm cm^{-3}\) (initial density) with \(T<10^{4}\,\rm K\). This suggests that the shock transition zone and the compressed gas layer between the shock and I-front are not properly resolved.
In the _top_ panel of Figure 6, we present mass-weighted histograms of the gas mass as a function of temperature for the high (\({\cal R}_{i}=1000\)), medium (10), and low (0.1) resolution runs at 1 Myr. As a comparison, we also present the H ii fraction (_middle_ panel) and cooling rate (_bottom_ panel) as functions of temperature obtained from these runs.
In the high- and medium-resolution runs, the temperature has a continuous distribution over the 100 K to 16 000 K interval, with four populations corresponding to the different gas phases. The largest population on the left is the undisturbed background gas. The hottest gas, at \(\sim 10\,000\)-15 000 K, is the fully ionized gas inside the Stromgren sphere. The population at \(\sim 8000\)-10 000 K is partially ionized, and there is a ramp of heated warm neutral gas ranging from \(\sim 100\)-8000 K. These two populations are the shock-heated gas between the I-front and shock front during the D-type expansion, and they accumulate as a bump around 9000 K because of the inefficient cooling. The cooling rate reaches its maximum value at 10 500 K, which results in a valley between the two bumps.
At lower resolutions, the temperature distribution becomes discrete and the valleys are unpopulated. In the \({\cal R}_{i}=0.1\) run, only the \(<2500\,\rm K\) neutral population and the \(\sim 8000\,\rm K\) low-ionization population still exist, and the \(T>10^{4}\,\rm K\) fully ionized population disappears. This gives an intuition for the impact of the _over-ionization_ and _over-heating_ issues, as discussed in Section 4.2.1. The neutral gas fails to become fully ionized and heated to \(>10^{4}\) K but stops at \(\sim 8000\) K with a low ionization fraction, which is inefficient for cooling. Notice the y-axis is in log-scale, so the total mass of ionized gas at low resolution is actually larger than that at high resolution even though the H ii fraction is low at that temperature.
Figure 3: Results of the expansion test (Test 2, Section 4.2). Shock parameters at 1 Myr: ionization (top left panel), temperature (top middle panel), density (top right panel), pressure (bottom left panel), and velocity (bottom middle panel) profiles for different resolutions.
In summary, the insufficient spatial resolution flattens out the shock structures and the _over-heating_ issue changes the multi-phase gas structure of the ISM by turning the highly-ionized, hot (\(>10^{4}\) K) gas within H ii regions into partially-ionized, warm (\(\sim 8000\) K) gas. Such errors in the gas phase structure can be problematic in line luminosity estimates (Smith et al., 2022; Tacchella et al., 2022). Failure to resolve the low-density bubble evacuated by the ionization feedback and the swept-up shell also leads to an inability to reproduce their enhanced or weakened effects on the final momentum output of SN explosions when all these feedback channels are nonlinearly coupled (e.g. Walch and Naab, 2015; Haid et al., 2016). This leads to uncertainties in the total energy/momentum injection from stellar feedback and the amount of gas blown out of a galaxy.
## 5 Effects of different temporal resolution
Choosing a proper time step is crucial to obtain correct simulation results and minimise computational cost. We now examine the dependence of the simulation results on the choice of the time step. arepo-rt is a moment-based RT implementation treating photons with finite speed of light. The most well-known time-stepping criterion for convergent time integration is the Courant criterion (Courant et al., 1928), and it is modified to involve the reduced speed of light (Kannan et al., 2019)
\[\Delta t_{\rm C}<\eta\,\frac{\Delta x}{\tilde{c}+|v_{\rm cell}|}\,, \tag{26}\]
where \(\Delta x\) and \(v_{\rm cell}\) are the width and velocity (in the lab frame) of a cell and \(\eta\sim 0.3\) is the Courant factor. In expanding H ii regions, the velocity of a D-type shock is \(v_{s}\lesssim 2c_{i}\ll\tilde{c}\) so
\[\begin{split}\Delta t_{\rm C}&\approx\eta\frac{\Delta x}{\tilde{c}}=\frac{\eta}{\tilde{c}}\left(\frac{Q}{\alpha_{\rm B}n_{\rm H}^{2}\mathcal{R}_{i}}\right)^{1/3}\\ &\approx 230\,\text{yr}\,\left(\frac{\eta}{0.3}\right)\left(\frac{\tilde{c}}{10^{-3}c}\right)^{-1}\left(\frac{\mathcal{R}_{i}}{10}\right)^{-1/3}Q_{48}^{1/3}n_{3}^{-2/3}\,.\end{split} \tag{27}\]
It turns out in practice that such a criterion ensures stable convergence for hydrodynamic simulations. However, since the mean free path (MFP) of ionizing photons is very hard to resolve spatially (this would require \(\Delta x<l_{\rm mfp}=1/\sigma_{\rm H\,\textsc{i}}n_{\rm H}\)), the Courant criterion almost always fails to catch the ionization timescale
\[\begin{split}t_{\rm ion}&=\frac{1}{\tilde{c}n_{\rm H}\sigma_{\rm H\,\textsc{i}}}=\frac{\alpha_{\rm B}}{\tilde{c}\sigma_{\rm H\,\textsc{i}}}t_{\rm rec}\\ &\approx 1.4\times 10^{-3}\left(\frac{\tilde{c}}{10^{-3}c}\right)^{-1}t_{\rm rec}=0.17\,\text{yr}\left(\frac{\tilde{c}}{10^{-3}c}\right)^{-1}n_{3}^{-1}\,.\end{split} \tag{28}\]
In a forming H ii region, the ionization state of gas is changing drastically, and, microscopically, the relevant timescale is essentially the \(t_{\rm ion}\) presented above. Such a short timescale is difficult to resolve temporally, which can lead to errors in the solutions. We emphasize that this is a general issue across numerical RT methods and explicit time-stepping schemes, even though we introduce it here in the context of a moment-based scheme with a finite speed of light.
Figure 4: Results of the expansion test (Test 2, Section 4.2). Evolution of expanding H ii regions: _Top left_ panel: the Strömgren radius as a function of time for simulations with different resolutions; _Top right_, _bottom left_, and _bottom right_ panels: the time evolution of ionized mass, momentum injection, and energy injection with different resolutions.
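The hierarchy between these two time-scales can be evaluated directly from the fiducial scalings of equations (27) and (28); the sketch below does this for the default parameters quoted in the text (the numerical prefactors are taken from those equations, everything else is illustrative):

```python
# Sketch of the time-scale hierarchy implied by equations (27) and (28).
# The prefactors (230 yr and 0.17 yr) are taken from those equations; the
# default arguments correspond to the fiducial parameters quoted in the text.
def dt_courant_yr(eta=0.3, c_red_over_c=1e-3, R_i=10.0, Q48=1.0, n3=1.0):
    """Courant time step of equation (27), in years."""
    return (230.0 * (eta / 0.3) * (c_red_over_c / 1e-3) ** -1
            * (R_i / 10.0) ** (-1.0 / 3.0) * Q48 ** (1.0 / 3.0) * n3 ** (-2.0 / 3.0))

def t_ion_yr(c_red_over_c=1e-3, n3=1.0):
    """Photoionization time-scale of equation (28), in years."""
    return 0.17 * (c_red_over_c / 1e-3) ** -1 / n3

dt_C, t_ion = dt_courant_yr(), t_ion_yr()
print(f"dt_C ~ {dt_C:.0f} yr, t_ion ~ {t_ion:.2f} yr, ratio ~ {dt_C / t_ion:.0f}")
# The Courant step exceeds the ionization time-scale by roughly three orders
# of magnitude, so the ionization of a cell is never temporally resolved.
```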
### Missing photons issue
In most thermochemistry solvers (including arepo-rt), to reduce the computational cost and ensure numerical stability, the photon transport and the thermochemistry are solved with an operator-splitting strategy (e.g. Petkova and Springel, 2009; Wise and Abel, 2011; Rosdahl et al., 2013; Kannan et al., 2019; Chan et al., 2021). In the thermochemistry networks, the photon number density \(n_{\gamma}\) is not coupled in as a variable but is considered to be a known quantity given by the solution of the last photon transport step. The absorption of photons is then calculated independently of the thermochemistry equations, using the optical depth within a cell, as
\[\frac{\text{d}n_{\gamma}}{\text{d}t}=-\tilde{c}n_{\rm H\,\textsc{i}}\sigma_{\rm H\,\textsc{i}}n_{\gamma}=-\frac{n_{\gamma}}{t_{\rm ion}}\,, \tag{29}\]
where \(n_{\rm H\,\textsc{i}}\) takes the value at the beginning of the current step. Once a package of photons is dumped into or propagates into a neutral gas cell, the solution of the above equation is simply
\[n_{\gamma}(\Delta t)=n_{\gamma,0}\,e^{-\Delta t/t_{\rm ion}}\,, \tag{30}\]
where \(n_{\gamma,0}\) is the photon density at the beginning of the timestep.
In this paper, we refer to these simplified thermochemistry solvers, where the photon density is not coupled into the chemistry network, as "uncoupled solvers". This simplification has the advantage of efficiency, especially when multiple RT bins and the transfer of momentum from radiation to gas are implemented, and it has been commonly used in various RT implementations. However, we will show that it can lead to a severe numerical error by overestimating the number of absorbed photons, turning them into "_missing photons_".
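A minimal sketch of a single uncoupled absorption step is given below; the cross-section, density, and photon density are placeholder values chosen only to illustrate equation (30), not parameters of our simulations:

```python
import numpy as np

# Minimal sketch (not the AREPO-RT source) of one photon-absorption step in an
# uncoupled solver: n_HI is frozen at its value at the start of the step, so the
# photon density simply decays exponentially (equation 30). All input values
# are placeholders chosen for illustration.
sigma_HI = 6.3e-18               # cm^2, H I photoionization cross-section at threshold
c_red = 1.0e-3 * 3.0e10          # cm s^-1, reduced speed of light (assumed 1e-3 c)
n_HI = 1.0e3                     # cm^-3, neutral hydrogen density at the start of the step
t_ion = 1.0 / (c_red * n_HI * sigma_HI)       # seconds

n_gamma0 = 1.0                   # photon density at the start of the step (arbitrary units)
for dt_over_tion in [0.1, 1.0, 10.0, 100.0]:
    n_gamma = n_gamma0 * np.exp(-dt_over_tion)          # equation (30)
    print(f"dt = {dt_over_tion:6.1f} t_ion: fraction of photons absorbed = "
          f"{(n_gamma0 - n_gamma) / n_gamma0:.3f}")
# For dt >> t_ion essentially every photon present in the cell is absorbed
# within a single step, regardless of how many atoms it can actually ionize.
```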
#### 5.1.1 Missing photons when radiation encounters neutral gas
When the star ignites, ionizing photons are dumped into the predominantly neutral gas and propagate outwards. We use a one-zone case to demonstrate the _missing photons_ issue: photons are absorbed in a fully neutral neighbouring cell of \(n_{\rm H}=10^{9}\,\text{cm}^{-3}\) during the first step after a \(Q=10^{48}\,\text{s}^{-1}\) star ignites. In Figure 7, we plot the number of photons absorbed (\(\Delta N_{\gamma}\)) in this cell during that first step, as a function of the time step size \(\Delta t\) (assuming the sizes of all the relevant time steps match each other). The red curve is obtained with a thermochemical solver that evolves the photon and ion numbers simultaneously (see Section 6.2.1); we regard this as the accurate solution for our purposes. The green curve, on the other hand, is obtained with the widely used uncoupled solver.
In each step when radiation encounters neutral gas, the number of photons absorbed in this step is (equation 30)
\[\Delta N_{\gamma}=N_{\gamma}(1-e^{-\Delta t/t_{\rm ion}})\approx N_{\gamma}\,, \tag{31}\]
where \(N_{\gamma}\) is the number of photons prior to absorption; since \(\Delta t/t_{\rm ion}\gg 1\) in most cases, the approximation in the last step holds. This equation demonstrates that almost all photons dumped or propagated into the gas will be consumed when \(t_{\rm ion}\) (or the MFP) is unresolved. As shown by the green curve in Figure 7, the number of absorbed photons converges to the number of injected photons \(Q\Delta t/N_{\rm nb}\) when the time step size is larger than several \(t_{\rm ion}\).
Figure 5: Results of the expansion test (Test 2, Section 4.2). Mass-weighted phase diagrams for the \(\mathcal{R}_{i}=1000\), \(10\), and \(0.1\) simulations in Test 2 at \(t=1\,\text{Myr}\).
On the other hand, the total number of neutral hydrogen atoms in a cell is
\[N_{\rm H\,\textsc{i}}=\frac{M_{\rm cell}}{m_{\rm H}}=\frac{Q}{\alpha_{\rm B}n_{\rm H}\mathcal{R}_{i}}\,. \tag{32}\]
Assuming one photon can ionize one neutral hydrogen atom (MFP unresolved), if \(\Delta N_{\gamma}>(1+f_{\rm rec})N_{\rm H\,\textsc{i}}\), then \(\Delta N_{\gamma}\) photons will be absorbed but only \(N_{\rm H\,\textsc{i}}\) atoms can be ionized, so the photon number is no longer conserved. Here \(f_{\rm rec}\) is a factor accounting for the recombination of ions. In other words, \(\delta N_{\gamma}\) photons simply disappear without any effect on the gas state; they become the _missing photons_ (pink-filled region in Figure 7). Specifically, the number of missing photons is
\[\delta N_{\gamma}=\Delta N_{\gamma}-(1+f_{\rm rec})N_{\rm H\,\textsc{i}}\,, \tag{33}\]
where \(f_{\rm rec}\) can be estimated as \(\Delta t/t_{\rm rec}\) because \(\Delta t\sim t_{\rm rec}\gg t_{\rm ion}\). If the size of the time step is too large, \(\Delta N_{\gamma}\propto Q\Delta t\gg(1+f_{\rm rec})N_{\rm H\,\textsc{i}}\), and almost all the photons are lost, having no effect on the ionization state when they enter a neutral gas cell.
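The bookkeeping of equations (31)-(33) can be summarized in a few lines; in the sketch below the cell contents and photon numbers are arbitrary example values, and the step is assumed to be long compared to \(t_{\rm ion}\) so that all incoming photons are absorbed:

```python
# Sketch of the missing-photons bookkeeping of equations (31)-(33) for a single
# fully neutral cell, assuming dt >> t_ion so that all incoming photons are
# absorbed. The numbers of atoms and photons below are arbitrary examples.
def missing_photons(N_gamma, N_HI, dt_over_trec):
    """Photons lost once absorption exceeds what the cell can actually use."""
    dN_absorbed = N_gamma                     # equation (31): everything is absorbed
    f_rec = dt_over_trec                      # recombinations occurring during the step
    capacity = (1.0 + f_rec) * N_HI           # atoms that can be (re)ionized
    return max(dN_absorbed - capacity, 0.0)   # equation (33)

N_HI = 1.0e50                                 # neutral H atoms in the cell
for N_gamma in [1.0e49, 1.0e50, 1.0e51, 1.0e52]:
    lost = missing_photons(N_gamma, N_HI, dt_over_trec=1.0)
    print(f"N_gamma = {N_gamma:.0e}: missing photons = {lost:.2e}")
```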
In Appendix B, we track the step-by-step events for the _missing photons_ issue by considering an idealized scenario (Figure 11). We find that the following criterion should be satisfied to avoid the occurrence of _missing photons_ if the photon density is not coupled in the thermochemistry solver:
\[\Delta t\lesssim t_{\rm rec}\frac{N_{\rm nb}}{\mathcal{R}_{i}}\qquad({\rm or} \quad\mathcal{R}_{i}\lesssim N_{\rm nb}\quad{\rm if}\quad\Delta t\gtrsim t_{ \rm rec}). \tag{34}\]
This criterion is stringent because it requires that the mass resolution not be higher than \(N_{\rm nb}\), or that the size of the time step be far smaller than \(t_{\rm rec}\). Nonetheless, the _missing photons_ issue may not be too troublesome as long as photon injection and thermochemistry have the same cadence, since it only occurs when a large number of photons is dumped or propagated into a substantially neutral cell. However, realistic implementations usually conduct photon injection, radiative transport, and thermochemistry with different cadences. In the next section, we will show that the _missing photons_ issue can lead to disastrous consequences in these cases.
#### 5.1.2 Mismatching cadences make things worse
In realistic simulations, it is common to encounter mismatching cadences between photon injection, radiative transport, and thermochemistry. Typically, the frequency of photon injection is determined by the star particle time step, while the RT and thermochemistry time steps of the neighbouring cells can be further refined. Consequently, photon dumping is typically performed with a relatively low cadence compared to RT, i.e., each photon injection step can be associated with several RT steps. Further subcycling for thermochemical integration is also common when the gas properties change drastically in a single RT step, leading to a more severe mismatch between photon injection and thermochemistry cadences.
Figure 6: Results of the expansion test (Test 2, Section 4.2). _Top_ panel: mass-weighted histograms of gas mass as a function of temperature for the \(\mathcal{R}_{i}=1000,10,\text{and}\,0.1\) simulations in Test 2 at \(t=1\,\text{Myr}\). _Middle_ and _bottom_ panels: the H ii fraction and cooling rate as functions of temperature obtained from these simulations.
Figure 7: Number of photons absorbed (\(\Delta N_{\gamma}\)) in a fully neutral neighbouring cell during the first step after the star ignites, as a function of the time step size \(\Delta t\) (assuming the sizes of all the relevant time steps match each other). The red and green curves are the accurate solution obtained with the coupled solver (see Section 6.2.1) and the problematic solution obtained with the uncoupled solver, respectively. The pink-filled region shows the _missing photons_ described by equation (33).
To illustrate the problem, we consider the case of multiple sub-cycles of RT and thermochemistry conducted within one injection time step (as described in Kannan et al., 2019). In our tests, we use an equal time-stepping scheme, so the injection time step of the star \(\Delta t_{\star}\) is equal to the hydrodynamic time step \(\Delta t\), and the subcycled RT step \(\Delta t_{\rm RT}=\Delta t/N_{\rm sub}\). Even if one does not intentionally subcycle the RT, the two-stage Heun's integration method introduces at least two "substeps" for RT, making this a potential issue for similar second-order implementations, regardless of whether they adopt explicit RT subcycling.
In Figure 8, we present a similar one-zone test as the one in the last section but now assuming \(\Delta t_{\star}=2\Delta t_{\rm RT}\). We evolve the system to the end of the second injection step after the star ignites. It is important to note that the _missing photons_ issue now occurs not only in the first RT step ('RT step 1') immediately after the first injection of photons but also in subsequent injection steps. The difference is that fresh photons are not injected immediately after this RT step. However, in the remaining second RT step ('RT step 2'), the photon density \(n_{\gamma}\) is almost 0 if the _missing photons_ issue has occurred. As a result, ionization processes become negligible and the gas begins to recombine. The H ii fraction before the next injection step \(x_{\rm inj}\) can be estimated as
\[x_{\rm inj}=\frac{1}{1+\Delta t_{\rm RT}/t_{\rm rec}}\,. \tag{35}\]
Consequently, the gas will not be fully ionized even after the occurrence of the _missing photons_ issue. The partially neutral gas will keep eliminating photons that enter this cell in subsequent time steps. Similar to equation (33), the number of _missing photons_ in the first substep of the next sourcing step (denoted as step 2) is
\[\delta N_{\gamma,2}=N_{\gamma,2}\left[1-e^{-(1-x_{\rm inj})\Delta t_{\rm RT}/t_{\rm ion}}\right]-\left[1-x_{\rm inj}+f_{\rm rec}(\Delta t_{\rm RT})\right]\,N_{\rm H\,\textsc{i}}\,, \tag{36}\]
where \(N_{\gamma,2}\) is the number of photons in the second step before absorption.
As we show in Figure 8 with the 'RT step 3' curve, the second injection step will meet a severe _missing photons_ issue if \(\Delta t_{\star}\gtrsim 0.1t_{\rm rec}\). This leads to all \(2Q\Delta t_{\star}\) photons injected in these two steps being absorbed after the third RT step ('RT step 3'), leaving almost no photons in the gas. If the number of surviving photons is insufficient to suppress recombination in the fourth RT step ('RT step 4'), the H ii fraction will decrease again according to equation (35). The _missing photons_ issue has now led to an instability: the H ii fraction oscillates between 1 and \(x_{\rm inj}\) (equation 35) with a period of two RT steps, and the photon density does not increase after the first occurrence of _missing photons_ but remains near zero. In the most severe case (e.g. the \(t_{\star}/t_{\rm rec}\gtrsim 0.2\) part of Figure 8), all photons injected subsequently suffer the same severe _missing photons_ issue caused by the unstable H ii fraction as in the first two injection steps. Even though a fraction of photons may escape, they potentially also meet the same issue in subsequent neutral cells. Consequently, a substantial amount of the photons emitted by the star can be dissipated (see Figure 11), leading to an incorrect H ii region and significantly reduced feedback.
Recall that \(\Delta t_{\rm C}\) is typically \(10^{3}\) times larger than \(t_{\rm ion}\) (equations 27 and 28), therefore even a relatively high but not fully ionized H ii fraction can consume a large fraction of photons in the first RT step. In Figure 11, we present the number of surviving photons as a function of \(\Delta t_{\rm RT}\), where the survival ratio decreases rapidly as \(\Delta t_{\rm RT}\) increases.
Generalizing equation (35), if we have \(N_{\rm sub}>1\) but only inject photons once at the beginning, the H ii fraction after subcycling is \(x_{\rm inj}=1/\left[1+(N_{\rm sub}-1)\Delta t_{\rm RT}/t_{\rm rec}\right]\). Thus, we obtain a time-stepping criterion \(\Delta t_{\star}\lesssim 0.1t_{\rm rec}N_{\rm sub}/(N_{\rm sub}-1)\), which can ensure \(x_{\rm inj}>0.9\) to avoid oscillations in the H ii fraction and repeated _missing photons_. We validate this criterion quantitatively in Appendix C, and we also show this leads to converged results in Section 5.2. This also suggests that having a different cadence between photon injection and thermochemistry may lead to serious consequences, but increasing the number of subcycles should not alter the behaviour significantly.
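For reference, the sketch below evaluates this injection time-step ceiling for a few densities, assuming a case-B recombination coefficient of \(2.6\times10^{-13}\,{\rm cm^{3}\,s^{-1}}\) (appropriate for \(\sim10^{4}\) K gas) and the minimum \(N_{\rm sub}=2\):

```python
# Sketch evaluating the injection time-step ceiling dt_star <~ 0.1 t_rec N_sub/(N_sub-1)
# for several densities. alpha_B is assumed to be the case-B value at ~1e4 K.
alpha_B = 2.6e-13          # cm^3 s^-1
yr = 3.156e7               # seconds per year

def max_injection_step_yr(n_H, N_sub=2):
    t_rec = 1.0 / (alpha_B * n_H)                     # seconds
    return 0.1 * t_rec * N_sub / (N_sub - 1) / yr     # years

for n_H in [1.0e2, 1.0e3, 1.0e6, 1.0e9]:
    print(f"n_H = {n_H:.0e} cm^-3: dt_star <~ {max_injection_step_yr(n_H):.2e} yr")
```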
### Test 3 - Consequences of repeating missing photons issue
In this experiment, we simulate the formation and expansion of an H ii region in a uniform medium with a density of \(10^{9}\,\rm cm^{-3}\). Such a high density is chosen so that the Courant criterion alone no longer resolves the relevant thermochemical timescales; densities this high have been observed in hyper-compact H ii regions (e.g. Moscadelli et al., 2021). The simulation box is initialized with pure hydrogen gas of temperature \(T=100\,\rm K\) and the radiation source is the same as that used in Section 4.2 with \(T=35\,000\,\rm K\) and \(Q=10^{48}\,\rm s^{-1}\). The reduced speed of light is set as \(0.001c\) for a larger \(t_{\rm ion}\). The mass resolution \(\mathcal{R}_{i}\) is set as \(10^{4}\) to exclude the effects of low spatial resolution. We use the minimum number of RT subcycles, \(N_{\rm sub}=2\), which is an intrinsic requirement of the second-order scheme of arepo. We use a forced equal time-stepping scheme so that the RT time steps for all the cells are the same throughout the simulation and equal to the system integration time step.
With this simulation setup, we will present the reduced feedback issue that arises when the repeating _missing photons_ issue happens and test the maximum available time-step to obtain correct momentum feedback with an uncoupled solver. The recombination time is \(t_{\rm rec}=1.22\times 10^{-4}\) yr such that the ionization and Courant times are \(\{t_{\rm ion},t_{\rm C}\}=\{2.23\times 10^{-3},18.9\eta\}\,t_{\rm rec}=\{2.77\times 10^{-7},2.31\times 10^{-3}\eta\}\) yr.
Figure 8: Similar to Figure 7, but assuming \(\Delta t_{\star}=2\Delta t_{\rm RT}\) and evolving to the end of the second injection step after the star ignites. The red and green curves are the cumulative numbers of photons absorbed in these two injection steps (four RT steps) obtained with the accurate and uncoupled solvers, respectively. The slightly thinner curves are the numbers of photons absorbed in the four individual RT steps with the uncoupled solver; their summation is the green line.
To determine the maximum available time step for simulating the feedback from the H ii region, we run simulations with different time steps \(\Delta t\) to investigate the effect of temporal resolution. We focus on the radial momentum injection as a quantitative way to evaluate the impact on feedback.
In the left panels of Figure 9, we present radial profiles of the H ii fraction and temperature from the \(\Delta t_{\star}=0.5\,t_{\rm ion}\) (fiducial) and \(\Delta t_{\star}=\{0.1,0.2,0.4,0.8,1.6,3.2\}\,t_{\rm rec}\) runs at \(t=1000\,t_{\rm rec}\). In the right panel, we show the evolution of the cumulative injected momentum over \(1000\,t_{\rm rec}\). We find that only runs with a time-step of \(\Delta t_{\star}\leq 0.2\,t_{\rm rec}\) can correctly reproduce the fiducial result. When increasing the time-step to \(0.4\,t_{\rm rec}\), the Stromgren sphere becomes smaller and the momentum injection is reduced by a factor of 1.5. With larger \(\Delta t_{\star}\), the size of the H ii region and momentum injection suddenly plunge to below half of the fiducial solution. This is because the \(0.8\,t_{\rm rec}\) and larger \(\Delta t\) runs fail due to the severe repetition of the _missing photons_ issue, as presented in Section 5.1.2.
In summary, we find that \(\Delta t_{\star}\lesssim 0.1\,t_{\rm rec}N_{\rm sub}/(N_{\rm sub}-1)\) could be a useful criterion for the photon injection time-step to obtain the correct H ii region feedback under the current implementation of arepo-rt. A larger time step will result in too little momentum injection, ionization, and heating.
We note that while our idealized experiment can present convergent results with a time step that satisfies this criterion, in real simulations gas cells are also allowed to have different time steps. This creates the possibility of photons flowing from a cell with a longer time step into one with a shorter time step, resulting in recurring _missing photons_ issues. Therefore, it is crucial to devise a method to correct this issue. In Section 6.2, we will discuss how to deal with this difficulty with the photon-density-coupled thermochemistry solver and a corrected uncoupled solver.
## 6 Correcting for inadequate resolution
In this section, we will introduce several methods to correct these spatial and temporal resolution problems.
### Spatial correction
#### 6.1.1 Lowering number of neighbours during photon injection
In Section 4, we have found that \(\mathcal{R}_{i}\gtrsim 10\) is needed to ensure the correct ionized mass and feedback while lower resolution leads to increased ionized mass and enhanced feedback. The simplest way to correct the resolution issues is to reduce the number of neighbouring cells that photons from the star are injected into.
In the photon injection routine, \(\Delta N_{\gamma}\) photons from the star particle are dumped into a given number of its nearest neighbouring gas cells in a weighted fashion. In our default setup, \(N_{\rm nb}\) is set to 32 in order to enshroud the stellar particle. If one does not care about the morphology of individual H ii regions but only their cumulative feedback to the ISM and galaxy, the spatial resolution problem can be partly solved by lowering the value of \(N_{\rm nb}\) to \(N_{\rm nb}<\mathcal{R}_{i}\). In practice, \(N_{\rm nb}\) can simply be set to 2 to maximize the probability of resolving the Stromgren spheres (e.g. Kannan et al. 2020).
However, this correction cannot eliminate the possibility of _over-ionization_, so highly unresolved H ii regions will still lead to divergent feedback. Also, this correction is a global method that is hard to adjust dynamically based on the local resolution. Therefore, we will introduce a novel correction method in the next section, which corrects the resolution effects and provides convergent results by dynamically reducing the ionization rate in the neighbouring cells.
#### 6.1.2 Accurate ionization and heating balances
In this section, we describe an alternative way to correct spatial resolution issues based on enforcing the correct balance between recombination and ionization rates (equation 17).
Recall in Section 4, when we discussed the resolution effects in low-resolution cases, we assumed that the entire initial Stromgren sphere is embedded in a single cell. In this case, when the ionized mass reaches the Stromgren mass \(M_{\rm S}\), the ionization fraction in this cell only reaches \(\mathcal{R}_{i}\) rather than \(\sim 1\), which leads to an ionization rate \(R_{\rm ion}\) larger than the recombination rate \(R_{\rm rec}\). Our modification aims to reduce the ionization rate by a factor \(f_{\rm cor}\) so that the ionization and recombination are balanced once the ionized mass reaches \(M_{\rm S}\), i.e.
\[\alpha_{\rm B}n_{\rm H}^{2}x_{\rm H\,\textsc{ii}}^{2}=f_{\rm cor}\tilde{c}(1-x_{\rm H\,\textsc{ii}})n_{\rm H}\sigma_{\rm H\,\textsc{i}}n_{\gamma}\,, \tag{37}\]
when \(M_{i}=M_{\rm S}\).
In practice, photons from the star are dumped into \(N_{\rm nb}\) cells with a weight factor \(w_{k}\) for each neighbour, e.g. based on solid angle weighting. Here we assume that photons from the star are always trapped in the nearest neighbours of the star when the resolution is lower than \(\mathcal{R}_{i}=N_{\rm nb}\). In this case, each neighbouring cell \(k\) can thus be regarded as an individual unresolved H ii region with a source rate of \(Q_{k}=w_{k}Q\). By definition of the mass resolution (equation 7), we have \(\mathcal{R}_{i,k}\) for cell \(k\) of
\[\mathcal{R}_{i,k}=\frac{m_{\rm H}}{\alpha_{\rm B}}\frac{w_{k}Q}{n_{\rm H}M_{ \rm cell}}=w_{k}\mathcal{R}_{i}\,. \tag{38}\]
When the initial Stromgren sphere is formed, the ionized gas of mass \(M_{i}\) is hosted by these \(N_{\rm nb}\) cells, and \(M_{i}=\sum_{k}x_{\rm H\,\textsc{ii},k}M_{\rm cell}\) where \(x_{\rm H\,\textsc{ii},k}\) is the ionization fraction of cell \(k\). Because \(\sum_{k}w_{k}=1\), we have \(x_{\rm H\,\textsc{ii},k}\approx\mathcal{R}_{i,k}\) at the time when \(M_{i}=M_{\rm S}\). This is similar to the idealized one-zone case we discussed in Section 4, except that we use \(\mathcal{R}_{i,k}\) to account for every neighbour. Replacing \(x_{\rm H\,\textsc{ii}}\) with \(\mathcal{R}_{i,k}\) in equation (37), we have the target balance equation for each cell
\[\alpha_{\rm B}n_{\rm H}\mathcal{R}_{i,k}^{2}\approx f_{\rm cor}\tilde{c}\sigma_{\rm H\,\textsc{i}}n_{\gamma}\,. \tag{39}\]
Therefore, the correction factor \(f_{\rm cor}\) that is required to establish an accurate balance once \(M_{i}\) reaches \(M_{\rm S}\) is
\[f_{\rm cor}=\frac{\alpha_{\rm B}n_{\rm H}\mathcal{R}_{i,k}^{2}}{\tilde{c}\sigma_{\rm H\,\textsc{i}}n_{\gamma}}\,. \tag{40}\]
The steps to implement this correction are listed below:
1. We use an attribute \(\mathcal{R}_{i,l}\) to store the local mass resolution of each gas cell \(l\) and initialize it as \(\mathcal{R}_{i,l}=-1\) at the beginning of each step.
2. When looping through neighbouring gas cells around the star in the photon injection routine, evaluate the local mass resolution \(\mathcal{R}_{i,k}\) for each cell \(k\) based on its weight factor \(w_{k}\) and gas attributes (equation 38).
3. In the thermochemistry routine, for each cell \(l\), we check whether \(0<\mathcal{R}_{i,l}<1\). If so, we go to the next step to conduct the resolution correction. Otherwise, the cell is either already resolved (\(\mathcal{R}_{i,l}>1\)) or not a neighbouring cell (\(\mathcal{R}_{i,l}=-1\)).
4. If cell \(l\) requires a resolution correction, activate the correction by checking if \(x_{\rm H\,\textsc{ii},l}>\mathcal{R}_{i,l}\), calculate the correction factor \(f_{\rm cor}\) (equation 40), and reduce the ionization and heating rates by multiplying them with \(f_{\rm cor}\).
In this way, the spatial resolution correction is implemented dynamically only for cells in the neighbour lists of the stars and only based on their own gas attributes. This correction algorithm can thus be easily applied to realistic simulations with non-uniform gas density. The analogous correction factors for more species can also be obtained by modifying the detailed balance equations. We derive and describe the correction factors for He and H\({}_{2}\) in Appendix D.
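A minimal sketch of how steps 1-4 might look for a single neighbouring cell is given below; it is not the actual arepo-rt implementation, the rate coefficients are placeholder values, and capping \(f_{\rm cor}\) at unity is our own conservative assumption:

```python
# Minimal sketch (not the actual AREPO-RT code) of the correction in steps 1-4
# for one neighbouring cell. Rate coefficients are placeholder values, and the
# cap of f_cor at unity is our own conservative assumption.
alpha_B = 2.6e-13          # cm^3 s^-1, case-B recombination coefficient (~1e4 K)
sigma_HI = 6.3e-18         # cm^2, H I photoionization cross-section (assumed)
c_red = 1.0e-3 * 3.0e10    # cm s^-1, reduced speed of light
m_H = 1.67e-24             # g

def local_resolution(w_k, Q, n_H, M_cell):
    """Equation (38): per-neighbour mass resolution R_{i,k}."""
    return (m_H / alpha_B) * (w_k * Q) / (n_H * M_cell)

def correction_factor(n_H, R_ik, n_gamma):
    """Equation (40): factor that rebalances ionization against recombination."""
    return alpha_B * n_H * R_ik ** 2 / (c_red * sigma_HI * n_gamma)

def corrected_rates(ion_rate, heat_rate, x_HII, R_ik, n_H, n_gamma):
    """Steps 3-4: rescale the rates only for unresolved neighbouring cells."""
    if 0.0 < R_ik < 1.0 and x_HII > R_ik:
        f_cor = min(correction_factor(n_H, R_ik, n_gamma), 1.0)
        return ion_rate * f_cor, heat_rate * f_cor
    return ion_rate, heat_rate

# Example call with placeholder cell properties (M_cell ~ 100 Msun in grams):
R_ik = local_resolution(w_k=1.0 / 32.0, Q=1.0e48, n_H=1.0e3, M_cell=2.0e35)
print(corrected_rates(1.0, 1.0, x_HII=0.5, R_ik=R_ik, n_H=1.0e3, n_gamma=1.0e2))
```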
In Figure 10, we present the low-resolution runs of the static Stromgren sphere test (Test 1, Section 4.1) with (dashed curves) and without (solid curves) our correction. With the correction, both the marginally resolved (\(\mathcal{R}_{i}=1\) and 100) and unresolved (\(\mathcal{R}_{i}=0.1\) and 0.01) Stromgren spheres obtain equilibrium ionized masses within a factor of 2 of the expected analytic result. Because we only conduct resolution corrections for cells in the neighbour list of the star, the ionized mass increases slowly over time as a portion of photons leaks out of these cells. However, we will soon see that this only leads to a minor effect on the overall momentum feedback.
In Figure 11, we present the momentum and energy feedback with and without the spatial resolution correction at 1 Myr. For the marginally-resolved cases (\(0.1\leq\mathcal{R}_{i}\leq 10\)), the momentum and energy feedback after correction perfectly match the high-resolution results. In the significantly unresolved case (\(\mathcal{R}_{i}=0.01\)), the inadequate mass resolution results in an artificial enhancement of momentum and energy feedback by over one order of magnitude, arising from the effects of _over-ionization_ and _over-heating_. With the resolution correction, the momentum and energy feedback are brought under control and are even reduced by a factor of a few compared to the true solution. This is because our correction not only ensures the correct initial Stromgren mass but also partly fixes the _over-heating_ issue by limiting the photoheating rate and correcting the balance between heating and cooling, so the internal energy is approximately conserved (this is not guaranteed because the reduced cooling rate is not corrected). In return, the unresolved Stromgren spheres reach even lower temperatures, several hundred Kelvin for the \(\mathcal{R}_{i}=0.01\) run, leading to insufficient pressure gradients to drive the expansion by converting thermal energy to kinetic energy. Therefore, the highly unresolved H ii regions simply dissolve into the background ISM and provide a smaller amount of feedback.
To maximize the possibility of resolving ionization feedback, we reduce \(N_{\rm nb}\) to 2 (Section 6.1.1) along with our resolution correction (Section 6.1.2). The magenta triangle in Figure 11 shows the corrected results of the \(\mathcal{R}_{i}=0.01\) run combining these two methods. By concentrating the deposition of ionizing photons in fewer cells, we achieve more accurate momentum feedback and energy injection compared to only using the correction.
Figure 10: Evolution of the ionized mass from low mass resolution static Strömgren sphere simulations (Test 1, Section 4.1) with (dashed curves) and without (solid curves) our correction.
#### 6.1.3 Caveat: Trade-off between correcting ionization feedback and achieving accurate recombination line emission
Although enforcing the correct balance between recombination and ionization rates can help us obtain the correct ionized mass and feedback from unresolved H ii regions, it can lead to additional issues in the post-processing prediction of recombination line emission, which relies on the conversion of ionizing photons into line photons (e.g. Smith et al., 2022).
Without our correction, the balance between the total ionizing flux and integral recombination rate
\[Q=\int R_{\rm rec}\,{\rm d}V=4\pi\int\alpha_{\rm B}\,n_{\rm H}^{2}\,x_{\rm H\,\textsc{ii}}^{2}\,r^{2}\,{\rm d}r\,, \tag{41}\]
which holds at all resolutions because the photon number is conserved. However, our correction equivalently reduces \(Q\) by a factor of \(\mathcal{R}_{i,k}\) to ensure
\[M_{i}=4\pi m_{\rm H}\int n_{\rm H}x_{\rm H\,ii}r^{2}{\rm d}r=M_{\rm S}\,. \tag{42}\]
Thus, there is a trade-off between obtaining the correct ionized momentum/energy on the fly based on the correct \(M_{i}\) and obtaining precise recombination line emission with post-processing based on the recombination integral (equation 41).
In Figure 12, we present the radial integration of the recombination rate as a function of radius obtained from the runs presented in Figure 10. The recombination integrals in the uncorrected runs converge to unity at all resolutions, while those in the corrected runs decrease with a scaling \(\propto\mathcal{R}_{i,k}\). Consequently, employing our correction will lead to an underestimate of the line emissivity by a factor of \(\mathcal{R}_{i,k}\). Correcting ionization feedback while simultaneously achieving accurate recombination line emission thus presents a challenging trade-off. A possible solution is to use the \(\mathcal{R}_{i,k}\) values stored in the gas cells to further correct the recombination rate in the cells where the spatial resolution correction is activated. Ongoing investigations aim to improve solutions to this dilemma.
### Temporal correction
#### 6.2.1 Coupling photon density into thermochemistry calculations
The most direct (but expensive) solution to solve the temporal resolution problems is coupling the photon density into the ODE system of thermochemistry calculations. For hydrogen-only cases with a single UV bin, there are three variables: the H ii fraction \(x_{\rm H\,ii}\), the photon density normalized to its initial value \(r_{\gamma}=n_{\gamma}/n_{\gamma},0\), and specific internal energy \(u\). Thus, we need to solve the following three equations simultaneously
\[\begin{split}\frac{{\rm d}x_{\rm H\,\textsc{ii}}}{{\rm d}t}&=-\alpha_{\rm H\,\textsc{ii}}n_{\rm H}x_{\rm H\,\textsc{ii}}^{2}+\sigma_{\rm e\,H\,\textsc{i}}n_{\rm H\,\textsc{ii}}x_{\rm H\,\textsc{i}}+\tilde{c}\sigma_{\rm H\,\textsc{i}}x_{\rm H\,\textsc{i}}n_{\gamma,0}r_{\gamma}\,,\\ \frac{{\rm d}r_{\gamma}}{{\rm d}t}&=-\tilde{c}\sigma_{\rm H\,\textsc{i}}n_{\rm H}x_{\rm H\,\textsc{i}}r_{\gamma}\,,\quad\text{and}\\ \frac{{\rm d}u}{{\rm d}t}&=\frac{1}{\rho}\Big{[}\tilde{\Gamma}\,x_{\rm H\,\textsc{i}}r_{\gamma}-(\tilde{\Lambda}_{\rm ff}+\tilde{\Lambda}_{\rm rec})x_{\rm H\,\textsc{ii}}^{2}-(\tilde{\Lambda}_{\rm bb}+\tilde{\Lambda}_{\rm bf})x_{\rm H\,\textsc{ii}}x_{\rm H\,\textsc{i}}\Big{]}\,,\end{split} \tag{43}\]
where \(\tilde{\Gamma}\) is the constant part of the photoionization heating rate and the \(\tilde{\Lambda}\) terms are the temperature-dependent parts of the cooling rates listed in Appendix A.
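As an illustration of such a coupled solver, the sketch below integrates the first two equations of (43) for a single zone with scipy, holding the temperature (and hence the rate coefficients) fixed near \(10^{4}\) K so that the energy equation can be dropped; all numerical values are placeholders rather than parameters of our tests:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-zone sketch of a photon-coupled solver: the first two equations of (43)
# are integrated with the temperature (hence the rate coefficients) held fixed
# near 1e4 K, so the energy equation is dropped. All values are placeholders.
alpha_B = 2.6e-13           # cm^3 s^-1, case-B recombination coefficient
sigma_HI = 6.3e-18          # cm^2, H I photoionization cross-section
c_red = 1.0e-3 * 3.0e10     # cm s^-1, reduced speed of light
n_H = 1.0e3                 # cm^-3, hydrogen density
n_gamma0 = 1.0e4            # cm^-3, initial photon density (10 photons per H atom)

def rhs(t, y):
    x_HII, r_gamma = y
    x_HI = 1.0 - x_HII
    dx = -alpha_B * n_H * x_HII ** 2 + c_red * sigma_HI * x_HI * n_gamma0 * r_gamma
    dr = -c_red * sigma_HI * n_H * x_HI * r_gamma
    return [dx, dr]

t_rec = 1.0 / (alpha_B * n_H)
sol = solve_ivp(rhs, [0.0, 2.0 * t_rec], [1.0e-6, 1.0], method="LSODA", rtol=1.0e-8)
x_end, r_end = sol.y[0, -1], sol.y[1, -1]
print(f"x_HII = {x_end:.4f}, surviving photon fraction = {r_end:.3f} after 2 t_rec")
# Because the photon density is evolved together with x_HII, photons that are
# not needed for ionization or to balance recombinations survive the step.
```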
In principle, the photon density coupled thermochemistry solver should give an accurate solution no matter how long the integration step is. In Figure 13, we present the result of Test 3 (Section 5.2) solved by a coupled solver (red curve). The integration time step is decided by the code as \(\Delta t=7.8\,t_{\rm rec}\). This solution matches the \(\Delta t=0.5\,t_{\rm ion}\) fiducial run (grey dashed curve) perfectly except at the very beginning, where the too-long time step leads to a delay in the formation of the initial Stromgren sphere. As a comparison, the green curve shows the solution of the uncoupled solver with the same time step, whose momentum feedback is erroneously reduced by over two orders of magnitude.
Figure 11: The momentum (_top_) and energy (_bottom_) feedback at 1 Myr obtained with (black triangles) and without (blue circles) the spatial resolution correction as a function of spatial resolution \(\mathcal{R}_{i}\). The magenta triangle shows the results of the \(\mathcal{R}_{i}=0.01\) run combining reducing the number of neighbours to 2 and our spatial resolution correction. The \(y\)-axis quantities are normalized to the fiducial results (\(\mathcal{R}_{i}=1000\)).
Figure 12: Radial integration of the recombination rate (RHS of equation 41, normalized to the ionizing photon rate \(Q\)) as a function of radius obtained from the runs presented in Figure 10. The grey dashed lines indicate the value of \(\mathcal{R}_{i,k}\), to which the corrected runs asymptote.
#### 6.2.2 Limited photon absorption in an uncoupled solver
Although the coupled solver provides an accurate solution for the temporal resolution problem, it can become unnecessarily expensive once He and H\({}_{2}\) are included and additional frequency bins accounting for their photochemistry are added. Here we introduce an approximate solution to correct the _missing photons_ issue with an uncoupled solver, based on the suggestion in Jaura et al. (2020).
In Section 5.1.1, we have seen that for a neutral cell, the maximum number of photons that can be absorbed is \((1+f_{\rm rec})N_{\rm H\,\textsc{i}}\), where \(f_{\rm rec}=\Delta t/t_{\rm rec}\). Jaura et al. (2020) proposed a simple method to solve this problem, in which the number of absorbed photons is forced to be smaller than or equal to the number of atoms that can be ionized. For hydrogen, the number of photons absorbed in a thermochemistry step is
\[\Delta N_{\gamma}=\min\left[(1+f_{\rm rec})N_{\rm H\,\textsc{i}}\,,\ N_{\gamma,0}\left(1-e^{-\Delta t/t_{\rm ion}}\right)\right]\,, \tag{44}\]
where \(N_{\rm H\,\textsc{i}}\) and \(N_{\gamma,0}\) are the number of neutral hydrogen atoms and the number of hydrogen-ionizing photons in a gas cell, respectively.
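The correction amounts to a single extra line in the absorption step; the sketch below applies equation (44) with arbitrary example numbers chosen so that \(t_{\rm rec}\gg t_{\rm ion}\):

```python
import numpy as np

# Sketch of the limited photon absorption correction of equation (44) for a
# single uncoupled thermochemistry step. The atom and photon counts and the
# time-scales are arbitrary example values with t_rec >> t_ion.
def absorbed_photons(N_gamma0, N_HI, dt, t_ion, t_rec):
    f_rec = dt / t_rec
    uncapped = N_gamma0 * (1.0 - np.exp(-dt / t_ion))    # uncorrected absorption
    return min((1.0 + f_rec) * N_HI, uncapped)           # equation (44)

N_HI, N_gamma0 = 1.0e50, 1.0e52
t_ion, t_rec = 1.0, 1.0e3                                # arbitrary units
for dt in [1.0, 10.0, 1.0e3, 1.0e4]:
    dN = absorbed_photons(N_gamma0, N_HI, dt, t_ion, t_rec)
    print(f"dt = {dt:8.1f}: photons absorbed = {dN:.2e} (of {N_gamma0:.0e} available)")
# The cap keeps most photons alive when dt >> t_ion, so they remain available
# to suppress recombinations in subsequent RT substeps.
```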
In the left panel of Figure 14, we present the number of photons absorbed in a fully neutral neighbouring cell during the first step after the star ignites, as a function of the size of the time step (with the setup of Test 3, assuming the sizes of all the relevant time steps match each other). The solution of the coupled solver (red curve) is regarded as the accurate solution. Compared to this, the uncoupled solver (green curve) deviates from the solution when the time step is larger than \(t_{\rm ion}\), exhibiting a severe _missing photons_ issue, while the limited photon absorption correction (equation 44, blue curve) matches the accurate solution in both the large and small \(\Delta t\) regimes.
In Section 5.1.2, we discussed how the difference in cadence between the photon injection and thermochemistry routines can exacerbate the _missing photons_ issue by repeatedly consuming photons in subsequent RT substeps. As the green lines in Figure 14 show, if \(\Delta t_{\star}\) is larger than a few times \(0.01\,t_{\rm rec}\), almost all photons injected will be absorbed in the first RT step, leaving the gas photon-free to recombine in the second RT step. This issue is also solved by the limited photon absorption correction. As shown in the _right_ panel of Figure 14, our correction method ensures that only a small fraction of the total number of photons dumped in an injection step \(\Delta t_{\star}\) are absorbed in the first RT substep. This allows a substantial number of photons to remain in the second substep and suppress gas recombinations. Therefore, the total number of photons absorbed during an injection step shows a perfect agreement with the accurate solution, demonstrating the effectiveness of our correction.
In Figure 13, the blue curve shows the momentum feedback obtained with a corrected uncoupled solver, which presents a perfect match to the result of the coupled solver (red curve). Such a perfect correction is based on the fact that the recombination rate, as a function of temperature, is not changing dramatically in our idealized situation. For H ii regions, this holds in most cases because they usually have an equilibrium temperature of about \(10^{4}\) K, while one should be cautious about other environments with more drastic temperature variations or fluctuations.
## 7 Discussion and conclusions
### When are resolution corrections needed?
We have demonstrated that resolving the formation phase of H ii regions both spatially and temporally is crucial to couple their momentum and energy feedback to the ISM in a convergent manner. We have also proposed several methods to correct these resolution problems. Here we briefly discuss in what situations we need to include these resolution corrections.
#### 7.1.1 Spatial resolution issues
First of all, we emphasise that eliminating the _over-ionization_ and _over-heating_ problems is crucial to obtaining convergent simulation results. Here we will discuss in what situations these two problems become severe numerical issues.
Most RT-coupled Lagrangian codes with similar stellar and gas particle masses \(M_{\rm cell}\sim M_{\star}\gtrsim 10^{3}\,{\rm M}_{\sun}\) treat the stars born in a GMC as a single stellar particle (e.g. Hopkins et al., 2018; Marinacci et al., 2019). In this case, the ionizing flux from stellar particles depends only on the IMF-averaged ionizing photon rate per unit stellar mass, which is \(\langle q\rangle\sim 5\times 10^{46}\,{\rm photons\,s}^{-1}{\rm M}_{\sun}^{-1}\) (Rosdahl et al., 2015). Thus, the resolution effects become important when
\[n_{\rm H}\gtrsim 16\,{\rm cm}^{-3}\left(\frac{\mathcal{R}_{\rm f}}{10}\right)^{- 1}\left(\frac{\langle q\rangle}{5\times 10^{46}\,{\rm s}^{-1}{\rm M}_{\sun}^{-1}} \right)\,. \tag{45}\]
Regions with higher density in such simulations must be treated with resolution corrections to avoid enhanced ionization feedback.
Figure 13: Evolution of momentum injection for Test 3 (Section 5.2) runs obtained with the coupled solver (red), the uncoupled solver without correction (green), and the uncoupled solver with limited absorption correction (blue), using a Courant time step decided by the code. The grey dashed curve is the fiducial result obtained with a time step \(\Delta t=0.5\,t_{\rm ion}\).
Higher-resolution simulations with \(M_{\rm cell}\lesssim 10\,{\rm M}_{\sun}\) must treat each massive star as a single star particle (e.g. Emerick et al., 2019). In this case, the density threshold requiring correction becomes
\[n_{\rm H}\gtrsim 32\,{\rm cm}^{-3}\left(\frac{\mathcal{R}_{i}}{10}\right)^{-1} \left(\frac{M_{\rm cell}}{10\,{\rm M}_{\sun}}\right)^{-1}Q_{48}\,. \tag{46}\]
This result looks similar to that of low-resolution simulations. However, the steep mass-luminosity function of individual massive stars means that the H ii regions of low-mass B-type stars with low ionizing photon flux are susceptible to the low-resolution effects. For example, if a \(7\,{\rm M}_{\sun}\) star is still treated as an individual stellar particle (\(Q=8\times 10^{43}\,{\rm s}^{-1}\), adopting the fitting function of Schaerer 2002), the density threshold of its Stromgren sphere obtained from equation (46) is only \(0.0025\,{\rm cm}^{-3}\). Therefore, choosing a reasonable mass floor above which to conduct explicit RT feedback is a nontrivial problem. Fainter, less-massive stars are also ineffective in providing momentum feedback. Estimated from the early-phase analytic results (equations 5 and 6), the total momentum injection of a star follows \(p\propto Q^{6/7}\gamma^{3/7}\). For instance, the \(7\)-\(15\,{\rm M}_{\sun}\) stars only provide 1/10 of the momentum feedback compared to the entire \(7\)-\(120\,{\rm M}_{\sun}\) interval (taking the mass-luminosity and mass-lifetime functions of Schaerer (2002) weighted by the Kroupa (2001) IMF), while the ionizing rate of a \(15\,{\rm M}_{\sun}\) star can already boost this density threshold to \(3\,{\rm cm}^{-3}\).
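The sketch below evaluates equation (46) for the stellar examples discussed above; the ionizing photon rates are approximate values consistent with the numbers quoted in the text rather than exact Schaerer (2002) fits:

```python
# Sketch evaluating the density threshold of equation (46), above which the
# initial Stromgren sphere of an individual star is unresolved (R_i < 10) for
# a given cell mass. The ionizing photon rates are approximate values chosen
# to be consistent with the examples quoted in the text.
def n_threshold_cm3(Q_ion, M_cell_Msun=10.0, R_i=10.0):
    """Equation (46); Q_ion in photons per second, returns n_H in cm^-3."""
    return 32.0 * (10.0 / R_i) * (10.0 / M_cell_Msun) * (Q_ion / 1.0e48)

examples = [("7 Msun star", 8.0e43), ("15 Msun star", 1.0e47), ("O-type star", 1.0e48)]
for label, Q in examples:
    print(f"{label:12s}: correction needed above n_H ~ {n_threshold_cm3(Q):.1e} cm^-3")
```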
Simulations focusing on the formation, accretion, and evolution of individual stars in GMCs have much higher mass resolution and treat stars as sink particles (e.g. Krumholz et al. 2011; Grudic et al. 2021). The typical mass of gas cells in these simulations is \(M_{\rm cell}\sim 10^{-3}\,{\rm M}_{\sun}\), allowing a density threshold \(>10^{5}\,{\rm cm}^{-3}\) following equation (46), so the spatial resolution issues may not be as problematic for these simulations. However, these simulations usually aim to resolve the physics in ultra-dense regions of \(n>10^{7}\,{\rm cm}^{-3}\), where we still need to carefully deal with similar issues for the low-mass ionizing stars.
In summary, the initial Stromgren sphere can only be spatially resolved in regions with relatively low density or around a particularly strong ionizing source. Therefore, in simulations with large dynamic ranges, such as galaxy formation, multiphase ISM, and GMC, it becomes essential to consider implementing a correction to prevent divergent feedback from unresolved H ii regions.
#### 7.1.2 Temporal resolution issues
The requirement for safely ignoring the temporal resolution correction is much more stringent. As presented in Appendix B, if \(\Delta t_{\rm RT}\gg t_{\rm rec}\), the _missing photons_ issue will appear as long as the spatial resolution is high (i.e. \(\mathcal{R}_{i}\gg 100\), equation 34), so that the number of H atoms in a cell is much less than the number of photons. Violating this requirement is usually not problematic, however, because the _missing photons_ issue only happens when the radiation first meets neutral gas, unless there are differences between the sizes of the photon injection step and the thermochemistry step.
When the injection cadence is not the same as that of the thermochemistry, the simplified time-stepping criterion on the photon injection step, \(\Delta t_{\star}\lesssim 0.1t_{\rm rec}N_{\rm sub}/(N_{\rm sub}-1)\), implies a density threshold of (combining equations 4 and 27)
\[n_{\rm H}\gtrsim 0.004\,{\rm cm}^{-3}\,\eta^{-3}\left(\frac{N_{\rm sub}}{N_{ \rm sub}-1}\right)^{3}\left(\frac{\tilde{c}}{10^{-3}c}\right)^{3}\left(\frac{ \mathcal{R}_{i}}{10}\right)Q_{48}^{-1}\,. \tag{47}\]
This value is too small for most classes of simulations unless the spatial resolution is extremely high. Thus, we strongly suggest that all RT implementations with different sourcing and absorption cadences should consider adopting a special time-stepping scheme or a temporal resolution correction to avoid the _missing photons_ issue. If the correction is not implemented, choosing a smaller Courant factor \(\eta\) or a faster speed of light \(\tilde{c}\) is the easiest way to significantly increase the density threshold (\(\sim\eta^{-3}\tilde{c}^{3}\)) and ensure the correct simulation behaviour.
Figure 14: _Left panel:_ Similar to Figure 7, but including results from the uncoupled solver with the limited absorption correction (blue curve), which is in agreement with the accurate coupled solver (red curve), avoiding the _missing photons_ issue seen in the uncoupled solver without correction (green curve). _Right panel:_ Similar to the _left_ panel but with \(\Delta t=\Delta t_{\star}=\Delta t_{\rm RT}\) and the \(y\)-axis normalized to the number of photons dumped in an injection step of size \(\Delta t_{\star}\). The red curve shows the accurate solution of the coupled solver, and the blue (green) dashed, dotted, and solid curves are the solutions of the corrected (uncorrected) uncoupled solver for the first and second RT steps, and their summation, respectively.
### H\({}_{2}\) and helium
In this work, we mainly focused on pure hydrogen gas, while the thermochemistry of molecular hydrogen and helium can also play a significant role in realistic astrophysical environments.
H\({}_{2}\) in H ii regions can be converted to H atoms through photodissociation (PD) by 11.2-13.6 eV Lyman-Werner (LW) band photons and photoionization by \(>15.2\) eV UV photons. Theoretically, for typical massive stars, the PD-front is merged with the I-front in the early phase, i.e. the H\({}_{2}\) molecules are dissociated to H atoms and ionized to H\({}^{+}\) ions almost simultaneously (Bertoldi & Draine, 1996). On the other hand, the processes forming H\({}_{2}\) (dust catalysis and gas-phase formation) have very low rate coefficients compared to the dissociation rates (Nickerson et al., 2018). For example, comparing the ratio between the rate coefficient of the fastest H\({}_{2}\) formation channel, dust catalysis \(\alpha_{\rm H_{2}}^{\rm D}\), and the LW-band dissociation cross-section \(\sigma_{\rm H_{2}}^{\rm LW}\) with the corresponding ratio of the recombination and ionization rates of hydrogen, we have \((\alpha_{\rm H_{2}}^{\rm D}/\sigma_{\rm H_{2}}^{\rm LW})/(\alpha_{\rm B}/\sigma_{\rm H\,\textsc{i}})\approx 0.0001\) (assuming a Milky Way dust-to-gas ratio of \(\sim 0.01\), Draine et al. 2007). Thus, both spatially and temporally resolving the dissociation of H\({}_{2}\) while the H ii region is forming is much easier than doing so for H atoms.
For helium, the case B radiative recombination rate for He ii is similar to H i (\(2.72\times 10^{-13}T_{4}^{-0.789}\) cm\({}^{3}\) s\({}^{-1}\), Benjamin et al., 1999). Adopting a primordial abundance of \(X=0.76\), the number density ratio of He to H is 0.08, so the initial Stromgren mass of helium is about \(3.5\Omega_{\rm He}/\Omega_{\rm H}\) times as much as that of hydrogen, where \(\Omega_{\rm H}\) and \(\Omega_{\rm He}\) are the ionization photon rates of H and He, respectively (equation 2). Therefore, for a given star, the mass resolution of its helium Stromgren sphere is \(15\Omega_{\rm He}/\Omega_{\rm H}\) times that of hydrogen. Resolving the initial Stromgren mass of helium is even easier for most O-type stars with \(\Omega_{\rm He}>\Omega_{\rm H}/15\) so long as their \(T_{\rm eff}\gtrsim 3.5\times 10^{4}\) K (Sternberg et al., 2003; Martins et al., 2005), so the mass resolution for helium can meet this requirement in most situations as long as hydrogen ionization is spatially resolved.
The ionization of He ii requires photons with \(h\nu>54.4\) eV. \(\Omega_{\rm He\,ii}\) is usually very small for ordinary massive stars and the recombination rate for He iii is about 10 times larger (\(2.19\times 10^{-12}\) cm\({}^{3}\) s\({}^{-1}\), Verner & Ferland, 1996). This makes the He iii Stromgren sphere, which contains very little mass, difficult to resolve. Additionally, resolving or correcting the He ii ionization requires obtaining the correct He ii fraction in the first place, which is not always guaranteed. Thus, the mass and physical state of the He iii content might be highly uncertain in most simulations, but overall He iii makes a negligible contribution to the feedback.
For the temporal resolution of helium, the characteristic time-scales should be similar to that of hydrogen, namely the recombination time-scales \(\{t_{\rm rec,He\,\textsc{ii}},t_{\rm rec,He\,\textsc{iii}}\}=\{1/\alpha_{\rm He\,\textsc{ii}}(n_{\rm H}+n_{\rm He}),1/\alpha_{\rm He\,\textsc{iii}}(n_{\rm H}+2n_{\rm He})\}\). As the recombination coefficient \(\alpha_{\rm He\,\textsc{ii}}\) is comparable to that of H, \(t_{\rm rec,He\,\textsc{ii}}\) is also comparable to that of H, making it similarly difficult to resolve temporally. However, for He iii, \(\alpha_{\rm He\,\textsc{iii}}\) is one order of magnitude larger, making \(t_{\rm rec,He\,\textsc{iii}}\) one order of magnitude smaller. This, in turn, makes the He ii-He iii thermochemistry even more challenging to resolve temporally.
In summary, resolving the hydrogen Stromgren sphere presents the greatest numerical challenge and dynamical significance compared to H\({}_{2}\) and He i. As such, the resolution requirements and correction methods proposed in this paper remain relevant even in more complex chemical networks. For primordial chemical networks, we outline the implementation of our spatial resolution correction scheme, which includes He and H\({}_{2}\) chemistry, in Appendix D. Additionally, readers can refer to Jaura et al. (2020) for a limited absorption correction that also considers He and H\({}_{2}\).
### H ii regions in inhomogeneous density structures
Realistic MCs exhibit highly filamentary structures, where protostars form in overdensities characterized by power-law density profiles (\(\rho\propto r^{-w}\), Larson, 1981; Kauffmann et al., 2010, see Andre et al., 2014 for a review). Franco et al. (1990) demonstrated that H ii regions in media with \(w>3/2\) can expand in an accelerating manner, driving a "champagne flow". In complex structures, H ii regions tend to expand toward the rarefied regions and stall in the dense regions as blister-type regions (Mellema et al., 2006; Gendelev & Krumholz, 2012; Zamora-Aviles et al., 2019; Jin et al., 2022).
In numerical simulations, these effects can be self-consistently modelled on a scale larger than the numerical resolution, but the small-scale density gradients are smoothed due to the inadequate resolution. In the absence of a sub-grid turbulence model, gas represented by a cell is assumed to be uniform. Consequently, numerically unresolved H ii regions suffer from _over-ionization_ and _over-heating_ issues similar to those in the uniform medium, while our corrections described in Section 6.1.2 ensure the correct ionized mass predicted by the uniform solutions. However, the Stromgren mass evaluated based on the mean density may overestimate the ionized mass since stars are embedded in unresolved overdense cores where the H ii regions are trapped by the high density. This potentially leads to an overestimation of feedback from these trapped H ii regions.
Nonetheless, once the H ii regions break through these dense cores due to expansion or the assistance of stellar winds and radiation pressure (e.g. Geen et al., 2020, 2021), they are easier to resolve (\(\mathcal{R}_{i}\propto n^{-1}\)) and the local density fluctuations will have a less significant impact on the global properties of the H ii regions as they expand toward the low-density gas. High-resolution radiation-hydrodynamic simulations by Mellema et al. (2006) showed that although the expansion of an H ii region in turbulent MCs is slower than that in a uniform medium with the same mean density, due to the overdensity around the star, over time the mean radius can eventually catch up with the uniform case. Hence, our corrections are considered reasonable approximations to capture the feedback from unresolved H ii regions in realistic environments.
### Other early feedback channels and their combination
In addition to ionization feedback, luminous massive stars also provide energetic early (pre-SN) feedback through radiation pressure and high-velocity stellar winds. However, resolving the effects of radiation pressure in numerical simulations presents its own challenges due to the high spatial and temporal resolution requirements, especially in dense dusty regions (e.g. Krumholz, 2018; Hopkins & Grudic, 2019). Similarly, modelling the effects of stellar winds also requires high-resolution simulations (e.g. Pittard et al., 2021). The rapid energy loss due to turbulent mixing at the wind bubble/shell interface poses additional challenges that need to be addressed to accurately simulate the wind feedback (Lancaster et al., 2021, 2021). The combination of all these early feedback channels in simulations is nonlinear and can provide a complicated collective impact rather than their simple superposition (e.g. Haid et al., 2018; Geen et al., 2021). Such physics introduces additional numerical resolution challenges and requires further investigation to better understand the collective impact on galaxy formation simulations.
### Summary
We have performed a suite of radiation hydrodynamic simulations of idealized H ii regions with the moment-based M1 closure RT solver in the moving-mesh code arepo. Below, we list our main findings.
1. Sufficient mass (spatial) resolution (\(\mathcal{R}_{i}=M_{\mathrm{S}}/M_{\mathrm{cell}}\)) is critical to accurately capture the evolution and feedback of simulated H ii regions. With a spatial resolution higher than \(\mathcal{R}_{i}=10\) (more than 10 cells inside the initial Stromgren sphere), the momentum and energy feedback from an expanding H ii region can be reproduced with an acceptable numerical error. If we want to reproduce the profiles of shock parameters, the resolution should be higher than \(\mathcal{R}_{i}=100\).
2. In the formation phase of H ii regions, insufficient spatial resolution lowers the ionization fraction of gas cells but can lead to an overestimation of the ionized mass while the H ii region is forming, i.e. the _over-ionization_ problem. In the expansion phase, insufficient spatial resolution fails to heat the ionized gas cells to the correct temperature and results in insufficient cooling; the total thermal energy of the gas is thus also overestimated, leading to enhanced momentum feedback, i.e. the _over-heating_ problem. It is crucial to avoid the _over-ionization_ and _over-heating_ problems in simulations because they both lead to divergent feedback when the H ii regions are unresolved. To correct the spatial resolution problems, we can consider both lowering the number of neighbours \(N_{\mathrm{nb}}\) and introducing a correction to reestablish the accurate balance between ionization and recombination. However, although these corrections can correct the momentum feedback from H ii regions, they may not help enough for dynamic multi-phase gas structures. The _over-heating_ problem will turn the highly-ionized, hot (\(>10^{4}\) K) gas concentrated in the H ii regions into diffuse, partially ionized, warm (\(\sim 8000\) K) gas, changing the multi-phase gas structure of the ISM in star-forming regions.
3. If the size of the time step is too large, almost all photons dumped in the first step will be lost and there is no mechanism to compensate for these missing photons, leading to the _missing photons_ issue. Most simulations of real astrophysical systems (e.g., galaxy formation, multi-phase ISM, GMC) cannot temporally resolve the ionization timescale of the gas. Thus, the _missing photons_ issue should be treated carefully if the cadences of photon injection and thermochemistry are different in these simulations. A rough time-stepping criterion for the injection time step is \(\Delta t_{\star}\lesssim 0.1\,t_{\mathrm{rec}}\,N_{\mathrm{sub}}/(N_{\mathrm{sub}}-1)\) when each injection time step is associated with \(N_{\mathrm{sub}}\) RT steps.
4. If the ratio of injection step size to RT step size (\(t_{\star}/t_{\mathrm{RT}}\)) is too large, the _missing photons_ issue will repeat after each injection. Ionizing fluxes reduced in this way will be insufficient to ionize and heat the gas, reducing the feedback from the H ii regions. A thermochemistry solver coupled to the photon densities yields an accurate solution and fundamentally solves the _missing photons_ problem, but this is an expensive choice when including many species and frequency bins. Alternatively, the uncoupled solver with the limited photon absorption approximation proposed by Jaura et al. (2020) is a much cheaper but very powerful correction for this issue.
5. Spatially and temporally resolving the thermochemistry of hydrogen is generally more challenging than resolving that of H\({}_{2}\) and He i. While resolving and correcting the He ii-He iii thermochemistry also presents its own difficulties, it is of relatively less importance for stellar feedback. Once the ionization feedback from hydrogen is spatially and temporally resolved, that for H\({}_{2}\) and He i will already be resolved, which alleviates the complexity of designing resolved ionization feedback models. We also outline a method to implement our spatial resolution correction scheme including He and H\({}_{2}\) chemistry in Appendix D, and we refer the readers to Jaura et al. (2020) for the limited absorption correction including He and H\({}_{2}\).
## Acknowledgements
We thank Volker Springel for giving us access to arepo. YD is grateful to Josh Borrow and Yang Ni for useful discussions. YD was a visiting student at the Massachusetts Institute of Technology sponsored by the ZhengGang Fund of NJU through the ZhengGang Scholarship for Overseas Study. AS acknowledges support under an Institute for Theory and Computation Fellowship at the Center for Astrophysics | Harvard & Smithsonian. MV acknowledges support through NASA ATP 19-ATP19-0019, 19-ATP19-0020, 19-ATP19-0167, and NSF grants AST-1814053, AST-1814259, AST-1909831, AST-2007355 and AST-2107724. GLB acknowledges support from the NSF (AST-2108470, XSEDE grant MCA06N030), NASA TCAN award 80NSSC21K1053, and the Simons Foundation through their support of the Learning the Universe collaboration. The simulations of this work were run on the MIT Engaging cluster, the Anvil cluster at Purdue University as part of XSEDE through TG-PHY20025, and the Stampede2 HPC resource at Texas Advanced Computing Center as part of XSEDE through TG-AST200007 and TG-MCA06N030. We use the python packages NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), astropy (Astropy Collaboration et al., 2013, 2018), and matplotlib (Hunter, 2007) to analyze and visualize the simulation data.
## Data availability
The data that support the findings of this study are available from the corresponding author, upon reasonable request.
|
2306.00236 | Cell motility modes are selected by the interplay of mechanosensitive
adhesion and membrane tension | The initiation of directional cell motion requires symmetry breaking that can
happen both with or without external stimuli. During cell crawling, forces
generated by the cytoskeleton and their transmission through mechanosensitive
adhesions to the extracellular substrate play a crucial role. In a recently
proposed 1D model (Sens, PNAS 2020), a mechanical feedback loop between
force-sensitive adhesions and cell tension was shown to be sufficient to
explain spontaneous symmetry breaking and multiple motility patterns through
stick-slip dynamics, without the need to account for signaling networks or
active polar gels. We extended this model to 2D to study the interplay between
cell shape and mechanics during crawling. Through a local force balance along a
deformable boundary, we show that the membrane tension coupled with shape
change can regulate the spatiotemporal evolution of the stochastic binding of
mechanosensitive adhesions. Linear stability analysis identified the unstable
parameter regimes where spontaneous symmetry breaking can take place. Using
simulations to solve the fully coupled nonlinear system of equations, we show
that starting from a randomly perturbed circular shape, this instability can
lead to keratocyte-like shapes. Simulations predict that different adhesion
kinetics and membrane tension can result in different cell motility modes
including gliding, zigzag, rotating, and sometimes chaotic movements. Thus,
using a minimal model of cell motility, we identify that the interplay between
adhesions and tension can select emergent motility modes. | Yuzhu Chen, David Saintillan, Padmini Rangamani | 2023-05-31T23:16:26Z | http://arxiv.org/abs/2306.00236v1 | # Cell motility modes are selected by the interplay of mechanosensitive adhesion and membrane tension
###### Abstract
The initiation of directional cell motion requires symmetry breaking that can happen both with or without external stimuli. During cell crawling, forces generated by the cytoskeleton and their transmission through mechanosensitive adhesions to the extracellular substrate play a crucial role. In a recently proposed 1D model (Sens, PNAS 2020), a mechanical feedback loop between force-sensitive adhesions and cell tension was shown to be sufficient to explain spontaneous symmetry breaking and multiple motility patterns through stick-slip dynamics, without the need to account for signaling networks or active polar gels. We extended this model to 2D to study the interplay between cell shape and mechanics during crawling. Through a local force balance along a deformable boundary, we show that the membrane tension coupled with shape change can regulate the spatiotemporal evolution of the stochastic binding of mechanosensitive adhesions. Linear stability analysis identified the unstable parameter regimes where spontaneous symmetry breaking can take place. Using simulations to solve the fully coupled nonlinear system of equations, we show that starting from a randomly perturbed circular shape, this instability can lead to keratocyte-like shapes. Simulations predict that different adhesion kinetics and membrane tension can result in different cell motility modes including gliding, zigzag, rotating, and sometimes chaotic movements. Thus, using a minimal model of cell motility, we identify that the interplay between adhesions and tension can select emergent motility modes.
The mechanism of cell crawling on substrates is important for understanding numerous biological processes such as morphogenesis and wound healing [1]. Experiments have revealed that several main subcellular processes, including actin polymerization [2], adhesion [3], and myosin contraction [4], are spatially and temporally orchestrated to generate coherent cellular motion. These experiments have also lent themselves to systematic theoretical and computational modeling [5]. Among these different subprocesses, the initiation of motion is of particular interest because it can happen both due to external cues and spontaneously due to intrinsic biochemical or mechanical instabilities, in a process known as cell self-polarization [6, 7]. A polarized cell undergoes distinct molecular processes at the front and rear, such as the distribution of Rho family GTPases which regulate the actin protrusion and adhesion formation [8]; this distribution specifies a direction for motility. Interestingly, despite the vast number of molecular players involved [8, 9], cell migration is essentially a mechanical process [10, 11]. The integration of cell signaling into mechanical processes has led to different scales of biophysical models [12, 13, 11].
Depending on the cell type, migrating cells can assume different shapes. For example, fibroblasts have multiple protrusions [14, 15, 16], while fast-moving keratocytes are characterized by their
flat, smooth, fan-shaped leading edges [17]. Despite these differences, a common mechanism for cells to undergo directional motion can be summarized as follows: the leading edge of the cell protrudes as a result of the speed difference between actin polymerization and retrograde flow, the rear retracts to keep up with the front [4, 18], and membrane tension plays a crucial role in this process because it can coordinate the protrusion and retraction as a global regulator for cell shape change and motility [19, 20, 21]. With this picture, many phenomenological models have been proposed to explain the underlying mechanisms for motility and shape determination. These include models constructed based on the graded radial extension hypothesis [22], viscoelastic actin network and myosin transport [23], force balance between treadmilling actin filaments and membrane tension [17, 24], two-phase fluids with actin polymerization [25], and many other redundant mechanisms [26]. Later, more comprehensive free-boundary models incorporated other features such as the discrete stick-slip adhesions [27, 28], the orientational order of the actin filament network [29, 30, 31, 32], and the feedback loop between actin flow, myosin, and adhesion [33] and are reviewed in [34]. Most of these simulations either start with a crescent shape to match experimental observations [22, 23], with perturbations or polarity fields along a specified direction [27, 28, 29, 30, 31, 32, 33], or with a prescribed front [25]. Therefore, how the cell shape transitions from random fluctuations to persistent motile shapes remains unclear from these models.
Apart from steady-moving states, keratocytes can also undergo more complex motility modes such as bipedal motion and spontaneous turning, or a combination of both [35, 36, 37, 38]. Experimental
Figure 1: (_A_) Schematic describing self-polarization in cells. Fluctuations of certain wavelengths are amplified by the intrinsic instability of the feedback loop between adhesions and membrane tension, leading to self-polarization. (_B_) Schematic describing the various motility modes. Directional motion is the uniaxial motion of the cell in one preferred direction. Bipedal motion involves antiphase retraction of the left-right trailing edge and lateral oscillation of the cell. In turning motion, the cell rotates in a certain direction with left-right asymmetry. (_C_) Sketch of the two-dimensional model (top view). The cell shape is determined by the difference between the polymerization velocity \(v_{p}\) and the retrograde velocity \(v_{r}\), where \(v_{r}\) is related to the off-rate of adhesive bonds distributed within the lamellipodium with width \(l_{1}\) along the cell boundary. The force balance between friction, membrane tension, and contraction is maintained at the cell edge. (_D_) Sketch of the two-dimensional model (side view). The adhesive bonds bind to the substrate with a constant rate \(k_{\text{on}}\) and unbind with a force-dependent rate \(k_{\text{off}}\) as the actin filaments move.
and modeling studies reveal that the stick-slip adhesive sites at the rear and their coupling with the cytoskeleton dynamics are crucial for such unsteady motions [37, 38, 39, 40]. However, the models used to investigate this coupling need prior knowledge of the positions of adhesion sites and the broken symmetries. As a result, they cannot predict how the distributions of adhesions are formed in the first place. Other simulations that obtain bipedal motion without prescribed adhesion sites take the substrate deformations into account [41, 42], but treat adhesion dynamics phenomenologically.
Mogilner _et al._ suggested that cell polarization and turning may share similar mechanisms that involve the feedback between actomyosin flows and stick-slip dynamics of adhesions [5]. Recently, Sens proposed a 1D mechanical feedback loop between the binding and unbinding of cell-substrate adhesions and linear cell tension that can lead to the spontaneous symmetry breaking [10]. Based on a mean-field approximation of the molecular clutch model for adhesions, this theoretical model also predicted various one-dimensional cell locomotion behaviors such as steady crawling, bistability, and bipedal motion as observed in experiments [37, 38], bridging the gap between microscopic factors and whole cell locomotion. Combining the mechanosensitive unbinding of adhesions [10] that are based on first principles, with cell shape change can provide a natural description for the formation and the stick-slip dynamics of adhesions and how they can lead to unsteady motions. However, this model does not explore the link between the mechanical feedback loop and cell shape and motility modes. Such an exploration requires a 2D formulation with a deformable boundary. Here, we asked whether the mechanical feedback loop between adhesion and cell shape change is sufficient to explain the dynamics of cell shape change and motility modes. Specifically, we extend the 1D model in [10] to two dimensions with deformable boundaries to investigate how the coupling between membrane tension and the stick-slip dynamics of adhesions determines cell shape change and the spatial distribution of traction forces during the initiation of motility and resulting sustained motion. Linear stability analysis of the model shows that uniform steady states are unstable in certain parameter regimes due to the stick-slip nature of the adhesions. Numerical simulations in 2D predict that the interplay between membrane tension and mechanosensitive adhesions is sufficient for a circular-shaped cell to spontaneously initiate and sustain motion (Fig. 1_A_), and the various motility modes mentioned above can be captured (Fig. 1_B_) even without the consideration of the complex reorganization of the cytoskeleton. Thus, our 2D model is able to capture the initiation of cell polarization and the different motility modes (directional, bipedal, and turning) by tuning the physical parameters, and demonstrates the crucial role for interplay between adhesions and membrane tension in cell motility.
## Model Development
### Governing equations
At cellular length scales, inertia is negligible [12], so that forces are balanced everywhere at any instant of time. When the actin filaments treadmill at the edge of the cell, the membrane imposes an opposing force while actomyosin contraction generates a contractile force (Fig. 1_C_). These two forces lead to the retrograde flow of actin filaments away from the cell edge and are locally balanced by a friction force pointing outwards and created by transmembrane adhesions that can stochastically bind or unbind to the flat substrate [43]:
\[\mathbf{F}_{\text{contraction}}+\mathbf{F}_{\text{membrane}}+\mathbf{F}_{\text{ friction}}=\mathbf{0}. \tag{1}\]
Here, we apply a coarse-grained approach and assume the focal adhesions are concentrated in a narrow region near the cell periphery with width \(l_{1}\), as proposed by [10]. The cell boundary is thus treated as a one-dimensional curve \(\Gamma(t)\) evolving in a two-dimensional plane (see Fig. S1 of the _SI Appendix_), where the adhesion clusters are material points containing a collection of \(\rho l_{1}\) adhesive linkers on average per unit length along the curve. The position vector of a material point on the boundary at time \(t\) is represented by \(\mathbf{x}(\alpha,t)\), where \(\alpha\in[0,2\pi]\) is a Lagrangian parameter (a detailed description of the geometry and parameterization can be found in the _SI_). At each material point, the total number of adhesive linkers for each adhesion cluster is assumed to be conserved [43]. Each of the adhesive linkers can unbind and rebind between the substrate and the sliding actin filaments with an off-rate \(k_{\text{off}}\) and an on-rate \(k_{\text{on}}\) (Fig. 1_D_), and the effect of thermal fluctuations is modeled with an effective diffusivity \(D\). Denoting the current arclength as \(s(\alpha,t)\in[0,L(t)]\), the fraction of bound linkers \(n(\alpha,t)\) at each material point evolves by the kinetic equation
\[\frac{\partial n}{\partial t}(\alpha,t)=k_{\text{on}}(1-n)-k_{ \text{off}}n+D\frac{\partial^{2}n}{\partial s^{2}}. \tag{2}\]
According to the Bell-Evans formula [44, 45], the mechanical force \(f_{b}\) felt by a given linker will lower the energy barrier for it to unbind from the substrate, such that the off-rate increases exponentially with the force as \(k_{\text{off}}=k_{\text{off}}^{0}\text{exp}(f_{b}/f_{0})\), where \(k_{\text{off}}^{0}\) is the off-rate under zero force and \(f_{0}\) is a molecular force scale for the linker to rupture with a typical order of several pN. For simplicity, the on-rate \(k_{\text{on}}\) is assumed to be a force-independent constant.
The mechanosensitivity of the adhesions allows us to capture the biphasic relation between the friction force and the retrograde velocity \(v_{r}\)[46, 47, 48, 49]. Here, we adopt a minimal mean-field approximation [10] where the average extension of a linker is approximated by the retrograde velocity \(v_{r}\) times its average lifetime \(1/k_{\text{off}}\). Each linker is viewed as an elastic spring with spring constant \(k_{b}\). By Hooke's law, the force experienced by a single linker is given by \(f_{b}=k_{b}v_{r}/k_{\text{off}}\). Hence, the retrograde velocity and the dimensionless off-rate \(r=k_{\text{off}}/k_{\text{off}}^{0}\) can be related by \(v_{r}=v_{\beta}r\log r\), where \(v_{\beta}=k_{\text{off}}^{0}f_{0}/k_{b}\) is a characteristic velocity scale related to the mechanosensitive unbinding process. In addition to the friction generated by the mechanosensitive adhesions, viscous dissipation between the actin flow and the substrate contributes to a linear friction \(\zeta_{0}v_{r}.\) Assuming that protrusions as well as the retrograde flow are locally normal to the cell boundary [50, 51], the total friction force is given by
\[\mathbf{F}_{\text{friction}}=\left(\zeta_{0}v_{r}+\zeta_{1}\frac{nv_ {r}}{r}\right)\mathbf{n}, \tag{3}\]
where \(\mathbf{n}\) is a unit outward normal vector on the cell edge \(\Gamma(t)\).
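The stick-slip character of this friction law can be seen directly by combining the steady state of Eq. (2) without diffusion, \(n=k_{\text{on}}/(k_{\text{on}}+k_{\text{off}})\), with the mean-field relation \(v_{r}=v_{\beta}\,r\log r\). The short Python sketch below (an illustration with assumed parameter values, not the authors' code) evaluates the total dimensionless friction and shows its biphasic dependence on the retrograde speed: the force rises on the sticking branch, drops once bonds rupture faster than they rebind, and rises again at high speed through the viscous term.

```python
import numpy as np

# Assumed dimensionless parameters (illustrative values only)
r_on = 20.0       # on-rate k_on / k_off^0
zeta1 = 600.0     # relative adhesion strength zeta_1 / zeta_0

r = np.linspace(1.001, 200.0, 2000)        # dimensionless off-rate r = k_off / k_off^0
v_r = r * np.log(r)                         # retrograde speed in units of v_beta
n_ss = r_on / (r_on + r)                    # steady-state bound fraction from Eq. (2), no diffusion
friction = v_r + zeta1 * n_ss * np.log(r)   # total friction in units of zeta_0 * v_beta

# Locate the local maximum on the low-velocity (sticking) branch
mask = v_r < 50.0
i_max = int(np.argmax(np.where(mask, friction, -np.inf)))
print(f"peak friction ~ {friction[i_max]:.1f} (zeta_0 v_beta) at v_r ~ {v_r[i_max]:.1f} (v_beta)")
```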
To highlight the interplay between the mechanosensitive adhesions and the membrane tension, we simply treat the contractile force per unit length as a constant force pointing inwards along the normal direction, \(\mathbf{F}_{\text{contraction}}=-\sigma_{c}\mathbf{n}\). The time scale for the rebinding and unbinding of adhesions (seconds) [52] is much longer than the time scale for the membrane force to equilibrate (milliseconds) [17], so the membrane tension, \(\sigma_{m}\), is assumed to be spatially uniform along the cell boundary. The force per unit length integrated over the lamellipodium height is then given by \(\mathbf{F}_{\text{membrane}}=-2h\sigma_{m}H\mathbf{n}\), where \(H=1/h+\kappa\) is the total curvature given by the in-plane curvature \(\kappa\) and the lamellipodium radius \(h\) along the vertical direction to the substrate.
We further assume that the membrane tension depends linearly on the total area \(A(t)\) of the
cell as it evolves in time,
\[\frac{\mathrm{d}\sigma_{m}}{\mathrm{d}t}(t)=k_{\sigma}\frac{ \mathrm{d}A}{\mathrm{d}t}, \tag{4}\]
where \(k_{\sigma}\) is an effective stiffness that accounts for the extensibility of the membrane. Finally, the shape of the cell evolves according to the kinematic boundary condition
\[\frac{\partial\mathbf{x}}{\partial t}(\alpha,t)=(v_{p}-v_{r})\mathbf{n}, \tag{5}\]
where the normal velocity is determined by the difference between the actin polymerization velocity \(v_{p}\) and the retrograde velocity \(v_{r}\). Here, we treat the polymerization velocity as a constant along the cell edge following [53, 54], although it can be a function of the myosin density as hypothesized in some more detailed modeling approaches [51].
### Non-dimensionalization
We nondimensionalize the governing equations with the following scales. The characteristic time scale is given by the off-rate under zero force load \(1/k_{\text{off}}^{0}\). The mechanosensitive unbinding process provides a characteristic velocity scale \(v_{\beta}\) as mentioned in the previous subsection. The cell size is characterized by its average radius \(R_{0}\). The dimensionless variables are listed as follows:
\[t^{*}=tk_{\text{off}}^{0},\quad v_{r}^{*}=\frac{v_{r}}{v_{\beta}},\quad\mathbf{x}^{*}=\frac{\mathbf{x}}{R_{0}},\quad\sigma_{m}^{*}=\frac{2\sigma_{m}}{ \zeta_{0}v_{\beta}}. \tag{6}\]
The dimensionless parameters governing the system can be classified into two groups, namely parameters that characterize the stick-slip dynamics of the adhesions and parameters that characterize the general cell properties. We estimate their orders of magnitude from experiments and previous modeling approaches (see _SI_), as summarized in Table 1. The first group contains the dimensionless relative adhesion strength \(\zeta_{1}^{*}=\zeta_{1}/\zeta_{0}\) that compares the contribution of the adhesive friction (sticking) and the viscous friction (slipping), the dimensionless on-rate \(r_{\text{on}}=k_{\text{on}}/k_{\text{off}}^{0}\), and the dimensionless diffusion coefficient \(D^{*}=D/k_{\text{off}}^{0}R_{0}^{2}\). The second group contains the dimensionless lamellipodium height \(h^{*}=h/R_{0}\), the dimensionless length \(\epsilon=v_{\beta}/k_{\text{off}}^{0}R_{0}\) that compares the length scale provided by the off-rate and the retrograde velocity with the cell size, the dimensionless effective stiffness \(k_{\sigma}^{*}=2k_{\sigma}R_{0}/\zeta_{0}k_{\text{off}}^{0}\) that compares the cell stiffness to the adhesive friction, the dimensionless actomyosin contraction \(\sigma_{c}^{*}=\sigma_{c}/\zeta_{0}v_{\beta}\), and the dimensionless polymerization velocity \(v_{p}^{*}=v_{p}/v_{\beta}\).
The dimensionless governing equations can be summarized as follows, where the stars have been dropped for simplicity:
\[\frac{\partial\mathbf{x}}{\partial t}(\alpha,t)=\epsilon(v_{p}-r\log r )\mathbf{n}, \tag{7}\] \[\frac{\partial n}{\partial t}(\alpha,t)=r_{\text{on}}(1-n)-rn+D \frac{\partial^{2}n}{\partial s^{2}},\] (8) \[\frac{\mathrm{d}\sigma_{m}}{\mathrm{d}t}(t)=k_{\sigma}\frac{ \mathrm{d}A}{\mathrm{d}t}(t),\] (9) \[\sigma_{c}+\sigma_{m}(1+\kappa h)=r\log r+\zeta_{1}n\log r. \tag{10}\]
### Numerical implementation
To describe the geometry, the tangent angle, \(\theta\), and the arclength derivative, \(s_{\alpha}\), are introduced as independent variables instead of \(\mathbf{x}\). By specifying the arclength derivative as \(s_{\alpha}=L/2\pi\), the mesh points are kept equally spaced in arclength at every time step. The derivatives with respect to \(s\) and \(\alpha\) can be exchanged through \(\partial_{\alpha}=s_{\alpha}\partial_{s}\): this enables us to apply a finite difference scheme in the fixed \(\alpha\)-parametric domain, to which the curve is mapped as a circle and uniformly discretized in \(\alpha\) as \(\alpha_{i}=2\pi(i-1)/N,\;i=1,\cdots,N+1\). The method was validated by comparison to the linear stability results at short times (see Fig. S2 of the _SI Appendix_), and by comparison with a different scheme based on spline interpolation; excellent agreement was found between the two schemes, with the \(\theta\)-\(L\) method providing enhanced computational speed.
All simulations start from a circular configuration with a uniform fraction of bound linkers, which is perturbed initially, along with other corresponding physical quantities, using the first 100 Fourier modes with amplitudes varying randomly from \(-10^{-4}\) to \(10^{-4}\). Given the geometric and physical variables at time step \(t^{n}\), we update their values at \(t^{n+1}\) by the following steps: (1) Update the shape of the curve \(\mathbf{x}\) with the \(\theta\)-\(L\) formulation using an explicit Euler scheme. Compute other geometric quantities such as the normal vector and the curvature. (2) Update the membrane tension \(\sigma_{m}\) with the explicit Euler scheme. (3) Update the fraction of bound linkers \(n\) with a Crank-Nicolson scheme. (4) Update the off-rate by solving the force balance (10) iteratively using Newton's method and obtain the normal velocity. The spatial derivatives are discretized by central finite differences and the integrals are computed by the trapezoidal rule with end-correction. The number of grid points is taken as \(N=2000\) and the time step is set to \(\Delta t=10^{-3}\).
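To illustrate step (4), the following Python sketch (an illustrative implementation with assumed values, not the authors' code) solves the dimensionless force balance (10), \(\sigma_{c}+\sigma_{m}(1+\kappa h)=r\log r+\zeta_{1}n\log r\), for the off-rate \(r\) at each boundary point with Newton's method; the clamp \(r\geq 1\) reflects the assumption that mechanical load can only raise the off-rate above \(k_{\text{off}}^{0}\), and the sample inputs are arbitrary illustrative numbers.

```python
import numpy as np

def solve_off_rate(rhs, n, zeta1, r0=2.0, tol=1e-10, max_iter=50):
    """Newton iteration for r*log(r) + zeta1*n*log(r) = rhs at each grid point,
    where rhs = sigma_c + sigma_m*(1 + kappa*h) is the local driving term."""
    r = np.full_like(np.asarray(rhs, dtype=float), r0)
    for _ in range(max_iter):
        f = r * np.log(r) + zeta1 * n * np.log(r) - rhs
        df = np.log(r) + 1.0 + zeta1 * n / r          # derivative of the left-hand side
        step = f / df
        r = np.maximum(r - step, 1.0 + 1e-12)         # assumed constraint: r >= 1
        if np.max(np.abs(step)) < tol:
            break
    return r

# Illustrative inputs: sigma_c = 100, sigma_m = 20, kappa = 1, h = 0.01, n = 0.3, zeta_1 = 600
rhs = np.array([100.0 + 20.0 * (1.0 + 1.0 * 0.01)])
r = solve_off_rate(rhs, np.array([0.3]), zeta1=600.0)
v_r = r * np.log(r)                                    # normal retrograde speed in units of v_beta
print(r, v_r)
```

Because the left-hand side is monotonically increasing in \(r\) for \(r\geq 1\), the iteration converges to the unique admissible root at every grid point.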
## Results
### Initiation of cell motion through a stick-slip instability
We first analyze the linear stability of the model to uncover a mechanism for cell spontaneous symmetry breaking and motility initiation through an instability arising from the stick-slip dynamics of the mechanosensitive adhesions. A similar instability was discussed in [10] for the 1D case as a "stick-slip instability", which gave rise to persistent oscillations between protrusion and retraction phases by switching between sticking and slipping states. Here, we show that, when cell shape change is considered, this instability still exists but distinct modes of deformation are subject to distinct instability criteria due to the spatial effects.
| Dimensionless parameter | Approximate value |
| --- | --- |
| \(\zeta_{1}^{*}=\zeta_{1}/\zeta_{0}\) | \(400-600\) |
| \(r_{\text{on}}=k_{\text{on}}/k_{\text{off}}^{0}\) | \(10-60\) |
| \(D^{*}=D/k_{\text{off}}^{0}R_{0}^{2}\) | \(0.01-0.1\) |
| \(\epsilon=v_{\beta}/k_{\text{off}}^{0}R_{0}\) | \(0.001\) |
| \(h^{*}=h/R_{0}\) | \(0.01\) |
| \(\sigma_{c}^{*}=\sigma_{c}/\zeta_{0}v_{\beta}\) | \(100\) |
| \(k_{\sigma}^{*}=2k_{\sigma}R_{0}/\zeta_{0}k_{\text{off}}^{0}\) | \(1000\) |
| \(v_{p}^{*}=v_{p}/v_{\beta}\) | \(200\) |

Table 1: Range of dimensionless model parameters. See _SI Appendix_ for dimensional parameter values.
We take the base state for the analysis to be a stationary circle with radius \(\bar{R}=1\) where the retrograde velocity balances with the protrusion velocity \(v_{p}=\bar{r}\log\bar{r}\) everywhere, and the fraction of bound linkers is uniformly distributed along the edge with the steady-state value \(\bar{n}=r_{\rm on}/(r_{\rm on}+\bar{r})\). Henceforth, overbars are used to denote base-state variables. From the force balance (10), the base state surface tension then satisfies
\[\bar{\sigma}_{m}=\frac{1}{1+h/\bar{R}}(\bar{r}\log\bar{r}+\zeta_{1}\bar{n}\log \bar{r}-\sigma_{c}). \tag{11}\]
We consider small perturbations around the base state with the ansatz \(\phi=\bar{\phi}+\delta\phi=\bar{\phi}+\sum_{k}\phi_{k}{\rm exp}({\rm i}k\theta+ \lambda_{k}t)\). Here \(\bar{\phi}\) represents the base-state value of variable \(\phi\), and \(\phi_{k}\) and \(\lambda_{k}\) denote the initial magnitude of perturbation and corresponding dimensionless growth rate of the \(k^{\rm th}\) normal mode, respectively. Note that in our model, the surface tension is assumed to be uniform along the cell edge, so perturbations of the surface tension with modes \(k\neq 0\) are set to zero. Inserting the ansatz into the governing equations and linearizing the system for small perturbations yields eigenvalue problems for the growth rates \(\lambda_{k}\). For \(k=0\),
\[\Big{(}1+\log\bar{r}+\zeta_{1}\frac{\bar{n}}{\bar{r}}\Big{)} \lambda_{0}^{2}+\Big{\{}\Big{(}1+\log\bar{r}+\zeta_{1}\frac{\bar{n}}{\bar{r}} \Big{)}(r_{\rm on}+\bar{r})\] \[+\Big{[}\Big{(}1+\frac{h}{\bar{R}}\Big{)}2\pi\bar{R}k_{\sigma}\! -\!\frac{\epsilon h\bar{\sigma}_{m}}{\bar{R}^{2}}\Big{]}(1+\log\bar{r})\!-\! \zeta_{1}\bar{n}\log\bar{r}\Big{\}}\lambda_{0}\] \[+\Big{[}\Big{(}1+\frac{h}{\bar{R}}\Big{)}2\pi\bar{R}k_{\sigma}\! -\!\frac{\epsilon h\bar{\sigma}_{m}}{\bar{R}^{2}}\Big{]}(1\!+\!\log\bar{r})(r _{\rm on}\!+\!\bar{r})=0, \tag{12}\]
and for \(k\neq 0\),
\[\Big{(}1\!+\!\log\bar{r}\!+\!\zeta_{1}\frac{\bar{n}}{\bar{r}} \Big{)}\lambda_{k}^{2}\!+\!\Big{[}\Big{(}1\!+\!\log\bar{r}\!+\!\zeta_{1}\frac{ \bar{n}}{\bar{r}}\Big{)}\Big{(}r_{\rm on}\!+\!\bar{r}\!+\!\frac{Dk^{2}}{\bar{R }^{2}}\Big{)}\] \[+\frac{\epsilon h\bar{\sigma}_{m}}{\bar{R}^{2}}(k^{2}-1)(1+\log \bar{r})-\zeta_{1}\bar{n}\log\bar{r}\Big{]}\lambda_{k}\] \[+\frac{\epsilon h\bar{\sigma}_{m}}{\bar{R}^{2}}(k^{2}-1)(1+\log \bar{r})\Big{(}r_{\rm on}+\bar{r}+\frac{Dk^{2}}{\bar{R}^{2}}\Big{)}=0. \tag{13}\]
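To make these dispersion relations concrete, the short Python sketch below (illustrative only, not the authors' code; the parameter values loosely follow the caption of Fig. 2) computes the base state and then the two roots \(\lambda_{k}\) of the quadratic (13) for modes \(k\geq 1\); modes with a positive real part are linearly unstable.

```python
import numpy as np
from scipy.optimize import brentq

# Assumed parameter values close to those of Fig. 2 (illustrative)
eps, h, sigma_c, v_p = 1e-3, 0.01, 100.0, 50.0
zeta1, r_on, D, R = 600.0, 20.0, 0.01, 1.0

# Base state: v_p = r*log(r), n = r_on/(r_on + r), and Eq. (11) for the tension
r_b = brentq(lambda r: r * np.log(r) - v_p, 1.0 + 1e-9, 1e4)
n_b = r_on / (r_on + r_b)
sig_b = (r_b * np.log(r_b) + zeta1 * n_b * np.log(r_b) - sigma_c) / (1.0 + h / R)

for k in range(1, 16):
    a = 1.0 + np.log(r_b) + zeta1 * n_b / r_b
    relax = r_on + r_b + D * k**2 / R**2
    tension = eps * h * sig_b / R**2 * (k**2 - 1) * (1.0 + np.log(r_b))
    b = a * relax + tension - zeta1 * n_b * np.log(r_b)
    c = tension * relax
    lam = np.roots([a, b, c])           # the two growth rates lambda_k of mode k
    print(f"k = {k:2d}  lambda = {lam}")
```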
The governing equations are nonlinearly coupled and, consequently, they provide constraints on the initial perturbations (see the _SI_ for details). These constraints satisfy a quadratic equation for the perturbation amplitudes with two possible conjugate imaginary solutions, corresponding to two independent modes of perturbation, analogous to the one-dimensional case [10] where the two independent modes are the symmetric and the anti-symmetric modes.
The various Fourier modes represent different modes of deformation. Typical dispersion relations for parameter choices relevant to physiological conditions are plotted in Fig. 2. We find that the precise choice of parameters does not affect the qualitative behavior of the dispersion relation, which can be summarized as follows. The mode \(k=0\) describes a spatially homogeneous perturbation and corresponds to the global dilation or contraction of the cell. This is the only mode to be affected by the effective membrane stiffness \(k_{\sigma}\), and we find it to be always linearly stable at physiological values of the stiffness (\(k_{\sigma}\sim 10^{3}\)). Thus, membrane elasticity always acts to maintain the cell area at its base value in the linear regime. The mode \(k=1\) is the only mode that is not center-symmetric and captures translational motion of the center of mass; its growth rate is found to be purely real regardless of the choice of model parameters. Therefore, when this mode becomes unstable, the corresponding perturbation is amplified and leads to a symmetry breaking
in space that singles out a certain direction to initiate locomotion. Subsequent modes with \(k>1\) describe various shape deformations with increasingly shorter wavelengths. At relatively low wavenumbers, the dispersion relation typically displays two real positive growth rates. As \(k\) increases, these give way to two complex conjugate growth rates, suggesting that there can exist oscillations and traveling waves propagating along the edge, known as "stick-slip" waves, which are similar to the lateral waves predicted along a flat edge [10]. Finally, all modes beyond a certain wavenumber become stable, indicating that high-frequency perturbations will decay.
The dependence of the growth rates on the relevant system parameters can also be gleaned from Fig. 2. We first note that a Hopf bifurcation can occur for modes with high wavenumbers as the parameters vary, where pairs of real growth rates become complex-conjugate growth rates, similar to the 1D case [10]. We focus on the role of the three parameters that characterize adhesion kinetics. The relative adhesion strength \(\zeta_{1}\) and the on-rate \(r_{\rm on}\) are found to affect both the magnitude of the unstable growth rates as well as the number of unstable modes, while the effect of the diffusion coefficient \(D\) is to damp high-order modes. As \(\zeta_{1}\) increases, both the magnitude of the positive real growth rates and the number of unstable modes increase. The effect of the on-rate \(r_{\rm on}\), on the other hand, is nonmonotonic and exhibits a biphasic behavior. When the on-rate increases, the magnitudes of unstable growth rates first increase and then decrease.
Figure 2: Dispersion relations showing the real and imaginary parts of the growth rate \(\lambda_{k}\) as functions of wavenumber \(k\) for various choices of the parameters related to adhesion kinetics: (_A_) growth rates under different relative adhesion strengths \(\zeta_{1}\) with \(r_{\rm on}=20,\ D=0.01,\ \sigma_{c}=100\); (_B_) growth rates under different on-rates \(r_{\rm on}\) with \(\zeta_{1}=600,\ D=0.01,\ \sigma_{c}=100\); (_C_) growth rates under different diffusion coefficients \(D\) with \(\zeta_{1}=600,\ r_{\rm on}=20,\ \sigma_{c}=100\); (_D_) growth rates under different contraction strengths \(\sigma_{c}\) with \(\zeta_{1}=600,\ r_{\rm on}=20,\ D=0.01\). In all cases, the parameters related to the general cell properties are fixed as \(\epsilon=0.001,\ h=0.01,\ k_{\sigma}=1000,\ r_{p}=50\).
This suggests the possibility of a stick-slip instability similar to the 1D case [10], and provides a mechanism for the cell to oscillate between protruding (sticking) and retracting (slipping) states at the two distinct ends. Here, in two dimensions, certain modes of perturbation are amplified by the mechanosensitive nature of the adhesion clusters and their coupling with the retrograde flow, leading to the local adhesion sites switching between sticking and slipping motions in distinct regions. To make the biphasic relationship between friction and retrograde flow possible, the relative adhesion strength needs to be strong enough for the adhesive friction to play a role compared to the linear friction. Moreover, the rebinding rate cannot be either too small or too large, since in both cases the adhesive linkers either dissociate or rebind to the substrate too quickly. As a result, there is no possible stick-slip transition and the system remains linearly stable. That optimal locomotion efficiency occurs at intermediate adhesion strength was demonstrated in experiments by varying the concentration of integrins and blocking integrins [55].
Our model assumed a uniform contractile force along the cell edge as a way to single out the role of mechanosensitive adhesions in driving symmetry breaking. The effect of actomyosin contraction on stability is shown in Fig. 2\(D\), where it is found to have a weak but destabilizing effect on the system. This finding is qualitatively consistent with previous contraction-driven cell motility models and with various experimental observations [56, 33, 57].
### Instability of mode \(k=1\) is responsible for the formation of the front and rear in the early stage of locomotion
As already mentioned above, the instability of the \(k=1\) mode provides a mechanism for the self-polarization of the cells and initiation of motility. This is further demonstrated in our numerical simulations. A typical temporal evolution of a cell shape at short times is illustrated in Fig. 3\(A\), where the cell edge is colored by the fraction of bound linkers. Starting from a circular shape and a randomly perturbed initial distribution of bound linkers at \(t=0\), high-frequency fluctuations are found to decay very rapidly (\(t=0.5\)) as predicted by the stability analysis. The \(k=1\) mode, which has the largest growth rate, grows simultaneously, resulting in the formation of a potential cell front where more linkers are bound to the substrate and of a potential rear where fewer linkers are bound (\(t=1\)). The polymerization velocity then exceeds the retrograde velocity in the front while the opposite occurs at the rear, leading to expansion in the front, shrinkage at the rear, and translocation of the center of mass (\(t=1.5\)). The front further keeps protruding while the rear keeps retracting, and the cell ultimately evolves to a fan-like shape with a smooth leading edge (\(t=2\)) very similar to the characteristic shape of coherent keratocytes [37].
The spatiotemporal evolution of the fraction of bound linkers, off-rate, friction force and curvature during these early stages are plotted as kymographs in Fig. 3\(B\)-\(E\). As a result of the growth of the \(k=1\) mode, the retrograde velocity is low in the front of the cell and high at the rear. Consequently, strong adhesion develops in the front where the off-rate is low and hence most linkers are bound to the substrate. Meanwhile the off-rate is high with most linkers unbound along regions at the back and sides, in agreement with the traction stress distribution revealed by experiments in migrating cells [58]. Comparing the spatiotemporal distribution of the friction and the curvature (Fig. 3\(D\) and \(E\)) shows that regions with high friction coincide with regions with high curvature as required by the force balance equation. This is also consistent with experiments [58] where traction stress concentrates at the sides. Note that unlike many previous modeling approaches [37, 40] where the breaking of front-rear symmetry and adhesion distribution were treated as inputs to trigger the initiation of motion and turning, our model provides a route for this distribution of adhesions to emerge spontaneously through the instability of the \(k=1\)
mode and to become further amplified by the feedback loop between mechanosensitive adhesions, membrane tension, and geometry.
### The stability diagram predicts various motility modes
Next, we turn our focus to long-time dynamics and the effect of adhesion kinetic parameters. While the initiation of locomotion through the instability of the \(k=1\) mode is generic, various cell motility modes are observed at long times depending on the relative adhesion strength \(\zeta_{1}\) and on-rate \(r_{\text{on}}\). We categorize them in a phase diagram in Fig. 4\(A\), in which the blue shaded regions highlight the linearly unstable regimes for the Fourier modes with wavenumbers \(k=1,\ 2,\ 5,\ 10,\ 15\), while the dots denote the long-time motility modes. Each dot represents five simulations with different random initial perturbations. A systematic exploration of the parameter space allowed us to classify the motility modes by the following criteria: (i) if the cell moves in a straight line with no turns over the entire simulation time range (\(t=100\)), we call it a "gliding" mode (Fig. 4\(B\) and \(C\)); (ii) if the cell turns only towards one direction, we call it a "rotational" mode (Fig. 4\(D\)); (iii) if the cell turns alternatively between left and right resulting in a lateral oscillation, we call it a "zigzag" mode (Fig. 4\(E\) and \(F\)); (iv) for some combinations of parameters and for different initial conditions, we observed "mixed" modes where the cell can exhibit different types of motions including the three cases mentioned above; (v) finally, there exist more complicated types of motion that do not fit into any of the cases above and that we label as "others" (Fig. 4\(G\)-\(J\)). Most of the motility modes described here have been observed in experiments as well as in previous models: the gliding motion is well known in fast-moving fish keratocytes [59]; the zigzag motion is analogous to the "bipedal" motion of oscillating keratocytes [37, 38]; and spontaneous turning has also been observed in keratocytes even in the absence of external cues [35, 39].
Figure 3: Spatiotemporal evolution of a fan-shaped cell during the initiation of motion (\(t=0\) to \(t=2\)) from a nonlinear numerical simulation. (\(A\)) Time evolution of the cell shape, where the edge is colored by the fraction of bound linkers. The two black dots show the initial and current center-of-mass positions. (\(B\))-(\(E\)) Kymographs of the fraction of bound linkers \(n\), off-rate \(r\), total friction \(r\log r+\zeta_{1}n\log r\), and local curvature \(\kappa\), as functions of normalized arclength \(s/L\) and time \(t\). Parameter values: \(\epsilon=0.001,\ h=0.01,\ k_{\sigma}=1000,\ \sigma_{c}=100,\ r_{p}=50,\ r_{\text{on}}=25,\ \zeta_{1}=600,\ D=0.05\).
Although the dynamics at long times are essentially nonlinear, Fig. 4\(A\) shows a clear correlation between the stability diagram and the motility modes. In regions of the parameter space with fewer unstable modes, where the on-rate is either relatively low or high, the motion tends to be uniaxial with the cell remaining polarized with left-right symmetry. More complex dynamics arise for intermediate on-rates, which we attribute to the stick-slip instability. Rotational and zigzag modes arise in this regime, and are characterized by broken left-right symmetry in addition to front-rear asymmetry, resulting in quasi-periodic turns in the cell trajectory. We take a closer look at these turns in Fig. 5\(A\) and \(B\), where we plot the temporal evolution of the membrane tension together with the corresponding shapes and center-of-mass velocities within one oscillation period for the rotational and zigzag modes, respectively. At the initiation of a turn, the distribution of bound linkers is first observed to become asymmetric, with an increase in the fraction of bound linkers on one side of the cell. This results in enhanced sticking on that side along with slipping on the opposite side, allowing the cell to turn in that direction, and this mechanism is reminiscent of the "sticking wave" predicted in the 1D flat case [10]. As the cell shape becomes asymmetric and the cell rotates, the membrane tension is found to increase and reaches a peak value before relaxing again in the later stage of the turn. In the zigzag case, the high adhesion region alternately switches between the left and right sides of the cell, while for the rotational cases the direction in which the high adhesion regions form remains fixed, with equal probabilities for a cell to turn clockwise or counterclockwise. The Hopf bifurcation of higher-order modes may be responsible for this transition from uniaxial translation to oscillatory motion. Moreover, we find that there is a strong correlation between cell shapes and motility modes. For the gliding cases, the cell adopts either a triangular-like shape (for small on-rates, Fig. 4_B_) or a nearly circular shape (for large on-rates, Fig. 4_C_), while for the zigzag and rotational motions, the cells are more fan-shaped with curved fronts (Fig. 4_D-F_). Another key observation is that, regardless of the randomness in the initial perturbations, the emerging shapes corresponding to a given set of parameters are robust; we further elaborate on this point below.
### Robustness of motility modes
The observations above suggest that the feedback loop between mechanosensitive adhesions and membrane tension uniquely determines the emergent motility mode for a given set of system parameters, irrespective of the initial condition or history of the system. To further demonstrate the robustness of these modes, we vary the parameters during a simulation to analyze transitions between modes. A typical transition is shown in Fig. 6_A_: starting from a cell performing a gliding motion, we abruptly vary the value of \(r_{\mathrm{on}}\) from 16 to 30 at \(t_{0}=25\), causing it to switch to a zigzag mode. As shown in the snapshots, the cell glides smoothly at first, and, after the change in \(r_{\mathrm{on}}\), its shape quickly adjusts and starts undergoing zigzags. The oscillation frequency and cell morphology after the transition are similar to the corresponding zigzag motions with \(r_{\mathrm{on}}=30\) starting from a randomly perturbed initial condition. The time instant \(t_{0}\) at which we start altering the parameters, the period of time \(\Delta t\) within which we gradually alter the parameters, and the intermediate states do not significantly affect the zigzag dynamics (Fig. 6_B_). To quantitatively compare the resulting zigzag modes emerging through different routes, we plot the temporal evolution of the membrane tension in Fig. 6\(B\) for different choices of \(t_{0}\) and \(\Delta t\). After increasing the on-rate and the adhesion strength, the cell expands, leading to an increase in the membrane tension towards the tension value of the new state, regardless of the history of states. This is in agreement with the one-dimensional case [10] as well as experiments [19] where the membrane tension in motile cells is determined by the adhesion strength and cytoskeletal forces, and increases as cell-substrate adhesion strengthens--in
our model, as either the on-rate or the relative adhesion strength increases.
Similar transitions are also observed when we change parameters between other modes (see _SI Appendix_, Movies S10-S12). In all cases, after changing the parameters, the cell first goes through a transient state and then soon evolves into the motility mode selected by the new set of parameters in the phase diagram of Fig. 4. This further suggests that these motility modes are attractors of the dynamical system, with the model solutions either approaching fixed points or stable limit cycles in phase space.
### Effect of adhesion parameters
Finally, we investigate the effect of varying adhesion kinetic parameters on cell geometry, mechanics and locomotion. For various combinations of \(r_{on}\) and \(\zeta_{1}\), we calculate the time-average circularity \(4\pi A/L^{2}\) (with a maximum value of 1 corresponding to a perfect circle), the average membrane tension, and the average distance of the center of mass from the origin \(x_{c}=\sum_{i=1}^{M}|\mathbf{x}_{c}^{(i)}-\mathbf{x}_{c}^{(0)}|/M\), and plot them as phase diagrams in Fig. 7 where the motility modes are also labeled. Note that the precise boundary between each mode changes slightly as we vary the diffusion coefficient \(D\), but the qualitative behavior remains unchanged. For the gliding modes with low on-rates, the cell adopts a nearly triangular shape with low circularity, with only a weak dependence on \(\zeta_{1}\). As the on-rate increases, the shapes all become more circular regardless of motility mode as seen in Fig. 7\(A.\) A possible explanation for this behavior is that, as the on-rate increases, the difference in the fraction of bound linkers between the front and the rear decreases, and therefore the difference in the sticking and slipping velocity also decreases, resulting in less deformed shapes. The average membrane tension shows little correlation with the various motility modes in Fig. 7\(B,\) and is mainly determined by the on-rate and the relative adhesion strength, consistent with the discussion above [10, 19].
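For reference, the circularity measure \(4\pi A/L^{2}\) can be evaluated from a discretized cell outline as in the short sketch below (an illustration assuming a simple closed polygonal boundary, not the authors' code).

```python
import numpy as np

def circularity(x, y):
    """Circularity 4*pi*A/L^2 of a closed polygonal outline (1 for a perfect circle)."""
    A = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))   # shoelace area
    xc, yc = np.append(x, x[0]), np.append(y, y[0])                        # close the polygon
    L = np.sum(np.hypot(np.diff(xc), np.diff(yc)))                         # perimeter
    return 4.0 * np.pi * A / L**2

theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
print(circularity(np.cos(theta), np.sin(theta)))                # ~1.0 for a circle
print(circularity(2.0 * np.cos(theta), 0.5 * np.sin(theta)))    # < 1 for an elongated shape
```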
The ability of a cell to explore space depends strongly on its motility mode as illustrated in Fig. 7\(C,\) where we show the phase diagram for the average distance traveled by the cell. The gliding and the zigzag modes are more unidirectional and thus allow the cell to travel for a longer distance than in the rotational mode. In the case of zigzag trajectories, the left-right oscillations can be accompanied by a circular motion when the relative adhesion strength and the on-rate increase, leading to a decrease in the distance traveled. Ultimately, the zigzag mode gives way to trajectories labeled as "others" in Fig. 7\(C,\) which are characterized by irregular turns and may result from the superposition of multiple oscillatory periods.
## Discussion
In this work, we have extended the one-dimensional model coupling actin polymerization, adhesion, membrane tension, and shape change described in [10] to two dimensions to elucidate the role of the coupling between tension and adhesion in the determination of cell shape, the initiation of migration, and the selection of motility modes. With this minimal model, we showed that motile cells can display rich dynamical behaviors by relying on a relatively simple set of physical mechanisms and couplings. We first performed a linear stability analysis and demonstrated that the \(k=1\) mode, describing translocation of the center of mass, is always the most unstable and enables the cell to acquire its polarity and select a direction in space for migration through spontaneous symmetry breaking. We then conducted nonlinear numerical simulations, and showed that, driven by a constant actin polymerization rate, the model was able to capture various motility
modes commonly seen in experiments, such as unidirectional gliding, bipedal motion and turning. We identified the nonlinear coupling between stochastic adhesion and membrane tension as the key mechanism involved in the selection of motility modes. Specifically, the relative adhesion strength and on-rate were shown to govern membrane tension, which in turn connects spatially distributed adhesions and thus couples the retrograde flow with shape change and adhesion kinetics. Certain fluctuation modes are amplified by this feedback loop, resulting in directional motion and oscillatory behaviors.
The basic physical mechanisms involved in this process are summarized in Fig. 8. A local increase in the off-rate \(r\) can have two distinct effects: a decrease in the fraction of bound linkers \(n\) and a local retraction of the cell edge. The latter will lead to a global decrease in cell area and, consequently, to a decrease in membrane tension due to the membrane elasticity. The force balance (10) dictates that the decrease in \(n\) will cause an increase in the off-rate \(r\), whereas the decrease in membrane tension will cause \(r\) to decrease. Therefore, there is a competition between the membrane tension and the fraction of bound linkers in the regulation of the off-rate, tuned by the relative adhesion strength \(\zeta_{1}\). When \(\zeta_{1}\) is large, the effect of \(n\) dominates, resulting in a positive feedback on \(r\), rendering the system unstable and driving the onset of motility. On the other hand, when \(\zeta_{1}\) is small, the situation is the opposite and the system is stable. These predictions are borne out by the results of our stability analysis and numerical simulations.
Our model predictions are consistent with experimental observations of symmetry breaking and cell motility modes. Using experimental measurements and mechanical models, Barnhart _et al._ showed that two feedback loops--one between actin flow and adhesions and the other between actin flow and myosins--are required to initiate motility in fish keratocytes [33]. Similarly, using a phase-field approach, Shao _et al._[28] showed that coupling between adhesions, actin flow, and myosin contraction is required to recapitulate different experimental observations of keratocyte shape change. In the present work, we showed that the feedback between adhesion and membrane tension, for constant actin flow, is sufficient to capture both the spontaneous initiation of motility by symmetry breaking and the emergence of complex motility modes. An interesting and as yet unexplained experimental observation that our model may help shed light on is that, at low temperatures, the trajectories of fast-moving keratocytes tend to be more unidirectional, while their motion is more circular and less persistent at high temperatures [5]. Previous studies have proposed a "steering wheel" mechanism [39, 40] whereby the turning of a cell is caused by asymmetrically distributed adhesion sites at the rear, but the relationship between the turning and temperature is unclear. This effect can be explained by the molecular clutch approach for adhesions in our model. According to Bell's theory [44], the reaction rate increases with temperature, so binding and unbinding processes should be more active at high temperatures. By this effect, an increase in temperature drives an increase in the on-rate and relative adhesion strength, and thus drives the transition from gliding to zigzag, rotational, and other modes.
While our model captures many experimental observations and makes testable predictions on the interaction between membrane tension and adhesion, it has certain limitations. In particular, it does not account for the complex molecular machinery underlying actomyosin contraction or associated signaling pathways and only contains a single mechanical feedback loop. We also note that even though we obtain fan-shaped cells in the early stages of motion, the shapes ultimately become triangular for gliding cells, which could be a result of the constant actin polymerization velocity assumed in our model and of ignoring actin remodeling events [5, 11, 13, 51]. The contraction generated by the distinct distribution of myosin motors in crawling cells is also neglected in our model. In some studies, contraction alone can be shown to generate spontaneous symmetry breaking and motions [56]. Finally, like other cell motility models [27, 29, 33], we have assumed
that membrane tension is spatially uniform, yet spatial variations in tension have been observed in cells in experiments [60]. An extension of our approach to incorporate some of these additional details, as well as internal active stresses, has the potential to further enrich the model.
|
2309.10878 | DeepliteRT: Computer Vision at the Edge | The proliferation of edge devices has unlocked unprecedented opportunities
for deep learning model deployment in computer vision applications. However,
these complex models require considerable power, memory and compute resources
that are typically not available on edge platforms. Ultra low-bit quantization
presents an attractive solution to this problem by scaling down the model
weights and activations from 32-bit to less than 8-bit. We implement highly
optimized ultra low-bit convolution operators for ARM-based targets that
outperform existing methods by up to 4.34x. Our operator is implemented within
Deeplite Runtime (DeepliteRT), an end-to-end solution for the compilation,
tuning, and inference of ultra low-bit models on ARM devices. Compiler passes
in DeepliteRT automatically convert a fake-quantized model in full precision to
a compact ultra low-bit representation, easing the process of quantized model
deployment on commodity hardware. We analyze the performance of DeepliteRT on
classification and detection models against optimized 32-bit floating-point,
8-bit integer, and 2-bit baselines, achieving significant speedups of up to
2.20x, 2.33x and 2.17x, respectively. | Saad Ashfaq, Alexander Hoffman, Saptarshi Mitra, Sudhakar Sah, MohammadHossein AskariHemmat, Ehsan Saboori | 2023-09-19T18:58:38Z | http://arxiv.org/abs/2309.10878v1 | Saad Ashfaq
###### Abstract
The proliferation of edge devices has unlocked unprecedented opportunities for deep learning model deployment in computer vision applications. However, these complex models require considerable power, memory and compute resources that are typically not available on edge platforms. Ultra low-bit quantization presents an attractive solution to this problem by scaling down the model weights and activations from 32-bit to less than 8-bit. We implement highly optimized ultra low-bit convolution operators for ARM-based targets that outperform existing methods by up to 4.34\(\times\). Our operator is implemented within Deeplite Runtime (DeepliteRT), an end-to-end solution for the compilation, tuning, and inference of ultra low-bit models on ARM devices. Compiler passes in DeepliteRT automatically convert a fake-quantized model in full precision to a compact ultra low-bit representation, easing the process of quantized model deployment on commodity hardware. We analyze the performance of DeepliteRT on classification and detection models against optimized 32-bit floating-point, 8-bit integer, and 2-bit baselines, achieving significant speedups of up to 2.20\(\times\), 2.33\(\times\) and 2.17\(\times\), respectively.
## 1 Introduction
Deep learning models for computer vision are being extensively deployed in various domains and industries due to substantial improvements in the accuracy of deep convolutional neural networks (CNNs). CNN architectures including VGG [], ResNet [], Inception [], DenseNet [] and YOLO [] have demonstrated exceptional performance on image classification and object detection tasks. The widespread adoption of deep learning solutions in computer vision has also coincided with the growth of edge computing [], promising the potential of bringing machine learning to low-power edge devices. However, the enhancements in CNN model accuracy have come at the expense of increased model complexity.
Compiler passes in DeepliteRT enable fake ultra low-bit quantized models trained with various ML frameworks to be executed on ARM CPUs without any additional changes in the training and inference paths. With support for mixed precision inference, layers in the network that are sensitive to quantization can be kept at higher precision (FP32, INT8, etc.) while insensitive layers can be reduced to ultra low-bit in order to minimize the accuracy drop resulting from quantizing all layers in the model. To summarize, this paper makes the following contributions:
* We implement high performance bit-serial convolution kernels that achieve a speedup of up to 4.34\(\times\) over existing ultra low-bit methods on ARM-based platforms.
* We present DeepliteRT, a compiler and runtime package for ultra low-bit inference on ARM CPUs. DeepliteRT automates the process of converting fake-quantized convolution layers from different machine learning frameworks used for quantization-aware training into ultra low-bit convolution kernels. Quantized models can be exported with the weights and activations still in full-precision without the need for custom operator definitions as compiler passes in DeepliteRT can handle the necessary casting, layout transforms and operator conversions during compilation. DeepliteRT provides a framework-agnostic end-to-end solution for ultra low-bit CNN deployment on edge devices eliminating the need to modify any code in the inference or runtime path.
* We perform a comprehensive evaluation of DeepliteRT on classification and detection models for both ARMv7 and ARMv8 targets, achieving significant performance improvements of up to 2.20\(\times\), 2.33\(\times\) and 2.17\(\times\) over highly optimized FP32, INT8 and ultra low-bit baselines, respectively.
## 2 Related Work
### Ultra Low-bit Quantization
Quantization methods can be broadly categorized into uniform and non-uniform as well as quantization-aware training (QAT) and post-training quantization (PTQ). Uniform quantization refers to the case where the floating-point weights are quantized to integer values with a linear scaling from the integer to floating-point domain. The benefit of these methods is that operations can be performed in the integer domain and quickly converted to the floating-point domain via multiplication of a scaling factor. Non-uniform quantization removes this restriction, allowing for more flexibility in the mapping from floating-point to integer data.
QAT quantizes weights and activations while training the model to better simulate the model's performance after quantized deployment. PTQ methods train a full-precision model without regard for quantization, and then quantize the model with minimal access to the training dataset. State-of-the-art ultra low-bit quantization methods, shown in Table 1, make use of QAT to offset the loss of precision when reducing precision to less than 8 bits. LSQ
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Top-1**} & \multicolumn{5}{c}{**Top-1 Accuracy@2-bit**} \\ & & **Accuracy@32-bit** & & **PACT** (2018) & **LQ-NET** (2018) & **QIL** (2019) & **PACT-SAWB** (2019) & **LSQ** (2020) \\ \hline ResNet18 & 70.5\% & 64.4\% & 65.2\% & 65.7\% & 67.0\% & 67.9\% \\ ResNet50 & 76.9\% & 72.2\% & 71.5\% & & 74.2\% & 74.6\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: 2-bit accuracy on ImageNet with different QAT methods.
[11] is a simple yet effective quantization method which takes advantage of both uniform quantization and QAT to quantize models to as low as 2 bits with minimal accuracy degradation. For example, ResNet18 quantized to 2 bits with LSQ only incurs a 2.4% drop in accuracy relative to full-precision, but offers 16\(\times\) compression per quantized layer.
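As a concrete illustration of uniform quantization with a linear scale, the following minimal NumPy sketch mimics the fake quantization applied during QAT: values are rounded to a symmetric signed grid (e.g. {-2, -1, 0, 1} at 2 bits, matching the zero-mapped weight levels discussed later) but stored back in floating point. The scale value is illustrative only; methods such as LSQ additionally learn the scale during training.

```python
import numpy as np

def fake_quantize(x, bits, scale):
    """Uniform fake quantization: restrict values to a discrete grid defined by
    a linear scale, then store them back in floating point (as in QAT)."""
    qmax = 2 ** (bits - 1) - 1          # e.g. +1 for 2-bit signed weights
    qmin = -2 ** (bits - 1)             # e.g. -2
    q = np.clip(np.round(x / scale), qmin, qmax)
    return q * scale                    # dequantize back to FP32

w = np.random.randn(8).astype(np.float32)
print(fake_quantize(w, bits=2, scale=0.5))  # only values in {-1.0, -0.5, 0.0, 0.5}
```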
### Ultra Low-bit Inference
Most previous works on sub-8-bit inference on CPU architectures utilize the bit-serial method [1][2] for dot product computation. Considering binary vectors with unipolar (unsigned) encoding where each input value is either 0 or 1, the bit-serial dot product is given by Eq. (1a). A bit-wise AND operation gives the element-wise product of the binary inputs and the popcount operation, that counts the number of bits set to 1, performs the accumulation. The binary case can easily be extended to larger bit-widths by slicing the inputs into binary vectors and performing a summation of the bit-serial dot products over all possible bit-sliced combinations. The corresponding equation for an M-bit weight and an N-bit activation vector is given in Eq. (1b) where operations are performed across bit-planes (\(w_{m}\) and \(a_{n}\)).
\[\vec{w}\cdot\vec{a}=popcount(\vec{w}\;\;\&\;\;\vec{a}) \tag{1a}\] \[\vec{w}\cdot\vec{a}=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}(popcount( \vec{w_{m}}\;\;\&\;\;\vec{a_{n}}))<<(n+m) \tag{1b}\]
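A minimal NumPy sketch of Eq. (1b), assuming unipolar (unsigned) encoding for both inputs, may help make the bitplane bookkeeping concrete; popcount is emulated with a sum over the AND of the bitplanes, and the result is checked against a standard dot product.

```python
import numpy as np

def bitserial_dot(w, a, w_bits, a_bits):
    """Unipolar bit-serial dot product of Eq. (1b): slice the unsigned inputs
    into bitplanes, AND them pairwise, popcount, and shift-accumulate."""
    acc = 0
    for m in range(w_bits):
        w_plane = (w >> m) & 1                    # m-th bitplane of the weights
        for n in range(a_bits):
            a_plane = (a >> n) & 1                # n-th bitplane of the activations
            acc += int(np.sum(w_plane & a_plane)) << (n + m)   # AND + popcount
    return acc

rng = np.random.default_rng(0)
w = rng.integers(0, 4, size=64)                   # 2-bit unsigned weights
a = rng.integers(0, 4, size=64)                   # 2-bit unsigned activations
assert bitserial_dot(w, a, 2, 2) == np.dot(w, a)  # matches the standard dot product
```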
This bit-serial approach is implemented within TVM for dense and convolution layers in [1] and [1] with an average speedup of 1.9\(\times\) for a 2-bit ResNet18 network over an optimized FP32 baseline on the ARM Cortex-A53 CPU in the Raspberry Pi 3B. Riptide [1] also uses the bit-serial kernels in TVM along with fusion, vectorization and tiling optimizations for binary networks to achieve considerable latency improvements over full-precision models on the Cortex-A53. Bitflow [1] presents another bit-serial implementation of a binary VGG network for Intel CPUs that is even faster than the corresponding full-precision CNN tested on a high-performance GPU. There have also been initiatives in this space that are not based on the bit-serial method including ULPPACK [1], BiQGEMM [1] and DeepGEMM [1].
## 3 Bit-serial Convolution
### Bitpacking
Binary quantization approaches [1] can result in an unacceptable accuracy loss due to the use of a single bit for weight and activation values. To counter this, the bit-serial method can be extended to multiple bits by slicing the input weights and activation into separate bitplanes depending on the bit-width. This is illustrated in Fig. 1 for the 2A2W configuration (2 bits for activations and 2 bits for weights). Each value in the input data is first broken down into its constituent bits, creating bitplanes at every bit position. A bitplane holds the corresponding bit from different input values; for instance, bitplane 0 for weights stores the least significant bits across the weight values. Bitplanes can be compactly stored into standard data types such as 8-bit unsigned integers through the process of bitpacking. Assuming unipolar encoding for the 2-bit weights and activations, the bit-serial dot product can then be computed using Eq. (1b) producing the same result as a standard dot product as shown in Fig. 1. Based on our experiments, the bitpacking operation is not a major bottleneck consuming only 2-4% of the overall execution time in the bit-serial computation.
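The bitpacking step described above can be sketched in a few lines of NumPy: each bitplane is extracted and packed eight values per byte. The packing order shown here is an assumption for illustration; the actual kernels choose a layout that suits the vectorized popcount.

```python
import numpy as np

def bitpack(x, bits):
    """Slice ultra low-bit values into bitplanes and pack each plane into
    uint8 words, eight input values per byte."""
    planes = []
    for b in range(bits):
        plane = ((x >> b) & 1).astype(np.uint8)   # bitplane b across all values
        planes.append(np.packbits(plane))         # 8 binary values -> one byte
    return np.stack(planes)                       # shape: (bits, ceil(len(x)/8))

x = np.array([3, 1, 0, 2, 2, 1, 3, 0], dtype=np.uint8)  # eight 2-bit values
print(bitpack(x, bits=2).shape)                          # (2, 1): two planes, one byte each
```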
### Optimized bit-serial dot product
Eq. (1b) assumes a unipolar encoding scheme with unsigned values for both weights and activations. Recent works [][][] typically employ a hybrid unipolar-bipolar scheme with unipolar activations and bipolar (signed) weights producing quantized models with higher accuracy. The nn.bitserial_conv2d operator in TVM implements a convolution kernel for this hybrid scheme that calculates the bit-serial dot product as shown in Eq. (2), providing an open-source SOTA baseline for comparison with our work.
\[\vec{w}\cdot\vec{a}=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}(popcount(\vec{w_{m}}~{}~ {}\&~{}~{}\vec{a_{n}})-popcount(\neg\vec{w_{m}}~{}~{}\&~{}~{}\vec{a_{n}}))<<(n +m) \tag{2}\]
Compared to the purely unipolar case in Eq. (1b), this version doubles the number of popcount instructions adding considerable latency to the dot product calculations. Moreover, the weights can not take on the value 0 since this bipolar scheme distributes the quantization levels around 0. For example, in the case of 2 bits, each weight value will lie in the discrete set {-3, -1, 1, 3}. Such a representation introduces error when quantizing zero values, which is particularly harmful for common operations such as zero-padding and ReLU [].
To address these drawbacks, we propose a novel bit-serial computation method in Eq. (3) for the hybrid scheme. Our approach reduces the number of popcount operations per dot product to one. It also requires the same number of overall instructions as the unipolar variant except for the most significant weight bit which has a slight overhead due to a constant multiplication. Our scheme also enables zero mapping of the signed weight values. For instance, 2-bit weights now fall in the set {-2, -1, 0, 1} providing compatibility with high accuracy quantization techniques such as LSQ that require zero mapping for the weights. This bit-serial dot product is the building block of our bit-serial convolution operator dlrt_bitserial_conv2d. With optimizations in kernel and data vectorization, loop reordering, and parallelization, dlrt_bitserial_conv2d achieves substantial performance uplifts over TVM's nn.bitserial_conv2d as shown in Fig. 2.
\[\vec{w}\cdot\vec{a}=\begin{cases}-1\times\sum_{n=0}^{N-1}(popcount(\vec{w_{M- 1}}~{}~{}\&~{}\vec{a_{n}}))<<(n+m),&\text{if $m=M-1$}\\ \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}(popcount(\vec{w_{m}}~{}~{}\&~{}\vec{a_{n}}))<< (n+m),&\text{otherwise}\end{cases} \tag{3}\]
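The following NumPy sketch checks Eq. (3) against a standard signed dot product. The key observation is that zero-mapped signed weights such as {-2, -1, 0, 1} are exactly the two's-complement interpretation of the 2-bit patterns, so only the most significant weight plane needs a negated contribution; this is an illustrative reference implementation, not the vectorized ARM kernel.

```python
import numpy as np

def dlrt_bitserial_dot(w_signed, a, w_bits, a_bits):
    """Hybrid-scheme dot product of Eq. (3): signed (zero-mapped) weights,
    unsigned activations, one popcount per bitplane pair; the most
    significant weight plane enters with a negative sign."""
    w = w_signed & ((1 << w_bits) - 1)            # reinterpret as M-bit two's complement
    acc = 0
    for m in range(w_bits):
        sign = -1 if m == w_bits - 1 else 1
        w_plane = (w >> m) & 1
        for n in range(a_bits):
            a_plane = (a >> n) & 1
            acc += sign * (int(np.sum(w_plane & a_plane)) << (n + m))
    return acc

rng = np.random.default_rng(1)
w = rng.integers(-2, 2, size=64)                  # 2-bit signed weights in {-2,-1,0,1}
a = rng.integers(0, 4, size=64)                   # 2-bit unsigned activations
assert dlrt_bitserial_dot(w, a, 2, 2) == np.dot(w, a)
```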
As opposed to the nn.bitserial_conv2d kernel that is only defined for ARMv7,
Figure 1: Input weight and activation values are sliced into bitplanes and bitpacked within unsigned 8-bit integers enabling dot product calculation using bitwise operations.
we implement both 32-bit and 64-bit dlrt_bitserial_conv2d kernels, enabling deployment on a broader range of 32-bit ARMv7 and 64-bit ARMv8 platforms.
## 4 DeepliteRT
Machine learning frameworks used for ultra low-precision QAT such as PyTorch [] and TensorFlow [] produce quantized models with extra operators relative to the full-precision network to handle the quantization and dequantization of model weights and activations. Assuming uniform quantization, these operators including addition, subtraction, division, multiplication, clipping and rounding are generally used to convert the floating-point data to integer before quantized layers and integer data back to floating-point after quantized layers. Inference engines such as ONNX Runtime [] offer native support for these operators as they act on standard data types (FP32, INT16, INT8, etc.). However, quantized nodes such as convolution and dense layers are typically fake-quantized during QAT, restricting the weights and activations to a discrete set but still storing them in FP32. To realize ultra low-bit deployment on target hardware, custom operators and attributes for these layers have to be added by the ML framework which need to be then parsed and lowered to corresponding low-level kernels by the inference engine. These modifications in the ML and runtime frameworks require some level of expertise in both training and inference domains. Moreover, the changes made for one ML framework are not portable to a different framework, making quantized ultra low-bit model deployment inaccessible to most practitioners.
### Compiler passes
nn.conv2d is the operator for 2D convolution in TVM's Relay IR. Convolution layers from models trained with different ML frameworks are internally converted into nn.conv2d by the appropriate frontend. For instance, tf.nn.conv2d from a TensorFlow model,
torch.nn.Conv2d from a PyTorch model and Conv from an ONNX model are all translated to nn.conv2d. We define a sequence of compiler passes in DeepliteRT to convert a fake quantized convolution layer represented by nn.conv2d in Relay IR into our optimized bit-serial convolution operator dlrt_bitserial_conv2d as shown in Fig. 3.
**convert_conv2d_bitserial:** This custom pass converts nn.conv2d nodes for quantized layers into dlrt_bitserial_conv2d nodes in the IR. It also casts the input weights and activations into integer and the resulting convolution output back to floating-point.
**transform_layout:** This pass is invoked to change the layout for activations to NHWC and the layout for weights to HWIO as required by the low-level
dlrt_bitserial_conv2d kernel. The transformation is only performed if the activations and/or weights are not already in the required layouts.
**bitpack_weights:** This custom pass adds nn.bitpack operators in the Relay IR for the bitpacking of weights during compilation prior to bit-serial convolution. The bitpacking of activations is handled by the dlrt_bitserial_conv2d operator during inference since the activation values are not available offline.
**fold_constant:** This pass is used to perform all the computations on weights during compilation as they are compile-time constants. The result of casting the weights to integer, transforming their layout and bitpacking them is then simply passed as a constant to the dlrt_bitserial_conv2d operator.
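For orientation, the sketch below shows how such a pipeline might be composed with TVM's public Relay pass infrastructure. Only the standard ConvertLayout and FoldConstant transforms are used here; the DeepliteRT-specific passes (convert_conv2d_bitserial and bitpack_weights) are internal and are only indicated by comments, so this is a simplified illustration rather than the actual DeepliteRT code.

```python
from tvm import relay, transform

def build_dlrt_style_pipeline(mod):
    # The DeepliteRT custom passes (convert_conv2d_bitserial, bitpack_weights)
    # would be inserted around the standard transforms below.
    seq = transform.Sequential([
        # convert_conv2d_bitserial would rewrite nn.conv2d -> dlrt_bitserial_conv2d here
        relay.transform.ConvertLayout({"nn.conv2d": ["NHWC", "HWIO"]}),
        # bitpack_weights would insert nn.bitpack on the weight constants here
        relay.transform.FoldConstant(),   # precompute weight casting/packing offline
    ])
    with transform.PassContext(opt_level=3):
        return seq(mod)
```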
Figure 3: DeepliteRT converts fake-quantized convolution layers from models in different formats to optimized ultra low-bit convolution operators through a series of compiler passes. The passes replace nn.conv2d with dlrt_bitserial_conv2d, bitpack the weights in ultra low-bit, and cast and transform the layouts of data as required. The resulting compiled model can be deployed on ARMv7 and ARMv8 CPUs via TVM runtime.
### Mixed precision support
In the default case, DeepliteRT converts all convolution layers except the first to bit-serial operators using the specified bit-width. However, quantizing all the layers to ultra low-bit can result in severe accuracy degradation. This can be countered with mixed precision quantization by choosing different precisions across layers using methods such as HAWQ-V3 [11] for accuracy preservation. DeepliteRT provides mixed precision inference by accepting a configuration file as input that specifies the quantization parameters per layer including activation bit-width, weight bit-width and encoding scheme. This per-layer information is passed to the **convert_conv2d_bitserial** pass to selectively offload convolution layers to ultra low-bit with the provided bit-widths and keep other layers in full-precision as required.
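The exact configuration file format is not spelled out here, but a per-layer specification along the following lines captures the information described; the layer names, bit-widths, and encoding values are illustrative only.

```python
import json

# Hypothetical per-layer quantization configuration consumed by the
# convert_conv2d_bitserial pass: sensitive layers stay in FP32, the rest are
# lowered to ultra low-bit with the given activation/weight bit-widths.
mixed_precision_config = {
    "conv1":          {"skip": True},   # keep the first convolution in FP32
    "layer1.0.conv1": {"act_bits": 2, "weight_bits": 2, "encoding": "hybrid"},
    "layer4.1.conv2": {"act_bits": 8, "weight_bits": 8, "encoding": "unipolar"},
}

with open("quant_config.json", "w") as f:
    json.dump(mixed_precision_config, f, indent=2)
```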
## 5 Evaluation
We evaluate classification and detection models on a Raspberry Pi 4B (4\(\times\)ARM Cortex-A72 @ 1.5 GHz) device with 32-bit and 64-bit operating systems to enable ARMv7 and ARMv8 execution. We select TVM FP32 for the full-precision baseline as it significantly outperformed FP32 kernels in ONNX Runtime and TensorFlow Lite [11] in our experiments. TVM does not offer an optimized INT8 operator so we choose ONNX Runtime for INT8 experiments due to its high performance 8-bit kernels. Finally, we use the TVM 2A2W configuration based on the nn.bitserial_conv2d operator for ultra low-bit experiments; we also port this operator to ARMv8 to establish the 64-bit 2A2W baseline. All models deployed with TVM and DeepliteRT were tuned using AutoTVM [11] with 1500 trials.
### End-to-end performance
Table 2 reports the end-to-end latencies and speedups for classification and detection models. The average, minimum and maximum numbers represent the speedups realized with DeepliteRT over the TVM FP32, ONNX Runtime INT8 or TVM 2A2W results in the same column. Some results for ResNet101 and VGG19 at FP32 are missing in the table as the device runs out of memory when loading full-precision model parameters. Interestingly, even though the TVM 2A2W configuration offers a similar level of performance in 32-bit and 64-bit modes, it does not remain competitive in the latter case due to substantial performance
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**Raspberry Pi 4B - 32-bit ARMv7**} & \multicolumn{3}{c}{**Raspberry Pi 4B - 64-bit ARMv8**} \\ \cline{2-9} & FP32 & INT8 & 2A2W & 2A2W (Ours) & FP32 & INT8 & 2A2W & 2A2W (Ours) \\ \hline ResNet18 & 149.29 & 145.44 & 130.92 & 70.32 & 110.94 & 91.13 & 123.28 & 67.13 \\ ResNet50 & 433.19 & 326.49 & 311.8 & 196.79 & 315.03 & 203.56 & 295.96 & 197.91 \\ ResNet101 & - & 558.47 & 487.96 & 325.37 & 545.01 & 378.27 & 471.71 & 319.09 \\ VGG19 & - & 1399 & 1003 & 654.69 & - & 922.28 & 962.65 & 636.79 \\ InceptionV3 & 312.82 & 245.16 & 357.77 & 165.05 & 218.18 & 151.55 & 340.82 & 164.62 \\ DenseNet121 & 387.98 & 589.03 & 296.27 & 252.65 & 302.50 & 261.94 & 269.91 & 227.05 \\ \hline VGG16-SSD300 & 1671 & 2310 & 1780 & 1190 & 1547 & 1462 & 1631 & 1060 \\ YOLOv5s & 219.72 & 197.27 & 135.64 & 100.32 & 169.93 & 113.5 & 130.03 & 97.49 \\ \hline Average speedup & 1.89\(\times\) & 1.91\(\times\) & 1.58\(\times\) & - & 1.54\(\times\) & 1.20\(\times\) & 1.56\(\times\) & - \\ Minimum speedup & 1.40\(\times\) & 1.49\(\times\) & 1.17\(\times\) & - & 1.32\(\times\) & 0.92\(\times\) & 1.19\(\times\) & - \\ Maximum speedup & 2.20\(\times\) & 2.33\(\times\) & 2.17\(\times\) & - & 1.71\(\times\) & 1.45\(\times\) & 2.07\(\times\) & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: End-to-end latencies (ms) and speedups of DeepliteRT 2A2W over TVM FP32, ONNX Runtime INT8 and TVM bit-serial 2A2W baselines.
uplifts for the FP32 and INT8 baselines with the ARMv8 ISA. In contrast, DeepliteRT offers leading performance for both ARMv7 and ARMv8 targets. On average, DeepliteRT realizes speedups of 1.89\(\times\), 1.91\(\times\) and 1.58\(\times\) in 32-bit mode and 1.54\(\times\), 1.20\(\times\) and 1.56\(\times\) in 64-bit mode over TVM FP32, ONNX Runtime INT8 and TVM 2A2W, respectively.
### Model accuracy and mixed precision
SOTA for ultra low-bit quantization has progressed at a rapid pace as shown in Table 1. We study the accuracy-performance tradeoff of ultra low-bit quantization using LSQ for a classification and detection model in Fig. 4. ResNet18 trained on the VWW dataset [] only incurs accuracy drops of 0.86% and 2.09% relative to the FP32 baseline with performance uplifts of up to 2.12\(\times\) and 3.19\(\times\) at 2A2W and 1A2W, respectively. Similarly, VGG16-SSD300 [] trained on the VOC dataset [] only sees a 0.18 loss in mAP at 2A2W while realizing a speedup of up to 1.46\(\times\). The minor accuracy dips, substantial latency improvements and huge savings in model size make ultra low-bit networks an ideal fit for edge deployment. Moreover, mixed precision inference with DeepliteRT enables practitioners to easily explore this tradeoff between accuracy and performance, as illustrated in Table 3 for ResNet50, by varying the number of layers in FP32, 2A2W and 1A2W. An appropriate quantization configuration can be chosen based on model accuracy and latency measurements from the target.
## 6 Conclusion
We present an end-to-end inference solution in DeepliteRT for ML framework-agnostic deployment of ultra low-bit quantized models on 32-bit ARMv7 and 64-bit ARMv8 platforms. It implements compiler passes for the automatic conversion of fake-quantized networks in full-precision to compact representations in ultra low-bit, eliminating the need for custom modifications in the training and runtime components to enable inference at ultra low-precision. Using high-performance bit-serial convolution kernels, DeepliteRT outperforms highly optimized floating-point, integer, and ultra low-bit baselines on image classification and object detection models by up to 2.20\(\times\), 2.33\(\times\) and 2.17\(\times\), respectively.
Figure 4: Trade off between ultra low-bit model accuracy and performance.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**52 FP32** & **26 FP32 + 26 2A2W** & **52 2A2W** & **26 2A2W + 26 1A2W** & **52 1A2W** \\ \hline
433.19 & 314.69 & 196.79 & 180.37 & 134.26 \\ \hline \hline \end{tabular}
\end{table}
Table 3: DeepliteRT latency (ms) on ResNet50 with mixed precision configurations. |
2310.20614 | Primordial Orbital Alignment of Sednoids | We examined the past history of the three most detached TransNeptunian
Objects (TNOs) -- Sedna, 2012 VP113, and Leleakuhonua (2015 TG387) -- the three
clearest members of the dynamical class known as sednoids, with high perihelia
distances $q$. By integrating backward their nominal (and a set of cloned)
orbits for the Solar System's age, we surprisingly find that the only time all
their apsidal lines tightly cluster was 4.5 Gyr ago, at perihelion longitude
$\varpi$ of $200^\circ$. This "primordial alignment" is independent of the
observational biases that contribute to the current on-sky clustering in the
large-semimajor axis Kuiper Belt. If future sednoid discoveries confirm these
findings, this strongly argues for an initial event during the planet formation
epoch which imprinted this particular apsidal orientation on the early detached
TNO population. Their apsidal orientations were then subsequently modified only
by the simple precession from the 4 giant planets (and weakly by the galactic
tide). If other sednoids also cluster around the same primordial value, various
models suggesting a still present planet in the outer Solar System would be
incompatible with this alignment. We inspected two scenarios that could
potentially explain the primordial alignment. First, a rogue planet model
(where another massive planet raises perihelia near its own longitude until
ejection) naturally produces this signature. Alternatively, a close stellar
passage early in Solar System history raises perihelia, but it is poor at
creating strong apsidal clustering. We show that all other known $35<q<55$ au
TNOs are either too perturbed or orbits are still too uncertain to provide
evidence for or against this paradigm. | Yukun Huang, Brett Gladman | 2023-10-31T16:49:24Z | http://arxiv.org/abs/2310.20614v3 | # Primordial Orbital Alignment of Sednoids
###### Abstract
We examined the past history of the three most detached TransNeptunian Objects (TNOs) - Sedna, 2012 VP\({}_{113}\), and Leleakuhonua (2015 TG\({}_{387}\)) - the three clearest members of the dynamical class known as sednoids, with high perihelia distances \(q\). By integrating backward their nominal (and a set of cloned) orbits for the Solar System's age, we surprisingly find that the only time all their apsidal lines tightly cluster was 4.5 Gyr ago, at perihelion longitude \(\varpi\) of 200\({}^{\circ}\). This "primordial alignment" is independent of the observational biases that contribute to the current on-sky clustering in the large-semimajor axis Kuiper Belt. If future sednoid discoveries confirm these findings, this strongly argues for an initial event during the planet formation epoch which imprinted this particular apsidal orientation on the early detached TNO population and then subsequently modified only by the simple precession from the 4 giant planets. If other sednoids also cluster around the same primordial value, various models suggesting a still present planet in the outer Solar System would be incompatible with this alignment. We inspected two scenarios that could potentially explain the primordial alignment. First, a rogue planet model (where another massive planet raises perihelia near its own longitude until ejection) naturally produces this signature. Alternatively, a close stellar passage early in Solar System history raises perihelia, but it is poor at creating strong apsidal clustering. We show that all other known \(35<q<55\) au TNOs are either too perturbed or orbits are still too uncertain to provide evidence for or against this paradigm.
Trans-Neptunian objects (1705) -- Kuiper belt (893) -- Celestial Mechanics (221)
Yukun Huang (Dept. of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, Canada)
Brett Gladman (Dept. of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, Canada)
## 1 Introduction
The vast extent of the Solar System's TransNeptunian Objects (TNOs) has long captivated the curiosity of astronomers. These icy remnants, relics from the early Solar System, offer invaluable insights into the primordial conditions and dynamical histories of the giant planets. Among thousands of discovered TNOs, a tiny subset known as sednoids - characterized by their large semimajor axes (\(a\)) and significantly high perihelia (\(q\)) - stands out as particularly intriguing. The first member, (90377) Sedna (Brown et al., 2004), was followed by 2012 VP\({}_{113}\)(Trujillo and Sheppard, 2014), and most recently, (541132) Leleakuhonua (provisional designation: 2015 TG\({}_{387}\)), was discovered by Sheppard et al. (2019). The barycentric orbital elements of the three sednoids are listed in Table 1, in which the longitude of perihelion is defined as \(\varpi=\Omega+\omega\).
Non-classical TNOs (see Gladman & Volk, 2021 for TNO classifications) are generally believed to have originated in the primordial planetesimal disk interior to \(\sim\)30 au and were scattered and/or transported onto their current orbits. It is believed that sednoids were created in a similar way due to a combination of early planetary scatterings and a detachment process that significantly increased their perihelia. The postulated \(q\)-raising mechanisms include: stellar flybys while the Sun was still in its birth cluster (Morbidelli & Levison, 2004; Kenyon & Bromley, 2004; Brasser et al., 2006, 2012), short-lived rogue planets (Gladman & Chan, 2006), a distant planetary-mass solar companion (Gomes et al., 2006) or a still existing planet (Lykawka & Mukai, 2008; Batygin & Brown, 2016), as well as stellar flybys during solar migration in the Milky Way (Kaib et al., 2011).
Upon the discovery of 2012 VP\({}_{113}\), Trujillo & Sheppard (2014) pointed out that the argument of perihelion (\(\omega\)) for many large-\(a\) TNOs may cluster about 0\({}^{\circ}\) and proposed a hypothetical super-Earth planet as an explanation. Batygin & Brown (2016) explored a similar idea and proposed the so-called "Planet Nine" to account for the current on-sky orbital clustering of some distant TNOs. However, modern outer Solar System surveys (Shankman et al., 2017; Napier et al., 2021; Bernardinelli et al., 2022) have cast doubt on the validity of the _current_ clustering, showing it to be consistent with survey biases applied to underlying uniform distributions in \(\Omega\), \(\varpi\), and \(\omega\).
In this Letter, we present a potentially new phenomenon involving the three sednoids that could provide a valuable constraint on their origin and the early history of the outer Solar System.
## 2 Rewinding Sednoids
The orbital evolutions of sednoids over the past \(\approx\)4 Gyr are primarily driven by secular precessions induced by gravitational effects of the four giant planets, assuming that no planetary mass is still present in the outer Solar System. The approximate analytical apsidal precession rate as a function of the TNO's (\(a,e,i\)) orbital elements is given by (Batygin et al., 2019):
\[\dot{\varpi}=\frac{3}{8}\;n\;\frac{3\cos^{2}i-1}{\left(1-e^{2}\right)^{2}} \sum_{j=5}^{8}\frac{m_{j}a_{j}^{2}}{M_{\odot}a^{2}}\;, \tag{1}\]
where \(n\) denotes the TNO's mean motion, \(M_{\odot}\) is the solar mass, and the index \(j\) denotes the \(j\)-th planet. This equation estimates the apsidal precession periods of 2012 VP\({}_{113}\) (1.2 Gyr), Sedna (2.8 Gyr), and Leleakuhonua (7.1 Gyr). Assuming that their orbital \(a\), \(q\), and \(i\) have not changed significantly post formation, one can rewind their longitudes of perihelion back in time by applying the linear precession rate (Equation 1). Surprisingly, we found that the only time their \(\varpi\) were all tightly clustered was 4-4.5 Gyr ago, at around \(\varpi\approx\)200\({}^{\circ}\).
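A short script reproduces the quoted precession periods from Equation 1, using standard giant-planet masses and semimajor axes and the sednoid elements of Table 1 (values here are rounded, and the eccentricity is taken as \(e = 1 - q/a\)):

```python
import numpy as np

# (mass in solar masses, semimajor axis in au) for Jupiter through Neptune
planets = [(9.54e-4, 5.20), (2.86e-4, 9.58), (4.37e-5, 19.2), (5.15e-5, 30.07)]

def precession_period_gyr(a, q, i_deg):
    """Apsidal precession period implied by Equation 1 (a, q in au; i in deg)."""
    e = 1.0 - q / a
    i = np.radians(i_deg)
    n = 2.0 * np.pi / a**1.5                               # mean motion in rad/yr
    coupling = sum(m * aj**2 for m, aj in planets) / a**2  # sum m_j a_j^2 / (Msun a^2)
    varpi_dot = (3.0 / 8.0) * n * (3 * np.cos(i)**2 - 1) / (1 - e**2)**2 * coupling
    return 2.0 * np.pi / varpi_dot / 1e9                   # Gyr per circulation

for name, (a, q, i) in {"2012 VP113": (262.0, 80.5, 24.1),
                        "Sedna": (506.4, 76.2, 11.9),
                        "Leleakuhonua": (1089.6, 65.0, 11.7)}.items():
    print(f"{name:>13}: {precession_period_gyr(a, q, i):.1f} Gyr")
# prints roughly 1.2, 2.8, and 7.1 Gyr, matching the periods quoted above
```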
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline Object & \(a\) (au) & \(q\) (au) & \(i\) (deg) & \(\Omega\) (deg) & \(\varpi\) (deg) \\ \hline
2012 VP\({}_{113}\) & \(262.0\pm 0.6\) & 80.5 & 24.1 & 90.8 & 24.7 \\
(90377) Sedna & \(506.4\pm 0.2\) & 76.2 & 11.9 & 144.4 & 95.7 \\ (541132) Leleakuhonua & \(1089.6\pm 185\) & 65.0 & 11.7 & 301.0 & 59.0 \\ \hline \end{tabular} Note.: Data retrieved from JPL Small-Body Database ([https://ssd.jpl.nasa.gov/tools/sbdb_query.html](https://ssd.jpl.nasa.gov/tools/sbdb_query.html)). Only the semimajor axis uncertainties (1\(\sigma\)) are presented. Uncertainties in other orbital elements are too small to be listed. Compared to the other two objects, (541132) Leleakuhonua has a significantly larger uncertainty in \(a\), due to fewer observations and a shorter data-arc span.
\end{table}
Table 1: Barycentric orbital elements of the three sednoids
Intrigued by this approximate analytical result, we carried out backward numerical integration to validate the primordial alignment of the sednoids. For each sednoid, we generated 11 initial conditions, including 1 nominal orbit and 10 cloned orbits distributed inside the orbital uncertainty, using SBDynT 1, with the clones serving to diagnose plausible uncertainties in the apsidal angles. These initial test-particle orbits along with the four giant planets were integrated backward in time using the Mercurius integrator in Rebound (Rein and Liu, 2012; Rein et al., 2019).
Footnote 1: Small Body Dynamics Tool developed by Kat Volk and Dallin Spencer, [https://github.com/small-body-dynamics/SBDynT](https://github.com/small-body-dynamics/SBDynT)
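A stripped-down version of this backward integration can be set up with Rebound as sketched below. The planets are pulled from JPL Horizons (network access required), a single massless particle is placed on Sedna's nominal orbit from Table 1, and the system is integrated into the past with MERCURIUS by using a negative timestep; clone generation from the orbital covariance (handled by SBDynT in our workflow) and the dense output cadence of the full 4.5 Gyr run are omitted for brevity.

```python
import numpy as np
import rebound

twopi = 2.0 * np.pi                      # one year in G=1, Msun, au units

sim = rebound.Simulation()
for body in ["Sun", "Jupiter", "Saturn", "Uranus", "Neptune"]:
    sim.add(body)                        # queried from JPL Horizons
# Sedna's nominal orbit (Table 1); omega = varpi - Omega
sim.add(a=506.4, e=1 - 76.2 / 506.4, inc=np.radians(11.9),
        Omega=np.radians(144.4), omega=np.radians(95.7 - 144.4))
sim.move_to_com()

sim.integrator = "mercurius"
sim.dt = -0.2 * twopi                    # negative step: integrate into the past

for t_gyr in np.linspace(0.0, -4.5, 10):     # a full-resolution run is expensive
    sim.integrate(t_gyr * 1e9 * twopi)
    p = sim.particles[-1]
    varpi = np.degrees(p.Omega + p.omega) % 360.0
    print(f"{t_gyr:5.1f} Gyr   varpi = {varpi:6.1f} deg")
```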
Figure 1 shows the computed past evolution histories of \(\varpi\). Similarly to the analytical approximation, their apsidal lines were aligned at 200\({}^{\circ}\) 4.4-4.5 Gyr ago, with a circular standard deviation of only 8\({}^{\circ}\) (black curve in lower panel). For reference, three randomly-generated angles would have an average circular standard deviation of \(\approx\)60\({}^{\circ}\).
To test the statistical significance of the clustering, we applied a Rayleigh test of uniformity at all the output times. The \(p\)-value of the test (red shaded curve and right-hand scale in the lower panel of Figure 1) represents the probability that three random angles are more clustered
Figure 1: Upper: Past evolutions of perihelion longitudes (\(\varpi\)) for Sedna (blue), 2012 VP\({}_{113}\) (red), and Leleakuhonua (black). Solid lines denote the backward propagation of their nominal orbits, whereas shaded areas denote ranges of their clone orbits. The only instance where the sednoid apsidal lines all converged was around 4.5 Gyr ago (vertical dashed line), right after the Solar System was formed. Lower: The circular standard deviation of the three angles (black) and the statistical confidence (\(p\)-value, red shaded) that they are generated from a uniform distribution. There is only 1 in 20 chance (2\(\sigma\)) that three random angles would cluster at the same level as the sednoids did 4.5 Gyr ago.
than the three sednoids at that time. The Rayleigh test shows that there is a \(<5\%\) chance that the primordial clustering is just a statistical coincidence. In particular, it would seem in principle that the alignment, if it were by chance, could occur at any time in the past; the fact that it occurs at the 'special' time that corresponds to the planet formation epoch of Solar System history is a 'difficult to quantify' additional low-probability event. Therefore, we conclude that the primordial alignment in \(\varpi\) is an extremely interesting new possibility, with more sednoid orbits required to reach \(3\sigma\) confidence.
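The circular statistics used here are easy to reproduce; the sketch below computes the mean resultant length, the circular standard deviation, and the Rayleigh test p-value with the usual small-sample correction (only approximate for n as small as 3). The three example angles are illustrative, not the exact integrated values.

```python
import numpy as np

def circular_stats(angles_deg):
    """Mean resultant length R, circular standard deviation (deg), and the
    Rayleigh-test p-value with the standard higher-order correction."""
    theta = np.radians(angles_deg)
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    R = np.hypot(C, S)
    n = len(theta)
    circ_std = np.degrees(np.sqrt(-2.0 * np.log(R)))
    Z = n * R**2
    p = np.exp(-Z) * (1 + (2 * Z - Z**2) / (4 * n)
                      - (24 * Z - 132 * Z**2 + 76 * Z**3 - 9 * Z**4) / (288 * n**2))
    return R, circ_std, p

# e.g. three perihelion longitudes tightly clustered near 200 deg at t = -4.5 Gyr
R, std, p = circular_stats([192.0, 200.0, 208.0])
print(f"R = {R:.3f}, circular std = {std:.1f} deg, Rayleigh p = {p:.3f}")  # p ~ 0.04
```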
## 3 Cosmogonic interpretations
Similar backward integrations have been successfully applied in identifying asteroid family. For example, Nesvorny et al. (2002) used this method to identify 13 members of the Karin family, believed to have been created from a catastrophic collision of a larger parent body. All of its current presumed members have very similar \(\Omega\) and \(\varpi\) when propagated back to the breakup epoch 5.8 Myr ago.
The sednoid story is similar, but not based on a collision. Instead, sednoids (and other detached TNOs) are believed to have originated from the primordial planetesimal disk and were transplanted to their current orbits. If the primordial alignment is validated with other decoupled TNOs, it would mean that after formation the detached Kuiper Belt has remained largely unperturbed over the past \(\approx\)4 Gyr, with only precession induced by the four giant planets slowly altering orbits. This would be incompatible with hypotheses suggesting undiscovered planets currently existing in the outer Solar System (e.g., Gladman et al., 2002; Brunini and Melita, 2002; Gomes et al., 2006; Batygin and Brown, 2016; Volk and Malhotra, 2017; Lykawka and Ito, 2023), and aligns with the non-detection of planetary bodies in observational surveys (Trujillo, 2020; Belyakov et al., 2022) and high-precision spacecraft tracking data (Fienga et al., 2020; Gomes et al., 2023).
This primordial alignment would strongly argue for a very early primordial event (within the first \(\sim 100\) Myr) that established this specific apsidal orientation for the strongly detached TNO populations, presumably at the same time when their \(q\) were lifted. The exact timing and duration of the event depend on the width of the clustering, which is still uncertain due to the alignment being defined by only 3 objects. Nevertheless, one can roughly estimate from Figure 1 that this event must have ended within a few hundred Myr, after which the clustering is no longer significant. One can thus potentially use the width and timing of the primordial alignment (\(-4.5\)--\(4.3\) Gyr) to constrain the sednoid perihelion lifting mechanism. We briefly explore two hypotheses that could explain this phenomenon.
## 4 Possible Scenarios
**Primordial Planet.** One hypothesis to explain the alignment revolves around a temporarily present rogue planet born in the Solar System. Gladman and Chan (2006) showed that scattering rogue objects could have raised TNO perihelia and created Sedna-like orbits; the secular \(q\)-lifting effect is dominated by the single most massive rogue. Huang et al. (2022) recently demonstrated that the rogue can also help populate distant TNOs below \(a<100\) au by collaborating with Neptunian mean-motion resonances. Previous studies have shown that the detachment dynamics of a highly eccentric (rogue) planet is correlated with its relative apsidal orientation \(\Delta\varpi\)(Batygin and Morbidelli, 2017; Huang, 2023). It is thus worth exploring a proof of concept that a rogue can produce a primordial alignment.
Figure 2 displays the simulation result of a rogue planet that lifts objects from a massive early scattering disk (which has \(q<40\) au). The 2 \(M_{\oplus}\) rogue has an initial orbit \(a_{r,0}=400\) au,
\(q_{r,0}=32\) au, \(i_{r,0}=15^{\circ}\), and was propagated along with the four giant planets for 185 Myr. To generate the initial disk of scattering particles, Neptune was forced to migrate from 24 to 30 au through an outer planetesimal disk of 500,000 test particles spanning 24.5 to 33 au; the migration timescale and disk parameters are similar to those of grainy migration simulations (Nesvorny & Vokrouhlicky, 2016; Nesvorny et al., 2016), but we find that these parameters play little role in the emplacement of detached TNOs beyond \(a>200\) au. As a result of Neptune scattering, TNOs were constantly fed to the large-\(a\) region, where the rogue's gravity can raise the TNO perihelia (Gladman & Chan, 2006). The planetary simulation (rogue scattering and Neptune migrating) was performed using Rebound (Rein & Liu, 2012), while the test particle simulation was carried out by Glisser, a GPU N-body integrator based on Zhang & Gladman (2022).
During the \(\sim\)100-Myr temporary presence, the rogue was able to reproduce sednoids in a wide range of semimajor axes (a = 200-\(\sim\)1000 au, color-coded in Figure 2). The detachment was strongest along the rogue's apsidal line (Huang, 2023), which precesses \(\approx\)100\({}^{\circ}\) (blue lines) until ejection by Neptune. This \(2M_{\oplus}\) rogue creates a strong clustering in the \(\varpi-q\) space, with a circular standard deviation of \(\approx\)25\({}^{\circ}\) for \(q>50\) au particles. While this is more dispersed than the primordial alignment observed in the three Sednoids, more Sednoid discoveries will be needed to measure the underlying spread of their primordial apsidal lines. A tightly-dispersed primordial \(\varpi\) distribution can occur for a more massive rogue planet with a shorter lifetime before ejection.
After the rogue's removal, the now-detached objects/sednoids precess at different rates (determined by their \(a\), \(q\), and \(i\) as per Equation 1). This eventually results in a nearly-uniform \(\varpi\) distribution in today's surviving population. For icy bodies with \(q>120\) au, however, there is not enough time to homogenize their orbital orientations because their precession peri
Figure 2: TNO \(\varpi\)–\(q\) distributions at the end of a 2 \(M_{\oplus}\) rogue planet’s 185-Myr early presence (left) and today (right). The rogue planet’s longitudes of perihelion (\(\varpi_{r}\)) at the start and at the removal are marked by blue solid and dashed lines, respectively. Upon the rogue’s removal, the differing precession rates due to wide ranges of \(a\) and \(q\) (red arrows; see also Equation 1) led to the near homogenization of \(\varpi\), except for \(q>120\) au TNOs (right of the red dashed line, right panel) where precession periods are comparable to the age of the Solar System.
ods are comparable to the age of the Solar System. If this general picture is true, any \(q>120\) au TNOs discovered in the future would probably spread from \(\varpi\approx\)\(-120^{\circ}(240^{\circ})\) to \(\approx\)\(60^{\circ}\) (Figure 2's right panel), assuming a primordial \(\varpi\) clustering near \(200^{\circ}\).
The close primordial sednoid alignment is only created in \(\varpi\), not in \(\Omega\) or \(\omega\); both angles have a larger circular standard deviation of \(\approx\)\(80^{\circ}\) at 4.5 Gyr ago (but have an anti-correlation to produce the \(\varpi\) clustering). This is also the case in the rogue planet simulation, where the detached TNOs possessed various \(\Omega\) and \(\omega\), instead of showing a strong clustered peak as in Figure 2's left panel.
**Stellar Flyby.** Another scenario worth exploring is that the primordial alignment might correlate with a passing star's longitude of perihelion. Batygin et al. (2020) analytically showed that the particle's eccentricity evolution under a passing star is related to their \(\varpi\) difference. However, numerical studies of stellar encounters mainly focused on TNO \(a\), \(q\), \(i\) distributions (e.g., Ida et al., 2000; Brasser et al., 2012; Nesvorny et al., 2023) instead of their apsidal lines. Therefore, we conducted a simulation with a solar-mass star passing through a primordial scattering disk (\(5<q<25\) au), with the closest stellar approach at \(q_{*}=300\) au (Adams, 2010; Batygin et al., 2020). Its hyperbolic trajectory has \(i_{*}=15^{\circ}\) relative to the Solar System and \(v_{\infty}=1\) km/s, which is the typical velocity dispersion for young embedded clusters (Brasser et al., 2006).
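Setting up such a hyperbolic flyby in Rebound is straightforward; the sketch below places a 1 Msun star on the described trajectory (q* = 300 au, v-infinity = 1 km/s, i* = 15 deg) and verifies the closest approach. The starting true anomaly and integration span are arbitrary choices, and the giant planets and scattering-disk test particles of the actual simulation are omitted.

```python
import numpy as np
import rebound

twopi = 2.0 * np.pi
sim = rebound.Simulation()               # G = 1, Msun, au units
sim.add(m=1.0)                           # the Sun

v_inf = 1.0 / 29.78                      # 1 km/s in units where v_circ(1 au) = 1
q_star, mu = 300.0, 2.0                  # mu = G (M_sun + M_star)
a_star = -mu / v_inf**2                  # negative semimajor axis (hyperbolic)
e_star = 1.0 - q_star / a_star           # eccentricity > 1
sim.add(m=1.0, a=a_star, e=e_star, inc=np.radians(15.0),
        f=np.radians(-140.0))            # start well before perihelion

d_min = np.inf
for t in np.linspace(0.0, 2.0e5 * twopi, 2000):   # ~0.2 Myr covers the encounter
    sim.integrate(t)
    d = np.linalg.norm(np.array(sim.particles[1].xyz) - np.array(sim.particles[0].xyz))
    d_min = min(d_min, d)
print(f"closest approach ~ {d_min:.0f} au")        # should be close to q* = 300 au
```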
Figure 3 shows the result of this simulation. Although such a passing star produced sednoids across a wide semimajor axis range, no strong clustering was formed in the \(\Delta\varpi\)-\(q\) panel. One observes a weak preference for particles with \(\Delta\varpi=90^{\circ}\) and \(270^{\circ}\) to be lifted from the scattering disk as the star passes (Figure 3's histogram), but this is insufficient to produce the correlation shown in Figure 1. While this preliminary simulation does not favor the stellar encounter for primordially-clustered sednoids, further exploration of the star's parameters with a focus on the \(\varpi\) distribution it creates is warranted.
## 5 Discussion
**Galactic tide.** The galactic tidal effect is generally considered beyond 2000 au, where the tidal torquing timescale is comparable to the planetary scattering timescale (Duncan et al., 1987). Sheppard et al. (2019) integrated forward the three sednoids considering the galactic tide and the four giant planets, where they found galactic tides have almost no effect on Sedna and 2012 VP\({}_{113}\), but do create a small \(\pm 6\) au oscillation of Leleakuhonua's perihelion
Figure 3: Orbital distributions of TNOs detached by a solar-mass passing star (\(q_{*}=300\) au and \(v_{\infty}=1\) km/s). The upper panel shows that this particular stellar encounter was able to create all three sednoids (crosses) in the \(a\)–\(q\) space. However, there was no strong primordial clustering of \(\varpi\) post flyby (lower panels).
(their figure 7). The authors conclude that Leleakuhonua is stable to the galactic tide and that the tide only produces tiny variations in \(a\), \(e\), \(i\). As a result, we did not incorporate the galactic tide in our simulation; adding the tide would slightly increase Leleakuhonua's \(\varpi\) uncertainty (Fig. 1).
**Other candidates**. We also tried integrating backward another five large-\(a\) and high-\(q\) TNOs: 2014 SR\({}_{349}\), (474640) Alicanto (a.k.a. 2004 VN\({}_{112}\)), 2010 GB\({}_{174}\), 2013 SY\({}_{99}\), and the recently-discovered 2021 RR\({}_{205}\). We judged these as the least prone to modifications by chaotic interactions with Neptune when the TNOs were near perihelion.
The \(a\) and \(\varpi\) histories are shown in Figure 4. The first three objects with \(a\approx 300\) au and \(q\approx 45\) au have relatively stable semimajor axis histories. However, their \(\varpi\) evolutions rapidly diverge from the start, with clones immediately possessing various rates of apsidal precession and even different directions; the latter is due to TNO proximity to very high-order Neptunian mean-motion resonances (Volk & Malhotra, 2022). The last two objects, with \(a\gtrsim 700\) au and \(q\gtrsim 50\) au, are slowly scattering in \(a\) due to dynamical diffusion caused by Neptune resonance overlap (Batygin et al., 2021; Hadden & Tremaine, 2023), with some clones not even stable over \(\sim\)4 Gyr. This results in a wide range
Figure 4: Backward integrations of five large-\(a\) and high-\(q\) TNOs, taking into account their orbital uncertainties. Nominal orbits are colored black, while clones are in different colors. Although 2014 SR\({}_{349}\), Alicanto, and 2010 GB\({}_{174}\) show very similar \(a\) evolutions across the clones (left panel), none of the integrated objects have sufficiently confined \(\varpi\) histories (right panel), rendering them still insufficient for the primordial alignment analysis.
of precession rates \(\dot{\varpi}\), leading to indeterminate \(\varpi\) at -4.5 Gyr. For 2014 SR\({}_{349}\), 2013 SY\({}_{99}\), and 2021 RR\({}_{205}\), future observations could potentially improve the uncertainties to the degree where a sufficiently-precise determination of their primordial \(\varpi\) may be possible.
## 6 Acknowledgement
We thank W. Fraser, J. Kavelaars, R. Pike, D. Raggozine, and K. Volk for useful discussions. BG acknowledges Canadian funding support from NSERC.
|
2309.04819 | Detecting Violations of Differential Privacy for Quantum Algorithms | Quantum algorithms for solving a wide range of practical problems have been
proposed in the last ten years, such as data search and analysis, product
recommendation, and credit scoring. The concern about privacy and other ethical
issues in quantum computing naturally rises up. In this paper, we define a
formal framework for detecting violations of differential privacy for quantum
algorithms. A detection algorithm is developed to verify whether a (noisy)
quantum algorithm is differentially private and automatically generate debugging
information when the violation of differential privacy is reported. The
information consists of a pair of quantum states that violate the privacy, to
illustrate the cause of the violation. Our algorithm is equipped with Tensor
Networks, a highly efficient data structure, and executed both on TensorFlow
Quantum and TorchQuantum which are the quantum extensions of famous machine
learning platforms -- TensorFlow and PyTorch, respectively. The effectiveness
and efficiency of our algorithm are confirmed by the experimental results of
almost all types of quantum algorithms already implemented on realistic quantum
computers, including quantum supremacy algorithms (beyond the capability of
classical algorithms), quantum machine learning models, quantum approximate
optimization algorithms, and variational quantum eigensolvers with up to 21
quantum bits. | Ji Guan, Wang Fang, Mingyu Huang, Mingsheng Ying | 2023-09-09T15:07:31Z | http://arxiv.org/abs/2309.04819v1 | # Detecting Violations of Differential Privacy for Quantum Algorithms
###### Abstract.
Quantum algorithms for solving a wide range of practical problems have been proposed in the last ten years, such as data search and analysis, product recommendation, and credit scoring. The concern about privacy and other ethical issues in quantum computing naturally rises up. In this paper, we define a formal framework for detecting violations of differential privacy for quantum algorithms. A detection algorithm is developed to verify whether a (noisy) quantum algorithm is differentially private and automatically generates debugging information when the violation of differential privacy is reported. The information consists of a pair of quantum states that violate the privacy, to illustrate the cause of the violation. Our algorithm is equipped with Tensor Networks, a highly efficient data structure, and executed both on TensorFlow Quantum and TorchQuantum which are the quantum extensions of famous machine learning platforms -- TensorFlow and PyTorch, respectively. The effectiveness and efficiency of our algorithm are confirmed by the experimental results of almost all types of quantum algorithms already implemented on realistic quantum computers, including quantum supremacy algorithms (beyond the capability of classical algorithms), quantum machine learning models, quantum approximate optimization algorithms, and variational quantum eigensolvers with up to 21 quantum bits.
Quantum Algorithm, Quantum Machine Learning, Differential Privacy Verification, Violation Detection, Quantum Noise
convolution neural networks (Dev et al., 2017), quantum recurrent neural networks (Dev et al., 2017), quantum generative adversarial networks (Dev et al., 2017) and quantum reinforcement learning networks (Dev et al., 2017). Subsequently, these models have been tested to solve a wide range of real-world problems, such as fraud detection (in transaction monitoring) (Brock et al., 2018; Chen et al., 2019), credit assessments (risk scoring for customers) (Chen et al., 2019; Chen et al., 2019) and handwritten digit recognition (Chen et al., 2019). On the other hand, a series of quantum machine learning algorithms without the classical counterparts have also been designed to solve specific problems. For example, quantum approximate optimization algorithm (QAOA) is a toy model of quantum annealing and is used to solve problems in graph theory (Kirkpatrick et al., 2017), variational quantum eigensolver (VQE) applies classical optimization to minimize the energy expectation of an ansatz state to find the ground state energy of a molecule (Kirkpatrick et al., 2017). Furthermore, based on the famous classical machine learning training platforms -- TensorFlow and Pytorch, two quantum training platforms have been established: TensorFlow Quantum (Krizhevsky et al., 2012) and TorchQuantum (Krizhevsky et al., 2012), respectively.
The rapid development of quantum hardware has enabled more and more experimental implementations of the algorithms mentioned above on concrete problems (Kirkpatrick et al., 2017; Chen et al., 2019). Notably, quantum supremacy (or advantage beyond classical computation) was demonstrated by Google's quantum computer _Sycamore_ with 53 noisy superconducting qubits (quantum bits), which completed a sampling task in 200 seconds, while the same task would (arguably) cost 10,000 years on the largest classical computer (Kirkpatrick et al., 2017). A type of Boson sampling was performed on USTC's quantum computer _Juzhang_ with 76 noisy photonic qubits in 20 seconds, a task that would take 600 million years on a classical computer (Kirkpatrick et al., 2017). These experiments demonstrate the power of quantum computers with tens to hundreds of qubits in the current _Noisy Intermediate-Scale Quantum (NISQ)_ era, where quantum noise cannot be avoided. Meanwhile, more and more quantum cloud computing platforms (e.g. IBM's Qiskit Runtime and Microsoft's Azure Quantum) are available for public use to implement quantum algorithms on realistic quantum chips.
**Differential Privacy: From Classical to Quantum**: Differential privacy has become a de facto standard for evaluating how well an algorithm protects the privacy of individuals. It ensures that any individual's information has very little influence on the output of the algorithm. Based on this intuition, the algorithmic foundation of differential privacy for classical (machine learning) algorithms has been established (Kirkpatrick et al., 2017; Chen et al., 2019). However, developing algorithms with differential privacy guarantees is very subtle and error-prone; indeed, a large number of published algorithms violate differential privacy. This situation motivates the need for a formal framework for verifying the differential privacy of classical algorithms, and various verification techniques have been extended to this context (Kirkpatrick et al., 2017; Chen et al., 2019; Chen et al., 2019). Furthermore, when verification fails, a counterexample generator can be provided for debugging purposes (Kirkpatrick et al., 2017).
With more and more applications, the privacy issue of quantum algorithms also arises. Indeed, from the viewpoint of applications, this issue is even more serious than its classical counterpart, since it is usually hard for end users to understand quantum algorithms. Inspired by its great success in applications, the notion of differential privacy has recently been extended to quantum computation, and some fundamental algorithmic results for computing privacy parameters have been obtained (Kirkpatrick et al., 2019; Chen et al., 2019; Chen et al., 2019) in terms of different definitions of the similarity between quantum states. However, the problems of verifying differential privacy and detecting its violations for quantum algorithms have not been addressed in previous works.
**Contributions of This Paper**: In this work, we define a formal framework for the verification of differential privacy for quantum algorithms in a principled way. Specifically, our main contributions are as follows:
1. _Algorithm_: An algorithm for detecting violations of differential privacy for quantum algorithms is developed. More specifically, this algorithm can not only efficiently check whether or not a (noisy) quantum algorithm is differentially private, but also automatically generate a pair of quantum states when a violation of differential privacy is reported. These two states, which break the promised differential privacy, provide us with debugging information.
2. _Case Studies_: Our detection algorithm is implemented both on TensorFlow Quantum (Krizhevsky et al., 2012) and TorchQuantum (Krizhevsky et al., 2012), which are based on the famous machine learning platforms -- TensorFlow and PyTorch, respectively. The effectiveness and efficiency of our algorithm are confirmed by the experimental results of almost all types of quantum algorithms already implemented on realistic quantum computers, including quantum supremacy algorithms (beyond the capability of classical algorithms), quantum machine learning models, quantum approximate optimization algorithms, and variational quantum eigensolver algorithms with up to 21 qubits.
3. _Byproducts_: We show that quantum noises can be used to protect the privacy of quantum algorithms as in the case of classical algorithms, and establish a composition theorem of quantum differential privacy for handling larger quantum algorithms in a modular way.
### Related Works and Challenges
**Detecting Violations for Classical Algorithms:** Detecting the violations of differential privacy for classical (randomized) algorithms has been studied in (Kirkpatrick et al., 2017). Their approach is to analyze the (distribution of) outputs of classical algorithms in a statistical way. Specifically, it runs a candidate algorithm many times and uses statistical tests to detect violations of differential privacy. However, such a method has limitations: if an algorithm violates differential privacy only with an extremely small probability, the statistical tests may fail to detect the violation. To avoid this situation in the quantum world, we introduce a series of linear algebra operations to analyze the output states of quantum algorithms. In particular, we characterize the verification of differential privacy as inequalities and solve them by computing eigenvalues and eigenvectors of certain matrices, which are indexed by quantum measurement outcomes and represent the converse (dual) implementation of quantum algorithms. As a result, our verification algorithm is exact (sound and complete).
**Differential Privacy for Quantum Circuits:** Quantum differential privacy was first defined in (Kirkpatrick et al., 2017)-(Chen et al., 2019) for (noisy) quantum
circuits. However, the verification and violation detection problems for quantum differential privacy were not addressed there.
In this paper, we adapt the quantum differential privacy for quantum algorithms rather than quantum circuits, motivated mainly by our target applications. Roughly speaking, a quantum algorithm can be thought of as a quantum circuit together with a quantum measurement at the end to extract the computational outcome (classical information). Accordingly, the privacy for a circuit must be examined for all possible measurements, but the privacy for an algorithm should be defined for a fixed measurement. This subtle difference leads to different verification problems and solutions. In the case of algorithms, the verification problem can be solved by transferring the impact of algorithmic steps on input quantum states to the given quantum measurement. But it seems that the same idea cannot be applied to the case of circuits because the final measurement is unknown beforehand. On the other hand, the counterexample generator of differential privacy constructed in this paper can be used to detect differential privacy violations in quantum circuits by appending certain measurements to them.
## 2. Preliminaries
In this section, for the convenience of the reader, we introduce basic ideas of quantum algorithms in a mathematical way.
Roughly speaking, a quantum algorithm consists of a quantum circuit and a quantum measurement. The former is for implementing algorithmic instructions; the latter is to extract the classical information from the final state at the end of the circuit. The computational components in the quantum algorithm can be mathematically described by two types of matrices: (i) _unitary matrices_ for quantum gates and circuits; and (ii) _positive semi-definite matrices_ for density operators (quantum states) and (Positive Operator-Valued Measure) quantum measurements. Thus we start with a brief introduction of these two kinds of matrices in the context of quantum computation.
### Unitary and Positive Semi-definite Matrices
Before defining unitary and positive semi-definite matrices, we need to specify the state space we are interested in. Mathematically, a quantum algorithm works on a \(2^{n}\)-dimensional Hilbert (linear) space \(\mathcal{H}\), where \(n\) is the number of _quantum bits (qubits)_ (defined in the next section) involved in the algorithm. Thus, in this paper, all linear algebra operations are based on \(\mathcal{H}\). We choose to use standard quantum mechanical notation instead of that from linear algebra. This style of notation is known as the _Dirac notation_, and is widely used in the field of quantum computation. For more details, we refer to the textbook [33].
First of all, vectors in \(\mathcal{H}\) can be represented as the following Dirac notations:
1. \(\ket{\psi}\) stands for a \(2^{n}\)-dimensional complex unit (normalized) column vector in \(\mathcal{H}\) labelled with \(\psi\) (a column vector \(\ket{\psi}\) is a unit vector if the inner product of \(\ket{\psi}\) with itself is one, i.e., \(\bra{\psi}\ket{\psi}=1\));
2. \(\bra{\psi}\) is a hermitian adjoint (complex conjugate and transpose) of \(\ket{\psi}\);
3. \(\langle\psi_{1}|\psi_{2}\rangle:=(\ket{\psi_{1}},\ket{\psi_{2}})\) is the inner product of \(\ket{\psi_{1}}\) and \(\ket{\psi_{2}}\);
4. \(\ket{\psi_{1}}\bra{\psi_{2}}\) is the outer product;
5. \(\ket{\psi_{1},\psi_{2}}:=\ket{\psi_{1}}\ket{\psi_{2}}\) is a shorthand of the product state \(\ket{\psi_{1}}\otimes\ket{\psi_{2}}\).
**Unitary Matrices:** In the (\(2^{n}\)-dimensional) Hilbert space \(\mathcal{H}\), a unitary matrix \(U\) is a \(2^{n}\times 2^{n}\) matrix with \(U^{\dagger}U=UU^{\dagger}=I_{n}\), where \(U^{\dagger}=(U^{*})^{\top}\) is the (entry-wise) conjugate transpose of \(U\) and \(I_{n}\) is the identity matrix on \(\mathcal{H}\).
**Positive Semi-Definite Matrices:** A \(2^{n}\times 2^{n}\) matrix \(M\) is called _positive semi-definite_ if for any \(\ket{\psi}\in\mathcal{H}\), \(\bra{\psi}M\ket{\psi}\geq 0\). Subsequently, all eigenvalues of \(M\) are non-negative. That is, for any unit eigenvector \(\ket{\psi}\) of \(M\) (i.e., \(M\ket{\psi}=\lambda\ket{\psi}\)), we have \(\lambda\geq 0\).
Some examples of these two matrices with physical meanings will be provided in the next section for a better understanding.
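To make the notation concrete, a small NumPy sketch (ours, not part of the paper; the state values are arbitrary examples) spells out these operations on explicit column vectors and checks the defining properties of unitary and positive semi-definite matrices.

```python
import numpy as np

psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # |psi>, a unit column vector
phi = np.array([1.0, 0.0], dtype=complex)                # |0>

inner = np.vdot(psi, phi)            # <psi|phi>, the inner product
outer = np.outer(psi, phi.conj())    # |psi><phi|, the outer product
prod  = np.kron(psi, phi)            # |psi, phi> = |psi> tensor |phi>

# |psi><psi| is positive semi-definite: all eigenvalues are non-negative
M = np.outer(psi, psi.conj())
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)

# A unitary matrix U satisfies U^dagger U = I
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
assert np.allclose(H.conj().T @ H, np.eye(2))
```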
### Quantum Algorithms
Now we turn to review the setup of quantum algorithms in their most basic form. A quantum algorithm is a set of instructions solving a problem (e.g., Shor's algorithm for finding the prime factors of an integer) that can be performed on a quantum computer. Physically, the algorithm is implemented by a quantum circuit that can be executed on quantum hardware. The computational flow of a quantum algorithm is depicted in Fig. 1.
With the notions introduced in the above subsection, we can explain these procedures from left to right.
**Input Quantum States:** An input can be a _pure quantum state_, which is mathematically modeled as a complex unit column vector \(\ket{\psi}\) in a \(2^{n}\)-dimensional Hilbert (linear) space \(\mathcal{H}\), where \(n\) denotes the number of qubits in \(\ket{\psi}\). For example, a state of a qubit is a vector in a \(2\)-dimensional Hilbert space, written in the Dirac notation as
\[\ket{q}=\left(\begin{array}{c}a\\ b\end{array}\right)=a\ket{0}+b\ket{1}\text{ with }\ket{0}=\left(\begin{array}{c}1\\ 0\end{array}\right)\text{ and }\ket{1}=\left(\begin{array}{c}0\\ 1\end{array}\right),\]
where complex numbers \(a\) and \(b\) satisfy the normalization condition \(|a|^{2}+|b|^{2}=1\). Here, the orthonormal basis \(\ket{0}\), \(\ket{1}\) of the Hilbert space corresponds to the digital value \(0\), \(1\) of a bit in classical computers, respectively.
On a NISQ hardware, noises are unavoidable, and a pure state \(\ket{\psi}\) on \(\mathcal{H}\) may collapse into a _mixed state_, represented as an _ensemble_\(\{(p_{k},\ket{\psi_{k}})\}_{k}\), meaning that it is in \(\ket{\psi_{k}}\) with probability \(p_{k}\). Mathematically, the ensemble can be described by a \(2^{n}\times 2^{n}\) positive semi-definite matrix:
\[\rho=\sum_{k}p_{k}\ket{\psi_{k}}\bra{\psi_{k}}\]
with unit trace in the \(2^{n}\)-dimensional Hilbert (linear) space \(\mathcal{H}\), i.e., \(\operatorname{tr}(\rho)=1\), where trace \(\operatorname{tr}(\rho)\) of \(\rho\) is defined as the summation of diagonal elements of \(\rho\). We use \(\mathcal{D}(\mathcal{H})\) to denote the set of all (mixed) quantum states in \(\mathcal{H}\).
**(Noisy) Quantum Circuits:** The computational part (without the final measurement) of a quantum algorithm can be described by a quantum circuit. A quantum circuit \(U\) consists of a sequence (product) of _quantum logic gates_\(U_{i}\), i.e., \(U=U_{d}\cdots U_{1}\) ( See the orange boxes of the quantum circuit in Fig. 1). Here \(d\) is the depth of the circuit \(U\), and each \(U_{i}\) is mathematically modeled by a unitary matrix. For an input \(n\)-qubit state \(\rho\), the output of the circuit is a quantum state of the same size:
\[\rho^{\prime}=U\rho U^{\dagger}. \tag{1}\]
Example 2.1.: A set of typical quantum logic gates used in this paper is listed in the following.
1. \(1\)-qubit (parameterized) logic gates (\(2\times 2\) unitary matrices): \[X =\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\qquad\qquad Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix}\quad Z=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\] \[H =\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\quad S=\begin{pmatrix}1&0\\ 0&i\end{pmatrix}\quad T=\begin{pmatrix}1&0\\ 0&e^{i\pi/4}\end{pmatrix}.\]
2. \(1\)-qubit rotation gates that are rotation operators along the \(x,y,z\)-axis by angle \(\theta\), respectively: \[R_{x}(\theta) =e^{-i\theta X/2}=\cos\frac{\theta}{2}I-i\sin\frac{\theta}{2}X= \begin{pmatrix}\cos\frac{\theta}{2}&-i\sin\frac{\theta}{2}\\ -i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}\] \[R_{y}(\theta) =e^{-i\theta Y/2}=\cos\frac{\theta}{2}I-i\sin\frac{\theta}{2}Y= \begin{pmatrix}\cos\frac{\theta}{2}&-\sin\frac{\theta}{2}\\ \sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}\] \[R_{z}(\theta) =e^{-i\theta Z/2}=\cos\frac{\theta}{2}I-i\sin\frac{\theta}{2}Z= \begin{pmatrix}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{pmatrix}.\] Rotation gates \(R_{x}(\theta),R_{y}(\theta),R_{z}(\theta)\) are widely used to encode classical data into quantum states and also to construct quantum machine learning models (parameterized quantum circuits). These will be detailed in the later discussion.
3. \(2\)-qubit Controlled-U gates (\(4\times 4\) unitary matrices): For any \(1\)-qubit logic gate \(U\), we can get a \(2\)-qubit logic gate -- controlled-\(U\) (CU) gate, applying \(U\) on the second qubit (the target qubit) if and only if the first qubit (the control qubit) is \(|1\rangle\). See the following instances:
 1. \(\mathrm{CNOT}\) gate: the \(\mathrm{CX}\) gate, also known as the controlled NOT (\(\mathrm{CNOT}\)) gate, is \[\mathrm{CX}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}.\]
 2. \(\mathrm{CZ}\) gate: \[\mathrm{CZ}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&-1\end{pmatrix}.\]
 3. Controlled parameterized gates: for example, the controlled Pauli \(X\) rotation gate with rotation angle \(\theta\) is \[\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&\cos\frac{\theta}{2}&-i\sin\frac{\theta}{2}\\ 0&0&-i\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}.\]
In quantum circuits, each quantum gate \(U_{i}\) only non-trivially operates on one or two qubits. For example, if \(U_{i}\) represents a Hadamard gate on the first qubit, then \(U_{i}=H\otimes I_{n-1}\), where \(I_{n-1}\) is the \(2^{n-1}\times 2^{n-1}\) identity matrix applied on the remaining \(n-1\) qubits. See the gates in Figure 2.
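As a numerical illustration of Example 2.1 and the remark above (ours, not from the paper), the sketch below builds \(R_x(\theta)\) and CNOT, checks unitarity, and embeds a Hadamard acting on the first of \(n\) qubits as \(H\otimes I_{n-1}\).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)
H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def rx(theta):
    # rotation about the x-axis: Rx(theta) = cos(theta/2) I - i sin(theta/2) X
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * X

# CNOT: apply X on the target qubit iff the control qubit is |1>
CNOT = np.block([[I2, np.zeros((2, 2))], [np.zeros((2, 2)), X]])

# Hadamard on the first of n qubits: U = H tensor I_{n-1}
n = 3
U = np.kron(H, np.eye(2 ** (n - 1)))

for G in (rx(0.7), CNOT, U):
    assert np.allclose(G.conj().T @ G, np.eye(G.shape[0]))  # unitarity
```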
Figure 1: The Computational Model of Quantum Algorithms.
Figure 2: Examples of Quantum Machine Learning and Supremacy Algorithms
In the current NISQ era, a (noiseless) quantum circuit \(U\) can only have a noisy implementation modeled by a linear mapping \(\mathcal{E}\) from \(\mathcal{D}(\mathcal{H})\) to \(\mathcal{D}(\mathcal{H})\) satisfying the following two conditions:
* \(\mathcal{E}\) is trace-preserving: \(\operatorname{tr}(\mathcal{E}(\rho))=\operatorname{tr}(\rho)\) for all \(\rho\in\mathcal{D}(\mathcal{H})\);
* \(\mathcal{E}\) is completely positive: for any Hilbert space \(\mathcal{H}^{\prime}\), the trivially extended operator \(\operatorname{id}_{\mathcal{H}^{\prime}}\otimes\mathcal{E}\) maps density operators to density operators on \(\mathcal{H}^{\prime}\otimes\mathcal{H}\), where \(\operatorname{id}_{\mathcal{H}^{\prime}}\) is the identity map on \(\mathcal{H}^{\prime}\): \(\operatorname{id}_{\mathcal{H}^{\prime}}(\rho)=\rho\) for all \(\rho\in\mathcal{D}(\mathcal{H}^{\prime})\).
Such a mapping \(\mathcal{E}\) is called a _super-operator_ in the field of quantum computing and admits a _Kraus matrix form_[33]: there exists a finite set \(\{E_{k}\}_{k\in\mathcal{K}}\) of matrices on \(\mathcal{H}\) such that
\[\mathcal{E}(\rho)=\sum_{k\in\mathcal{K}}E_{k}\rho E_{k}^{\dagger}\quad\text{ with }\sum_{k\in\mathcal{K}}E_{k}^{\dagger}E_{k}=I_{n},\]
where \(\{E_{k}\}_{k\in\mathcal{K}}\) is called _Kraus matrices_ of \(\mathcal{E}\). In this case, \(\mathcal{E}\) is often represented as \(\mathcal{E}=\{E_{k}\}_{k\in\mathcal{K}}\). Thus, for an input state \(\rho\) fed into the noisy quantum circuit \(\mathcal{E}\), the output state is:
\[\rho^{\prime}=\mathcal{E}(\rho). \tag{2}\]
If \(\mathcal{E}\) degenerates to a unitary matrix \(U\), i.e., \(\mathcal{E}=\{U\}\), then the above equation (evolution) is reduced to the noiseless case in Eq. (1). Briefly, we write such \(\mathcal{E}=\{U\}\) as \(\mathcal{U}=\{U\}\) representing noiseless quantum circuit \(U\).
Similarly to a noiseless quantum circuit \(U\), a noisy quantum circuit \(\mathcal{E}\) also consists of a sequence (mapping composition) of quantum logic (noisy) gates \(\{\mathcal{E}_{i}\}\), i.e., \(\mathcal{E}=\mathcal{E}_{d}\circ\cdots\circ\mathcal{E}_{1}\), where each \(\mathcal{E}_{i}\) is either a noiseless quantum logic gate or a noisy one (e.g., the red dashed boxes of the noisy quantum circuit in Fig. 1). See the following examples of quantum noisy logic gates in a mathematical way.
**Example 2.2**.: Let us consider the following noisy form of a \(1\)-qubit gate \(U\):
\[\mathcal{E}_{U,p}(\rho)=(1-p)\rho+pU\rho U^{\dagger},\quad\forall\rho\in\mathcal{D}(\mathcal{H}),\]
where \(0\leq p\leq 1\) is a probability measuring the noise level (effect) and \(U\) is a unitary matrix. Then \(\mathcal{E}_{U,p}\) consists of Kraus matrices \(\{\sqrt{1-p}\,I,\sqrt{p}\,U\}\). Such \(\mathcal{E}_{U,p}\) can be used to model several typical \(1\)-qubit noises, depending on the choice of \(U\): \(U=X\) for bit flip, \(U=Z\) for phase flip and \(U=Y=iXZ\) for bit-phase flip [33, Section 8.3]. The depolarizing noise combines these three noises. It is represented by
\[\mathcal{E}_{D,p}=\{\sqrt{1-p}\,I,\sqrt{\frac{p}{3}}X,\sqrt{\frac{p}{3}}Y,\sqrt{\frac{p}{3}}Z\},\]
or equivalently
\[\mathcal{E}_{D,p}(\rho)=(1-p)\rho+\frac{p}{3}(X\rho X+Y\rho Y+Z\rho Z),\quad\forall\rho\in\mathcal{D}(\mathcal{H}).\]
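As a sketch of how Eq. (2) and the Kraus form work in practice (our illustration, not from the paper; the input state and noise probability are arbitrary), the depolarizing channel of Example 2.2 can be applied to a \(1\)-qubit density matrix and checked for trace preservation:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def depolarize(rho, p):
    """E_{D,p}(rho) = (1-p) rho + p/3 (X rho X + Y rho Y + Z rho Z), via Kraus matrices."""
    kraus = [np.sqrt(1 - p) * I2,
             np.sqrt(p / 3) * X, np.sqrt(p / 3) * Y, np.sqrt(p / 3) * Z]
    return sum(E @ rho @ E.conj().T for E in kraus)

rho = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
out = depolarize(rho, 0.1)
assert np.isclose(np.trace(out).real, 1.0)        # trace-preserving
print(np.round(out, 3))                           # slightly mixed output state
```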
**Quantum Measurement:** At the end of each quantum algorithm, a _quantum measurement_ is set to extract the computational outcome (classical information). Such information is a probability distribution over the possible outcomes of the measurement. Mathematically, a quantum measurement is modeled by a set \(\{M_{k}\}_{k\in\mathcal{O}}\) of positive semi-definite matrices on its state (Hilbert) space \(\mathcal{H}\) with \(\sum_{k}M_{k}=I\), where \(\mathcal{O}\) is a finite set of the measurement outcomes. This observing process is probabilistic: if the output of the quantum circuit before the measurement is quantum state \(\rho\), then a measurement outcome \(k\) is obtained with probability
\[p_{k}=\operatorname{tr}(M_{k}\rho). \tag{3}\]
Such measurements are known as _Positive Operator-Valued Measures_ and are widely used to describe the probabilities of outcomes without concern for the post-measurement quantum states (note that after the measurement, the state collapses (changes) depending on the measurement outcome \(k\), which is fundamentally different from classical computation).
By summarizing the above ideas, we obtain a general model of quantum algorithms as depicted in Fig. 1:
**Definition 2.3**.: A quantum algorithm \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) is a randomized mapping \(\mathcal{A}:\mathcal{D}(\mathcal{H})\rightarrow\mathcal{D}(\mathcal{O})\) defined by
\[\mathcal{A}(\rho)=\{\operatorname{tr}(M_{k}\mathcal{E}(\rho))\}_{k\in \mathcal{O}}\quad\forall\rho\in\mathcal{D}(\mathcal{H}),\]
where:
1. \(\mathcal{E}\) is a super-operator on Hilbert space \(\mathcal{H}\) representing a noisy quantum circuit;
2. \(\{M_{k}\}_{k\in\mathcal{O}}\) is a quantum measurement on \(\mathcal{H}\) with \(\mathcal{O}\) being the set of measurement outcomes (classical information);
3. \(\mathcal{D}(\mathcal{O})\) stands for the set of probability distributions over \(\mathcal{O}\).
In particular, if \(\mathcal{E}\) represents a noiseless quantum circuit \(U\) written as \(\mathcal{U}=\{U\}\), then we call \(\mathcal{A}=(\mathcal{U},\{M_{k}\}_{k\in\mathcal{O}})\) a noiseless quantum algorithm.
According to the above definition, a quantum algorithm \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) is a randomized mapping, and thus we can estimate not only the distribution \(\{\operatorname{tr}(M_{k}\mathcal{E}(\rho))\}_{k\in\mathcal{O}}\) but also the summation \(\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\rho))\) for any subset \(\mathcal{S}\subseteq\mathcal{O}\) in a statistical way. This observation is essential in defining differential privacy for quantum algorithms in the next section.
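Definition 2.3 translates directly into code; the following sketch (ours, with illustrative inputs) composes a noisy circuit given by Kraus matrices with a measurement \(\{M_k\}\) to obtain the outcome distribution \(\mathcal{A}(\rho)=\{\operatorname{tr}(M_k\mathcal{E}(\rho))\}_k\).

```python
import numpy as np

def apply_channel(kraus, rho):
    # E(rho) = sum_k E_k rho E_k^dagger
    return sum(E @ rho @ E.conj().T for E in kraus)

def quantum_algorithm(kraus, povm, rho):
    """A(rho) = { tr(M_k E(rho)) }_k, a probability distribution over outcomes."""
    out = apply_channel(kraus, rho)
    return np.array([np.trace(M @ out).real for M in povm])

# Illustrative 1-qubit instance: noiseless Hadamard circuit, computational-basis measurement
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # {M_0, M_1} with M_0 + M_1 = I
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|
print(quantum_algorithm([H], povm, rho0))           # ~ [0.5, 0.5]
```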
**Quantum Encoding:** To make quantum algorithms useful for solving practical classical problems, the first step is to encode classical data into quantum states. There are multiple encoding methods, but _amplitude encoding_ and _angle encoding_ are two of the most widely used.
* **Amplitude encoding** represents a vector \(\bar{v}\) as a quantum state \(|\bar{v}\rangle\), using the amplitudes of the computational basis states \(|i\rangle\): \[\bar{v}=(v_{1},v_{2},\ldots,v_{N})\rightarrow|\bar{v}\rangle=\sum_{i=1}^{N}\frac{v_{i}}{\|\bar{v}\|}|i\rangle\] where \(\|\bar{v}\|\) normalizes the state. This encoding uses only \(\log_{2}N\) qubits to represent an \(N\)-dimensional vector. However, preparing the state \(|\bar{v}\rangle\) requires a deep, complex circuit beyond current NISQ hardware.
* **Angle encoding** encodes a vector \(\bar{v}\) by rotating each qubit by an angle corresponding to one element of \(\bar{v}\): \[\bar{v}=(v_{1},v_{2},\ldots,v_{n})\rightarrow|\bar{v}\rangle=\bigotimes_{j=1}^{n }R(v_{j})\,|0\rangle\] where \(R(v_{j})\) rotates qubit \(j\) by angle \(v_{j}\) along some axis, i.e., \(R\) can be one of \(R_{x},R_{y},R_{z}\). This encoding uses \(n\) qubits for an \(n\)-dimensional vector but only requires simple \(1\)-qubit rotation gates. As an example, encoding \(\bar{v}=(\pi,\pi,\pi)\) via \(R_{y}\) rotations yields \(|\bar{v}\rangle=|1,1,1\rangle=|1\rangle\otimes|1\rangle\otimes|1\rangle\). A key advantage of angle
encoding is its parallelizability. Each qubit undergoes a rotation gate simultaneously, enabling encoding in constant time as shown in the following. This makes angle encoding well-suited for the current NISQ devices. Therefore, angle encoding is commonly used in the experimental implementation of quantum algorithms on existing quantum computers for solving classical computational tasks.
With the above encoding methods for pure state \(|\bar{e}\rangle\), we can simply obtain a mixed state to carry the classical data \(\bar{e}\):
\[\rho_{\bar{e}}=|\bar{e}\rangle\langle\bar{e}|.\]
In this paper, we consider the differential privacy of quantum algorithms on NISQ computers. As such, all of our experiments in the Evaluation section (Section 5) use angle encoding to encode classical data, including credit records, public adult income dataset, and transactions dataset.
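A minimal sketch of angle encoding (ours; the choice of \(R_y\) rotations and the sample vector are illustrative) prepares \(|\bar v\rangle=\bigotimes_j R_y(v_j)|0\rangle\) and the corresponding mixed-state form:

```python
import numpy as np
from functools import reduce

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(v):
    """Angle encoding: |v> = tensor_j Ry(v_j)|0>, one qubit per feature."""
    ket0 = np.array([1.0, 0.0])
    return reduce(np.kron, [ry(x) @ ket0 for x in v])

psi = angle_encode([np.pi, np.pi, np.pi])
print(np.round(psi, 6))              # all amplitude on |1,1,1>
rho = np.outer(psi, psi.conj())      # mixed-state form |v><v|
```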
## 3. Formalizing differential privacy
In this section, we introduce the differential privacy for quantum algorithms and clarify the relationship between it and the differential privacy for quantum circuits defined in (Kang and Chuang, 2017). For the convenience of the reader, we put all proofs of theoretical results in the appendix.
Let us start by defining the differential privacy for quantum algorithms:
Definition 3.1 (Differential Privacy for Quantum Algorithms).: Suppose we are given a quantum algorithm \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) on a Hilbert space \(\mathcal{H}\), a distance metric \(D(\cdot,\cdot)\) on \(\mathcal{D}(\mathcal{H})\), and three small enough threshold values \(\epsilon,\delta,\eta\geq 0\). Then \(\mathcal{A}\) is said to be \((\epsilon,\delta)\)-differentially private within \(\eta\) if for any quantum states \(\rho,\sigma\in\mathcal{D}(\mathcal{H})\) with \(D(\rho,\sigma)\leq\eta\), and for any subset \(\mathcal{S}\subseteq\mathcal{O}\), we have
\[\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\rho))\leq\exp( \varepsilon)\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\sigma))+\delta. \tag{4}\]
In particular, if \(\delta=0\), we say that \(\mathcal{A}\) is \(\epsilon\)-differentially private within \(\eta\).
The above definition is essentially a quantum generalization of differential privacy for randomized algorithms (Kang and Chuang, 2017). Thus, it shares the intuition of differential privacy discussed in (Kang and Chuang, 2017): an algorithm must behave similarly on similar input states (considered as neighbors in the state space). In the quantum case, we have:
1. \(\eta\) defines the (noisy) neighboring relation between the two input states \(\rho\) and \(\sigma\), i.e., \(D(\rho,\sigma)\leq\eta\);
2. \(\epsilon\) and \(\delta\) through Eq.(4) guarantee the similarity between the outputs of \(\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\rho))\) and \(\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\sigma))\);
3. Since a quantum algorithm is a randomized function, it is reasonable to consider the probability \(\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\rho))\) that the output is within a subset \(\mathcal{S}\subseteq\mathcal{O}\) rather than an exact value of \(\operatorname{tr}(M_{k}\mathcal{E}(\rho))\). The arbitrariness of \(\mathcal{S}\subseteq\mathcal{O}\) in Eq.(4) ensures the differential privacy in randomized functions as the same as in the classical case (Kang and Chuang, 2017).
Consequently, quantum differential privacy ensures that the indistinguishability of any neighboring quantum states is preserved by quantum algorithms. Specifically, as shown in Fig. 3, it is hard for an adversary to determine whether the input state of the algorithm was indeed \(\rho\) or a neighboring state \(\sigma\), in the sense that the \((\varepsilon,\delta)\)-bounded difference between \(\rho\) and \(\sigma\) in Eq. (4) cannot be easily inferred by observing the output measurement distribution of the algorithm. Furthermore, quantum encoding allows quantum states to encode classical data, so \(\rho\) and \(\sigma\) can be regarded as \(\rho_{\bar{v}}\) and \(\sigma_{\bar{w}}\), which encode classical vectors \(\bar{v}\) and \(\bar{w}\). The distance bound \(\eta\) between \(\rho_{\bar{v}}\) and \(\sigma_{\bar{w}}\) can then be used to represent the single-element difference between the classical data \(\bar{v}\) and \(\bar{w}\), so the classical neighboring relation is preserved by its quantum counterpart. Therefore, quantum differential privacy can be used as a proxy to ensure the original motivating privacy that the presence or absence of any individual data record will not significantly affect the outcome of an analysis. A concrete example is provided later in this section to detail this. Furthermore, this idea will be utilized in our case studies in Section 5 to demonstrate how quantum noise can enhance the privacy of encoded classical data.
It is easy to see that when considering noiseless trivial quantum circuits (i.e., \(\mathcal{E}=\operatorname{id}_{\mathcal{H}}\), the identity map on \(\mathcal{H}\)), the above setting degenerates to Aaronson and Rothblum's framework (Aaronson and Rothblum, 2017) where an elegant connection between quantum differential privacy and gentle measurements was established. In this paper, we consider a more general class of measurements, and a connection between quantum measurements and the verification of quantum differential privacy under quantum noise is revealed.
By Definition 3.1, if a quantum algorithm \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) is not \((\varepsilon,\delta)\)-differentially private, then there exists at least one pair of quantum states \((\rho,\sigma)\) with the distance of them being within \(\eta\), i.e., \(D(\rho,\sigma)\leq\eta\), and a subset \(\mathcal{S}\subseteq\mathcal{O}\) such that
\[\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\rho))>\exp( \varepsilon)\sum_{k\in\mathcal{S}}\operatorname{tr}(M_{k}\mathcal{E}(\sigma)) +\delta. \tag{5}\]
Such a pair of quantum states \((\rho,\sigma)\) is called an \((\varepsilon,\delta)\)_-differentially private counterexample_ of \(\mathcal{A}\) within \(\eta\).
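For a fixed pair of input states, checking Eq. (5) only requires the two output distributions: the subset \(\mathcal{S}\) maximizing the violation consists of exactly those outcomes \(k\) with \(\operatorname{tr}(M_k\mathcal{E}(\rho))>e^{\varepsilon}\operatorname{tr}(M_k\mathcal{E}(\sigma))\). The sketch below (ours, with made-up distributions) applies this observation.

```python
import numpy as np

def dp_violated(p, q, eps, delta):
    """True iff some subset S of outcomes satisfies sum_S p_k > exp(eps) * sum_S q_k + delta.
    The worst subset collects all outcomes with p_k > exp(eps) * q_k."""
    gap = np.asarray(p, float) - np.exp(eps) * np.asarray(q, float)
    return gap[gap > 0].sum() > delta

# p = A(rho), q = A(sigma): outcome distributions for two neighbouring input states
p = [0.70, 0.20, 0.10]
q = [0.40, 0.35, 0.25]
print(dp_violated(p, q, eps=0.1, delta=0.05))   # True -> (rho, sigma) is a counterexample
```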
As said before, the notion of differential privacy for (noisy) quantum circuits has been defined in the previous works (Kang and Chuang, 2017; Chuang and Chuang, 2017). Using Definition 3.1, it can be reformulated as the following:
Definition 3.2 (Differential Privacy for Quantum Circuits).: Suppose we are given a (noisy) quantum circuit \(\mathcal{E}\) on a Hilbert space \(\mathcal{H}\), a distance metric \(D(\cdot,\cdot)\) on \(\mathcal{D}(\mathcal{H})\), and three small enough threshold values \(\varepsilon,\delta,\eta\geq 0\). Then \(\mathcal{E}\) is said to be \((\varepsilon,\delta)\)-differentially private within \(\eta\) if for any quantum measurement \(\{M_{k}\}_{k\in\mathcal{O}}\), the algorithm obtained from \(\mathcal{E}\) by adding the measurement at the end, i.e. \((\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\), is \((\varepsilon,\delta)\)-differentially private within \(\eta\).
The relationship between differential privacy for quantum algorithms and quantum circuits can be visualized as Fig 4. More precisely, the differential privacy of a circuit \(\mathcal{E}\) implies that of algorithm \((\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) for any measurement \(\{M_{k}\}_{k\in\mathcal{O}}\). Conversely, for every measurement \(\{M_{k}\}_{k\in\mathcal{O}}\), a counterexample of algorithm \((\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) is also a counterexample of circuit \(\mathcal{E}\).
_Choice of Distances_: The reader may have noticed that the above definition of differential privacy for quantum algorithms is similar to that for classical datasets. But an intrinsic distinction between them comes from the different notions of the neighboring relation. In the classical case, the state space of classical bits is discrete and two datasets are considered as neighbors if they differ on a single bit. In the quantum case, two different neighboring relations for defining quantum differential privacy have been adopted in the literature:
1. As the state space of quantum bits is a continuum and thus uncountably infinite, a common way in the field of quantum computing to define a neighboring relation is to introduce a distance \(D\) that measures the closeness of two quantum states and set a bound \(\eta\) on the distance. In [30] and several more recent papers [32, 34], trace distance is used to measure closeness (neighborhood). Trace distance is essentially a generalization of the total variation distance between probability distributions. It has been widely used by the quantum computation and quantum information community [33, Section 9.2]. Formally, for two quantum states \(\rho,\sigma\in\mathcal{D}(\mathcal{H})\), \[D(\rho,\sigma)=\frac{1}{2}\text{tr}(|\rho-\sigma|),\] where \(|\rho-\sigma|=\Delta_{+}+\Delta_{-}\) if \(\rho-\sigma=\Delta_{+}-\Delta_{-}\) with \(\text{tr}(\Delta_{+}\Delta_{-})=0\) and \(\Delta_{\pm}\) being positive semi-definite matrix.
2. In [31], a way more similar to the setting of the classical database is introduced, where the neighboring relationship of two quantum states \(\rho\) and \(\sigma\) means that it's possible to reach either \(\sigma\) from \(\rho\), or \(\rho\) from \(\sigma\), by performing a quantum operation (super-operator) on a single quantum bit only.
Let us consider a simple example about 2-qubit quantum states to further clarify the difference between the above two approaches to defining quantum differential privacy. This example shows that the definition through approach (1) is more suitable for the setting of _noisy_ quantum algorithms.
**Example 3.3**: Consider the 2-qubit state \(|0,1\rangle\) (its mixed-state form is \(\rho=|0\rangle\langle 0|\otimes|1\rangle\langle 1|\)). Under the bit-flip noise with probability \(p_{1}\) (defined in Example 2.2) on the first qubit, the state \(\rho\) will be changed to
\[\sigma_{1} =\mathcal{E}_{X,p_{1}}(|0\rangle\langle 0|)\otimes|1\rangle \langle 1|\] \[=[(1-p_{1})|0\rangle\langle 0|+p_{1}|1\rangle\langle 1|] \otimes|1\rangle\langle 1|.\]
According to the above approach (2) \(\rho\) and \(\sigma_{1}\) are neighboring. They are also neighboring according to approach (1) if \(p_{1}\leq\eta\).
However, quantum noise cannot ideally be restricted to a single qubit, but randomly affects other qubits in the system. In this case, if the second qubit of \(\rho\) is simultaneously subject to bit-flip noise with probability \(p_{2}\), then the state \(\rho\) will be further transformed into the following state:
\[\sigma_{2} =\mathcal{E}_{X,p_{1}}(|0\rangle\langle 0|)\otimes\mathcal{E}_{X,p_{ 2}}(|1\rangle\langle 1|)\] \[=[(1-p_{1})|0\rangle\langle 0|+p_{1}|1\rangle\langle 1|] \otimes[(1-p_{2})|1\rangle\langle 1|+p_{2}|0\rangle\langle 0|].\]
It is easy to see that \(\rho\) and \(\sigma_{2}\) are not neighbors under approach (2) even if the probability \(p_{2}\) is extremely small, while they are neighboring under approach (1) provided \(p_{1}+p_{2}-p_{1}p_{2}\leq\eta\).
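The distances claimed in Example 3.3 can be checked numerically with the trace distance; the sketch below (ours, with illustrative noise probabilities) computes \(D(\rho,\sigma_1)=p_1\) and \(D(\rho,\sigma_2)=p_1+p_2-p_1p_2\).

```python
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = 1/2 * sum of |eigenvalues| of the Hermitian matrix rho - sigma
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()

def bit_flip(rho, p):
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    return (1 - p) * rho + p * (X @ rho @ X)

k0 = np.diag([1.0, 0.0]).astype(complex)   # |0><0|
k1 = np.diag([0.0, 1.0]).astype(complex)   # |1><1|
p1, p2 = 0.05, 0.03

rho    = np.kron(k0, k1)                   # |0,1><0,1|
sigma1 = np.kron(bit_flip(k0, p1), k1)
sigma2 = np.kron(bit_flip(k0, p1), bit_flip(k1, p2))

print(trace_distance(rho, sigma1))         # ~ p1
print(trace_distance(rho, sigma2))         # ~ p1 + p2 - p1*p2
```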
Targeting the applications of detecting violations of differential privacy of quantum algorithms in the current NISQ era where noises are unavoidable, we follow approach (1) in this paper. In particular, \(D(\cdot,\cdot)\) in Definition 3.1 is chosen to be the trace distance, which is one of the more popular distances in the quantum computation and information literature.
_Remark._ As the trace distance of any two quantum states is within 1, the quantum differential privacy through approach (1) implies that through approach (2) with \(\eta=1\). However, the opposite direction does not hold.
Furthermore, trace distance can maintain the neighboring relation between classical data vectors that differ by a single element. This allows quantum differential privacy guarantees on quantum states to be transferred back to guarantees on the privacy of the encoded classical data.
Figure 4: The relationship between the differential privacy (DP) for quantum circuits (QCs) and quantum algorithms (QAs)
Figure 3: Quantum Differential Privacy
**Example 3.4**.: Consider two neighboring classical data vectors \(\bar{v}\) and \(\bar{w}\) that differ only in the \(j^{th}\) element. Using angle encoding, they can be encoded into quantum states \(\rho\) and \(\sigma\), respectively. It can then be computed that:
\[D(\rho,\sigma)=\sqrt{1-\left\langle 0\right|R_{j}(\bar{v}_{j}-\bar{w}_{j})\left|0\right\rangle\left\langle 0\right|R_{j}(\bar{w}_{j}-\bar{v}_{j})\left|0\right\rangle}\]
where \(R_{j}\) is the rotation gate used to encode the \(j^{th}\) element of \(\bar{v}\) and \(\bar{w}\). In particular, for binary vectors \(\bar{v},\bar{w}\in\{0,1\}^{n}\), the trace distance between the corresponding quantum states \(\rho\) and \(\sigma\) satisfies \(D(\rho,\sigma)\leq\sin\frac{1}{2}\). This upper bound is attained when \(R_{j}\) is chosen to be a rotation about the \(x\)- or \(y\)-axis, i.e., \(R_{x}\) or \(R_{y}\). Therefore, by setting \(\eta=\sin\frac{1}{2}\) in the definition of quantum differential privacy (Definition 3.1), the neighboring relation on classical data (two vectors differing in a single element) is transferred to a neighboring relation between the corresponding quantum states under trace distance. Consequently, by ensuring that the encoded quantum states satisfy quantum differential privacy, the privacy of the original classical data is also ensured when it is used in quantum algorithms.
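As a numerical sanity check of the \(\sin\frac{1}{2}\) bound, the following NumPy sketch (ours, assuming \(R_{y}\) encoding and using hypothetical helper names) encodes two binary vectors that differ in a single element and computes the trace distance of the resulting product states.

```python
import numpy as np
from functools import reduce

def ry(theta):
    # Rotation about the y-axis: Ry(theta) = exp(-i * theta * Y / 2)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def angle_encode(vec):
    # Each element v_j is encoded as Ry(v_j)|0>; the full state is the tensor product.
    kets = [ry(v) @ np.array([1.0, 0.0]) for v in vec]
    return reduce(np.kron, kets)

def trace_distance_pure(psi, phi):
    # For pure states, D = sqrt(1 - |<psi|phi>|^2).
    return np.sqrt(1 - abs(np.vdot(psi, phi)) ** 2)

v = np.array([0, 1, 1])   # binary data vectors differing only in the first element
w = np.array([1, 1, 1])
psi, phi = angle_encode(v), angle_encode(w)
print(trace_distance_pure(psi, phi), np.sin(0.5))   # both ~= 0.4794
```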
**Noisy Post-processing**: Similarly to the cases of classical computing [21] and noisy quantum circuits [30], the differential privacy of noiseless quantum algorithms is immune to noisy post-processing: without additional knowledge about a noiseless quantum algorithm, any quantum noise applied to the output states of the algorithm does not increase the privacy loss.
**Theorem 3.5**.: _Let \(\mathcal{A}=(\mathcal{U},\{M_{i}\}_{i\in\mathcal{O}})\) be a noiseless quantum algorithm. Then for any (unknown) quantum noise represented by a super-operator \(\mathcal{F}\), if \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differentially private, then \((\mathcal{F}\circ\mathcal{U},\{M_{i}\}_{i\in\mathcal{O}})\) is also \((\varepsilon,\delta)\)-differentially private._
However, the above theorem does not hold for a general noisy quantum algorithm \(\mathcal{A}\) in the sense that unitary \(\mathcal{U}\) is replaced by a (noisy) quantum operation modeled as a super-operator \(\mathcal{E}\). With the help of our main theorem (Theorem 4.1) introduced later for differential privacy verification, a concrete example showing this fact is provided as Example 4.3 at the end of the next section.
**Composition Theorem**: In order to handle larger quantum algorithms in a modular way, a series of composition theorems for differential privacy of classical algorithms have been established [21]. Some of them can be generalized into the quantum case. Given two quantum algorithms \(\mathcal{A}_{k}=(\mathcal{E}_{k},\{M_{k,j_{k}}\}_{j_{k}\in\mathcal{O}_{k}})\)\((k=1,2)\), their parallel composition is \(\mathcal{A}_{\mathcal{S}_{1}}\otimes\mathcal{A}_{\mathcal{S}_{2}}=(\mathcal{E}_{1} \otimes\mathcal{E}_{2},\{M_{1,\mathcal{S}_{1}}\otimes M_{2,\mathcal{S}_{2}}, I-M_{1,\mathcal{S}_{1}}\otimes M_{2,\mathcal{S}_{2}}\})\) for some subsets \(\mathcal{S}_{k}\subseteq\mathcal{O}_{k}(k=1,2)\), where \(M_{k,\mathcal{S}_{k}}=\sum_{j_{k}\in\mathcal{S}_{k}}M_{k,j_{k}}\). Then we have:
**Theorem 3.6**.: _For any subsets \(\mathcal{S}_{k}\subseteq\mathcal{O}_{k}(k=1,2)\),_
1. _if_ \(\mathcal{A}_{k}\) _is_ \(\varepsilon_{k}\)_-differentially private within_ \(\eta_{k}\) \((k=1,2)\)_, then_ \(\mathcal{A}_{\mathcal{S}_{1}}\otimes\mathcal{A}_{\mathcal{S}_{2}}\) _is_ \((\varepsilon_{1}+\varepsilon_{2})\)_-differentially private within_ \(\eta_{1}\eta_{2}\)_;_
2. _if_ \(\mathcal{A}_{k}\) _is_ \((\varepsilon_{k},\delta_{k})\)_-differentially private within_ \(\eta_{k}\)__\((k=1,2)\)_, then_ \(\mathcal{A}_{\mathcal{S}_{1}}\otimes\mathcal{A}_{\mathcal{S}_{2}}\) _is_ \((\varepsilon_{1}+\varepsilon_{2},\delta_{1}+\delta_{2})\)_-differentially private within_ \(\eta_{1}\eta_{2}\)_._
_Remark_. There are quite a few papers on the robustness of quantum machine learning [35; 36]. In these papers, the quantum robustness of a quantum classifier (which is mathematically a deterministic function) is the ability to make correct classifications under small perturbations of a given input state (a local property), while quantum differential privacy requires that a quantum algorithm (which is mathematically a randomized function) behave similarly on all similar input states (a global property). Therefore, quantum differential privacy and robustness mainly differ in the functions studied and the type of property. However, a deeper connection between quantum differential privacy and robustness may be built if we make some generalizations. In classical machine learning, a trade-off between differential privacy and robustness has been found, and several similarities between them have been reported when the definition of robustness is generalized to randomized functions and Rényi differential privacy is considered [37]. However, this is still unclear in the quantum domain, as the study of trustworthy quantum machine learning is at a very early stage. We are interested in exploring this as the next step.
## 4. Differential Privacy Verification
In this section, we develop an algorithm for the differential privacy verification of quantum algorithms. Formally, the major problem addressed in this paper is the following:
**Problem 1** (Differential Privacy Verification Problem).: _Given a quantum algorithm \(\mathcal{A}\) and \(1\geq\varepsilon,\delta,\eta\geq 0\), check whether or not \(\mathcal{A}\) is \((\varepsilon,\delta)\)-differentially private within \(\eta\). If not, then (at least) one counterexample of quantum states \((\rho,\sigma)\) is provided._
To solve this verification problem, we first find a necessary and sufficient condition for the differential privacy. Specifically, we show that the differential privacy of a quantum algorithm can be characterized by a system of inequalities. To this end, let us introduce several notations. For a positive semi-definite matrix \(M\), we use \(\lambda_{max}(M)\) and \(\lambda_{min}(M)\) to denote the maximum and minimum eigenvalues of \(M\), respectively. For a (noisy) quantum circuit modeled by a linear map \(\mathcal{E}\) in the Kraus matrix form \(\mathcal{E}=\{E_{k}\}_{k\in\mathcal{K}}\), the dual mapping of \(\mathcal{E}\), denoted as \(\mathcal{E}^{\dagger}\), is defined by
\[\mathcal{E}^{\dagger}(M)=\sum_{k\in\mathcal{K}}E_{k}^{\dagger}ME_{k}\text{ for any positive semi-definite matrix }M.\]
**Theorem 4.1** (Sufficient and Necessary Condition).: _Let \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) be a quantum algorithm. Then:_
1. \(\mathcal{A}\) _is_ \((\varepsilon,\delta)\)_-differentially private within_ \(\eta\) _if and only if_ \[\delta\geq\max_{\mathcal{S}\subseteq\mathcal{O}}\delta_{\mathcal{S}}\] (6) _where_ \[\delta_{\mathcal{S}}=\eta\lambda_{\max}(M_{\mathcal{S}})-(e^{\varepsilon}+\eta-1)\lambda_{\min}(M_{\mathcal{S}}),\] _and matrix_ \(M_{\mathcal{S}}=\sum_{k\in\mathcal{S}}\mathcal{E}^{\dagger}(M_{k})\)_._
2. _In particular,_ \(\mathcal{A}\) _is_ \(\varepsilon\)_-differentially private within_ \(\eta\) _if and only if_ \(\varepsilon\geq\varepsilon^{*}\)_, the optimal bound (minimum value) of_ \(\varepsilon\)_, where_ \[\varepsilon^{*}=\ln[(\kappa^{*}-1)\eta+1]\quad\text{ and }\quad\kappa^{*}=\max_{\mathcal{S}\subseteq\mathcal{O}}\kappa(M_{\mathcal{S}}),\] \[\kappa(M_{\mathcal{S}})=\frac{\lambda_{\max}(M_{\mathcal{S}})}{\lambda_{\min}(M_{\mathcal{S}})}\text{ is the condition number of matrix }M_{\mathcal{S}},\text{ and if }\lambda_{\min}(M_{\mathcal{S}})=0,\text{ then }\kappa(M_{\mathcal{S}})=+\infty.\]
By the above theorem, we see that the verification problem (i.e. Problem 1) can be tackled by solving the system (6) of inequalities. Consequently, it can be solved by computing the maximum and minimum eigenvalues (and their eigenvectors) of positive semi-definite matrix \(M_{\mathcal{S}}\). In particular, for the case of \(\varepsilon\)-differential privacy, we have:
1. the maximum value \(1\leq\kappa^{*}\leq+\infty\) of the condition numbers of \(M_{\mathcal{S}}\) over \(\mathcal{S}\subseteq\mathcal{O}\) measures the \(\varepsilon\)-differential privacy of the noisy quantum algorithm \(\mathcal{A}=(\mathcal{E},\{M_{k}\}_{k\in\mathcal{O}})\) (for fixed \(\eta\)). For the extreme cases, 1. if \(\kappa^{*}=1\), then \(\varepsilon^{*}=0\), and \(\mathcal{A}\) is \(\varepsilon\)-differentially private for any \(\varepsilon\geq 0\); 2. if \(\kappa^{*}=+\infty\), then \(\varepsilon^{*}=+\infty\), and \(\mathcal{A}\) is not \(\varepsilon\)-differentially private for any \(\varepsilon\geq 0\). In the following evaluation (Section 5), we will compute \(\kappa^{*}\) for diverse noisy quantum algorithms with different noise levels on quantum circuits to show that quantum differential privacy can benefit from the quantum noises on quantum circuits.
2. we can characterize the \(\varepsilon\)-differential privacy of a noisy quantum algorithm for different values of \(\eta\), i.e., the optimal bound \(\varepsilon^{*}\) can be regarded as a function \(\varepsilon^{*}(\cdot)\) of \(\eta\) as follows: \[\varepsilon^{*}(\eta)=\ln[(\kappa^{*}-1)\eta+1]\ \ \text{where}\ \ \kappa^{*}\geq 1.\] As we can see from the above equation, the value of \(\varepsilon^{*}\) increases logarithmically with \(\eta\). This reveals that as the quantum noise level on input states increases, the differential privacy increases, because \(\eta\) measures the noisy neighboring relation of the input states affected by the quantum noises, as illustrated after Definition 3.1 and by Example 3.3. This finding provides the theoretical guarantee that adding noise to input states is a way to improve the differential privacy of quantum algorithms.
In summary, quantum differential privacy can benefit from the quantum noise on either quantum circuits or input states.
Furthermore, we are able to give a characterization of differential privacy counterexamples:
Theorem 4.2 (Counterexamples).: _If \(\mathcal{A}\) is not \((\varepsilon,\delta)\)-differentially private within \(\eta\), then for any \(\mathcal{S}\subseteq\mathcal{O}\) with \(\delta<\delta_{\mathcal{S}}\) (defined in Theorem 4.1), any pair of quantum states \((\gamma,\phi)\) of the form:_
\[\gamma=\eta\psi+(1-\eta)\phi\qquad\phi=|\phi\rangle\langle\phi|\]
_is a \((\varepsilon,\delta)\)-differential privacy counterexample within \(\eta\), where \(\psi=|\psi\rangle\langle\psi|\), and \(|\psi\rangle\) and \(|\phi\rangle\) are normalized eigenvectors of \(M_{\mathcal{S}}\) (defined in Theorem 4.1) corresponding to the maximum and minimum eigenvalues, respectively._
Now we are ready to provide an example showing that Theorem 3.5 does not hold for noisy quantum algorithms. This example also demonstrates the method for solving the verification problem (Problem 1) using Theorem 4.1 and 4.2.
Example 4.3.: Let \(\mathcal{H}\) be a \(2\)-qubit Hilbert space, i.e.,
\[\mathcal{H}=\operatorname{span}\{\ket{0,0},\ket{0,1},\ket{1,0},\ket{1,1}\},\]
and \(\mathcal{A}=(\mathcal{E},\{M_{0},M_{1}\})\) be a noisy quantum algorithm on \(\mathcal{H}\), where \(\mathcal{E}\) is not a unitary but a super-operator with the Kraus matrix form \(\mathcal{E}=\{E_{i}\}_{i=1}^{4}\) with
\[E_{1} =\frac{1}{\sqrt{3}}\left(\ket{0,0}+\ket{1,0}+\ket{1,1}\right) \bra{0,0}\] \[E_{2} =\frac{1}{\sqrt{3}}\left(\ket{0,1}+\ket{1,0}+\ket{1,1}\right) \bra{0,1}\] \[E_{3} =\frac{1}{\sqrt{6}}\left(\ket{0,0}+\ket{0,1}+2\ket{1,0}\right) \bra{1,0}\] \[E_{4} =\frac{1}{\sqrt{6}}\left(\ket{0,0}+\ket{0,1}+2\ket{1,1}\right) \bra{1,1}\]
and measurement operators
\[M_{0}=|0,0\rangle\langle 0,0|+|0,1\rangle\langle 0,1|\quad M_{1}=|1,0\rangle\langle 1,0|+|1,1\rangle\langle 1,1|.\]
It can be calculated that
\[\mathcal{E}^{\dagger}(M_{0}) =\frac{1}{3}(|0,0\rangle\langle 0,0|+|0,1\rangle\langle 0,1|+|1,0 \rangle\langle 1,0|+|1,1\rangle\langle 1,1|)\] \[\mathcal{E}^{\dagger}(M_{1}) =\frac{2}{3}(|0,0\rangle\langle 0,0|+|0,1\rangle\langle 0,1|+|1,0 \rangle\langle 1,0|+|1,1\rangle\langle 1,1|).\]
Then
\[\lambda_{\max}(\mathcal{E}^{\dagger}(M_{0}+M_{1}))=\lambda_{\min}(\mathcal{E} ^{\dagger}(M_{0}+M_{1}))=1\] \[\lambda_{\max}(\mathcal{E}^{\dagger}(M_{0}))=\lambda_{\min}( \mathcal{E}^{\dagger}(M_{0}))=\frac{1}{3}\] \[\lambda_{\max}(\mathcal{E}^{\dagger}(M_{1}))=\lambda_{\min}( \mathcal{E}^{\dagger}(M_{1}))=\frac{2}{3}.\]
Consequently, \(\kappa^{*}=1\) implies \(\varepsilon^{*}=0\) by Theorem 4.1 and then \(\mathcal{A}\) is \(\varepsilon\)-differentially private for any \(\varepsilon\geq 0\).
However, if we choose a quantum noise represented by the following super-operator
\[\mathcal{F}=\{|0,0\rangle\langle 0,0|,|1,0\rangle\langle 0,1|,|1,0\rangle\langle 1,0|,|1,1\rangle\langle 1,1|\}\]
such that
\[(\mathcal{F}\circ\mathcal{E})^{\dagger}(M_{0}) =\mathcal{E}^{\dagger}(\mathcal{F}^{\dagger}(M_{0}))\] \[=\frac{1}{6}(2|0,0\rangle\langle 0,0|+|1,0\rangle\langle 1,0|+|1,1 \rangle\langle 1,1|).\]
Then
\[\lambda_{\max}((\mathcal{F}\circ\mathcal{E})^{\dagger}(M_{0}))=\frac{1}{3} \qquad\lambda_{\min}((\mathcal{F}\circ\mathcal{E})^{\dagger}(M_{0}))=0\]
with normalized eigenvectors \(|0,0\rangle\) and \(|0,1\rangle\), respectively. Thus \(\kappa^{*}=+\infty\) implies \(\varepsilon^{*}=+\infty\) by Theorem 4.1. Subsequently, the noisy quantum algorithm \(\mathcal{A}^{\prime}=(\mathcal{F}\circ\mathcal{E},\{M_{0},M_{1}\})\) is not \(\varepsilon\)-differentially private for any \(\varepsilon\geq 0\). Furthermore, in this case, by Theorem 4.2, \((\gamma,\phi)\) is a \(\varepsilon\)-differential privacy counterexample of the algorithm for any \(\varepsilon\geq 0\), where
\[\gamma=\eta|0,0\rangle\langle 0,0|+(1-\eta)|0,1\rangle\langle 0,1|\qquad\phi=|0,1 \rangle\langle 0,1|.\]
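The eigenvalue computations in Example 4.3 can be reproduced with a few lines of NumPy. The sketch below (our own illustration, not the paper's implementation) builds the Kraus operators of \(\mathcal{E}\) and \(\mathcal{F}\), evaluates \(\mathcal{E}^{\dagger}(M_{0})\) and \((\mathcal{F}\circ\mathcal{E})^{\dagger}(M_{0})\), and reports their extreme eigenvalues.

```python
import numpy as np

basis = np.eye(4)                      # basis order: |0,0>, |0,1>, |1,0>, |1,1>
ket = lambda i: basis[:, [i]]          # column vector of the i-th basis state

E = [(ket(0) + ket(2) + ket(3)) @ ket(0).T / np.sqrt(3),
     (ket(1) + ket(2) + ket(3)) @ ket(1).T / np.sqrt(3),
     (ket(0) + ket(1) + 2 * ket(2)) @ ket(2).T / np.sqrt(6),
     (ket(0) + ket(1) + 2 * ket(3)) @ ket(3).T / np.sqrt(6)]

F = [ket(0) @ ket(0).T, ket(2) @ ket(1).T, ket(2) @ ket(2).T, ket(3) @ ket(3).T]

M0 = ket(0) @ ket(0).T + ket(1) @ ket(1).T          # measurement operator M_0

def dual(kraus, M):
    # E^dagger(M) = sum_k E_k^dagger M E_k
    return sum(Ek.conj().T @ M @ Ek for Ek in kraus)

W = dual(E, M0)                  # equals (1/3) I, so kappa = 1 and epsilon* = 0
W_noisy = dual(E, dual(F, M0))   # equals diag(2, 0, 1, 1)/6, so lambda_min = 0 and kappa = +inf
for M in (W, W_noisy):
    vals = np.linalg.eigvalsh(M)
    print(vals.min(), vals.max())
```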
### Differential Privacy Verification Algorithm
Theorems 4.1 and 4.2 provide a theoretical basis for developing algorithms for verification and violation detection of quantum differential privacy. Now we are ready to present them. Algorithm 1 is designed for verifying the \((\varepsilon,\delta)\)-differential privacy for (noisy) quantum algorithms. For estimating parameter \(\varepsilon\) in the \(\varepsilon\)-differential privacy, Algorithm 2 is developed to compute the maximum condition number \(\kappa^{*}\) (with a counterexample) as in Theorem 4.1. By calling Algorithm 2, an alternative way for verifying \(\varepsilon\)-differential privacy is obtained as Algorithm 3.
In the following, we analyze the correctness and complexity of Algorithm 1. Those of Algorithms 2 and 3 can be derived in a similar way.
**Correctness:** Algorithm 1 consists of two components -- a verifier (Lines 1-12) and a counterexample generator (Lines 14-15). Following the verification procedure in the first part of Theorem 4.1, the verifier is designed to check whether or not a quantum algorithm is \((\epsilon,\delta)\)-differentially private within \(\eta\). The counterexample generator is constructed using Theorem 4.2 asserting that \((\eta\psi+(1-\eta)\phi,\phi)\) is a \((\varepsilon,\delta)\)-differential privacy counterexample if there is a subset \(\mathcal{S}\subseteq\mathcal{O}\), i.e., \(\mathcal{S}^{*}\) in the algorithm, such that \(\delta^{*}=\delta_{\mathcal{S}^{*}}>\delta\), where \(|\psi\rangle\) and \(|\phi\rangle\) are normalized eigenvectors of \(M_{\mathcal{S}^{*}}\) (defined in Theorem 4.1) corresponding to the maximum and minimum eigenvalues, respectively.
**Complexity:** The complexity of Algorithm 1 is mainly attributed to the calculations in Lines 2, 6 and 14. In Line 2, computing \(W_{k}=\sum_{j\in\mathcal{J}}E_{j}^{\dagger}M_{k}E_{j}\) for each \(k\in\mathcal{O}\) needs \(O(2^{5n})\) operations, as the multiplication of \(2^{n}\times 2^{n}\) matrices needs \(O(2^{3n})\) operations, and the number \(|\mathcal{J}|\) of the Kraus operators \(\{E_{j}\}_{j\in\mathcal{J}}\) of \(\mathcal{E}\) can be at most \(2^{2n}\)[39, Chapter 2.2]; In Line 6, calculating \(\sum_{k\in\mathcal{S}}W_{k}\) and its maximum and minimum eigenvalues (and the corresponding eigenvectors for \(\mathcal{S}=\mathcal{S}^{*}\) in Line 14) for each \(\mathcal{S}\subseteq\mathcal{O}\) costs \(O(2^{|\mathcal{O}|}|\mathcal{O}|2^{2n})\) since the number of subsets of \(\mathcal{O}\) is \(2^{|\mathcal{O}|}\), \(|\mathcal{S}|\leq|\mathcal{O}|\) for any \(\mathcal{S}\subseteq\mathcal{O}\), and computing the maximum and minimum eigenvalues with corresponding eigenvectors of a \(2^{n}\times 2^{n}\) matrix by the basic power method [40] costs \(O(2^{2n})\). Therefore, the total complexity of Algorithm 1 is \(O(2^{5n}+2^{|\mathcal{O}|}|\mathcal{O}|2^{2n})\).
```
0: A quantum algorithm \(\mathcal{A}=(\mathcal{E}=\{E_{j}\}_{j\in\mathcal{J}},\{M_{k}\}_{k\in\mathcal{O}})\) on a Hilbert space \(\mathcal{H}\) with dimension \(2^{n}\).
0: The maximum condition number \(\kappa^{*}\) and a counterexample as in Theorems 4.1 and 4.2, respectively.
1:for each \(k\in\mathcal{O}\)do
2:\(W_{k}=\mathcal{E}^{\dagger}(M_{k})=\sum_{j\in\mathcal{J}}E_{j}^{\dagger}M_{k}E_{j}\)
3:endfor
4:\(\kappa^{*}=0\), \(\mathcal{S}^{*}=\emptyset\) (the empty set) and \(M_{\mathcal{S}^{*}}=\mathbf{0}\) (the zero matrix).
5:for each \(\mathcal{S}\subseteq\mathcal{O}\)do
6:\(\kappa(M_{\mathcal{S}})=\frac{\lambda_{\max}(M_{\mathcal{S}})}{\lambda_{ \min}(M_{\mathcal{S}})}\) for \(M_{\mathcal{S}}=\sum_{k\in\mathcal{S}}W_{k}\)
7:if\(\kappa(M_{\mathcal{S}})>\kappa^{*}\)then
8:\(\kappa^{*}=\kappa(M_{\mathcal{S}})\), \(\mathcal{S}^{*}=\mathcal{S}\) and \(M_{\mathcal{S}^{*}}=M_{\mathcal{S}}\)
9:endif
10:endfor
11:\(|\psi\rangle\) and \(|\phi\rangle\) are obtained from two normalized eigenvectors corresponding to the maximum and minimum eigenvalues of \(M_{\mathcal{S}^{*}}\), respectively.
12:return\(\kappa^{*}\) and \((\eta\psi+(1-\eta)\phi,\phi)\)
```
**Algorithm 2**\(\mathrm{DP}_{\kappa}(\mathcal{A})\)
The above calculations are also the main computational cost in Algorithms 2 and 3, so the two algorithms share the same complexity with Algorithm 1.
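For reference, a compact dense-matrix sketch in Python of the \(\kappa^{*}\) computation of Algorithm 2, combined with the \(\varepsilon^{*}\) bound of Theorem 4.1, is given below; the function name and interface are hypothetical, and the tensor-network optimizations used in our actual implementation are omitted.

```python
import numpy as np
from itertools import combinations

def dp_kappa(kraus, measurements, eta):
    """Return (kappa*, epsilon*, counterexample) for the algorithm A = ({E_j}, {M_k})."""
    # Lines 1-3: W_k = E^dagger(M_k) = sum_j E_j^dagger M_k E_j
    W = [sum(E.conj().T @ M @ E for E in kraus) for M in measurements]

    kappa_star, best = 0.0, None
    outcomes = range(len(W))
    # Lines 5-10: enumerate all non-empty subsets S of the outcome set O
    for r in range(1, len(W) + 1):
        for S in combinations(outcomes, r):
            M_S = sum(W[k] for k in S)
            vals, vecs = np.linalg.eigh(M_S)
            kappa = np.inf if vals[0] <= 0 else vals[-1] / vals[0]
            if kappa > kappa_star:
                kappa_star, best = kappa, (vecs[:, -1], vecs[:, 0])

    # Lines 11-12: counterexample (eta*psi + (1-eta)*phi, phi) as in Theorem 4.2
    psi, phi = (np.outer(v, v.conj()) for v in best)
    eps_star = np.log((kappa_star - 1) * eta + 1) if np.isfinite(kappa_star) else np.inf
    return kappa_star, eps_star, (eta * psi + (1 - eta) * phi, phi)
```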
```
0: A quantum algorithm \(\mathcal{A}=(\mathcal{E}=\{E_{j}\}_{j\in\mathcal{J}},\{M_{k}\}_{k\in\mathcal{O}})\) on a Hilbert space \(\mathcal{H}\) with dimension \(2^{n}\), and real numbers \(\epsilon,\eta\geq 0\).
0:true indicates \(\mathcal{A}\) is \(\epsilon\)-differentially private within \(\eta\) or **false** with a counterexample \((\rho,\sigma)\) indicates \(\mathcal{A}\) is not \(\epsilon\)-differentially private within \(\eta\).
1:\((\kappa^{*},(\eta\psi+(1-\eta)\phi,\phi))=\mathrm{DP}_{\kappa}(\mathcal{A})\)// Call Algorithm 2
2:if\(\epsilon\geq\ln\left[(\kappa^{*}-1)\eta+1\right]\)then
3:return true
4:else
5:return false and \((\eta\psi+(1-\eta)\phi,\phi)\)
6:endif
```
**Algorithm 3**\(\mathrm{DP}_{\epsilon}(\mathcal{A},\eta)\)
Theorem 4.4.: _The worst-case complexities of Algorithms 1, 2 and 3 are all \(O(2^{5n}+2^{|\mathcal{O}|}|\mathcal{O}|2^{2n})\), where \(n\) is the number of qubits in the quantum algorithm and \(|\mathcal{O}|\) is the size of the measurement outcome set \(\mathcal{O}\)._
_Remark._ As we can see in Theorem 4.4, the main limitation of our verification algorithms is the exponential complexity in the number of qubits. To overcome this scaling issue, we apply optimization techniques based on tensor networks to capture the locality and regularity of quantum circuits. This allows us to speed up the calculations involved in verification. As a result, we are able to verify quantum algorithms with up to 21 qubits, as shown in the later experimental section.
Further improving the scalability of verified qubits is possible by adapting classical approximation methods to the quantum domain, as they have successfully analyzed large-scale classical machine learning algorithms [41]. Two promising techniques are:
* Abstraction-based approximation using abstract interpretation provides over-approximations of concrete program semantics.
If a property holds for the abstracted version, it also holds for the original. This technique has boosted verification scalability for classical neural network robustness (Wang et al., 2017) and correctness of quantum circuits up to 300 qubits (Wang et al., 2017).
* Bound-based approximation derives efficiently computable bounds on algorithm properties. If the algorithm satisfies the bound, it satisfies the property, but the converse is unknown. This has enabled robustness verification for large-scale classical neural networks (Wang et al., 2017) and quantum classifiers (Wang et al., 2018).
These approximation methods trade off formal guarantees for scalability in verifying algorithm properties. Since quantum algorithms rely on quantum circuits, we can follow similar approaches (Wang et al., 2018; Wang et al., 2017) to improve the scalability of verifying quantum differential privacy.
## 5. Evaluation
In this section, we evaluate the effectiveness and efficiency of our Algorithms on noisy quantum algorithms.
**Implementation:** We implemented our algorithms on top of Google's Python software libraries: Cirq for writing and manipulating quantum circuits, and TensorNetwork for converting quantum circuits to tensor networks. Our implementation supports circuit models not only written in Cirq but also imported from IBM's Qiskit, and accepts quantum machine learning models from both TensorFlow Quantum and TorchQuantum.
**Optimization Techniques:** We convert quantum circuits into tensor networks, a data structure that exploits the regularity and locality contained in quantum circuits, which the plain matrix representation cannot. The multiplication of matrices in our algorithm is transformed into the contraction of tensor networks. For the tensor network of a quantum circuit, the complexity of contraction is \(T^{O(1)}\exp[O(qd)]\)(Cai et al., 2017), where \(T\) is the number of gates (tensors), \(d\) is the depth of the circuit (tensor network) and \(q\) is the number of allowed interacting qubits, i.e., the maximal number of qubits (legs of a tensor) a gate applies on. So we can avoid the exponential complexity in the number \(n\) of qubits at the cost of introducing exponential complexity in \(qd\), where \(d\) and \(q\) capture the regularity and locality of the quantum circuit, respectively. Usually, \(q=2\) for controlled gates, and then the complexity turns out to be \(T^{O(1)}\exp[O(d)]\). Even though the worst case is exponential in \(d\), there are a number of efficient algorithms implementing tensor network contraction for practical large-size quantum circuits. As a result, we can handle (up to) 21 qubits in the verification experiments, avoiding the worst-case complexities of our algorithms presented in Theorem 4.4, where the time cost is exponential in the number \(n\) of qubits. For more details on tensor networks representing quantum circuits, we refer to (Zhu et al., 2017).
**Platform:** We conducted our experiments on a machine with Intel Xeon Platinum 8153 @ 2.00GHz \(\times\) 256 Cores, 2048 GB Memory, and no dedicated GPU, running Centos 7.7.1908.
**Benchmarks:** To evaluate the efficiency and utility of our implementation, we test our algorithms on four groups of examples, including quantum approximate optimization algorithms, quantum supremacy algorithms, variational quantum eigensolver algorithms and quantum machine learning models (well-trained algorithms) for solving classical tasks with angle encoding introduced in Section 2. All of them have been implemented on current NISQ computers.
### Quantum Approximate Optimization Algorithms
The Quantum Approximate Optimization Algorithm (QAOA) is a quantum algorithm for producing approximate solutions to combinatorial optimization problems (Han et al., 2016). Fig. 5 shows a 2-qubit example of a QAOA circuit. In our experiment, we use the circuit for hardware grid problems in (Han et al., 2016) generated from code in Recirq (Zhu et al., 2017). Circuit name _qaoa_D_ represents such a QAOA circuit with \(D\) connected qubits on Google's _Sycamore_ quantum processor.
### Variational Quantum Eigensolver Algorithms
The circuit of the Variational Quantum Eigensolver (VQE) algorithm comes from the experiments in (Han et al., 2016), which use Google's _Sycamore_ quantum processor to calculate the binding energy of hydrogen chains. Fig. 6 shows an 8-qubit basis rotation circuit for \(H_{8}\) used in the VQE algorithm. In our experiment, the VQE circuit is obtained from Recirq and named _hf_E_ with \(E\) being the number of qubits.
### Quantum Supremacy Algorithms
The quantum supremacy algorithm includes a specific random circuit designed to show the quantum supremacy on grid qubits (Zhu et al., 2017). In general, the circuit contains a number of cycles consisting of 1-qubit (\(X^{1/2}\), \(Y^{1/2}\) and \(T\) gate) and 2-qubit quantum gates (CZ gate). The 2-qubit gates are implemented in a specific order according to the topology of the grid qubits, where each qubit in the middle of the circuit is connected to four qubits, and the qubits on the edges
Figure 5. A 2-qubit QAOA circuit.
Figure 6. An 8-qubit Hartree-Fock VQE circuit.
and corners are connected to three and two qubits, respectively. The circuit is implemented on Google's _Sycamore_ quantum processor to show the quantum supremacy (Goyal et al., 2017). In our experiment, the circuits are named by \(inst\_A\times B\_C\), representing an \((A\times B)\)-qubit circuit with depth \(C\). See Fig. 2b for an example of \(2\times 2\)-qubit quantum supremacy algorithms.
### Quantum Machine Learning Models
There are two frameworks, TensorFlow Quantum and TorchQuantum, which are based on the well-known machine learning platforms TensorFlow and PyTorch, respectively, for training and designing quantum machine learning models. TensorFlow Quantum uses Cirq to manipulate quantum circuits, and so does our implementation. TorchQuantum supports the conversion of models into quantum circuits described by Qiskit, which can also be converted to Cirq by our implementation. Thus, our implementation is fully compatible with both TensorFlow Quantum and TorchQuantum.
We collect two quantum machine learning models using TensorFlow Quantum for financial tasks, as described in (Zhu et al., 2017). All classical financial data are encoded into quantum states using the angle encoding method introduced in Section 2.
* The model named _GC_9_, trained on the public German credit card dataset (Zhu et al., 2017), is used to classify whether a person has good credit.
* The model named _AI_8_, trained on public adult income dataset (Zhu et al., 2017), is used to predict whether an individual's income exceeds \(\$50,000/\)year or not.
Additionally, we train a model called _EC_9_ to detect fraudulent credit card transactions. The model is trained on a dataset of European cardholder transactions (Zhu et al., 2017).
Furthermore, we evaluate two quantum machine learning models from the TorchQuantum library paper (Goyal et al., 2017), which introduces a PyTorch framework for hybrid quantum-classical machine learning.
* The model _MNIST_10_, trained on MNIST (Zhu et al., 2017), is used to classify handwritten digits.
* The model _Fashion_4_, trained on Fashion MNIST (Zhu et al., 2017), is used to classify fashion images.
As before, handwritten digits and fashion images are encoded into quantum states via angle encoding.
### Differential Privacy Verification and Analysis
**Verification Algorithms:** As shown in Theorem 4.4, the complexities of our Algorithms 1, 2 and 3 are the same, so for convenience, we only test the implementation of Algorithm 2, since it only requires the quantum algorithm as input, without the factors \(\epsilon,\delta,\eta\) needed for verifying differential privacy. In addition, to demonstrate the impact of noise on quantum algorithms in the NISQ era, we add two types of quantum noise -- depolarizing and bit flip with different levels of probability -- to each qubit in all circuits of our examples. Then we run Algorithm 2 to evaluate the maximum condition number \(\kappa^{*}\) of all examples. The evaluation results are summarized in Tables 1-4. It can be seen that the higher the noise probability, the smaller the maximum condition number \(\kappa^{*}\). So, similarly to protecting classical differential privacy by adding noise, quantum algorithms also benefit from quantum noise on circuits in terms of quantum differential privacy. It is worth noting that in all experiments, we also obtain differential privacy counterexamples by Algorithm 2 within the running time presented in the tables, but as they are large-size (up to \(2^{21}\times 2^{21}\)) matrices, we do not show them here.
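For illustration, the noise-injection step could be expressed in Cirq roughly as follows. This is a hedged sketch (the function name and the placement of the channels are our own choices) showing how single-qubit depolarizing or bit-flip channels with probability \(p\) might be attached to every qubit of a circuit; it is not the exact code used in our experiments.

```python
import cirq

def add_noise(circuit: cirq.Circuit, noise_type: str, p: float) -> cirq.Circuit:
    """Append a single-qubit noise channel with probability p to every qubit of the circuit."""
    channel = cirq.depolarize(p) if noise_type == "depolarizing" else cirq.bit_flip(p)
    noisy = circuit.copy()
    noisy.append(channel.on_each(*sorted(circuit.all_qubits())))
    return noisy

# Toy usage: a 2-qubit circuit under bit-flip noise with p = 0.01
q0, q1 = cirq.LineQubit.range(2)
circuit = cirq.Circuit([cirq.H(q0), cirq.CNOT(q0, q1)])
print(add_noise(circuit, "bit flip", 0.01))
```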
**Optimal Bound Function \(\epsilon^{*}(\eta)\):** After the above verification process, we have the values of \(\kappa^{*}\) for all experiments. We choose the one in every kind of experiment with the largest qubits as the benchmark to depict the optimal bound function \(\epsilon^{*}(\eta)\) in Figs. 7-10, respectively. At the same time, we add more noise levels to further
| Circuit | #Qubits | Noise Type | \(p\) | \(\kappa^{*}\) | Time (s) |
| --- | --- | --- | --- | --- | --- |
| _qaoa\_20_ | 20 | depolarizing | 0.01 | 62.39 | 285.80 |
| _qaoa\_20_ | 20 | depolarizing | 0.001 | 747.21 | 312.38 |
| _qaoa\_20_ | 20 | bit flip | 0.01 | 88.53 | 220.73 |
| _qaoa\_20_ | 20 | bit flip | 0.001 | 852.94 | 216.86 |
| _qaoa\_21_ | 21 | depolarizing | 0.01 | 97.58 | 644.51 |
| _qaoa\_21_ | 21 | depolarizing | 0.001 | 1032.48 | 514.83 |
| _qaoa\_21_ | 21 | bit flip | 0.01 | 91.27 | 583.85 |
| _qaoa\_21_ | 21 | bit flip | 0.001 | 923.85 | 594.24 |

Table 1. Experimental results of the maximum condition number \(\kappa^{*}\) on _Quantum Approximate Optimization Algorithms_ with different noise levels.
| Circuit | #Qubits | Noise Type | \(p\) | \(\kappa^{*}\) | Time (s) |
| --- | --- | --- | --- | --- | --- |
| _hf\_8_ | 8 | depolarizing | 0.01 | 135.50 | 277.37 |
| _hf\_8_ | 8 | depolarizing | 0.001 | 1412.58 | 212.06 |
| _hf\_8_ | 8 | bit flip | 0.01 | 98.39 | 248.36 |
| _hf\_8_ | 8 | bit flip | 0.001 | 991.73 | 259.37 |
| _hf\_10_ | 10 | depolarizing | 0.01 | 132.21 | 477.70 |
| _hf\_10_ | 10 | depolarizing | 0.001 | 1423.75 | 482.10 |
| _hf\_10_ | 10 | bit flip | 0.01 | 97.64 | 409.25 |
| _hf\_10_ | 10 | bit flip | 0.001 | 988.26 | 427.58 |
| _hf\_12_ | 12 | depolarizing | 0.01 | 140.58 | 955.22 |
| _hf\_12_ | 12 | depolarizing | 0.001 | 1438.94 | 962.34 |
| _hf\_12_ | 12 | bit flip | 0.01 | 95.27 | 890.26 |
| _hf\_12_ | 12 | bit flip | 0.001 | 978.87 | 816.83 |

Table 2. Experimental results of the maximum condition number \(\kappa^{*}\) on _Variational Quantum Eigensolver Algorithms_ with different noise levels.
| Circuit | #Qubits | Noise Type | \(p\) | \(\kappa^{*}\) | Time (s) |
| --- | --- | --- | --- | --- | --- |
| _inst\_4x4\_10_ | 16 | depolarizing | 0.01 | 59.67 | 254.05 |
| _inst\_4x4\_10_ | 16 | depolarizing | 0.001 | 748.51 | 247.42 |
| _inst\_4x4\_10_ | 16 | bit flip | 0.01 | 82.39 | 207.39 |
| _inst\_4x4\_10_ | 16 | bit flip | 0.001 | 901.74 | 213.18 |
| _inst\_4x5\_10_ | 20 | depolarizing | 0.01 | 62.05 | 13176.98 |
| _inst\_4x5\_10_ | 20 | depolarizing | 0.001 | 823.85 | 7493.24 |
| _inst\_4x5\_10_ | 20 | bit flip | 0.01 | 88.72 | 8120.35 |
| _inst\_4x5\_10_ | 20 | bit flip | 0.001 | 918.87 | 8203.71 |

Table 3. Experimental results of the maximum condition number \(\kappa^{*}\) on _Quantum Supremacy Algorithms_ with different noise levels.
explore the tendency of the optimal bound function \(\varepsilon^{*}(\eta)\). All experimental results confirm that the quantum noises on input states can logarithmically enhance the differential privacy as we claimed before. Furthermore, as quantum differential privacy protects the privacy of encoded classical data, as shown in Example 3.4, introducing quantum noise can further enhance the differential privacy of the encoded data, much like how adding classical noise improves the privacy of original classical data [21].
| Circuit | #Qubits | Noise Type | \(p\) | \(\kappa^{*}\) | Time (s) |
| --- | --- | --- | --- | --- | --- |
| _EC\_9_ | 9 | depolarizing | 0.01 | 3.370 | 5.49 |
| _EC\_9_ | 9 | depolarizing | 0.001 | 32.199 | 3.61 |
| _EC\_9_ | 9 | bit flip | 0.01 | 3.144 | 3.95 |
| _EC\_9_ | 9 | bit flip | 0.001 | 29.466 | 3.85 |
| _GC\_9_ | 9 | depolarizing | 0.01 | 4.236 | 5.12 |
| _GC\_9_ | 9 | depolarizing | 0.001 | 41.077 | 3.92 |
| _GC\_9_ | 9 | bit flip | 0.01 | 4.458 | 4.09 |
| _GC\_9_ | 9 | bit flip | 0.001 | 42.862 | 3.80 |
| _AI\_8_ | 8 | depolarizing | 0.01 | 4.380 | 3.54 |
| _AI\_8_ | 8 | depolarizing | 0.001 | 42.258 | 2.58 |
| _AI\_8_ | 8 | bit flip | 0.01 | 5.025 | 2.20 |
| _AI\_8_ | 8 | bit flip | 0.001 | 50.108 | 2.44 |
| _Mnist\_10_ | 10 | depolarizing | 0.01 | 1.170 | 18.90 |
| _Mnist\_10_ | 10 | depolarizing | 0.001 | 7.241 | 17.44 |
| _Mnist\_10_ | 10 | bit flip | 0.01 | 1.132 | 17.39 |
| _Mnist\_10_ | 10 | bit flip | 0.001 | 6.677 | 17.14 |
| _Fashion\_4_ | 4 | depolarizing | 0.01 | 1.052 | 3.29 |
| _Fashion\_4_ | 4 | depolarizing | 0.001 | 5.398 | 3.18 |
| _Fashion\_4_ | 4 | bit flip | 0.01 | 1.057 | 3.26 |
| _Fashion\_4_ | 4 | bit flip | 0.001 | 5.635 | 3.27 |

Table 4. Experimental results of the maximum condition number \(\kappa^{*}\) on various _Quantum Machine Learning Models_ with different noise levels.
Figure 8. Comparison of \(\varepsilon\)-differential privacy on _Quantum Approximate Optimization Algorithms_ with different noise levels.
Figure 7. Comparison of \(\varepsilon\)-differential privacy on _Variational Quantum Eigensolver Algorithms_ with different noise levels.
Figure 9. Comparison of \(\varepsilon\)-differential privacy on _Quantum Supremacy Algorithms_ with different noise levels.
## 6. Conclusion
In this paper, we established a formal framework for detecting violations of differential privacy for quantum algorithms. In particular, we developed an algorithm to not only verify whether or not a quantum algorithm is differentially private but also provide counterexamples when privacy is violated. A counterexample consists of a pair of quantum states violating the privacy, revealing the cause of the violation. For practicability, we implemented our algorithm on TensorFlow Quantum and TorchQuantum, the quantum extensions of the well-known machine learning platforms TensorFlow and PyTorch, respectively. Furthermore, for scalability, we adopted tensor networks (a highly efficient data structure) in our algorithm to overcome the state explosion problem (the complexity of the algorithm is exponential in the number of qubits) so that the practical performance of our algorithm can be improved. The effectiveness and efficiency of our algorithm were tested by numerical experiments on a range of quantum algorithms, from quantum supremacy (beyond classical computation) algorithms to quantum machine learning models with up to 21 qubits, all of which have been implemented on current quantum hardware devices. The experimental results showed that quantum differential privacy can benefit from adding quantum noise to either quantum circuits or input states, which is consistent with the theoretical results presented in Theorem 4.1.
For future work, extending the techniques developed for quantum algorithms in this paper to verify the differential privacy of quantum databases is an interesting research topic for protecting the privacy of quantum databases. As we discussed in Section 3, the neighboring relation used to define the differential privacy of quantum databases is the reachability between two quantum states by performing a quantum operation (super-operator) on a single quantum bit only (Krover, 2017), while that of our setting in this paper is the trace distance of two quantum states. Due to this fundamental difference in the neighboring relation, additional extensions will be required, such as developing a reachability-based search algorithm to find violations of the differential privacy of quantum databases. Another challenging research line is to study how to train a quantum machine learning algorithm with a differential privacy guarantee. This has been done for classical machine learning algorithms (Han et al., 2017), but remains entirely unexplored for quantum algorithms.
###### Acknowledgements.
This work was partly supported by the Youth Innovation Promotion Association CAS, the National Natural Science Foundation of China (Grant No. 61832015), the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62002349), the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDRW-XX-2022-1).
|
2310.20172 | Compact Binary Systems Waveform Generation with Generative Pre-trained
Transformer | Space-based gravitational wave (GW) detection is one of the most anticipated
GW detection projects in the next decade, which promises to detect abundant
compact binary systems. At present, deep learning methods have not been widely
explored for GW waveform generation and extrapolation. To solve the data
processing difficulty and the increasing waveform complexity caused by the
detector's response and second-generation time-delay interferometry (TDI 2.0),
an interpretable pre-trained large model named CBS-GPT (Compact Binary Systems
Waveform Generation with Generative Pre-trained Transformer) is proposed. For
compact binary system waveforms, three models were trained to predict the
waveforms of massive black hole binaries (MBHB), extreme mass-ratio inspirals
(EMRIs), and galactic binaries (GB), achieving prediction accuracies of at most
99%, 91%, and 99%, respectively. The CBS-GPT model exhibits notable
generalization and interpretability, with its hidden parameters effectively
capturing the intricate information of waveforms, even with the complex
instrument response and a wide parameter range. Our research demonstrates the
potential of large models in the GW realm, opening up new opportunities and
guidance for future researches such as complex waveforms generation, gap
completion, and deep learning model design for GW science. | Ruijun Shi, Yue Zhou, Tianyu Zhao, Zhoujian Cao, Zhixiang Ren | 2023-10-31T04:40:20Z | http://arxiv.org/abs/2310.20172v3 | # Compact Binary Systems Waveform Generation with Generative Pre-trained Transformer
###### Abstract
Space-based gravitational wave detection is one of the most anticipated gravitational wave (GW) detection projects in the next decade and is expected to detect abundant compact binary systems. However, the precise prediction of space GW waveforms remains unexplored. To address the data-processing difficulties posed by the increased waveform complexity caused by the detectors' response and second-generation time-delay interferometry (TDI 2.0), an interpretable pre-trained large model named **CBS-GPT** (**C**ompact **B**inary **S**ystems Waveform Generation with **G**enerative **P**re-trained **T**ransformer) is proposed. For compact binary system waveforms, three models were trained to predict the waveforms of massive black hole binary (MBHB), extreme mass-ratio inspirals (EMRIs), and galactic binary (GB), achieving prediction accuracies of 98%, 91%, and 99%, respectively. The CBS-GPT model exhibits notable interpretability, with its hidden parameters effectively capturing the intricate information of waveforms, even with complex instrument response and a wide parameter range. Our research demonstrates the potential of large pre-trained models in gravitational wave data processing, opening up new opportunities for future tasks such as gap completion, GW signal detection, and signal noise reduction.
+
Footnote †: Corresponding author; [email protected]
## I Introduction
The first direct detection of a binary black hole merger event (GW150914)[1; 2] by the Laser Interferometer Gravitational-Wave Observatory (LIGO) opened an innovative window to understand the universe and provided direct evidence for the validity of Einstein's General Relativity. Gravitational wave (GW) observations will clarify many questions in astrophysics, cosmology, and fundamental physics[3; 4; 5; 6; 7; 8; 9; 10]. So far, ground-based gravitational wave detectors have reported over a hundred compact binary coalescence (CBC) events[11], and recently pulsar timing arrays (PTA) have also found sound evidence for the existence of a Stochastic Gravitational Wave Background (SGWB)[12; 13; 14; 15]. To gain a deeper understanding and an overall picture of gravitational wave cosmology[16], the field of low-frequency gravitational waves needs to be widely covered. Hence, space-based gravitational wave detector projects have been launched. For example, the Laser Interferometer Space Antenna (LISA)[17], Taiji[18; 19] and Tianqin[20] are all scheduled for the 2030s. Space-based gravitational wave detection can avoid terrestrial noise[21] and make the detection of low-frequency (\(10^{-4}-0.1\)Hz) gravitational wave signals more promising. In particular, future space-based gravitational wave detectors are expected to detect a richer variety of gravitational wave sources, including massive black hole binaries (MBHB), extreme mass-ratio inspirals (EMRIs), and galactic binaries (GB)[17].
In space-based and ground-based detections, GW signals are both extremely weak and usually buried in instrumental noise. Traditional methods will face challenges in meeting the demands of gravitational wave data analysis in the future. Artificial Intelligence (AI)-driven frameworks have shed some new light on this issue. Specifically, AI techniques have been successfully applied in various subjects such as gravitational wave signal detection[22; 23; 24; 25; 26; 27], parameter estimation[28; 29; 30], signal extraction[31] and noise reduction[32; 33; 34; 35] with promising results. Additionally, the targets of space-based gravitational wave detectors are also complex and multi-scale waveforms (such as MBHB, EMRIs, and GB). Therefore, AI is expected to be a valuable tool for understanding and predicting waveforms, as well as providing guidance and enlightenment on data-driven methods for GW framework design in the near future. Some previous studies focused on generating black hole binary (BHB) waveforms. Lee et al.[36] employed a Recurrent Neural Network (RNN) that is capable of generating BHB waveforms during the merging and ringdown phases of non-spinning binary black hole coalescence. Khan et al.[37] demonstrated that a vanilla transformer can learn the waveforms of quasi-circular, spinning, non-precessing binary black hole mergers. Similarly, Chua et al.[38] used a greedy algorithm to build a reduced basis, allowing for the rapid generation of BHB waveforms. Recently, large-scale language models (LLM) based on the attention mechanism have shown their tremendous power[39; 40; 41; 42]. Some studies indicate that similar architectures can be applied to GW data analysis[31; 35]. Developing a deep learning data processing framework for space-based GW detection is an impending necessity.
However, the above investigations did not consider the space-based GW detectors' response and the second-generation time-delay interferometry (TDI 2.0) combinations. Moreover, prior investigations have limited their analysis to a restricted range of waveform parameters. In this paper, we are dedicated to further investigating more complex waveforms and training a pre-trained model
to facilitate solving downstream problems. Hence, we propose the **CBS-GPT** (**C**ompact **B**inary **S**ystems Waveform Generation with **G**enerative **P**re-trained **T**ransformer) model, which is an interpretable, transformer-based, and self-supervised large model for the prediction of compact binary sources (MBHB, EMRIs, and GB). In CBS-GPT, patching and hybrid embedding mechanisms are proposed for the full extraction of waveform features. By utilizing the self-attention mechanism and mean square error loss, CBS-GPT is trained for each GW waveform source. The experimental results illustrate that CBS-GPT can accurately predict the subsequent waveform based on the input waveform. The average overlap between the predicted and target waveforms of MBHB, EMRIs, and GB reaches 0.981, 0.912, and 0.991, respectively. We have also discovered that waveform complexity can significantly influence the model's prediction performance and that CBS-GPT can match the key frequencies effectively. Finally, through attention-map visualization and by calculating the correlation between the waveform and the hidden parameters of the model, we find that CBS-GPT attends to regions with the same phase, confirming that CBS-GPT can adapt itself to learn waveform features even under a complex instrument response and a wide parameter range.
The rest of this paper is organized as follows. Section II describes data generation and the CBS-GPT model architecture. In Section III, we present our overlap and attention map results and discuss the overlap and CBS-GPT interpretability outcomes. Finally, Section IV highlights our findings and proposes potential future research based on the results.
## II Methodology
### Data
Space-based GW detectors target GW signals at frequencies of \([10^{-4},0.1]\)Hz. We focus on three compact binary sources that are of major interest to LISA: MBHB, EMRIs, and GB. Figure 2 displays data examples. Detailed information on the data generation process is given below.
#### ii.1.1 Mbhb
MBHB is one of LISA's main detection targets[17]. The mass spectrum, formation, and evolution of MBHB, as
Figure 1: **Overview of the CBS-GPT pipeline.** Inputs contain three kinds of GW sources (MBHB, EMRIs, and GB); after feeding them into CBS-GPT, the successive waveforms are extrapolated.
well as the interplay between MBHB and their host galaxies, can all be better understood through observations. In this paper, SEOBNRv4_opt[43] with including \(l=m=2\) mode is used to generate MBHB waveform. The parameter space of the MBHB dataset is shown in Table 1(a). In Figure 2(a), the TDI 2.0 transfer function significantly affects high-frequency transmissions due to the lower total mass of MBHB. Firstly, we generate MBHB time-series waveforms with a length of 20,000 points with a sampling rate of 5 seconds. During the training phase, each sample of training waveforms is truncated to 4,200 points, with 4,000 before and 200 points after the location of merge time. During the inference phase, the 4,000 valid points before the merge time are fed into CBS-GPT to predict the succeeding 200 points, hence achieving a 20:1 extrapolation prediction of MBHB waveform.
Figure 2: **TDI2.0 response complicates waveforms.** To simplify waveform comparison here, all waveforms were normalized to have a maximum amplitude of 1. The \(\Delta t\) in the figure represents the sampling rate. The effects of different parameters on time and frequency domain are shown on the left and right panels, respectively. (a) **MBHB waveforms** at different \(M_{tot}\). At high frequencies, the TDI response function has a greater impact. The gray line represents the TDI 2.0 transfer function in the frequency domain. (b) **EMRIs waveforms** at different \(e_{0}\). As the eccentricity increases, the EMRIs waveform becomes more and more complex in the frequency domain. (c) **GB waveforms** at different \(f\). The GB signal is relatively simple and is a single-frequency signal.
#### ii.1.2 EMRIs
EMRIs are a kind of black hole binary system with a mass ratio of \(m/M\simeq 10^{-4}-10^{-7}\) and massive black holes have a mass range of \(M\simeq 10^{5}-10^{7}M_{\odot}\). EMRIs waveforms are able to encapsulate the properties of space-time near a massive black hole. EMRIs are among the primary detection targets for the space-based GW detectors, possessing the potential to unveil new physical phenomena[44, 45, 46]. We employ FastEMRIsWaveforms (FEW) package[47] to generate EMRIs waveforms with a sampling rate of 5s. A random slice of 4,200 points is selected to form a waveform for training and inference. Training dataset with randomly sampled parameters is repeatedly regenerated to guarantee complete parameter space coverage. Before feeding the waveform into the model, we standardized it to help the model capture the information from EMRIs waveforms. The parameter space of the EMRIs dataset is shown in Table 1(b). The complexity of the EMRIs waveform is visible in Figure 2(b). As the eccentricity increases, the complexity of the waveform also increases, which is more obvious in the frequency domain.
#### ii.1.3 Galactic binary
Within the Milky Way galaxy, a substantial population of binary white dwarf systems exists, posing foreground noise challenges for space-based gravitational wave detectors. We use the following GB model (Equation 1) to generate GB waveform[48]:
\[\begin{split} h^{\rm src}_{+}(t)&=\mathcal{A}(1+ \cos^{2}\iota)\cos\Phi(t),\\ h^{\rm src}_{\times}(t)&=2\mathcal{A}\sin\iota \sin\Phi(t),\\ \Phi(t)&=\phi_{0}+2\pi f_{0}t+\pi\dot{f}_{0}t^{2}+ \frac{\pi}{3}\ddot{f}_{0}t^{3},\\ \ddot{f}_{0}&=\frac{11}{3}\frac{\dot{f}_{0}^{2}}{f _{0}}.\end{split} \tag{1}\]
Similar to EMRIs, GB waveforms are generated with a duration of 1 year and a sampling rate of 1/15 Hz. A
\begin{table}
(a) Parameters space of MBHB dataset

| Parameter | Description | Parameter distribution |
| --- | --- | --- |
| \(M_{tot}\) | Total mass of massive black hole binaries \(m_{1}+m_{2}\) | log-Uniform \([5.5,7]M_{\odot}\) |
| \(q\) | Mass ratio \(\frac{m_{2}}{m_{1}}\) | Uniform \([0.1,1]\) |
| \(S_{1}^{z},S_{2}^{z}\) | Spin parameters of two black holes | Uniform \([-0.99,0.99]\) |
| \(\iota\), \(\psi\) | The inclination angle and polarization angle | Uniform \([0,\pi]\) |
| \(\phi_{c}\) | Coalescence phase | Uniform \([0,2\pi]\) |
| \(\lambda\) | Ecliptic longitude | Uniform \([0,2\pi]\) |
| \(\beta\) | Ecliptic latitude | Uniform \([0,\pi]\) |

(b) Parameters space of EMRIs dataset

| Parameter | Description | Parameter distribution |
| --- | --- | --- |
| \(M\) | The mass of the massive black hole (MBH) | Uniform \([10^{5}-10^{7}]M_{\odot}\) |
| \(m\) | The mass of the stellar-mass compact object | Fix \([10M_{\odot}]\) |
| \(a\) | Spin parameter of the MBH | Uniform \([10^{-3},0.8]\) |
| \(p_{0}\) | Semi-latus rectum | Uniform \([10,16]\) |
| \(e_{0}\) | Eccentricity | Uniform \([10^{-3},0.4]\) |
| \(t_{0}\) | The orbit's inclination angle from the equatorial plane | Uniform \([-0.98,0.98]\) |
| \(\theta_{S}\), \(\phi_{S}\) | The polar and azimuthal sky location angles | Uniform \([10^{-3},\pi]\) |
| \(\theta_{K}\), \(\phi_{K}\) | The azimuthal and polar angles describing the orientation of the spin angular momentum vector of the MBH | Uniform \([10^{-3},\pi]\) |
| \(\Phi_{\varphi,0}\), \(\Phi_{\theta,0}\), \(\Phi_{r,0}\) | The phases of azimuthal, polar, and radial modes | Fix \([0]\) |

(c) Parameters space of GB dataset

| Parameter | Description | Parameter distribution |
| --- | --- | --- |
| \(f\) | Frequency | log-Uniform \([-4,-2]\) Hz |
| \(\dot{f}\) | The derivative of \(f\) | Fix \([10^{-14}]\) |
| \(A\) | Amplitude | Uniform \([10^{-23},10^{-21}]\) |
| \(\iota_{0}\), \(\psi\), \(\phi_{0}\) | The inclination angle, polarization angle and initial phase | Uniform \([0,\pi]\) |
| \(\lambda\) | Ecliptic longitude | Uniform \([0,2\pi]\) |
| \(\beta\) | Ecliptic latitude | Uniform \([0,\pi]\) |

Table 1: Parameters distribution of training set and test set
slice of 4,200 points is randomly truncated for training and inference. The parameter space of the GB dataset is shown in Table 1(c).
#### ii.1.4 Detector response and TDI 2.0
After generating the waveform, we project it into the LISA detector[49]:
\[\begin{split} h_{I,II}(t)=&\frac{\sqrt{3}}{2}h_{+}(t )F^{+}_{I,II}(\theta_{S},\phi_{S},\psi_{S})+\\ &\frac{\sqrt{3}}{2}h_{\times}(t)F^{\times}_{I,II}(\theta_{S}, \phi_{S},\psi_{S}),\end{split} \tag{2}\]
where \(F^{+}_{I,II}\) and \(F^{\times}_{I,II}\) are the antenna pattern functions, \(\psi_{S}\) is the polarisation angle, and \(\theta_{S},\phi_{S}\) are the polar and azimuthal sky location angles, respectively. Space-based gravitational wave detectors have unequal arm lengths, which result in significant laser frequency noise. To mitigate this issue, TDI techniques are commonly employed to suppress laser frequency noise[50; 51]. As a result, the detector response and the second-generation Michelson combinations (TDI 2.0) response of GW are calculated using Fastlisaresponse[52] in this case. TDI 2.0 generates three channels X, Y, and Z. By combining X, Y, and Z, three independent channels A, E, and T are obtained,
\[\begin{split} A&=(Z-X)/\sqrt{2},\\ E&=(X-2Y+Z)/\sqrt{6},\\ T&=(X+Y+Z)/\sqrt{3}.\end{split} \tag{3}\]
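The A, E, T combination of Equation (3) is a simple linear map of the X, Y, Z channels; a small NumPy helper (our own illustration) is shown below.

```python
import numpy as np

def aet_from_xyz(X, Y, Z):
    """Convert the TDI Michelson channels X, Y, Z into the quasi-orthogonal A, E, T channels."""
    A = (Z - X) / np.sqrt(2)
    E = (X - 2 * Y + Z) / np.sqrt(6)
    T = (X + Y + Z) / np.sqrt(3)
    return A, E, T
```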
In contrast to the waveform template, the incorporation of response functions and TDI 2.0 combination introduces increased complexity to the waveform, especially in the high-frequency part. As depicted in Figure 2, MBHB waveforms exhibit significant differences at various parameter values. The increase in complexity presents more challenges in our work while also making our method more practical in future space-based GW research.
### CBS-GPT Model
Transformer[53] is a class of deep learning models that has exhibited excellent performance in various tasks, such as natural language processing (NLP)[40; 41; 42] and computer vision[54]. We incorporate the masked self-attention mechanism and feed-forward neural network of transformer to build our CBS-GPT model.
**Patching** Firstly, each input waveform \(x^{i}\) is divided into non-overlapping patches, and we refer to each patch as a "token" here. The number of tokens depends on the length of each patch and the total length of the waveform. For all three types of signal sources, we have an input waveform with 4,200 sampling points, which is segmented into 1,050 tokens, and each token has a length of 4. Prior to patching, the input waveform is preprocessed by Z-score normalization, which facilitates the model in capturing waveform information more effectively:
\[\text{Normal}(s)=\frac{s-\text{Mean}(s)}{\text{Std}(s)} \tag{4}\]
where \(s\) represents the input waveform, \(\text{Mean}(\cdot)\) and \(\text{Std}(\cdot)\) represent finding the mean and standard variance of the waveform respectively.
**Hybrid Embedding** Different from NLP applications, in our scenario, each token contains richer physical information and cannot be tokenized by common tokenizers as in NLP. Hence, a hybrid embedding module is utilized in our model. As Figure 1 shows, it includes a dense layer and a positional embedding layer. The dense layer performs a linear projection, which achieves dimension mapping while preserving the entire information of the input sequence. The dense layer projects each token into a high-dimensional space of dimension \(d_{model}=1280\). The positional layer is also a linear layer with learnable parameters that encodes positional relationships between tokens, which is rather important for improving prediction accuracy[53].
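A minimal PyTorch sketch of the patching and hybrid embedding steps is given below. The patch length (4), token count (1,050), and \(d_{model}=1280\) follow the description above, but the class and variable names are our own illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class HybridEmbedding(nn.Module):
    def __init__(self, patch_len=4, num_tokens=1050, d_model=1280):
        super().__init__()
        self.patch_len = patch_len
        self.value_proj = nn.Linear(patch_len, d_model)                       # dense (linear) projection
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, d_model))   # learnable positional embedding

    def forward(self, x):
        # x: (batch, 4200) normalized waveform -> non-overlapping patches (batch, 1050, 4)
        tokens = x.unfold(dimension=-1, size=self.patch_len, step=self.patch_len)
        return self.value_proj(tokens) + self.pos_embed                      # (batch, 1050, 1280)

wave = torch.randn(8, 4200)
wave = (wave - wave.mean(dim=-1, keepdim=True)) / wave.std(dim=-1, keepdim=True)  # Z-score, Eq. (4)
emb = HybridEmbedding()(wave)   # torch.Size([8, 1050, 1280])
```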
**Encoder block** The encoder contains 36 blocks. Each block mainly consists of an attention module and a feed-forward neural network. As for the attention module, masked multi-head self-attention (MMHA) is adopted in our work, which enables information to be projected into matrices in different ways, thereby enhancing the expressive capacity of the model. The computation process of the attention module is as follows:
\[\mathit{head}_{i}(Q_{i},K_{i},V_{i})=\text{softmax}\left(\frac{Q_{i}K_{i}^{T} \cdot\text{mask}}{\sqrt{d}}\right)V_{i}, \tag{5}\]
where \(Q_{i}=W^{Q}x_{i},K_{i}=W^{K}x_{i},V_{i}=W^{V}x_{i}\) represent queries, keys and values of the \(i\)-th attention head respectively. Then, concatenate the outputs of all the heads and apply a linear transformation to obtain the final multi-head attention output:
\[\text{MMHA}(Q,K,V)=\text{Concat}(head_{1},...,head_{H})W^{H}\:, \tag{6}\]
\[\text{H}^{\prime}=\text{LayerNorm}(\text{MMHA}(Q,K,V))+x_{i}\:, \tag{7}\]
where \(x_{i}\) represents the hybrid embedding output or the output of the previous encoder block, and \(H^{\prime}\) is the output of the attention module. In MMHA block, \(W^{Q}\), \(W^{K}\), \(W^{V}\) and \(W^{H}\) are learnable parameters.
The feed-forward network (**FFN**), composed of two dense layers, is connected to each attention module. In order to get better results, we employ residual connections, which help reduce the vanishing gradient problem.
\[\text{Inter}(H^{\prime})=\text{GeLU}(H^{\prime}W_{1}+b_{1})W_{2}+b_{2} \tag{8}\]
\[\text{FFN}(H^{\prime})=\text{LayerNorm}(\text{Inter}(H^{\prime}))+H^{\prime} \tag{9}\]
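The FFN sub-layer of Equations (8)-(9) can be sketched as follows; the hidden width of 5,120 follows the configuration reported in the training details, and the class name is our own illustration.

```python
import torch.nn as nn

class FeedForward(nn.Module):
    """Two dense layers with GeLU, followed by LayerNorm and a residual connection (Eqs. 8-9)."""
    def __init__(self, d_model: int = 1280, d_hidden: int = 5120):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h):                        # h: output of the attention module, (B, T, d_model)
        inter = self.fc2(self.act(self.fc1(h)))  # Eq. (8)
        return self.norm(inter) + h              # Eq. (9)
```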
**Loss Function** Mean Squared Error (MSE) loss is used to measure the difference between the predictions and the true values as Equation 10 illustrates.
\[\mathcal{L}=\mathbb{E}_{\mathbf{y}}\frac{1}{M}\sum_{i=1}^{M}\|\hat{\mathbf{y}}^{(i)}-\mathbf{ y}^{(i)}\|_{2}^{2}. \tag{10}\]
where \(\mathbf{y}^{(i)}\) and \(\hat{\mathbf{y}}^{(i)}\) denote the prediction and true value of each token, and \(M\) is the total number of input sampling points, which equals 4,200.
### Training and Inference
CBS-GPT contains a total of 36 encoder blocks, each with 20 attention heads. The hidden size of each attention head is 64, and the hidden sizes of the two dense layers of the FFN are 5,120 and 1,280, respectively. During training, the Adam[55] optimizer with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\) is used, and the initial learning rate is 2e-4. To ensure that the training dataset covers the entire parameter space, it is continuously updated, with the parameters of each waveform randomly drawn from the source parameter space during training. After passing through the LISA response, each waveform is divided into three channels (A, E, and T). In this study, the E channel is selected to train the model. The model was trained on two NVIDIA V100 GPUs for approximately 30 hours.
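A schematic training loop consistent with this setup is sketched below. The `model` and `sample_batch` arguments are hypothetical placeholders (a CBS-GPT-like network and a generator of freshly simulated, Z-score-normalized token batches), and the next-token shift convention is our assumption of how the per-token MSE of Equation (10) is applied.

```python
import torch

def train(model, sample_batch, num_steps: int = 100_000):
    """Next-token training with Adam (lr=2e-4, betas=(0.9, 0.999)) and MSE loss (Eq. 10)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.999))
    loss_fn = torch.nn.MSELoss()
    for _ in range(num_steps):
        tokens = sample_batch()              # (B, 1050, 4), regenerated every step
        pred = model(tokens[:, :-1])         # predict each token from all preceding tokens
        loss = loss_fn(pred, tokens[:, 1:])  # per-token squared error, averaged
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```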
During inference, the initial input sequence contains 1,000 valid tokens and 50 masked tokens that are padded with zeros, whose corresponding values in the mask matrix equal 1. In the first step, the 1,001-st token is predicted and replaces the masked 1,001-st token; proceeding in this way, 50 successive tokens are predicted based on the 1,000 valid input tokens.
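The autoregressive extrapolation described here can be sketched as follows; the indexing convention (the model output at position \(i-1\) serving as the prediction for token \(i\)) is an assumption, since only the overall procedure of 50 successive predictions is stated.

```python
import torch

@torch.no_grad()
def extrapolate(model, tokens: torch.Tensor, n_valid: int = 1000, n_pred: int = 50) -> torch.Tensor:
    """Fill the masked tail token by token: predict token 1001, feed it back, and repeat."""
    seq = tokens.clone()                      # (1, 1050, 4); last 50 tokens are zero-padded
    for i in range(n_valid, n_valid + n_pred):
        out = model(seq)                      # hypothetical model returning per-position predictions
        seq[:, i] = out[:, i - 1]             # prediction for position i from the first i tokens
    return seq
```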
## III Results and Discussion
### Overlap Result
During inference, for each signal source, 10,000 waveforms with uniformly random parameters are generated to test CBS-GPT's performance. Given the earlier part of each signal (4,000 sampling points), the following 200 sampling points are extrapolated through CBS-GPT. The overlap is defined to evaluate the extrapolation accuracy of the predicted waveform.
Overlap is calculated between the target waveform and predicted waveform generated by CBS-GPT as stated in Equation 11. The overlap \(\mathcal{O}\) ranges between \([0,1]\), with values closer to 1 indicating that the predicted waveform is more similar to the target waveform.
\[\mathcal{O}(h_{t},h_{p})=\max_{t_{c}}\left(\hat{h}_{t}|\hat{h}_{p}[t_{c}] \right)^{1/2}, \tag{11}\]
with
\[(h|s) =2\int_{f_{\min}}^{f_{\max}}\frac{\tilde{h}^{*}(f)\tilde{s}(f)+ \tilde{h}(f)\tilde{s}^{*}(f)}{S_{n}(f)}df, \tag{12}\] \[\hat{h} =\frac{h}{\sqrt{(h|h)}}\]
where \(t_{c}\) denotes the time shift and we set \(S_{n}(f)=1\). Overall, in the context of extrapolation tasks targeting MBHB, GB, and EMRIs signals, CBS-GPT has demonstrated remarkable efficacy, with over 50% of the overlaps exceeding 0.99. These results illustrate CBS-GPT's capability to learn gravitational waveforms processed through the instrument response. Figure 3 shows the prediction performance of each waveform under varying parameter conditions, demonstrating that the CBS-GPT model can learn waveform characteristics over a wide variety of parameters.
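With a flat PSD (\(S_{n}(f)=1\)), the noise-weighted inner product reduces, up to normalization, to a time-domain cross-correlation, so the overlap can be sketched as the maximum normalized correlation over discrete time shifts. This is only an illustrative simplification of Equations (11)-(12); the function name is ours.

```python
import numpy as np

def overlap(h_target: np.ndarray, h_pred: np.ndarray) -> float:
    """Normalized cross-correlation maximized over the time shift t_c (flat-PSD simplification)."""
    h_target = h_target / np.linalg.norm(h_target)      # hat{h} = h / sqrt((h|h))
    h_pred = h_pred / np.linalg.norm(h_pred)
    corr = np.correlate(h_target, h_pred, mode="full")  # inner product at every discrete shift
    return float(np.max(corr))
```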
#### iii.3.1 Results of MBHB
During inference, as described in Section II.3, a consistent input-to-prediction length ratio of 20:1 is applied. The distribution of overlap and examples of predicted waveforms are shown in Figure 3(a) and Figure 4(a), with mean and median overlaps of 0.981 and 0.992, respectively. In particular, because of the short merge-ringdown phase, a smaller time range is shown in the left sub-figure of Figure 4(a). The overlap results reveal that CBS-GPT can forecast the waveform of the merge-ringdown phase based on the inspiral-phase characteristics. CBS-GPT exhibits optimal inference performance for a total mass of approximately \(10^{6.5}M_{\odot}\), as shown in Figure 3(a). Our observations reveal a notable pattern: much like a gravitational wave detector with its own sensitive frequency band, CBS-GPT performs best in a particular frequency range, and since the MBHB waveform frequency is directly linked to the total mass of the binary, this appears as a mass dependence. This phenomenon has also been observed in other signal sources.
In particular, the CBS-GPT model is sensitive to total mass, mass ratio, and spin parameters. Here we use the effective spin parameter \(\chi_{\text{eff}}\) to represent the spin parameter [56]:
\[\chi_{\text{eff}}=\frac{S_{1}^{z}}{1+q}+\frac{qS_{2}^{z}}{1+q}. \tag{13}\]
The overlap is higher in the low-total-mass and high-effective-spin (\(\chi_{\text{eff}}\)) region, as shown in Figure 3(a). As the total mass decreases, the frequency of the waveform gradually increases. Since the TDI 2.0 transfer functions are more complex in the high-frequency part[57, 58], the waveform is also more complex; consequently, the model's performance experiences a slight decrease. Nevertheless, as depicted in the left panel of Figure 4(a), CBS-GPT demonstrates the ability to learn the predominant features of the waveform, even in the presence of a complex
Figure 3: **The overlap distributions of MBHB, EMRIs, and GB are shown in the left panel. The parameters that have a greater impact on overlap are displayed in the middle and the right panels. MBHB, EMRIs, and GB have a mean overlap of 0.981, 0.912, and 0.991, respectively. (a) **For MBHB waveforms**, the biggest impact on the prediction results is \(M_{tot}\) and spin parameters \(\chi_{\text{eff}}\). The middle sub-figure showcases differences in overlap distribution associated with \(M_{tot}\), and the right sub-figure portrays the overlap heat map of \(M_{tot}\) and \(\chi_{\text{eff}}\). (b) **For EMRIs waveforms**, \(e_{0}\) and \(M\) have the greatest impact on overlap. As \(e_{0}\) increases, the EMRIs waveform becomes increasingly complex. The middle sub-figure showcases differences in overlap distribution associated with \(e_{0}\), and the right sub-figure portrays the overlap heat map of \(e_{0}\) and \(M\). (c) **For GB waveforms**, the parameter with the greatest impact is the frequency parameter. The middle sub-figure showcases differences in overlap distribution associated with \(f\), and the right sub-figure portrays the overlap heat map of \(f\) and overlap.
waveform. At lower frequencies (i.e., higher total mass), by contrast, the period accumulated within the input waveform is constrained. Compared with the low- and high-mass cases, prediction in the mid-frequency band therefore performs best. Moreover, even in these less ideal circumstances, CBS-GPT can still successfully recover a significant portion of the signals.
#### iii.1.2 Results of Continuous Waveforms: EMRIs and GB
Similarly, CBS-GPT tests are performed on 10,000 EMRIs and GB waveforms with a 20:1 extrapolation rate. The overlap distributions of EMRIs and GB are shown in Figure 3(b) and Figure 3(c), respectively. For GB, the mean and median overlap both exceed 0.99, while for EMRIs the mean and median overlap are 0.912 and 0.997, respectively. Although the mean overlap of EMRIs is slightly lower, its median overlap aligns with that observed for the MBHB and GB waveforms. Examples of predicted EMRIs and GB waveforms are shown in Figure 4(b) and Figure 4(c).
Specifically, the overlap distribution of EMRIs is significantly influenced by the mass and eccentricity parameters. As depicted in Figure 3(b), when \(e_{0}\) is less than 0.1, the majority of overlaps remain below 0.9. As the eccentricity increases, the amplitude structure of the waveform becomes more complex. Therefore, when the eccentricity is higher, the corresponding overlap tends to decrease.
In contrast to MBHB and EMRIs signals, the GB signal presents a comparatively straightforward, single-frequency waveform. In the GB dataset, the frequency parameter has the greatest impact on the waveform. When the frequency is above \(10^{-3.5}\) Hz, the overlap is generally higher than 0.9. The overlap analysis for GB signals demonstrates the model's frequency sensitivity, with a distinct preference for learning the characteristics of intermediate-frequency signals. Meanwhile, as the extrapolation length increases, the prediction accuracy for all three sources decreases.
### Interpretability
From a probabilistic perspective, CBS-GPT's prediction of the waveform can be expressed as the conditional probability of predicting the next token \(x_{i+1}\) given the previous \(i\) tokens[41]:
\[L=\sum_{i}\log P(x_{i+1}\,|\,x_{i},x_{i-1},\dots,x_{1};\Theta)\:, \tag{14}\]
where \(\Theta\) represents the model parameters. This probability can be represented by the attention mechanism, which can be effectively visualized with an attention map. The attention map allows us to visualize the model's reasoning and attention when predicting the waveform, which helps in interpreting gravitational wave data. Here, the inference process is visualized using the attention map:
\[A=\frac{1}{H}\sum_{i=1}^{H}\text{softmax}(\frac{Q_{i}K_{i}^{T}\cdot\text{mask }}{\sqrt{d}})\:, \tag{15}\]
where \(H\) is the number of attention heads in the last encoder block. In Figure 5, the vertical axis represents the model input waveform, and the horizontal axis represents the predicted waveform.
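A minimal sketch of Equation (15) is given below; the per-head queries and keys of the last encoder block are assumed to be cached during the forward pass (how they are extracted is an implementation detail not specified here), and the function name is ours.

```python
import math
import torch

def attention_map(q: torch.Tensor, k: torch.Tensor, causal_mask: torch.Tensor) -> torch.Tensor:
    """Average the masked, softmax-normalized attention weights over all heads (Eq. 15).

    q, k: (H, T, d) per-head queries/keys; causal_mask: (T, T) boolean, True above the diagonal."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    scores = scores.masked_fill(causal_mask, float("-inf"))
    return scores.softmax(dim=-1).mean(dim=0)   # (T, T) head-averaged attention map A
```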
When predicting continuous gravitational waveforms (EMRIs and GB), CBS-GPT tends to pay attention to the input points of the same phase, and the periodicity is shown in each row and column of the attention map (Figure 5(g) - 5(f)). The attention map exhibits a grid-like pattern that is closely related to the phase of the waveform, with the scale of the grid expanding as the frequency decreases. Hence, we perform a correlation calculation between the waveform and the hidden parameters, with detailed information available in Appendix A. In our sample, the correlation coefficient for continuous waveforms exceeds 0.8. This demonstrates that the model can accurately match the waveform's frequency and phase information. This mode assists CBS-GPT in successfully extrapolating input data, demonstrating its capacity to learn information across various scales.
During the prediction of the merge-ringdown phase of MBHB waveforms, attention primarily focuses on near-diagonal elements, as shown in Figure 5(a) - 5(c). In contrast to the continuous gravitational wave signals, the emphasis is more concentrated on the information close to the merger phase. The amplitude drops to zero after the merge-ringdown, and the attention mechanism focuses mainly on the merger stage and the stage after the merger, with relatively less attention directed towards information in the inspiral phase. The focus of the attention map also changes under different total masses, which demonstrates that CBS-GPT predicts waveforms by matching waveform phase and frequency. Moreover, as Figure 4 demonstrates, CBS-GPT accurately estimates the main frequency, and the attention map also effectively captures the phase information of the waveform.
## IV Conclusion
In this paper, we introduce the CBS-GPT model, which consists of hybrid embedding and encoder blocks. The CBS-GPT is applied to the prediction of GW waveform after the response of the space-based gravitational wave detector and TDI 2.0 combination. Three models are trained separately for MBHB, EMRIs, and GB wave sources to predict the next 200-point waveform by feeding the previous 4,000-point waveform. In the case of complex instrument response, wide parameter range, and large
frequency band span, the average overlaps between the predicted waveform and the target waveform of MBHB, EMRIs, and GB reach 0.981, 0.912, and 0.991, respectively. Intermediate-frequency waveforms exhibit superior prediction performance compared to both high-frequency and low-frequency counterparts.
At the same time, the visualization of the attention map illustrates that CBS-GPT adapts its attention to the varying frequencies of the waveform. We found that the correlation coefficient between the hidden parameters of the CBS-GPT model and the waveforms is relatively high, indicating that the model matches the waveform's phase and frequency information extremely well. Our results show that CBS-GPT can comprehend detailed waveform properties, make predictions based on the input waveform data, and exhibit remarkable learning performance over waveforms of varied frequencies. We are confident that large models can be further used in gravitational wave data processing in the future. For instance, the powerful learning capabilities of CBS-GPT hold potential for future applications in tasks such as gap completion for data from space-based GW detectors, GW signal detection, and signal noise reduction.
Figure 4: **CBS-GPT prediction results of MBHB, EMRIs, and GB.** We set the predicted starting point at time zero. To clarify, the blue line represents the conjunction of the last part of the input waveform and target label, the orange line is the predicted waveform, and the gray line is the difference between the predicted and target waveform. The subfigure on the upper-left represents the corresponding spectrum and difference in the frequency domain.
## Acknowledgments
This research was supported by the Peng Cheng Laboratory and Peng Cheng Cloud-Brain. This work was also supported in part by the National Key Research and Development Program of China Grant No. 2021YFC2203001 and in part by the NSFC (No. 11920101003 and No. 12021003). Z.C. was supported by the "Interdisciplinary Research Funds of Beijing Normal University" and CAS Project for Young Scientists in Basic Research YSBR-006.
## Appendix A Correlation coefficient between waveform and hidden parameters
We introduce the correlation coefficient between the waveform and hidden parameters to assess the level of correlation and demonstrate that the attention map can
Figure 5: **Attention maps of the last encoder layer.** For clear presentation, only part of the attention map is displayed. The blue lines on the left and bottom panels represent the input waveforms, whose 1,001-st to 1,050-th tokens are padded with zero-value during inference, and the orange line represents the waveform predicted by CBS-GPT. The term **Similarity** in the title of each figure denotes the correlation coefficient between the waveform and the hidden parameters, which is significantly high.
capture the phase information of the waveform. Firstly, we compute the mean value of each token of the patched input to obtain the sequence \(M\). Subsequently, the outer product of \(M\) is computed, resulting in the auto-correlation matrix. As the attention map is processed by masking and normalized by the softmax function, we apply a similar adjustment to the auto-correlation matrix, as the following equation shows.
\[\begin{split} R_{\text{mask}}=\text{Mask}(M\otimes M-\min(M \otimes M))\\ R_{\text{Norm}}=\text{RowNorm}\left(R_{\text{mask}}\right),\end{split} \tag{10}\]
where \(\text{RowNorm}(\cdot)\) denotes the normalization of each row of the matrix. To assess the correlation between the two matrices, we calculate the Pearson correlation coefficient between the flattened attention map and flattened \(R_{\text{Norm}}\) (Equation 11):
\[\rho_{A,R_{\text{Norm}}}=\rho\left\{\text{Flatten}(A),\text{Flatten}(R_{\text{Norm}})\right\}=\frac{n\sum_{i=1}^{n}A_{Fi}R_{Fi}-\sum_{i=1}^{n}A_{Fi}\sum_{i=1}^{n}R_{Fi}}{\sqrt{n\sum_{i=1}^{n}A_{Fi}^{2}-\left(\sum_{i=1}^{n}A_{Fi}\right)^{2}}\sqrt{n\sum_{i=1}^{n}R_{Fi}^{2}-\left(\sum_{i=1}^{n}R_{Fi}\right)^{2}}} \tag{11}\]
where \(\text{Flatten}(\cdot)\) denotes flattening the matrix into one dimension, and \(n\) represents the length after flattening. Finally, \(\rho_{A,R_{\text{Norm}}}\) is defined as the correlation coefficient between waveform and hidden parameters.
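The correlation measure of this appendix can be sketched as follows. The lower-triangular mask and the row-sum normalization are our reading of the "Mask" and "RowNorm" operations; other choices (e.g., per-row maximum normalization) would be equally plausible, and the function name is ours.

```python
import numpy as np

def waveform_hidden_correlation(tokens: np.ndarray, attn_map: np.ndarray) -> float:
    """Pearson correlation between the attention map and the masked, row-normalized
    auto-correlation matrix of the per-token mean waveform (Appendix A)."""
    m = tokens.mean(axis=1)                          # per-token mean value, the sequence M
    r = np.outer(m, m)                               # auto-correlation matrix M (x) M
    r = np.tril(r - r.min())                         # shift to non-negative values, apply causal mask
    r = r / (r.sum(axis=1, keepdims=True) + 1e-12)   # row normalization (assumed to be row-sum)
    return float(np.corrcoef(attn_map.flatten(), r.flatten())[0, 1])
```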
|
2307.00123 | How Do Human Users Teach a Continual Learning Robot in Repeated
Interactions? | Continual learning (CL) has emerged as an important avenue of research in
recent years, at the intersection of Machine Learning (ML) and Human-Robot
Interaction (HRI), to allow robots to continually learn in their environments
over long-term interactions with humans. Most research in continual learning,
however, has been robot-centered to develop continual learning algorithms that
can quickly learn new information on static datasets. In this paper, we take a
human-centered approach to continual learning, to understand how humans teach
continual learning robots over the long term and if there are variations in
their teaching styles. We conducted an in-person study with 40 participants
that interacted with a continual learning robot in 200 sessions. In this
between-participant study, we used two different CL models deployed on a Fetch
mobile manipulator robot. An extensive qualitative and quantitative analysis of
the data collected in the study shows that there is significant variation among
the teaching styles of individual users indicating the need for personalized
adaptation to their distinct teaching styles. The results also show that
although there is a difference in the teaching styles between expert and
non-expert users, the style does not have an effect on the performance of the
continual learning robot. Finally, our analysis shows that the constrained
experimental setups that have been widely used to test most continual learning
techniques are not adequate, as real users interact with and teach continual
learning robots in a variety of ways. Our code is available at
https://github.com/aliayub7/cl_hri. | Ali Ayub, Jainish Mehta, Zachary De Francesco, Patrick Holthaus, Kerstin Dautenhahn, Chrystopher L. Nehaniv | 2023-06-30T20:29:48Z | http://arxiv.org/abs/2307.00123v1 | # How Do Human Users Teach a Continual Learning Robot
###### Abstract
Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been _robot-centered_ to develop continual learning algorithms that can quickly learn new information on static datasets. In this paper, we take a _human-centered_ approach to continual learning, to understand how humans teach continual learning robots over the long term and if there are variations in their teaching styles. We conducted an in-person study with 40 participants that interacted with a continual learning robot in 200 sessions. In this between-participant study, we used two different CL models deployed on a Fetch mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users indicating the need for personalized adaptation to their distinct teaching styles. The results also show that although there is a difference in the teaching styles between expert and non-expert users, the style does not have an effect on the performance of the continual learning robot. Finally, our analysis shows that the constrained experimental setups that have been widely used to test most continual learning techniques are not adequate, as real users interact with and teach continual learning robots in a variety of ways. Our code is available at [https://github.com/aliayub7/cl_hri](https://github.com/aliayub7/cl_hri).
## I Introduction
We envision a future of general-purpose assistive robots that can help users with a variety of tasks in dynamic environments, such as homes, offices, etc. It would be necessary that such assistive robots are personalized to their users' needs and their environments [1]. However, over the long term, users' needs, preferences, and environments will continue to change, which makes it impossible to pre-program the robot with all the tasks it might be required to perform. A solution to this problem is to allow people to continually teach their robots new tasks and changes in their environments on the fly, an approach known as continual learning (CL) [2, 3].
Continual learning has been extensively studied in recent years to allow robots to learn over long periods of time [3, 4]. As it is imperative for a robot to learn the objects in its environment, the majority of research on CL has focused on machine learning (ML) models for object recognition in recent years [4, 5, 6]. Most of these techniques were tested on static object recognition datasets with a large number of training images for each object class. In real-world environments, however, robots will need to learn from individual interactions with their users who might be unwilling to provide a large number of training examples for each object.
In the past few years, robotics researchers developed CL techniques that can learn from only a few training examples per object, an approach known as Few-Shot Class Incremental Learning (FSCIL) [3, 7, 8]. Although FSCIL techniques produced promising results on real robots, they were only tested with systematically collected datasets by their experimenters. Overall, most research in continual learning has been _robot-centered_, to develop efficient CL algorithms that can learn from static datasets or interaction with robot experimenters. However, in the real world, robots will learn from real users who might be unfamiliar with robot programming and learning. Therefore, an equally important area of research in continual learning is _human-centered_, to understand how human users interact with and teach continual learning robots over the long term. To the best of our knowledge, we know of no other work on developing long-term user studies where human users teach modern CL models deployed on robots over multiple interactions.
In this paper, we have a human-centered focus to uncover the diversity and evolution of human teaching when interacting with a continual learning robot over repeated sessions. We developed a CL system that integrates a graphical user interface (GUI) with CL models of object learning deployed on the Fetch mobile manipulator robot [9]. We conducted a long-term between-participant study (N=40) where participants interacted with and taught everyday household objects to a Fetch robot that used two different CL models. We analyzed the data collected in the study to characterize various aspects of human teaching of a continual learning robot in an unconstrained manner. Our results highlight the variation in the teaching styles of different users, as well as the influence of the robot's performance on users' teaching styles over multiple sessions. Our results indicate that the constrained experimental setups traditionally used to test most CL models are inadequate, as real users teach continual learning robots in a variety of ways.
## II Related Work
In this section, we first present an overview of modern CL methods mostly tested without human users, and then
introduce current approaches to robot teaching, highlighting the need for a human-centered approach at the intersection of CL and human-robot interaction (HRI).
### _Continual Learning_
The goal of CL models is to continuously adapt and learn new information over time while preserving past knowledge. Most research in the CL literature has focused on class-incremental learning (CIL) in which a machine learning model learns from labeled training data of different classes in each increment and is then tested on all the classes it has learned so far [4]. One of the main problems faced by class-incremental learning models is _catastrophic forgetting_, in which the model completely forgets the previously learned classes when learning new classes in an increment [10]. Various research directions have been pursued in the past to tackle the catastrophic forgetting problem, such as replay-based techniques that store and replay data of the old classes when learning new classes [4, 11], regularization techniques [12, 13], and generative replay based techniques that generate old data using stored class statistics [14, 15]. These techniques, however, are not suitable for learning from human users who might be unwilling to provide hundreds or thousands of images per object class.
In the past couple of years, researchers also developed class-incremental learning models that can learn from only a few labeled examples per class, a direction known as few-shot class incremental learning (FSCIL) [16]. However, CIL and FSCIL approaches were either tested on static datasets, or on data captured by a robot while interacting with experimenters in systematically controlled setups [2, 3, 16]. To the best of our knowledge, all of the FSCIL approaches were robot-centered and none of these approaches were tested with actual participants (users).
### _Human-Robot Teaching_
Human-centered research for robot learning through HRI has been limited. A few user studies have been conducted in the past with simulated and real robots to understand the characteristics of human teaching. Most of these studies were conducted in Wizard of Oz setups where the robot did not learn from human teaching [17, 18]. Some research has been conducted on interactive reinforcement learning through HRI for learning manipulation tasks through physical human corrections, learning kitchen-related tasks in simulation, or learning natural language description of images from humans [19, 20, 21]. However, most of these studies were designed to test the performance of the reinforcement learning models or understand the perceptions of users towards these models and were not focused on understanding patterns of human teaching. Furthermore, these studies were only tested in a single interaction with users. However, for continual learning robots, it is imperative to design multi-session studies to understand how human teaching of continual learning robots evolves over the long term. In contrast to prior work, to the best of our knowledge, we conducted the first long-term user study at the intersection of continual machine learning and HRI, to understand patterns of human teaching with a continual learning robot over multiple interactions.
## III Method
We investigated human teaching patterns when interacting with a continual learning robot to teach an object recognition task. The subsections below describe our CL system and the method for our long-term study.
### _Continual Learning System_
In this experiment, in each session, the user taught the robot household objects in a table-top environment and then tested the robot to find and point to the requested object on the table. Figure 1 shows the table-top experimental setup for this study. The simplicity of the setup and the task makes it clear what the user should do to teach the robot different objects, and what the robot should do to find the learned objects during the testing phase.
For this setup, we developed a CL system for the object recognition task, which integrates CL models with a Fetch mobile manipulator robot [9], as well as a graphical user interface (GUI) for interactive and transparent learning from human users. Figure 2 shows our system for the object recognition task. In this system, the user interacts with the robot through the GUI on an Android tablet (Figure 3). The user provides labels of new objects placed in front of the robot through the GUI and saves the images of objects processed through the object detection module in the robot's
Fig. 1: (Left) Experimental layout for the CL setup with the participant and the robot. (Right) Corresponding real-world setup.
Fig. 2: Our complete CL system. Processed RGB images from the robot’s camera are sent to the GUI for transparency and also passed on to the CL Model. The user sends object names to the CL model either for training the CL model or finding an object. The arm trajectory planner takes point cloud data, processed RGB data, and predicted object labels from the CL model as input and sends the arm trajectory for the Fetch robot to point to the object.
memory. The robot then uses the saved object images in each session to train the CL model. After teaching, the user can test the robot by asking it to find objects on the table through the GUI. The robot passes the pre-processed images to the CL model to get the predicted object labels. If the object requested by the user is found, the robot finds the 3D location of the object on the table and points to the object using its arm.
#### Iii-B1 Continual Learning Models
We consider two CL models in this study. For the first model, we consider a naive finetuning (FT) approach [4] in which a convolutional neural network (CNN) [22] is trained on the image data of the object classes in each increment (i.e. in an interactive session with the user). The model does not train on any of the objects learned in the previous increments (sessions) and therefore it forgets the previously learned objects. This model can serve as a baseline for forgetting in continual learning [4, 11].
For the second model, we consider a state-of-the-art CL approach specifically designed for FSCIL in robotics applications [3]. This approach, termed centroid-based concept learning (CBCL), mitigates forgetting by creating separate clusters for different object classes. CBCL stores cluster centroids of object classes in memory and uses these centroids to make predictions about labels of new objects. More details about these models can be found in [3, 4]. Note that all of these models were only tested on systematically collected object datasets in prior work, and have never been tested in real-time with human participants.
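To make the difference between the two models concrete, the prediction step of a centroid-based learner can be illustrated with the nearest-centroid sketch below. This is a deliberate simplification of CBCL [3], which clusters feature vectors within each class and weights distances; the class and method names are our own.

```python
import numpy as np

class CentroidClassifier:
    """Minimal nearest-centroid illustration of centroid-based continual object learning."""
    def __init__(self):
        self.centroids = {}                               # object label -> list of centroid vectors

    def learn(self, label: str, features: np.ndarray):
        """Store one centroid per teaching episode, e.g. the mean CNN feature of the shown images."""
        self.centroids.setdefault(label, []).append(features.mean(axis=0))

    def predict(self, feature: np.ndarray) -> str:
        """Label a new image feature with the class of its closest stored centroid."""
        dists = {label: min(np.linalg.norm(feature - c) for c in cents)
                 for label, cents in self.centroids.items()}
        return min(dists, key=dists.get)
```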
### _Participants_
We recruited 40 participants (19 female (F); 21 male (M), all students) from the University of Waterloo, between the ages of 18 and 37 years (\(M=23.48\), \(SD=4.49\)). 20 participants (ages: \(M=24.15\), \(SD=4.21\), 10 F, 10 M) were randomly assigned to _FT_ condition, and the other 20 (ages: \(M=22.78\), \(SD=4.68\), 9 F, 11 M) were randomly assigned to _CBCL_ condition. The participants had diverse backgrounds in terms of their majors, but most of them (65%) were engineering and computer science students. Based on their self-assessments in a pre-experiment survey, 40% of the participants reported that they were familiar with robot programming, 55% reported that they had previously interacted with a robot, 5% were familiar with the Fetch robot, and 10% had previously participated in an HRI study. For the remainder of the paper, we will call 40% participants that had prior robot programming experience 'experts' and the rest of the participants 'non-experts'. All procedures were approved by the University of Waterloo Human Research Ethics Board.
### _Research Questions_
We analyze the data collected in our study to answer the following research questions and test the associated hypotheses. These hypotheses are guided by previous research that was discussed in Section II: Prior HRI research showed that users' interactions and perceptions towards a robot are correlated with the performance of the robot and the time and effort spent in interacting with the robot. Also, there might be differences in how different users interact with the robot, especially if they had prior experience programming robots. Further, prior CL research showed that CL models can forget previous knowledge over time, and thus their performance decreases. However, there is a difference in the rate of forgetting for different CL models.
**RQ1**: How do different human users label objects when teaching a continual learning robot over multiple sessions?
**H1.1**: Labelling strategies for objects vary among different users.
**RQ2**: Does the continual learning robot's performance affect the way users teach over multiple sessions?
**H2.1**: Classification performance of the robot affects the teaching style of the participants over multiple sessions.
**H2.2**: Users teach a robot that forgets previous objects differently than a robot that remembers previous objects.
**RQ3**: Do users change the way they teach the continual learning robot over multiple sessions?
**H3.1**: Teaching styles of users change over multiple sessions regardless of the CL model.
**RQ4**: Is there a difference in teaching style and robot performance for expert and non-expert users?
**H4.1**: Continual learning robots taught by expert users perform better than the ones taught by non-expert users.
**H4.2**: There is a difference between the teaching styles of expert and non-expert users.
### _Procedure_
We conducted five repeat sessions (each lasting \(\sim\)20-30 minutes) with each participant in a robotics laboratory. All sessions were video recorded. We also stored the image data of the objects taught and tested by the participants. Each participant was randomly assigned to one of the two experimental conditions using one of the two CL models, CBCL and FT. Before their first session, each participant
Fig. 3: The graphical user interface (GUI) used to interact with the robot. The RGB camera output with bounding boxes is on the top left. The buttons at the bottom can be used to teach objects to the robot and ask it to find objects in the testing phase. The top right of the GUI shows information sent by the robot to the user.
was asked to complete a consent form and a pre-experiment survey online. After completing the consent form and the pre-experiment survey, the experimenter greeted the participant and gave a brief oral introduction to the experiment. The participant then interacted with the robot in a demo session to understand how to teach and test the robot. In the demo phase, the robot did not learn any objects.
During the demo phase, the experimenter explained to the participant how to start a teaching session using the GUI, teach an object to the robot, and test the robot to find the object. The participant then tried teaching a demo object (this object was not used later) to the robot. The participant then tested the robot to find the demo object on the table using the GUI. After the demo phase (\(\sim\)5 minutes), the experimenter gave a paper sheet, which served as a memory aid, to the participant to write down the names of the objects of the current session. In this way, the participants could remember the object names when they needed the robot to find these objects in the next sessions. The experimenter then took the tablet from the participant and loaded the program for the actual session on the tablet. The experimenter handed the tablet back to the participant and placed five objects to be taught in the session on one side of the table. The experimenter then mentioned to the participant that they can start their session and start teaching the five objects.
The experimenter then went to a secluded area and the participant taught and tested objects to the robot. At the end of the session, the experimenter came out of the secluded area and asked the participant to finish a post-experiment survey. The participant then scheduled their next session. In the next four sessions, the same procedure was repeated, except for replacing the objects to be taught between sessions. Figure 4 shows the 25 objects used in our study. Participants were also told that they can bring a maximum of two objects per session of their own choice in sessions 3-5 to teach to the robot. If participants brought their own objects, we replaced some of the objects from our set (Figure 4) with participants' objects (total objects taught over 5 sessions was still 25). Participants did not go through a demo interaction in the next four sessions. At the end of the last session, the experimenter asked the participant to have a short interview to answer some questions describing their experience with the robot. This interview was audio recorded. Analyses of the post-experiment survey and audio interview are not reported since they go beyond the scope of this paper, and will be reported in future publications. Examples of the teaching and testing phases are shown in the supplementary video.
### _Measures_
We used both qualitative and quantitative measures to analyze the data for the two conditions. We analyzed the object names given by the participants to different objects using the image data stored for objects during teaching sessions. We report the variety and frequency of labels used by the participants for each object. We also coded the video recordings to calculate the frequency of teaching by the participants in all 5 sessions, and if they re-taught any objects to the robot in case the robot was not able to correctly find them on the table.
We also analyzed the performance of two CL approaches. Classification accuracy per session (increment) has been commonly used in the CL literature [4, 7] for quantifying the performance of CL models for object recognition tasks. Therefore, for each session, during the testing phase, we recorded the total number of objects tested by the participant and the total number of objects that were correctly found by the robot. Using this data, we calculated the accuracy \(\mathcal{A}\) of the robot in each session as:
\[\mathcal{A}=\frac{number\ of\ objects\ correctly\ found}{number\ of\ objects\ tested} \tag{1}\]
We use the accuracy of the models to determine the teaching quality of the participants in each condition and over multiple sessions. Further, using the image data stored for the objects, we calculated the average number of times
\begin{table}
\begin{tabular}{c|c|c}
**Object** & **No. of Different Labels** & **Most Common Label** \\ \hline Green Cup & 10 & Cup (59\%) \\ Honey & 13 & Honey (46.5\%) \\ Bowl & 10 & Bowl (65\%) \\ Glue & 6 & Glue (76\%) \\ Spoon & 6 & Spoon (81\%) \\ \hline Apple & 3 & Apple (90\%) \\ Banana & 3 & Banana (90\%) \\ Red Cup & 14 & Red Cup (25\%) \\ Blue Marker & 11 & Marker (58\%) \\ Orange & 5 & Orange (77\%) \\ \hline Mug & 7 & Mug (72\%) \\ Fork & 6 & Fork (76\%) \\ Sharpei & 8 & Sharpei (48\%) \\ Plate & 10 & Plate (61\%) \\ Stapler & 6 & Stapler (86\%) \\ \hline Book & 4 & Book (86\%) \\ Red Marker & 4 & Red Marker (31\%) \\ Blue Pen & 7 & Pen (60\%) \\ Pepsi & 7 & Pepsi (54\%) \\ White Bottle & 8 & Water Bottle (62\%) \\ \hline Coca Cola & 8 & Coke (36\%) \\ Milk & 7 & Milk (77\%) \\ Phone & 5 & Phone (68\%) \\ 7Up & 12 & 7Up (44\%) \\ Water Bottle & 8 & Water (47\%) \\ \end{tabular}
\end{table} TABLE I: The number of different labels given by the participants to all 25 objects in the study together with the most common label for each object with the percentage of participants that chose this label. Objects are ordered from top to bottom as they were taught in 5 sessions with 5 objects per session. Note that the first column shows some reference names for the objects to be able to identify them individually in the paper.
Fig. 4: The twenty-five objects used in our study.
each object was taught by the participants in each session to determine the effort spent by the participants in teaching the robot. Finally, we analyze how the above-mentioned variables are affected by the sessions, choice of the CL model, and previous robot programming experience of the participants.
## IV Results
In this section we present the results of our analysis in terms of different labeling strategies and teaching styles of the participants. We also report the effect of participants' teaching styles on the robot's performance and vice versa.
### _Object Labeling by Human Teachers_
Table I shows the number of different labels given to the 25 objects by 40 participants in the study. To identify each object we add a generic name for each object in the table. For example, for the plastic apple used in our study, we identify it as an apple in the table. Overall, there was a significant variation in the labeling of objects by the participants, ranging from 3 (for Apple) to 14 (for Red Cup) different labels for the objects. Among such labels, some were quite simple and generic, such as _Honey_, _Bowl_, _Milk_, etc. whereas some were quite specific, such as _Almost Empty Yellow Honey Jar_, _Light Green Flat Bowl_, _Empty Milk Carton_, etc. We also report the most common label given to each object and the percentage of participants that chose that label. The consensus among the participants for labeling the objects varied from 25% for _Red Cup_ to 90% for _Apple_.
We also noticed some unique labeling strategies by the participants. Some participants labeled different objects in different sessions using the same label. For example, multiple participants gave the label _Cup_ to _Green Cup_ in Session 1, _Red Cup_ in Session 2, and _Mug_ in Session 3. In total, 10 out of 40 participants (25%) gave the same label to at least two different objects. Further, some participants also gave multiple labels to the same objects. For example, one participant labeled _Milk_ as both _Milk Box_ and _Milk Pouch_. Overall, there were 7 out of 40 participants (17.5%) that gave more than one label to at least one object. Finally, we noticed that some participants gave labels that did not match the objects. For example, one participant named glue _Insert Stick Joke Here_, another participant named bowl _Plate_, and another participant named stapler _The Better Robot_.
### _Participants' Teaching Styles and Robot Performance_
We performed a three-way ANOVA with three independent variables: the two conditions (CBCL, FT), session number (total 5 sessions) treated as within subject, and previous robot programming experience of the participants (experts, and non-experts). The ANOVA was performed to understand the effect of the three independent variables on the teaching style of the participants and the robot's performance in the testing phase. The dependent variables were the classification accuracy of the robot in the testing phases for 200 sessions, the average number of images per object shown by the participants in each session, the number of teaching phases started by the participants in each session, and the number of times participants retaught misclassified objects in each session.
Table II represents the \(p\) values and significance levels for the ANOVA. For classification accuracy, we see a significant effect based on the session number, the choice of the CL model (CBCL or FT condition), and the interaction between the session number and the CL model. For the number of images taught per object, we noticed a significant effect based on the previous programming experience of the participants and the interaction between the CL model and the programming experience. For the number of teaching phases per session, we only saw a borderline effect by the programming experience and interaction between the CL model and the programming experience. Finally, reteaching of misclassified objects was not significantly affected by any independent variables.
For significant ANOVAs, we performed the post hoc Tukey HSD test. However, because the data of some sub-groups for some dependent variables were not normally distributed, we also performed the Wilcoxon rank sum test [24] with false discovery rate correction [25] for pairwise comparisons between sub-groups for each dependent variable.
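For reference, the pairwise-comparison step for non-normal sub-groups can be sketched with SciPy and statsmodels as below. The function name and the way sub-groups are encoded are our own, the Benjamini-Hochberg procedure is assumed to be the FDR correction cited in [25], and the omnibus three-way ANOVA itself is not reproduced here.

```python
from itertools import combinations
from scipy.stats import ranksums
from statsmodels.stats.multitest import multipletests

def pairwise_ranksum_fdr(groups: dict) -> dict:
    """Wilcoxon rank-sum tests between all sub-group pairs with Benjamini-Hochberg FDR correction.

    `groups` maps a sub-group name (e.g. "CBCL-expert") to its list of per-session values."""
    pairs = list(combinations(groups, 2))
    pvals = [ranksums(groups[a], groups[b]).pvalue for a, b in pairs]
    _, corrected, _, _ = multipletests(pvals, method="fdr_bh")
    return dict(zip(pairs, corrected))
```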
#### Iv-B1 Model Accuracy
As the data for classification accuracy was normally distributed, we performed the post hoc Tukey HSD test for significant ANOVAs. Figure 5 shows the average classification accuracy of the continual learning robot over five sessions. As displayed in Figure 4(a), the accuracy is significantly affected by the choice of the CL model. For the first session, both models have similar accuracy (\(\mu=0.53\), \(\sigma=0.19\) for CBCL; \(\mu=0.52\)
\begin{table}
\begin{tabular}{c|c|c|c|c} & **Accuracy** & **No. images** & **Teaching phases** & **Reteaching** \\ \hline Session number & **0.0003** (0.08) & 0.056 & 0.775 & 0.407 \\ CL model & **\(<\)0.0001** (0.32) & 0.161 & 0.774 & 0.748 \\ Programming experience & 0.487 & **0.034** (0.09) & 0.486 & 0.945 \\ Session number : CL model & **\(<\)0.0001** (0.09) & 0.194 & 0.534 & 0.335 \\ Session number : Programming experience & 0.392 & **0.039** (0.01) & 0.239 & 0.341 \\ CL model : Programming experience & 0.597 & **0.040** (0.08) & 0.379 & 0.375 \\ Session number : CL model : Programming experience & 0.964 & 0.091 & 0.739 & 0.136 \\ \end{tabular}
\end{table} TABLE II: Results (\(p\) values) of the three-way ANOVA using session number, continual learning model, and previous programming experience as independent variables. Columns for the accuracy of the models, number of images per object, number of teaching phases, and reteaching misclassified objects show \(p\) values for the dependent variables. Effect sizes (generalized eta [23]) for significant ANOVAs are shown in the brackets. Significance levels (\(*:=p<.05;**:=p<0.01;***:=p<0.001\); \(***:=p<0.0001\)).
\(\sigma=0.18\) for FT). For the next four sessions, there is a statistically significant difference between the two models: when comparing CBCL (\(\mu=0.54\), \(\sigma=0.16\)) to FT (\(\mu=0.29\), \(\sigma=0.13\)) with \(p<0.0001\) for session 2, comparing CBCL (\(\mu=0.54\), \(\sigma=0.15\)) to FT (\(\mu=0.22\), \(\sigma=0.13\)) with \(p<0.0001\) for session 3, comparing CBCL (\(\mu=0.53\), \(\sigma=0.20\)) to FT (\(\mu=0.24\), \(\sigma=0.13\)) with \(p<0.0001\) for session 4, and comparing CBCL (\(\mu=0.55\), \(\sigma=0.19\)) to FT (\(\mu=0.26\), \(\sigma=0.14\)) with \(p<0.0001\) for session 5. Further, when considering the two models separately, significant differences are seen between the first and the subsequent sessions for FT only.
As evident from the ANOVA, there was no statistically significant difference in classification accuracy for expert and non-expert users (based on their previous programming experience). Results in Figure 4(b) correlate with the ANOVA.
#### V-B2 Number of Images per Object
We performed the post hoc Tukey HSD test for the significant ANOVAs for the number of images as the dependent variable. Figure 5(a) details the difference between the two CL models and expert and non-expert participants in terms of the number of images taught per object. We notice a statistically significant difference between experts and non-experts irrespective of the CL model with (\(\mu=3.93\), \(\sigma=3.71\)) for non-experts and (\(\mu=6.09\), \(\sigma=4.67\)) for experts with \(p=0.034\). However, this difference seems to stem from participants in the FT condition only. In terms of individual sessions, there is a statistically significant difference between experts ((\(\mu=6.75\), \(\sigma=5.32\))) and non-experts ((\(\mu=3.91\), \(\sigma=3.79\))) in session 5 only with \(p=0.003\).
To further investigate experts and non-experts in the FT condition, we performed a Wilcoxon rank sum test [24] between experts and non-experts in the FT condition over five sessions. As displayed in Figure 7, there is a statistically significant difference between experts and non-experts for sessions 4 and 5 only i.e. when comparing experts (\(\mu=9.05\), \(\sigma=4.41\)) to non-experts (\(\mu=3.68\), \(\sigma=3.25\)) with \(p=0.035\), \(W=10.5\) in session 4, and comparing experts (\(\mu=11.49\), \(\sigma=7.03\)) to non-experts (\(\mu=3.14\), \(\sigma=2.49\)) with \(p=0.035\), \(W=10.0\) in session 5.
#### V-B3 Number of Teaching Phases per Session
As the ANOVA for the number of teaching phases was not significant, we did not perform a post hoc Tukey HSD test. Overall, 20 out of 40 participants had at least one session where they started more than one teaching phase with the robot. In total, 50 out of 200 sessions had more than one teaching phase, ranging from 2 to 9 teaching phases in a single session.
#### V-B4 Reteaching after Misclassification
The ANOVA for the dependent variable reteaching after misclassification was not significant and there were no borderline values. Therefore, we did not perform a post hoc Tukey HSD test. Overall, we noticed that 18 out of 40 participants retaught at least one object after it was misclassified by the robot during the testing phase. In total, there were 46 out of 200 sessions in which participants retaught misclassified objects, with a maximum of 7 reteachings of misclassified objects in a single session.
Note that the above statistic counts only the reteaching of misclassified objects from the current session; that is, if an object taught in a previous session was misclassified and retaught, it is not covered by this statistic. Overall, there were only 11 sessions in which participants retaught at least one object from previous sessions, with a maximum of 4 old objects taught in a session. In terms of the number of participants, only 6 out of 40 participants retaught objects from previous sessions in subsequent sessions.
Fig. 5: Boxplots for _accuracy_ for two conditions. Significance levels (\(*:=p<.05;**:=p<0.01;***:=p<0.0001\)) are indicated on bars between columns.
Fig. 6: Boxplots for _number of images per object_. Significance levels (\(*:=p<.05;**:=p<0.01\)) are indicated on bars between columns.
Fig. 7: Boxplot for _number of images per object_ for experts and non-experts in FT condition. Significance levels (\(*:=p<.05\)) are indicated on bars between columns.
## V Discussion
Results from the qualitative and quantitative analyses of the data collected in our study allow us to validate the hypotheses in Section III-C and answer the research questions.
For object labeling, we noticed significant variations in the labeling strategies of different participants. None of the 25 objects used in the study had a single consistent label across all 40 participants, even for simple objects, such as _Apple_. Further, some participants also labeled different objects with the same label, and some participants gave multiple labels to the same object. These distinct labeling strategies seem to affect the performance of the continual learning robot as depicted by the high standard deviation in classification accuracy of both CL models (Figure 5a). As a consequence, we can accept **H1.1**. These results also indicate the need for developing personalized robots that adapt to their users' labeling strategies and learn, and understand, their environment such that both the user and the robot can effectively communicate about the entities in the environment.
The classification accuracy of the continual learning robot was significantly affected by the choice of the CL model which was expected as the FT model forgets previous objects over the five sessions. However, classification accuracy was not affected by the previous robot programming experience of the participants. This result was surprising as it indicates that even expert users who have previous programming experience might not be familiar with continual learning over the long term. Therefore, the teaching effectiveness of both expert and non-expert users might be similar for a continual learning robot. Consequently, we have to reject **H4.1**.
We quantified participants' teaching styles by calculating the number of images taught per object, the number of teaching phases started in each session, the number of times objects were retaught after being misclassified by the robot, and the number of times objects from previous sessions were taught by the participants. For the number of images per object, we did not find a statistically significant difference regarding the choice of the CL model or the session number in ANOVA. However, the previous robot programming experience of the participants did have a significant effect on the number of images per object. There was also an interaction of the previous robot programming experience with the session number and the CL model. Upon further investigation, we noticed that the difference between experts and non-experts occurred because expert users showed a significantly larger number of objects than non-expert users during later sessions in the FT condition. These results show that based on their previous experiences expert users might try to compensate for the degraded performance of the robot in later sessions by teaching more images per object. Note, that this might still not affect the robot's classification performance, as users might not be familiar with continual learning.
In terms of the number of teaching phases per session, there was no statistically significant effect of the choice of the CL model or the session number. However, we did see that half of the participants started more than a single teaching phase in 200 sessions. Note that in the demo session participants were shown only a single teaching phase. Therefore, this result indicates that users might teach continual learning robots differently than the experimenter, i.e., not entirely following their instructions.
For reteaching objects based on misclassification by the robot, we did not see any significant effect of the session number, choice of the CL model, or the previous programming experience of the participants. However, we did notice that almost half of the participants retaught objects if they were misclassified by the robot. This result indicates that, unlike static datasets, the continual learning robots might get more data for the objects if the robot misclassifies them in a session. Finally, we also noticed that almost half (45%) of the participants also retaught some of the objects from previous sessions to the robot. Note that in the study instructions, and during the demo phase, participants were not told that they cannot re-teach old objects, therefore many of the participants retaught objects from previous sessions if the robot misclassified them during the testing phases. These results further demonstrate the difference between constrained CL test setups and testing in the real world with real users. Particularly, unlike constrained CL setups, users will reteach objects that they previously taught the robot if the robot does not classify them correctly. Finally, these results show that most users in the study were motivated to improve the performance of the robot, even though they were not given any specific incentive to do so. This is quite promising, as it indicates that users might be motivated to improve the performance of their personal robots over long-term interactions. Based on the above results **H2.1** can be accepted partially as we noticed that almost half of the participants retaught objects to the robot based on the robot's classification performance. **H2.2** (users teach a forgetful robot differently) has to be rejected as none of the three dependent variables for teaching style were affected by the choice of the model. Furthermore, **H3.1** (evolution of teaching styles over sessions) has to be partially accepted as we noticed a change in the number of images per object for expert users in the last session. Finally, we can accept **H4.2** (difference between the teaching styles of experts and non-experts) partially, as we did notice a difference in the number of images per object for expert and non-expert users. However, there was no difference in terms of the number of teaching phases per session and reteaching old objects between expert and non-expert users.
## VI Conclusions
In this paper, we considered a human-centered approach to continual learning to understand how users interact with and teach continual learning robots over the long term. We designed a long-term between-participant HRI study with a continual learning robot using two different CL models and analyze the data to understand the different teaching styles of participants, and how these styles are influenced by the performance of the robot over multiple sessions. Our results indicate that different users might teach household objects
to the continual learning robot in a variety of ways, which could also affect the classification performance of the CL models. Moreover, the results show that the classification performance of the robot in prior sessions could influence the teaching style of the users in subsequent sessions, which is different from constrained CL test setups. The results also show that the previous programming experiences of the users can also significantly influence the way they interact with and teach the continual learning robot over multiple sessions. Finally, these results demonstrate the limitations of current CL test setups and CL models. Therefore, based on the results of this study, we recommend future CL models focus on adapting to the teaching style of their users, and that CL models should be tested in more realistic test setups.
## VII Limitations and Future Work
We conducted our study in an unconstrained setup, where participants could teach and test the robot flexibly. However, the study was conducted in a robotics lab and not in a realistic household environment. In future work, we plan to conduct a similar study in a smart home with the same robot to understand the influence of the household environment on the interactions and teaching styles of the users. We conducted the user study with a mix of expert and non-expert users; however, they were all university students between the ages of 18 and 37 years. In future work, we plan to conduct this study with participants who might be less familiar with robots to understand the effectiveness of continual learning robots for assistive applications. Finally, the study was conducted with one particular robot and with two CL models. Expanding this work to other robots and CL models can help us understand the larger design space of continual learning robots and users' teaching patterns when interacting with these robots.
Despite these limitations, our user study took the first step toward a human-centered approach to continual learning by integrating machine learning-based CL models with HRI. We hope that our results can help ML and HRI researchers design CL models that can adapt to their users' teaching styles and test these models in realistic experimental setups where embodied agents interact with human users.
|
2305.00982 | Two-phase Dual COPOD Method for Anomaly Detection in Industrial Control
System | Critical infrastructures like water treatment facilities and power plants
depend on industrial control systems (ICS) for monitoring and control, making
them vulnerable to cyber attacks and system malfunctions. Traditional ICS
anomaly detection methods lack transparency and interpretability, which make it
difficult for practitioners to understand and trust the results. This paper
proposes a two-phase dual Copula-based Outlier Detection (COPOD) method that
addresses these challenges. The first phase removes unwanted outliers using an
empirical cumulative distribution algorithm, and the second phase develops two
parallel COPOD models based on the output data of phase 1. The method is based
on empirical distribution functions, parameter-free, and provides
interpretability by quantifying each feature's contribution to an anomaly. The
method is also computationally and memory-efficient, suitable for low- and
high-dimensional datasets. Experimental results demonstrate superior
performance in terms of F1-score and recall on three open-source ICS datasets,
enabling real-time ICS anomaly detection. | Emmanuel Aboah Boateng, Jerry Bruce | 2023-04-30T18:13:40Z | http://arxiv.org/abs/2305.00982v1 | # Two-phase Dual COPOD Method for Anomaly Detection in Industrial Control System
###### Abstract
Critical infrastructures like water treatment facilities and power plants depend on industrial control systems (ICS) for monitoring and control, making them vulnerable to cyber attacks and system malfunctions. Traditional ICS anomaly detection methods lack transparency and interpretability, which make it difficult for practitioners to understand and trust the results. This paper proposes a two-phase dual Copula-based Outlier Detection (COPOD) method that addresses these challenges. The first phase removes unwanted outliers using an empirical cumulative distribution algorithm, and the second phase develops two parallel COPOD models based on the output data of phase 1. The method is based on empirical distribution functions, parameter-free, and provides interpretability by quantifying each feature's contribution to an anomaly. The method is also computationally and memory-efficient, suitable for low- and high-dimensional datasets. Experimental results demonstrate superior performance in terms of F1-score and recall on three open-source ICS datasets, enabling real-time ICS anomaly detection.
Anomaly detection, industrial control systems, machine learning, cyber-physical systems.
## I Introduction
Modern critical infrastructures like water treatment facilities, oil refineries, power grids, and nuclear and thermal power plants all include industrial control systems (ICS), which are used to control and monitor a physical process. An ICS is created by combining computational and communication components; sensors, actuators, Programmable Logic Controllers (PLCs), Human Machine Interfaces (HMIs), and Supervisory Control and Data Acquisition (SCADA) systems are some of the devices and subsystems that make up an ICS.
For a given ICS setup, the physical layer's field devices, or sensors and actuators, control and manage the underlying industrial process. The distributed control layer's PLCs get information about the process's present condition via sensors. Pumps, valves, generators, and circuit breakers are examples of actuators that receive control actions from the PLCs and carry them out. For the purpose of executing human-assisted control actions, other devices, such as the SCADA and HMIs at the supervisory control layer, enable communication between a plant operator and the PLCs [1]. Figure 1 shows a conceptual view of a typical ICS.
Because of the deployment of ICSs in critical infrastructure, ICSs are desirable targets for adversaries. Any of the ICS subsystems may be compromised by a skilled adversary who would then be able to manipulate the actuators or sensor readings over time until a nefarious objective is achieved. ICSs may experience service interruptions or material losses as a result of malicious cyberattacks, which could have a detrimental impact on the quality of life in societies. A significant instance is the 2010 Stuxnet-based cyberattack against the Iranian nuclear program [2]. The worm used pertinent data from the ICS to harm the centrifuges inside the factory by repeatedly altering their rotation speed after infecting the Programmable Logic Controllers (PLCs). Another such instance is the cyberattack in late 2015 [3] that targeted the electrical grid in Ukraine. Three local power distribution businesses had their information systems compromised by the attackers using the BlackEnergy malware.
Machine learning-based intrusion detection solutions for ICSs have been developed in response to the rising frequency and complexity of these attacks [4, 5, 6]. A common and effective method among the several strategies suggested in the scientific literature is the one-class classification [4, 7]. In the training stage, one-class classification-based solutions create a model that embodies the ICS's typical behavior. The detection system employs that model during the production stage to check whether the behavior of the live system adheres to a required standard. Anomalies are deviations from the norm that are typically identified using classification error criteria. These algorithms may be particularly sensitive to unusual behavior, including zero-day anomalies or cyberattacks and malfunctioning hardware or sensors that might avoid detection.
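To make the one-class workflow concrete, the sketch below trains a detector on normal-only records and flags deviations at test time. It uses scikit-learn's OneClassSVM merely as a generic stand-in for the one-class models cited above, and the synthetic arrays are placeholders rather than data from any of the referenced ICS testbeds.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

# X_train_normal: records collected during normal ICS operation only (placeholder data)
rng = np.random.default_rng(0)
X_train_normal = rng.normal(size=(1000, 10))
# X_test: live records, here with a few injected deviations standing in for attacks
X_test = np.vstack([rng.normal(size=(200, 10)),
                    rng.normal(loc=4.0, size=(20, 10))])

scaler = StandardScaler().fit(X_train_normal)
model = OneClassSVM(kernel="rbf", nu=0.01).fit(scaler.transform(X_train_normal))

# +1 = consistent with the learned normal profile, -1 = flagged as anomalous
labels = model.predict(scaler.transform(X_test))
print("flagged records:", int(np.sum(labels == -1)))
```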
Another key challenge of deploying machine learning techniques for ICS is the computational demands of the machine learning detection model [8]. ICS technology has been around
Fig. 1: Conceptual View of Industrial Control Systems (ICS).
for several decades; as a result, deploying complex machine learning models with high computational requirements can be challenging. However, existing approaches are computationally expensive, and their performance is limited to low-dimensional data due to the curse of dimensionality. The majority of existing methods require the selection and tuning of hyperparameters. Sometimes this selection is arbitrary and subjective, producing varying results across different experiments on the same ICS setup.
Model interpretability is a crucial challenge in ICS anomaly detection. The majority of the existing literature makes use of complex machine learning techniques [9, 4, 5] or an ensemble of different anomaly detection algorithms [10, 11, 12] in an effort achieve high anomaly detection performance. However, these complex anomaly detection techniques suffer from model interpretability. In ICS setup, it is essential to understand the root cause of anomalies to make informed decisions during ICS troubleshooting and during ICS forensic analysis. Because ICS control critical infrastructure in our society, anomaly detection models must be capable of providing reasons for flagging any system operation as an anomaly.
False alarms are also a challenge in ICS anomaly detection. To preserve system availability and operational continuity, every anomaly detection system must lower the erroneous categorization of ICS routine procedures. A false alarm could occur due to the selection of an improper machine learning approach for ICS anomaly detection. For example, supervised learning has been used by the vast majority of researchers for ICS anomaly detection [13, 6]. Because this supervised anomaly detection method is unable to detect unidentified cyberattacks, huge, up-to-date databases, including every possible cyberattack, are required to train the classification model. Such requirements are infeasible in practice. Unsupervised machine learning techniques have also been used in ICS anomaly detection [4, 9] in an effort to account for unknown anomalies. Unsupervised anomaly detection algorithms assume that all training data are a true representation of the normal operation of the ICS because anomalies are unknown ahead of time. The assumption may not hold for complex ICS with several sensors and actuators. Sensor and actuator readings may be noisy. If noisy data is used in training anomaly detection models, their predictions may not be a true reflection of the normal ICS operation. There is a need to remove the noise and any apparent outliers in the data before training.
This work tackles the aforementioned challenges by proposing a novel unsupervised Two-phase Dual (TPD) Copula-Based Outlier Detection (COPOD) method for ICS anomaly detection. The proposed TPD COPOD is an anomaly detection method that consists of two sequential modeling stages and a dual (parallel) modeling stage. The first phase of the method makes use of Empirical Cumulative Distribution Functions (ECOD) to remove any obvious noise in the training dataset in order to substantiate the hypothesis that the training dataset truly represents the normal operation of a given ICS dataset. ECOD uses information about the data distribution of the training dataset to identify anomalies based on the assumption that anomalies are mostly isolated events located in the tails of a distribution [14]. The second phase consists of a dual COPOD architecture that utilizes the normal process data (output) of phase 1 to develop two COPOD models. The raw normal process data is used to develop the first COPOD model. The second COPOD model is developed using normalized process data (excluding discrete variables). COPOD algorithm uses the normal dataset to formulate an empirical copula, and then uses the copula to predict the tail probabilities of each observation in the dataset [15]. The magnitude of the tail probabilities represents the degree to which an observation is an outlier. A decision function strategy is also introduced for assigning final anomaly scores for the second phase. The motivation for the proposed architecture is to be able to exploit ICS data by examining the data using two latent representations to extract useful information to minimize overfitting and improve the model's anomaly detection capabilities.
The contributions of this work can be summarized below:
1. Propose the first known two-phase dual COPOD method for ICS anomaly detection;
2. Introduce an efficient and scalable anomaly detection method suitable for both low- and high-dimensional ICS datasets and capable of real-time ICS anomaly detection;
3. Introduce a robust, parameter-free ICS anomaly detection method based on empirical distribution functions, whose deterministic nature at all stages mitigates the challenges involved in hyperparameter selection; and
4. Propose a highly interpretable anomaly detection method that quantifies each feature's contribution towards an ICS anomaly.
The rest of this article is organized as follows. Section II presents an overview of the related works, followed by Section III, which discusses the details of the experimental setup and the various datasets used for developing the proposed anomaly detection method. Section IV presents the proposed method, and after that, Section V presents the results and discussions. Finally, Section VI presents conclusions and provides recommendations for future work.
## II Related Work
Anomaly and intrusion detection in ICSs has been the subject of extensive research. System state prediction is the first stage in physics-based detection, as mentioned in [16]. For instance, some research employed linear dynamical system modeling [17, 18] or autoregressive models [19]. The linear dynamical system proposed in [17] is based on a secure state estimation algorithm that involves Kalman filters for attack detection and state estimation against sensor attacks in a noiseless dynamical system. The work in [19] makes use of Autoregressive Moving Average models for meter management data anomaly detection. The ARMA model in [19] is a linear process that assumes that the meter management data is stationary and when the data fluctuates, it does so uniformly around a particular time. Unfortunately, the linearity of the modeled system, which is generally not satisfied in ICSs, is one of the assumptions made by both techniques.
Invariant-based techniques [20, 21] and specification-based system modeling [22] have been shown to be efficient anomaly detection approaches. Specificity or the requirement that the solution is suited to the system and its operating circumstances is one of the key downsides of invariant-based techniques and specification-based systems.
Approaches based on statistical models in the context of anomaly detection typically start by fitting probability distributions to data points. Using the fitted models, the statistical models decide which points are outliers. Parametric and nonparametric methods are the two main classes into which these techniques are typically divided. The main distinction is that parametric methods assume the data come from a parametric distribution; hence, fitting such a distribution entails learning the parameters of the assumed distribution. Both linear regression and Gaussian mixture models (GMM) are commonly used parametric techniques for anomaly detection [23, 24]. In contrast, nonparametric techniques do not rely on a parametric model of the data. Examples include histogram-based techniques (HBOS) [25] and Kernel Density Estimation (KDE) [26]. After the model fitting process, parametric models for anomaly detection are typically quick to utilize; however, nonparametric models for anomaly detection can be more expensive to deal with.
To obtain reliable anomaly detection results, ensemble-based methods integrate the output from different base outlier detectors. Notable works of ensemble based anomaly detection methods include feature bagging [27], isolation forest [28], locally selective methods [29], and scalable unsupervised outlier detection [30]. Feature bagging method employs a variety of sub-feature spaces. Isolation forests method combines data from multiple base trees. Locally selective combination in parallel outlier ensembles method dynamically selects the best base estimator for each data point. Scalable unsupervised outlier detection method employs numerous heterogeneous estimators. In general, ensemble-based approaches for anomaly detection frequently perform well in practice, even for high-dimensional datasets [31, 32]. However, ensemble-based methods can require non-trivial tweaking, such as choosing the appropriate meta-detectors [33]. Ensemble anomaly detection methods are frequently difficult to interpret [34].
The COPOD anomaly detection algorithm is modeled after copulas for multivariate data distribution [15]. Copulas are mathematical functions that let COPOD distinguish between the dependency structure of a given multivariate distribution and the marginal distributions of a dataset. Empirical Cumulative Distribution Functions (ECDF) are first calculated by the authors using a specified dataset. The empirical copula functions are then created using the (ECDF). Finally, the COPOD anomaly detection algorithm uses the empirical copula to approximate the tail probabilities and quantifies the probabilities as the anomaly scores of the data records. The authors in [15] evaluated their proposed algorithm on 30 public outlier detection benchmark datasets. In each instance, 60% of the data was used for training, and the remaining 40% of the data was used for testing. Through extensive experiments conducted, the authors claim the detection algorithm is a state-of-the-art outlier detection algorithm in terms of detection accuracy and computational cost [15].
An unsupervised outlier detection algorithm known as ECOD, which is inspired by the fact that outliers are often the rare events that occur at the tails of a distribution, is proposed in [14]. The authors' method models each dimension in the dataset using a nonparametric statistical method. The proposed method calculates the ECDF per dimension for each data point in order to first estimate the underlying distribution of the input dataset. The algorithm then estimates the dataset's tail probabilities across all dimensions using the obtained ECDF. Finally, quantification of the tail probability is utilized to score the data records for outliers in the dataset. The authors conducted in-depth tests using 30 benchmark datasets and claimed that the ECOD algorithm outperforms 11 state-of-the-art baselines in terms of detection accuracy, model efficiency, and scalability. Despite their model's robust performance, the work in [14] made a simplistic unimodal assumption about the input dataset, and so the approach is not appropriate for multimodal distributions in which an outlier could be in neither left nor right tails.
The detection of anomalies in PLCs has also been performed using NN-based prediction algorithms [35]. For the purpose of identifying PLC anomalies, the authors in [35] presented a non-intrusive power-based anomaly detection approach employing long short-term memory (LSTM). With the proposed models, accuracies of up to 98% were obtained. However, identifying malicious code by monitoring the power consumption of the PLC is insufficient, as a faulty power supply or a power electronics breakdown can result in false positive readings [36].
## III Experiment Setup
This section provides the details of the threat model for this study and the various experimental setups and datasets considered for developing and evaluating the proposed anomaly detection method.
### _Threat Model_
In this work, it is assumed that an attacker is capable of physically compromising sensors and actuators inside the ICS, gaining remote access to the SCADA workstation, and gaining access to the networked control system. This work further assumes that the attacker is familiar with the targeted ICS, including the physical characteristics measured by each sensor and the effects of actuation commands. The objective of the attacker is to harm or alter the ICS activities using the aforementioned capabilities and prior information. This includes; manipulating actuator states through an attack akin to Man-In-the-Middle (MITM) attack in which the attacker sends orders to the actuator rather than a PLC; delivering forged sensor values to the PLC to influence the PLC to make poor judgments; and altering the PLC firmware with the intention to disable the ICS or alter the programmed logic.
### _Datasets Description_
The proposed method is trained, evaluated, and compared with previous work using three open source ICS datasets,
namely Secure Water Treatment (SWaT) [37], Water Distribution (WADI) [38], and the Traffic Light (TLIGHT) datasets [4, 39]. This section provides a summary of the main properties of the datasets utilized in this work.
The SWaT testbed is a typical representation of the dynamic nature of Cyber-physical Systems (CPSs) used in our societies [40]. The dataset is subdivided into a 7-day fraction of regular operations, which serves as the training set, and a 4-day chunk of anomalous operations and 36 attacks produced using the attack model in [38]. The WADI dataset represents a scaled-down version of a large water distribution network in a city. The data records in the WADI dataset, each with 123 attributes broken down into 69 sensor readings and 54 actuator states, were gathered over the course of 16 days. For the first 14 days, normal operating circumstances were recorded and divided into training (95%) and validation (5%) sets. The test set consists of the final two days containing 15 attacks interspersed with normal operating conditions. A total of 784,568 data records were used for training the proposed model, and 172,800 data records were used for testing the proposed model. The experimental setup for emulating the behavior of the traffic light (TLIGHT) system for the purposes of data collection is described in [4, 39]. The TLIGHT dataset, consisting of normal and abnormal data records, is recorded over a period of four days. Overall, five test sets are presented in the TLIGHT dataset, with each set containing normal and anomalous TLIGHT system operations.
## IV Proposed Method
This section describes the fundamental background of the two-phase dual COPOD anomaly detection technique.
### _Two-phase Dual COPOD Method_
Anomaly detection algorithms assume that all training data are true representations of the normal operation of a system because the anomalies are unknown ahead of time. The assumption may not hold for complex ICS with several sensors and actuators. Sensor and actuator readings may be noisy. If noisy data is used in training anomaly detection models, anomaly predictions may not be accurate. Therefore, noise and apparent outliers need to be removed from the data before training. To this end, a two-phase dual (TPD) COPOD method is proposed. The proposed TPD COPOD is an anomaly detection method that consists of two sequential modeling stages and a dual (parallel) modeling stage. Figure 2 shows the high-level overview of the TPD COPOD. The first phase is responsible for data cleaning and noise removal using ECOD. The second stage of the method is trained using the cleaned data from phase 1. The details of the individual stages of the method are discussed in the subsequent sections.
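As a rough illustration of this two-phase flow, the following sketch chains the ECOD and COPOD detectors from the PyOD library. The contamination level, the discrete/continuous column split, the OR-combination of the two phase-2 outputs, and the helper names are illustrative assumptions for this sketch; they do not replace the exact procedure detailed in the remainder of this section.

```python
import numpy as np
from pyod.models.ecod import ECOD
from pyod.models.copod import COPOD

def tpd_copod_train(X_train, discrete_cols, continuous_cols, contamination=0.1):
    """Phase 1: drop apparent noise with ECOD; Phase 2: fit two parallel COPOD models."""
    # Phase 1: keep only the records that ECOD labels as inliers (label 0)
    phase1 = ECOD(contamination=contamination).fit(X_train)
    clean = X_train[phase1.labels_ == 0]

    # Phase 2: one COPOD model per data type (discrete vs. continuous features)
    copod_discrete = COPOD().fit(clean[:, discrete_cols])
    copod_continuous = COPOD().fit(clean[:, continuous_cols])
    return copod_discrete, copod_continuous

def tpd_copod_predict(models, X, discrete_cols, continuous_cols):
    """Per-record 0/1 labels; a record is flagged if either phase-2 model flags it."""
    copod_d, copod_c = models
    y_d = copod_d.predict(X[:, discrete_cols])     # 0 = normal, 1 = anomalous
    y_c = copod_c.predict(X[:, continuous_cols])
    return np.logical_or(y_d, y_c).astype(int)
```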
### _Phase 1: ECOD Model_
Phase 1 of the method uses the ECOD algorithm [14] to remove anomalies and obvious noise in the training dataset so that the assumption about the training dataset being an authentic representation of the normal ICS operations holds. Anomalies are rare data points that occur in low-density areas of a probability distribution [27, 41]. If the distribution is unimodal, these rare events are found in the distribution's tails. Determining the likelihood of finding a data point at least as "extreme" as \(X_{i}\) in terms of tail probabilities forms the basis of the ECOD algorithm used in phase 1. In this work, it is assumed that for a given ICS dataset, \(n\) data points \(X_{1},X_{2},...,X_{n}\in R^{d}\) are sampled independently and identically distributed. Figure 3 shows the architecture of phase 1 in the TPD COPOD method.
The joint cumulative distribution function over all \(d\) dimensions is represented by \(F:R^{d}\rightarrow[0,1]\). Then, for a vector \(z\in R^{d}\), let \(z^{(j)}\) represent the \(j-th\) entry. Hence by definition of a joint CDF, for any \(x\in R^{d}\),
\[F(x)=P(X^{(1)}\leq x^{(1)},X^{(2)}\leq x^{(2)},...,X^{(d)}\leq x^{(d)}) \tag{1}\]
Regarding the left tail, (1) determines how extreme \(X_{i}\) is. The smaller \(F(X_{i})\) is, the less likely a point \(X\) sampled from the same distribution as \(X_{i}\) will satisfy the inequality \(X\leq X_{i}\). Similarly, \(1-F(X_{i})\) measures the extremeness of \(X_{i}\) by focusing on the right tails of each dimension rather than the left tails. As a result, if either \(F_{X}(X_{i})\) or \(1-F(X_{i})\) is extremely small, then this suggests that \(X_{i}\) corresponds to a rare event and is, therefore, likely to be an anomaly. To simplify, assume that a dataset's various dimensions are independent so that the combined CDF can be factored
\[F(x)=\prod_{j=1}^{d}F^{(j)}(x^{(j)})\quad\text{for}\quad x\in R^{d},\]
where \(F^{(j)}:R\rightarrow[0,1]\) represents the univariate CDF of the j-th dimension. Now it is sufficient to notice that univariate CDFs can be accurately estimated by utilizing the ECDF. The left tail ECDF is
\[\hat{F}_{left}^{(j)}(z)=\frac{1}{n}\sum_{i=1}^{n}\mathbbm{1}\{X_{i}^{(j)} \leq z\}\quad\text{for}\quad z\in R \tag{2}\]
Fig. 3: Phase 1 Architecture of the Proposed Method.
Fig. 2: Two-phase Dual COPOD Method.
where \(\mathbb{1}\{\cdot\}\) is the indicator function that is 1 when its argument is true and is 0 otherwise. Similarly, the right tail ECDF is
\[\hat{F}_{right}^{(j)}(z)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\{X_{i}^{(j)}\geq z \}\quad\text{for}\quad z\in R. \tag{3}\]
For every point \(X_{i}^{(j)}\), the tail probabilities \(\hat{F}_{left}^{(j)}(X_{i}^{(j)})\) and \(\hat{F}_{right}^{(j)}(X_{i}^{(j)})\) are aggregated by multiplying them together to achieve the final anomaly score \(O_{i}\in[0,\infty)\). Higher \(O_{i}\) means more likely to be an anomaly.
The skewness of a dimension's distribution is used in aggregating the tail probabilities to determine whether the left or right tail probability should be used for a given dimension. The sample skewness coefficient, \(\tilde{\mu_{3}}\) of dimension \(j\), for a given dataset can be derived as [42]
\[\tilde{\mu_{3}}=\frac{\sum_{i=1}^{n}(X_{i}^{(j)}-\bar{X^{(j)}})^{3}}{(n-1) \times\sigma^{3}} \tag{4}\]
where \(\sigma\), the standard deviation is
\[\sigma=\sqrt{\frac{\sum_{i=1}^{n}(X_{i}^{(j)}-\bar{X}^{(j)})^{2}}{(n-1)}}\]
and \(\bar{X}^{(j)}\) is the mean of dimension \(j\)'s distribution, and \(\bar{X}^{(j)}=\sum_{i=1}^{n}X_{i}^{(j)}/n\). When \(\tilde{\mu_{3}}<0\), points on the left tail can be considered outliers, whereas when \(\tilde{\mu_{3}}>0\), points on the right tail can be considered outliers.
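As a worked example, the sample skewness of (4) and the resulting tail choice can be computed per dimension with a few lines of NumPy (the toy data below is purely illustrative):

```python
import numpy as np

def sample_skewness(col):
    """Sample skewness coefficient of a single dimension, following (4)."""
    n = col.size
    mean = col.mean()
    sigma = np.sqrt(np.sum((col - mean) ** 2) / (n - 1))
    return np.sum((col - mean) ** 3) / ((n - 1) * sigma ** 3)

X = np.random.default_rng(1).exponential(size=(500, 3))          # toy right-skewed data
skew = np.array([sample_skewness(X[:, j]) for j in range(X.shape[1])])
use_left_tail = skew < 0    # negative skew: score the left tail, otherwise the right tail
```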
The tail probabilities are transformed to log negative probabilities to calculate the final anomaly score per data point for a given training dataset. The log negative probability to the left and right tails are
\[O_{left}(X_{i})=-\sum_{j=1}^{d}\log(\hat{F}_{left}^{(j)}(X_{i}^{(j)})), \tag{5}\]
and
\[O_{right}(X_{i})=-\sum_{j=1}^{d}\log(\hat{F}_{right}^{(j)}(X_{i}^{(j)})), \tag{6}\]
respectively. The automated form of selecting the left or right tail of the \(j\)-th dimension based on whether \(\tilde{\mu_{3}}<0\) or \(\tilde{\mu_{3}}>0\) is
\[\begin{split} O_{auto}(X_{i})&=\sum_{j=1}^{d}[ \mathbb{1}\left\{\tilde{\mu_{3}}<0\right\}\log(\hat{F}_{left}^{(j)}(X_{i}^{( j)}))\\ &+\mathbb{1}\left\{\tilde{\mu_{3}}\geq 0\right\}\log(\hat{F}_{right}^{( j)}(X_{i}^{(j)}))].\end{split} \tag{7}\]
The final anomaly score is calculated in the space of negative log probabilities, where a lower probability translates into a larger negative log probability and, as a result, a higher likelihood of being an anomaly. The largest magnitude of the three computed anomaly scores is chosen as the final output anomaly score \(O_{i}\) for the data point \(X_{i}\) with
\[O_{i}=\max\{O_{left}(X_{i}),O_{right}(X_{i}),O_{auto}(X_{i})\}. \tag{8}\]
Algorithm 1 contains the ECOD algorithm's pseudocode, which is utilized in phase 1 of the proposed method. Also, \(O_{i}^{(j)}\) is the dimensional anomaly score for dimension j of \(X_{i}\). Since the log function is monotonic, \(O_{i}^{(j)}\) represents the degree of outlierliness and the anomaly contribution by \(X_{i}^{(j)}\). This representation creates model interpretability for the ECOD model in phase 1 of the method.
```
Input: Training dataset \(\{X_{i}^{(j)}\,|\,i=1,2,3,...,n\}\), where \(X_{i}^{(j)}\) refers to the \(j\)-th feature (dimension) of the \(i\)-th data point
Output: Set of anomaly scores \(O_{i}\in R^{n}\)
for each dimension j in 1,...,d do
    Estimate the left and right tail ECDFs using (2) and (3)
    Compute the sample skewness coefficient \(\tilde{\mu_{3}}\) of the j-th feature's distribution using (4)
end for
for each sample i in 1,...,n do
    Aggregate the tail probabilities of \(X_{i}\) to obtain the anomaly score \(O_{i}\):
    \(O_{left}(X_{i})=-\sum_{j=1}^{d}\log(\hat{F}_{left}^{(j)}(X_{i}^{(j)}))\)
    \(O_{right}(X_{i})=-\sum_{j=1}^{d}\log(\hat{F}_{right}^{(j)}(X_{i}^{(j)}))\)
    \(O_{auto}(X_{i})=\sum_{j=1}^{d}[\mathbb{1}\{\tilde{\mu_{3}}<0\}\log(\hat{F}_{left}^{(j)}(X_{i}^{(j)}))+\mathbb{1}\{\tilde{\mu_{3}}\geq 0\}\log(\hat{F}_{right}^{(j)}(X_{i}^{(j)}))]\)
    Set the final anomaly score for point \(X_{i}\) as:
    \(O_{i}=\max\{O_{left}(X_{i}),O_{right}(X_{i}),O_{auto}(X_{i})\}\)
end for
return Anomaly scores \(O=(O_{1},...,O_{n})\)
```
**Algorithm 1** Phase 1: Noise Reduction using ECOD.
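A compact NumPy rendering of Algorithm 1 is sketched below. It follows (2)-(8) directly; the pairwise ECDF evaluation costs \(O(n^{2}d)\) memory, so it is meant only to make the scoring explicit and is not a substitute for the optimized ECOD implementation in PyOD.

```python
import numpy as np

def ecod_scores(X):
    """Anomaly scores per Algorithm 1: max of left-, right- and skewness-selected tail scores."""
    n, d = X.shape
    # per-dimension ECDFs evaluated at the sample points themselves, cf. (2) and (3)
    F_left = np.mean(X[None, :, :] <= X[:, None, :], axis=1)      # shape (n, d)
    F_right = np.mean(X[None, :, :] >= X[:, None, :], axis=1)

    # dimensional negative log tail probabilities, cf. (5) and (6)
    O_left_dim = -np.log(F_left)
    O_right_dim = -np.log(F_right)

    # the sample skewness of (4) decides which tail enters the "auto" score, cf. (7)
    mean = X.mean(axis=0)
    sigma = np.sqrt(np.sum((X - mean) ** 2, axis=0) / (n - 1))
    skew = np.sum((X - mean) ** 3, axis=0) / ((n - 1) * sigma ** 3)
    O_auto_dim = np.where(skew < 0, O_left_dim, O_right_dim)

    O_left = O_left_dim.sum(axis=1)
    O_right = O_right_dim.sum(axis=1)
    O_auto = O_auto_dim.sum(axis=1)
    return np.maximum.reduce([O_left, O_right, O_auto])           # final score, cf. (8)
```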
### _Phase 2: Dual COPOD Model_
Phase 2 of the TPD COPOD consists of two parallel COPOD models. This stage receives the input data from phase 1, which is separated into discrete and continuous data points. One COPOD model receives the discrete input, whereas the other receives the continuous input. Phase 2 of the TPD COPOD model is responsible for the actual anomaly detection after training. The details of the phase 2 architecture are discussed in this section. Figure 4 shows phase 2 architecture of the proposed method.
Copulas are mathematical operations that allow us to decouple marginal distributions from a given multivariate distribution's dependence structure. Formally, a d-variate copula \(C:[0,1]^{d}\in[0,1]\), is the CDF of a random vector \((U^{(1)},U^{(2)},...,U^{(d)})\) with uniform marginals given by
\[C_{U}(u)=P(U^{(1)}\leq u^{(1)},...,U^{(d)}\leq u^{(d)}) \tag{9}\]
where \(P(U^{(j)}\leq u^{(j)})=u^{(j)}\) for \(j\in{1,...,d}\) and \(u^{(j)}\in[0,1]\). By using inverse sampling, uniform distributions can be transformed into any desired distributions, such as
\[X_{j}=F_{j}^{-1}(U_{j})\sim F_{j} \tag{10}\]
It has been shown in [43] that for any random variables \(X_{1},\cdots,X_{d}\) with joint distribution function \(F(x_{1},\cdots,x_{d})\) and marginal distributions \(F_{1},\cdots,F_{d}\), there exists a copula such that
\[F(x)=C(F_{1}(x_{1}),\cdots,F_{d}(x_{d})) \tag{11}\]
With copula, a joint distribution of \((X_{1},\cdots,X_{d})\) can be described using only their marginals. This provides flexibility when modeling high-dimensional ICS datasets because the dataset's dimensions can be modeled separately, and a linkage of marginal distributions together to form the joint distribution is guaranteed. Furthermore [43] shows that if \(F\) has univariate marginal distributions \(F_{1},\cdots,F_{d}\), then there exists a copula \(C(\cdot)\) such that (11) holds. Additionally, if the marginal distributions are continuous, then \(C(\cdot)\) can be uniquely determined. The expression for the copula equation in terms of the joint CDF and inverse CDFs is derived by replacing (9) with the inverse (10) to yield
\[C(u)=F_{X}(F_{X_{1}}^{-1}(u_{1}),\cdots,F_{X_{d}}^{-1}(u_{d})). \tag{12}\]
Together, (11) and (12) are referred to as Sklar's Theorem, which ensures the existence of a copula for any multivariate CDF with continuous marginals and offers a closed form equation for constructing the copula.
The dual COPOD algorithm of phase 2 uses a nonparametric approach based on fitting ECDFs called empirical copula [15]. Let \(X_{i}^{(j)}\) represent the i-th observation of the j-th feature/dimension for the given d-dimensional ICS training dataset \(X\) with \(n\) observations. The empirical CDF is
\[\hat{F}(x)=P((-\infty,x])=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\{X_{i}\leq x\}. \tag{13}\]
It is possible to determine \(\hat{U_{i}}\) by using the empirical copula observations as the inverse of (10), such that
\[(\hat{U}_{i}^{(1)},\cdots,\hat{U}_{i}^{(d)})=(\hat{F}^{(1)}(X_{i}^{(1)}),\cdots,\hat{F}^{(d)}(X_{i}^{(d)})). \tag{14}\]
Finally, by substituting the empirical copula observations of (14) into the first equality of (12), the empirical copulas are
\[\hat{C}(u^{(1)},\cdots,u^{(d)})=\frac{1}{n}\sum_{i=1}^{n}\mathbb{1}\{\hat{U}_{i}^{(1)}\leq u^{(1)},\cdots,\hat{U}_{i}^{(d)}\leq u^{(d)}\}. \tag{15}\]
An empirical copula, \(\hat{C}(u)\) with multivariate CDFs supported on \(n\) points in the grid \(\{1/n,2/n,\cdots,1\}^{d}\), has discrete uniform marginals on \(\{1/n,2/n,\cdots,1\}\), and asymptotically converges to \(C(u)\) as a result of the central limit theorem [44].
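The empirical copula observations of (14) are simply per-dimension ECDF values, which can be obtained from column-wise ranks; the following NumPy/SciPy sketch is illustrative only:

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula_observations(X):
    """U_hat[i, j] = F_hat^(j)(X[i, j]): per-dimension ECDF evaluated at each sample, cf. (14)."""
    n = X.shape[0]
    # 'max' ranks count how many values are <= X[i, j], i.e. exactly n * F_hat^(j)(X[i, j])
    return rankdata(X, method="max", axis=0) / n

def empirical_copula(U_hat, u):
    """C_hat(u) of (15): fraction of samples whose copula observations are <= u componentwise."""
    return float(np.mean(np.all(U_hat <= np.asarray(u), axis=1)))
```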
The COPOD algorithm adopts a three-stage process in detecting ICS anomalies. First, the COPOD algorithm computes the ECDFs based on the input dataset received from phase 1. Secondly, the COPOD algorithm uses the ECDFs to produce the empirical copula function. Finally, the COPOD algorithm uses the empirical copula in (15) to approximate the tail probability described in (11).
The COPOD algorithm's objective is to determine the probability of observing a point at least as extreme as each observation \(x_{i}\) in the input dataset. That is, assuming that \(x_{i}\) is distributed according to a \(d\)-dimensional distribution function \(F_{X}\), the COPOD algorithm needs to calculate the tail probabilities \(F_{X}(x_{i})=P(X\leq x_{i})\) and \(1-F(x_{i})=P(X\geq x_{i})\). If \(x_{i}\) is an anomaly, the probability of observing a point at least as extreme as \(x_{i}\) should be small. Therefore, if either \(F_{X}(x_{i})\) or \(1-F(x_{i})\) is extremely small, point \(x_{i}\) occurs infrequently and is likely to be an anomaly. In the COPOD algorithm, \(F_{X}(x_{i})\) is known as the left tail probability of \(x_{i}\), and \(1-F(x_{i})\) is known as the right tail probability of \(x_{i}\). Therefore, an anomaly is considered an observation for which either of the two quantities (\(F_{X}(x_{i})\) or \(1-F(x_{i})\)) is small.
High-dimensional data spaces have challenges not found in low-dimensional spaces. These challenges are known as the curse of dimensionality, and are common in anomaly detection domain [15]. In order to prevent diminishing tail probabilities and to exploit the monotonicity property of the \(\log()\) function, the COPOD algorithm uses the sum of the negative log probabilities similar to the ECOD algorithm of phase 1 in (5) and (6).
### _The Dual COPOD Approach to Anomaly Detection_
The cleaned dataset from phase 1 goes to phase 2 for training the dual COPOD models. In phase 2, the input data is separated into discrete and continuous data points. The discrete data points go to the upper COPOD model (COPOD 1 of Figure 4), whereas the continuous data is input to the lower COPOD model (COPOD 2 of Figure 4). The reason for the dual architecture model is to allow phase 2 of the architecture to exploit ICS data by examining the use of two latent representations to extract useful information to minimize overfitting and improve the model's anomaly detection capability.
Each of the COPOD models in the dual architecture in Figure 4 requires a d-dimensional input dataset from phase 1, \(X=(X_{i}^{(1)},X_{i}^{(2)},\cdots,X_{i}^{(d)})\), where \(i=1,\cdots,n\), and produces an anomaly score vector \(O(X)=[O(X_{1}),O(X_{2}),\cdots,O(X_{n})]\). The anomaly scores are between \((0,\infty)\) and are to be used comparatively. The anomaly score does not indicate the probability of \(X_{i}\) being an anomaly but rather the relative measure
Fig. 4: Phase 2 Architecture of the Proposed Method.
of how likely \(X_{i}\) is when compared to other points in the dataset. Larger \(O(X_{i})\) signifies \(X_{i}\) is more likely anomalous.
Each COPOD model fits d-dimensional left tail CDFs, using (13) and d-dimensional right tail CDFs by replacing \(X\) in (13) with \(-X\). Also, d-dimensional skewness vector \(\tilde{\mu_{3}}\) is computed using (4). Next, the empirical copula observations for each \(X_{i}\) are computed using (14) to obtain the left tail copulas \(\hat{U}_{left}^{j}\) and right tail copulas \(\hat{U}_{right}^{j}\). Then, the skewness corrected empirical copula observations are calculated as
\[\hat{W}_{i}^{(j)}=\begin{cases}\hat{U}_{left}^{j}&\text{if}\quad\tilde{\mu_{3 }}<0\\ \hat{U}_{right}^{j}&\text{otherwise}.\end{cases}\]
Finally, the probability of observing a point at least as extreme as each \(x_{i}\) along each dimension is computed. Similar to (8), the maximum of the negative log of the probabilities generated by the left tail empirical copula, right tail empirical copula, and skewness corrected empirical copula is selected as the final anomaly score. That is, the smaller the tail probability is, the bigger its negative log, and so a data point is considered an outlier if it has a small left tail probability, a small right tail probability, or a small skewness corrected tail probability.
The final anomaly scores of the TPD COPOD are generated as a combination of the anomaly scores of the two COPOD models using a window function. The window function defines the time intervals based on which anomaly scores can be divided using a moving average [45]. Let \(O_{c_{1}}(X_{i})\) and \(O_{c_{2}}(X_{i})\) be the anomaly scores of COPOD model 1 and COPOD model 2, respectively. For a given data point \(X_{i}\), its decision label is negative (anomalous) if and only if the predictions of \(O_{c_{1}}(X_{i})\) and \(O_{c_{2}}(X_{i})\) are negative; otherwise, the decision label of \(X_{i}\) is positive. The final decision score/label \(O_{f}(X_{i})\) is made by observing the predictions of the dual COPOD models for a time frame of \(t_{w}\) time instants. In this work, a \(t_{w}\) of 30s is used for inference. Algorithm 2 summarizes the decision function of the TPD COPOD method. Combining the TPD COPOD architecture with the algorithm definition of phase 2 results in the complete TPD COPOD architecture as shown in Figure 4.
```
Input 1: Discrete input data \(\{X_{i}^{(j)}\,|\,i=1,2,3,...,n\}\), where \(X_{i}^{(j)}\) refers to the \(j\)-th feature (dimension) of the \(i\)-th discrete data point
Input 2: Continuous input data \(\{X_{i}^{(k)}\,|\,i=1,2,3,...,n\}\), where \(X_{i}^{(k)}\) refers to the \(k\)-th feature (dimension) of the \(i\)-th continuous data point
Output: Set of decision labels \(O_{f}\)
for each dimension j in 1,...,d do
    Compute anomaly scores \(O_{c_{1}}(X_{i}^{(j)})\) for COPOD 1
end for
for each dimension k in 1,...,d do
    Compute anomaly scores \(O_{c_{2}}(X_{i}^{(k)})\) for COPOD 2
end for
for each data record i in 1,...,n do
    if \(O_{c_{1}}(X_{i})\parallel O_{c_{2}}(X_{i})==-1\) (anomalous) then
        Output \(O_{i}=1\);  /* Anomalous data point */
    else
        Output \(O_{i}=0\);  /* Normal data point */
    end if
end for
for each data record i in 1,...,n do
    Using a time frame \(t_{w}\):
    if \(\frac{1}{t_{w}}\sum_{i=1}^{t_{w}}O_{i}\geq 80\%\) then
        Final decision label \(O_{f_{i}}=-1\);  /* Anomaly detected */
    else
        Final decision label \(O_{f_{i}}=1\);  /* Normal operation */
    end if
end for
return Final decision labels \(\{O_{f_{i}}\}\)
```
**Algorithm 2** TPD COPOD Method's Decision Function.
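The decision logic of Algorithm 2 can be written down compactly as follows. The sketch assumes the two COPOD models already return 0/1 labels per record (as PyOD's predict() does), that records arrive at one per second so that \(t_{w}=30\) corresponds to 30 samples, and it uses a trailing moving-average vote with the 80% threshold quoted above; the function name is hypothetical.

```python
import numpy as np

def tpd_decision(labels_copod1, labels_copod2, t_w=30, threshold=0.8):
    """Combine the dual COPOD predictions (0 = normal, 1 = anomalous) as in Algorithm 2."""
    # a record is flagged if either phase-2 model flags it
    point_flags = np.logical_or(labels_copod1, labels_copod2).astype(float)

    # trailing moving-average vote over the last t_w records
    csum = np.concatenate(([0.0], np.cumsum(point_flags)))
    decisions = np.empty(point_flags.size, dtype=int)
    for i in range(point_flags.size):
        lo = max(0, i + 1 - t_w)
        vote = (csum[i + 1] - csum[lo]) / (i + 1 - lo)
        decisions[i] = -1 if vote >= threshold else 1   # -1 = anomaly detected, 1 = normal
    return decisions
```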
## V Results and Discussions
The TPD COPOD method's performance evaluation is based on performance metrics, results from predictions on the test data, and comparison with prior work trained on similar datasets. The proposed method was developed using the Python programming language and PyOD, which is the most comprehensive and scalable Python library for detecting outlying objects in multivariate data [46]. Evaluation results and performance metrics calculations are performed by using the Scikit-learn library [47]. The three different datasets, namely SWaT [40], WADI [38], and TLIGHT [4], are used for evaluating the model performance of the proposed method.
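For reference, the point-wise precision, recall, and F1-score reported in the following subsections can be computed with scikit-learn once the final decision labels are available; the label convention (1 = anomaly, 0 = normal) and the array names below are placeholders.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

def report_metrics(y_true, y_pred):
    """Point-wise metrics; both arrays encoded as 1 = anomaly, 0 = normal (placeholders)."""
    return {
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }
```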
### _Results Evaluation on the SWaT Dataset_
This subsection describes the proposed method's performance on the SWaT dataset as compared with previous work evaluated on the same dataset. During training, the SWaT training dataset is passed through the first stage of the TPD COPOD method for any noise and unwanted signals to be removed. Figure 5 shows the first sample of noise detected by the ECOD model of phase 1 TPD COPOD. Figure 5 shows the feature-level outlier scores explaining the reason for detecting the first sample as an outlier. The x-axis indicates the features (sensors and actuators) of the input dataset represented as numerical integers. The blue dashed line represents the 90th percentile band, and the orange dashed line represents the 99th percentile band for the given features. Sample 1 of Figure 5 is flagged as an anomaly because several dimensions (features), such as dimensions 2, 6-8, 19, 24, 28, 29, 31, 34, 35, 39-43, and 45-48, have outlier scores that exceed the 99th percentile band. The detected anomalous training sample 1 shown in Figure 5 was selected at random; the explanations provided for this figure apply similarly to the remaining plots, which are not shown in this paper. Figure 5 shows that about half of the dimensions have outlier scores that exceed the 99th percentile band, which explains why the sample is detected as an anomaly. Overall, the computer used for analysis required 16.70s for phase 1 to clean the training data consisting of
496,800 samples. A total of 49,680 training samples were detected as outliers in phase 1 of TPD COPOD.
Figure 6 shows the first normal training sample in the SWaT training dataset detected by the ECOD model of phase 1 TPD COPOD model. The training sample shown in Figure 6 is detected as a normal sample because none of the dimensional outlier scores exceeds the 99th percentile band. Similarly, in all cases where the ECOD model of phase 1 TPD COPOD model detected a training sample as normal, none of the dimensional anomaly scores of those samples exceeded the 99th percentile band.
The clean SWaT dataset consisting of 447,120 samples was separated into discrete and continuous samples for phase 2 of the TPD COPOD. Training required 2.35s for the phase 2 models, and predictions required 6.02s over test data consisting of 450,819 samples. After training, only phase 2 of the proposed method is used for performing inference. Figure 7 shows the first discrete sample of a detected anomaly in the test data by phase 2 COPOD 1. Figure 7 shows that, because of the discrete nature of the input dataset to phase 2 COPOD 1, the 90th percentile band for most of the dimensions is 0. Dimension 9 produced an outlier score that is equal to the 99th percentile band, and as a result, dimension 9 is a major contributing factor to classifying the first discrete test sample as an anomaly.
The results presented in Figure 8 use the visualization approach proposed in [48, 4] to better understand the TPD COPOD model performance. The histogram-based visualization approach normalizes the histogram frequency (y-axis) to a range between \(0\%\) and \(100\%\), whereas the x-axis represents the model's normalized decision scores indicating the prediction confidence. Figure 8 shows the results of the normalized TP, TN, FP, and FN values on the SWaT test set by phase 2 COPOD 1 shown in Figure 4. Figure 8 shows that phase 2 COPOD 1 correctly detected about 60% of the SWaT anomalies with over 60% confidence level. Phase 2 COPOD 1 misclassified less than 10% of the normal instances as anomalies, whereas about 35% of the anomalies were misclassified as normal data points. Also, phase 2 COPOD 1 correctly predicted about 45% of the normal data with prediction confidence of less than 10%. Figure 9 shows the results of the normalized TP, TN, FP, and FN values on the SWaT test set by phase 2 COPOD 2 shown in Figure 4. Figure 9 shows that about 80% of the anomalies were correctly detected by phase 2 COPOD 2 with prediction confidence between 95% and 50%. The TP and FP predictions by phase 2 COPOD 2 are approximately normally distributed with an average prediction confidence of about 50%.
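The normalized-histogram view of Figures 8 and 9 can be reproduced along the following lines with Matplotlib. The min-max normalization of the decision scores and the per-class percentage weighting are assumptions made for this sketch of the visualization approach of [48, 4]; the variable names are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def confidence_histogram(scores, y_true, y_pred, bins=20):
    """Histogram of normalized decision scores split into TP / TN / FP / FN (cf. Figs. 8 and 9)."""
    conf = (scores - scores.min()) / (scores.max() - scores.min())   # normalize to [0, 1]
    masks = {
        "TP": (y_pred == 1) & (y_true == 1),
        "TN": (y_pred == 0) & (y_true == 0),
        "FP": (y_pred == 1) & (y_true == 0),
        "FN": (y_pred == 0) & (y_true == 1),
    }
    for name, mask in masks.items():
        if mask.any():
            weights = np.full(mask.sum(), 100.0 / mask.sum())   # frequencies as percentages
            plt.hist(conf[mask], bins=bins, weights=weights, histtype="step", label=name)
    plt.xlabel("normalized decision score (prediction confidence)")
    plt.ylabel("frequency (%)")
    plt.legend()
    plt.show()
```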
The SWaT dataset has been utilized for anomaly detection by K-Nearest Neighbors (KNN) [49], Feature Bagging (FB)
Fig. 5: Detected Noise in the SWaT Training Dataset by Phase 1 shown in Figure 3 of the Proposed Method: Sample 1.
Fig. 8: Results of the Normalized TP, TN, FP, and FN Values on the SWaT Test Set by Phase 2 COPOD 1 shown in Figure 4.
Fig. 6: Detected Normal Data Sample in the SWaT Training Dataset by Phase 1 shown in Figure 3 of the Proposed Method: Sample 1.
Fig. 7: Detected Anomaly in the SWaT Test Dataset by Phase 2 shown in Figure 4 of the Proposed Method: Discrete Sample 1.
Fig. 9: Results of the Normalized TP, TN, FP, and FN Values on the SWaT Test Set by Phase 2 COPOD 2 shown in Figure 4.
[49], Support Vector Machines (SVM) [49], Auto-encoders (AE) [49] and Dual Isolation Forest (DIF) [9]. Table I shows a comparison of evaluation results between previously published methods and the proposed method. Table I shows that TPD COPOD has an F1-score of \(93\%\), precision of \(93\%\), and recall of \(93\%\). TPD COPOD has the highest F1-score and recall values as compared with previous work. In terms of precision, TPD COPOD performed at par with NN-one class, DIF, and SVM. The high recall and F1-score values of TPD COPOD reflect its robustness and ability to confidently detect SWaT dataset anomalies compared to previous work. When compared to other approaches with similar computational complexities, such as NN, the proposed anomaly detection method significantly outperforms NN in terms of F1-score and recall. The outstanding performance of TPD COPOD across all performance metrics could be attributed to the noise removal from the training data in phase 1 during model training. Phase 2 of the method in Figure 4 trains on clean training data, arguably a true representation of the normal operations of the SWaT testbed.
### _Results Evaluation on the WADI Dataset_
This subsection describes the proposed method's performance on the WADI dataset compared to previous work evaluated on the same dataset. During training, the WADI training dataset is passed through the first stage of the TPD COPOD method for any noise and unwanted signals to be removed. The total number of dimensions (features) of the training dataset, consisting of both discrete and continuous values, is 119. The clean WADI data, consisting of 706,114 samples, was separated into discrete and continuous samples for phase 2 of the TPD COPOD. The computer used for analysis required 13.80s to train the phase 2 models and 19.64s to make predictions on the test data consisting of 172,800 samples.
The WADI dataset has been utilized for anomaly detection by SVM [49], KNN [49], AE [49], FB [49], EGAN [49], GAN [49], Deep Autoencoding Gaussian Mixture Model (DAGMM) [54], Long Short-Term Memory Variational Autoencoder (LSTM-VAE) [55], DIF [9], and Graph Deviation Network (GDN) [56]. Table II shows a comparison of evaluation results between previous work and the proposed method. Table II shows that TPD COPOD has superior performance to all previous work in terms of F1-score and recall. TPD COPOD achieved an F1-score of \(92\%\), precision of \(82\%\), and recall of \(86\%\). The high recall and F1-score values of TPD COPOD reflect its robustness and ability to confidently detect WADI dataset anomalies as compared to previous work. Although LSTM-VAE and GDN achieved higher precision values as compared to the proposed method in this study, the LSTM-VAE and GDN models have significantly low recall values of \(27\%\) and \(40.2\%\), respectively. The low recall values of LSTM-VAE and GDN resulted in poor F1-scores of \(25\%\) and \(57\%\), respectively. The outstanding performance of TPD COPOD across all performance metrics could be attributed to the noise removal from the training data in phase 1 during model training. Phase 2 of the method in Figure 4 trains on clean training data, arguably a true representation of the normal operation of the WADI testbed.
### _Results Evaluation on the TLIGHT Dataset_
This subsection describes the proposed method's performance on the TLIGHT dataset as compared with previous work evaluated on the same dataset. During training, the TLIGHT training dataset is passed through the first stage of the TPD COPOD method for any noise and unwanted signals to be removed. The total number of dimensions (features) of the training dataset, consisting of both discrete and continuous values, is 36. There are five different test sets in the TLIGHT dataset. The computer used for analysis required 0.67s for phase 1 to clean the training data consisting of 41,580 samples. A total of 4,144 training samples were detected as outliers in phase 1.
Table III shows the comparison between TPD COPOD and previous methods evaluated on the TLIGHT dataset. Test set 1 of the TLIGHT dataset consisted of 5,000 samples; the computer required 0.328s to make predictions. Table III shows that TPD COPOD achieves superior precision, recall, and F1-score of \(96\%\), \(96\%\), and \(95\%\) respectively. OCSVM [4], OCNN [4] and IF [4] have similar performance in terms of precision, recall, and F1-score. Test set 2 of the TLIGHT dataset consisted of 7,000 samples. The computer used for analysis required 0.437s to make predictions on test set 2. TPD COPOD achieves superior performance by correctly predicting all the data points in test set 2, achieving \(100\%\) precision, recall, and F1-score. OCSVM, OCNN and IF have similar performance in terms of precision, recall, and F1-score. IF
achieves high precision, recall, and F1-score values of \(98\%\), \(97\%\), and \(97\%\) respectively, on test set 2. However, OCNN and OCSVM achieved similar performance on test set 2. The test set 3 of the TLIGHT dataset consisted of 13,130 samples. The computer used for analysis required 0.324s to make predictions on the test set 3. TPD COPOD achieves superior precision, recall, and F1-score of \(91\%\), \(90\%\), and \(88\%\) respectively, on test set 3. OCSVM, OCNN and IF achieve similar performance in terms of precision, recall, and F1-score on test set 3.
Test set 4 of the TLIGHT dataset consisted of 15,000 samples. The computer used for analysis required 0.477s to make predictions on test set 4. TPD COPOD achieved relatively low performance on test set 4 as compared to its performance on test sets 1 and 2, with precision, recall, and F1-score values of \(88\%\), \(85\%\), and \(83\%\) respectively. OCNN, IF, and TPD COPOD achieve similar performance, whereas OCSVM has the worst performance, with precision, recall, and F1-score values of \(81\%\), \(81\%\), and \(71\%\) respectively. TPD COPOD has low performance on test set 4 because of the large proportion of anomalies in test set 4 consisting of timing bits anomalies, which are hard to detect [4]. Test set 5 of the TLIGHT dataset consisted of 18,269 samples. The computer used for analysis required 0.517s to make predictions on test set 5. TPD COPOD achieved its lowest performance on test set 5, with precision, recall, and F1-score values of \(83\%\), \(75\%\), and \(73\%\) respectively. OCNN and OCSVM achieve similar performance, whereas IF has the best performance, with precision, recall, and F1-score values of \(82\%\), \(78\%\), and \(77\%\) respectively. Again, TPD COPOD has low performance on test set 5 because of the large proportion of anomalies in test set 5 consisting of timing bits anomalies. Therefore, the TPD COPOD appears to be ineffective at detecting TLIGHT system errors consisting of system timing bits.
## VI Conclusion
This work proposes the first known two-phase dual (TPD) COPOD anomaly detection method, which is an unsupervised anomaly detection technique consisting of two sequential stages and a dual (parallel) modeling stage. Phase 1 of the method makes use of ECOD to remove any obvious noise or outlier data records in a given training dataset. Phase 2 of the method consists of a dual COPOD architecture that utilizes the output data of phase 1 to develop two COPOD models. The algorithms implemented in the proposed method are parameter-free and based on empirical distribution functions. The deterministic nature of all stages of the method results in the mitigation of the challenges associated with hyperparameter selection in unsupervised anomaly detection. Furthermore, the proposed method is highly interpretable and quantifies each feature's contribution toward an ICS anomaly. The proposed anomaly detection method is computationally and memory efficient, scalable, and suitable for low- and high-dimensional ICS datasets. The proposed method is trained, evaluated, and compared with previous work using three open-source ICS datasets. The proposed method outperformed previous work in terms of F1-score and recall on the SWaT, WADI and TLIGHT datasets. The robust performance of the TPD COPOD method coupled with its speed of anomaly detection makes the TPD COPOD capable of real-time ICS anomaly detection. Future work should focus on finding a means by which the ECOD algorithm of phase 1 may be extended to multimodal training datasets.
|
2309.10844 | HYPERTILING -- a high performance Python library for the generation and
visualization of hyperbolic lattices | HYPERTILING is a high-performance Python library for the generation and
visualization of regular hyperbolic lattices embedded in the Poincar\'e disk
model. Using highly optimized, efficient algorithms, hyperbolic tilings with
millions of vertices can be created in a matter of minutes on a single
workstation computer. Facilities including computation of adjacent vertices,
dynamic lattice manipulation, refinements, as well as powerful plotting and
animation capabilities are provided to support advanced uses of hyperbolic
graphs. In this manuscript, we present a comprehensive exploration of the
package, encompassing its mathematical foundations, usage examples,
applications, and a detailed description of its implementation. | Manuel Schrauth, Yanick Thurn, Florian Goth, Jefferson S. E. Portela, Dietmar Herdt, Felix Dusel | 2023-09-19T18:00:02Z | http://arxiv.org/abs/2309.10844v3 | HYPERTILING - a high performance Python library for the generation and visualization of hyperbolic lattices
###### Abstract
hypertiling is a high-performance Python library for the generation and visualization of regular hyperbolic lattices embedded in the Poincare disk model. Using highly optimized, efficient algorithms, hyperbolic tilings with millions of vertices can be created in a matter of minutes on a single workstation computer. Facilities including computation of adjacent vertices, dynamic lattice manipulation, refinements, as well as powerful plotting and animation capabilities are provided to support advanced uses of hyperbolic graphs. In this manuscript, we present a comprehensive exploration of the package, encompassing its mathematical foundations, usage examples, applications, and a detailed description of its implementation.
###### Contents
* 1 Introduction
* 1.1 Hyperbolic Lattices
* 1.2 Applications
* 1.3 Existing Implementations
* 2 Setup
* 2.1 Environment
* 2.2 Installation
* 2.3 Quick Start
* 3 Features
* 3.1 Tilings
* 3.2 Graphs
* 3.3 Neighbors
* 3.4 Refinements
* 3.5 Dynamic Modification and Filters
* 3.6 Drawing
* 3.7 Animations
* 4 Mathematical Foundations
* 4.1 Isometries
* 4.2 Geodesics
* 4.3 Polygons
* 5 Architecture
* 5.1 Overview
* 5.2 Static Rotational Kernels
* 5.2.1 General Idea
* 5.2.2 Rotational Duplicates
* 5.2.3 Implementation Details
* 5.2.4 Neighbors
* 5.3 Dunham's Algorithm
* 5.4 Generative Reflection Kernel
* 5.4.1 General Concepts
* 5.4.2 Algorithmic Details
* 5.4.3 Layer Definition
* 5.4.4 Graph Kernels
* 5.4.5 Neighbors
* 5.4.6 Integrity Check
* 5.5 Benchmarks
* 5.6 Choosing a Kernel
* 6 Examples
* 6.1 Epidemic Spreading
* 6.2 Scalar Field Theory
* 6.3 Helmholtz Equation
* 7 Road Map
* 7.1 Triangle Groups
* 7.2 Regular Maps
* 7.3 Symbolic Kernels
* 7.4 Parallelization
* 8 Conclusion
## 1 Introduction
The exploration of curved spaces is motivated by the recognition that the intrinsic geometry of a system can dramatically affect its behavior and characteristics, and that the flat Euclidean geometry, which accurately describes our everyday experiences, is not universally applicable across all scales and contexts. Curvature plays a significant role in various branches of science, most notably general relativity. Although measurements of the cosmic microwave background indicate that the universe as a whole is flat or very close to it [1, 2, 3], curvature remains a key concept in cosmology and astronomy, as massive objects directly change space and time around them.
Spaces with negative curvature in particular are of great interest. These so-called _hyperbolic spaces_ play a decisive role in the AdS/CFT correspondence [4, 5, 6, 7], which provides a duality between conformal field theory (CFT) operators and Anti-de Sitter (AdS) gravity fields and is of great significance both for fundamental aspects of quantum gravity [8] and for applications to strongly correlated condensed matter systems [9].
Applications of curved manifolds are also found in many other fields of science and engineering, for instance, in large-scale climate simulations encompassing the entire planet Earth. Accounting for the effects of curvature becomes then crucial for accurately modeling and predicting weather patterns, climate changes and their potential impact on ecosystems. Specifically, Earth can be represented as a two-sphere, \(\mathbb{S}_{2}\), which is a compact manifold of constant positive curvature. Constant _negative_ curvatures, on the other hand, correspond to hyperbolic spaces. These non-compact manifolds distinguish themselves from flat spaces, in that, for instance, the volume encompassed by a ball of radius \(r\) grows exponentially with \(r\) instead of polynomially, and there are not one, but infinitely many parallels to any given line \(L\), passing through any point \(P\) not on \(L\).
Manifolds of constant curvature are fully characterized by their scalar curvature radius \(\ell\). From a more abstract point of view, \(\ell\) can be seen as a control parameter, representing a natural extension of the usual flat geometry. Induced by the curvature, established physical systems exhibit different behaviors or even entirely new phenomena, which, in turn, can be used as probes, yielding novel insights into the physics of the corresponding flat models, in the limit \(\ell\to\infty\). Examples of physical processes which are substantially affected by their supporting geometry can be found in diffusive systems [10, 11, 12, 13, 14], in magnetic properties of nano-devices [15, 16, 17], soft materials [18], complex networks [19, 20], including information infrastructure [21], quantum gravity [22, 23, 24], bio-membranes [11], glass transitions [25, 26], equilibrium spin systems and critical phenomena [27, 28, 29, 30] as well as adsorption and coating phenomena on non-flat surfaces [26].
The investigation of many of the phenomena mentioned so far demands a numerical approach, often involving a discretized geometric representation, which is generally a nontrivial task in hyperbolic spaces. The purpose of hypertiling[31] is to provide a robust, powerful and flexible solution for this step.
This paper is organized as follows: In the remainder of this Introduction, we briefly present hyperbolic spaces and lattices, their applications and existing numerical implementations. In Section 2, we show how easy it is to install and use hypertiling and in Section 3 we showcase the range of features it offers. Section 4 is a short summary of the package's underlying mathematical foundations and Section 5 provides an extensive discussion of our algorithmic numerical implementations, their different features and performances. We show the package in action in Section 6, discuss our future plans in Section 7 and offer our closing remarks in Section 8.
### Hyperbolic Lattices
A manifold equipped with constant negative curvature (a hyperbolic space) is commonly denoted by the symbol \(\mathbb{H}^{d}\), where \(d\) is the number of spatial dimensions. Formally, it can be embedded in a \(d+1\) dimensional space with Minkowskian signature. Specifically, it constitutes the hypersurface constrained by the relation
\[x^{\mu}x_{\mu}=\eta_{\mu\nu}x^{\mu}x^{\nu}=-x_{0}^{2}+x_{1}^{2}+\ldots+x_{d}^{2}=-\ell^{2}, \tag{1}\]
where \(\eta=\text{diag}(-1,1,\ldots,1)\). The line element is given by
\[\mathrm{d}s^{2}=-\mathrm{d}x_{0}^{2}+\mathrm{d}x_{1}^{2}+\ldots+\mathrm{d}x_{d}^{2}. \tag{2}\]
In this paper, we restrict ourselves to the case \(d=2\), also known as the _pseudosphere_. This notion stems from an apparent resemblance of the \(\mathbb{H}_{2}\) metric to that of an ordinary sphere. Using the parametrization \(x_{0}=\cosh\rho\), \(x_{1}=\sinh\rho\cos\phi\) and \(x_{2}=\sinh\rho\sin\phi\), we arrive at
\[\mathrm{d}s^{2}=\mathrm{d}\rho^{2}+\sinh^{2}\rho\mathrm{d}\phi^{2}, \tag{3}\]
where \(\rho\in[0,\infty)\) and \(\phi\in[0,2\pi)\). In the literature, a certain variety of similar, polar-like coordinate representations can be found, a frequently used one being
\[\mathrm{d}s^{2}=\frac{1}{1+r^{2}/\ell^{2}}\mathrm{d}r^{2}+r^{2}\mathrm{d}\phi^ {2}, \tag{4}\]
where the curvature radius \(\ell\) enters explicitly and \(r\in[0,\infty)\). All these coordinate systems are mathematically equivalent in that they describe the same manifold. The coordinate representation primarily used in this paper is the so-called _Poincare disk model_ of the hyperbolic space, denoted as \(\mathbb{D}_{2}\). Its metric is given by
\[\mathrm{d}s^{2}=\frac{4\ell^{2}}{(1-z\bar{z})^{2}}\mathrm{d}z\mathrm{d}\bar{z}, \tag{5}\]
where \(z\in\mathbb{C}\), \(|z|<1\). Note that \(K=-1/\ell^{2}\) represents the constant negative curvature of the manifold. In the Poincare model, the entire \(\mathbb{H}_{2}\) space is projected onto the complex plane, with the unit circle representing points infinitely far away from the origin.
The hyperbolic two-space can be naturally discretized by _regular hyperbolic tilings_[32], which have been studied since the late 19th century [33, 34]. They became known to a broader scientific audience through the works of H.S.M. Coxeter [35, 36], which also inspired M.C. Escher's famous Circle Limit drawings [37]. Regular hyperbolic tilings preserve a large subgroup of the isometries of \(\mathbb{H}_{2}\)[38, 39], which makes them promising candidates for a wide range of numerical simulation setups. Regular tilings are characterized by their Schlafli symbol \((p,q)\), where the condition \((p-2)(q-2)>4\) has to be met in order for a tiling in which \(q\) regular \(p\)-gons meet at each vertex to be hyperbolic. The \((7,3)\) hyperbolic tiling and its dual \((3,7)\) tiling are shown in Figure 1 as an example. Since hyperbolic spaces exhibit a length scale, defined by their radius of curvature \(\ell\), the edge lengths of hyperbolic polygons are fixed quantities, depending only on the Schlafli parameters \(p\) and \(q\)[40]. Their geodesic length \(h^{(p,q)}\) in units of \(\ell\) can be computed via the Poincare metric (5) and can be interpreted as a fixed lattice spacing that cannot be tuned1. In general, the inherent length scale has significant implications for the discretization of hyperbolic spaces. Foremost, it renders a continuum limit of \((p,q)\) tilings in the usual way impossible, which severely limits the applicability of traditional finite-size scaling methods [41, 42].
Footnote 1: A detailed discussion can be found in Section 4.3.
Scaling, i. e. shrinking or enlarging a regular polygon in a hyperbolic or spherical space will also influence its shape, evidenced by changes in the interior angles at the vertices, and the regular tessellation will in general no longer cover the space without voids or overlaps. Technically speaking, there is no concept of _similarity_ in curved spaces. Moreover, these geometric peculiarities render the construction of periodic boundaries - which can be indispensable when studying bulk systems due to the generically _large_ boundary of hyperbolic spaces - particularly challenging [43]. Broadly speaking, the bounding edges of a suitable finite part of the tessellation need to be glued together properly in order to obtain a translationally invariant lattice. A systematic approach of how compact surfaces can be tessellated is given by the theory of so-called _regular maps_[44, 45, 46, 47]. Just like other compact surfaces, regular maps can be embedded in 3D Euclidean space, such as beautifully demonstrated in Reference [48].
### Applications
Hyperbolic lattices found particular interest in the field of critical phenomena [49, 50] over the last two decades. For the Ising model [51] there are strong indications that the critical exponents take on their corresponding mean-field values as the hyperbolic grid can be regarded as effectively infinite-dimensional [52, 53]. Moreover, even at very high temperatures small-sized ferromagnetic domains can be observed [54]. Despite the mean-field properties on hyperlattices, the correlation length does not diverge at criticality, rather it stays finite, thus indicating the existence of an inherent length scale linked to the curvature radius which destroys the usual concept of scale invariance at criticality [55, 56]. Also, other equilibrium critical phenomena have been examined on hyperbolic lattices, including the \(q\)-state Potts model and the \(XY\) model [57, 25, 58]. For the latter, it turned out that the hyperbolic surface induces a zero-temperature glass transition even in systems without disorder. This is due to the non-commutativity of parallel transport of spin vectors which causes a breakdown of their perfect orientational order and consequently gives rise to local frustration. Even more striking novel effects were found in _percolation_ systems on hyperbolic lattices [59, 60, 61, 62, 63, 64, 65]. Specifically, an intermediate phase associated with two critical thresholds arises. At the lower critical probability, infinitely many unbounded clusters emerge. At the upper critical point, these clusters join into one unique unbounded cluster, spanning the entire system. In the flat Euclidean limit, these two thresholds coincide and the intermediate phase vanishes. It was found that this behavior is due to the non-vanishing surface-volume ratio of these lattices in the infinite-volume limit.
Besides critical phenomena, the physics of hyperbolic tilings has recently been studied in the context of condensed matter physics [66], circuit quantum electrodynamics [67, 68, 69], quantum field theory [70, 71] and topolectric circuits [72, 73, 74]. Another research branch where hyperbolic lattices arise very naturally is the AdS/CFT correspondence [4], as an Anti de-Sitter space with Euclidean signature, EAdS\({}_{2}\), is isomorphic to \(\mathbb{H}_{2}\). Current attempts to discretize the AdS/CFT correspondence are based on modeling hyperbolic spaces [75]. Very recently, some of the authors were able to show that the Breitenlohner-Freedman bound [76, 77], a central result in supergravity which states that certain perturbations that are unstable in flat geometries are actually stable on hyperbolic spaces, allows for a straightforward experimental realization via hyperbolic electric circuits [78]. In this study, an earlier version of the hypertiling package was used.
### Existing Implementations
The paper _Hyperbolic Symmetry_[79] by Douglas Dunham has been a very influential work in the numerical exploration of regular hyperbolic geometry. Implementations from soon after its publication [80] up until today [81] are based on this algorithm. Scientific applications such as those described in Section 1.2, especially with a numerical focus, demand frameworks for constructing hyperbolic lattices and, over the years, individual researchers and small research groups have been developing their own codes. These codes, however, are typically neither openly available nor maintained after the publication of their associated work and are therefore of little use for the wider research community. Among the implementations that are available, some are for demonstration or educational purposes and have as their main objective simply displaying hyperbolic tilings [82, 83, 84, 85, 86], sometimes with artistic goals [87, 81], or supporting manufacturing applications [88, 89] that demand the construction of relatively small tessellations. Other, more group-theoretic approaches are understood as proofs of concept rather than high-performance modules [90, 91]. Hence, none of these projects provides the performance, data availability, resources and documentation required for sustained scientific research. It is with the aim of fulfilling these research needs that hypertiling has been created.
## 2 Setup
### Environment
hypertiling is a Python package and should run anywhere a Python 3 interpreter is available. In order to construct and visualize basic hyperbolic lattices, we only require two very common package dependencies, namely _numpy_ and _matplotlib_. To fully utilize the high-performance aspect of the library, we furthermore recommend installing _numba_, which is used to significantly speed up many of hypertiling's internal functions. However, note that even without _numba_ the package is fully functional, only potentially slower. Finally, specific optional dependencies are the package _sortedcontainers_2, employed for a faster internal memory layout of specific construction kernels, as well as _networkx_3, which can be a useful extension of the visualization capabilities already provided directly in hypertiling. Note that all these packages are available via standard sources, such as PyPI or conda.
Footnote 2: [https://grantjenks.com/docs/sortedcontainers](https://grantjenks.com/docs/sortedcontainers)
Footnote 3: [https://networkx.org](https://networkx.org)
### Installation
The hypertiling library can be installed directly from the PyPI package index using the ubiquitous pip installer:
python -m pip install hypertiling
All releases as well as the latest version can also be downloaded or cloned from our public GitLab repository, using

```
git clone https://git.physik.uni-wuerzburg.de/hypertiling/hypertiling.git
```

For a local installation, execute from its root directory

```
python -m pip install .
```

Figure 1: Selection of regular hyperbolic tilings projected onto the Poincaré disk. Tilings in the upper (lower) row are centered about a cell (vertex).
### Quick Start
After successful installation, the package can be imported into any Python 3 interpreter and a first plot of a hyperbolic lattice is readily created with only a few lines of code:
```
from hypertiling import HyperbolicTiling
from hypertiling.graphics.plot import quick_plot

p, q, n = 7, 3, 4
tiling = HyperbolicTiling(p, q, n)
quick_plot(tiling)
```
This should display a tessellation similar to the (7,3) lattice shown in Figure 1.
## 3 Features
The hypertiling package centers on the construction of tilings of the two-dimensional hyperbolic plane. A hyperbolic tiling (or _tessellation_) consists of individual _cells_ or, in two dimensions, _polygons_, which are arranged to cover the hyperbolic manifold without voids or overlappings. We use the terms cells/polygons and tiling/tessellation/lattice interchangeably. Since particular focus is placed on _regular_ tilings, it is common to identify tilings by their Schlafli symbol \((p,q)\), with \((p-2)(q-2)>4\) for hyperbolic curvature. In a regular tiling, all \(p\)-gonal cells are identical/uniform in the sense of geometric congruence. For a selection of visualizations, we refer the reader to Figure 1. Besides hyperbolic tilings, hypertiling also offers the functionality to construct _graphs_, which can be interpreted as reduced, coordinate-free lattices. They comprise only the adjacency relations between vertices.
Constructing tilings and associated graphs represents the core functionality of hypertiling, which the library makes particularly simple, as shown in Section 2.3. For more advanced use, we first need to introduce the concept of _kernels_. In hypertiling, a kernel encodes the algorithmic construction, the data structure and certain peripheral methods and auxiliary functions of a tiling or a graph. However, since a uniform user interface is provided irrespective of the kernel used, the greatest possible flexibility is ensured and the library can be used with little to no knowledge of technical, kernel-specific details. Our interfaces are kept transparent and the user can benefit from the capabilities of the different kernels without any detailed knowledge of their inner workings.
### Tilings
The kernels which produce a HyperbolicTiling currently available in the package are:
* StaticRotationalSector or "SRS" Cells are constructed via rotations about vertices of existing ones. Cells are implemented as HyperPolygon class objects and can be refined (compare Section 3.4). A bookkeeping system is used to avoid duplicate cells.
* StaticRotationalGraph or "SRG" (default) Algorithmically related to SRS, this kernel constructs adjacency relations between cells already during the construction of the tiling. It is currently the default tiling kernel of the package and provides methods for adding or removing cells from the lattice dynamically.
* GenerativeReflection or "GR" Very fast and lightweight tiling construction mechanism, which uses reflections on "open" edges to generate new cells. Only one symmetry sector is held on storage, with cells outside of this sector being generated on demand.
* Dunham or "DUN07" An implementation of the influential construction algorithm by D. Dunham [79, 92]. Recursive calls to a hierarchical tree structure are used to build duplicate free tilings in hyperboloid coordinates rather than in the Poincare disk representation.
* DunhamX or "DUN07X" A modern, heavily optimized variant of DUN07, with a performance increase of more than one order of magnitude.
When instantiating a tiling, the kernel can be selected via a keyword argument, using either the abbreviation string
```
from hypertiling import HyperbolicTiling
T = HyperbolicTiling(7, 3, 2, kernel="GR")
```
or the full class name
```
from hypertiling import HyperbolicTiling
from hypertiling import TilingKernels
T = HyperbolicTiling(7, 3, 2, kernel=TilingKernels.GenerativeReflection)
```
An optional keyword for the kernels "SRS" and "SRG" is center, which can take on the values cell (default) and vertex and determines whether the tiling is centered around a polygon or a vertex. Examples for both cases can be found in Figure 1. In this context, it is worth remarking that a cell-centered \((p,q)\) tiling is the graph-theoretical dual of a vertex-centered \((q,p)\) tiling and vice versa.
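To make this duality explicit, the following minimal sketch constructs a cell-centered \((7,3)\) tiling together with a vertex-centered \((3,7)\) one; apart from the center keyword, the call is identical:

```
from hypertiling import HyperbolicTiling

# cell-centered (7,3) tiling (default behavior)
T_cell = HyperbolicTiling(7, 3, 3, kernel="SRS", center="cell")

# its graph-theoretical dual: a vertex-centered (3,7) tiling
T_vertex = HyperbolicTiling(3, 7, 3, kernel="SRS", center="vertex")
```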
An in-depth description of all kernels, including their particular advantages and shortcomings, as well as a detailed performance comparison can be found in Section 5. In most cases, however, the user is well served by either the "SRG" kernel (default), for greater flexibility, or the "GR" kernel, when speed is important or computing resources are a constraint.
Encapsulating all of the internal mechanics into kernel objects guarantees easy debugging, modification, and, most importantly, extensibility of the package. Irrespective of which kernel has been selected for the construction, a HyperbolicTiling object provides, e. g. iterator functionality (which returns a list of coordinates of the cell center and vertices) and can return its size via the Python built-in len() function. Moreover, cells come with several attributes, such as the coordinates of their vertices and center, angle in the complex plane, orientation, layer (or generation) in the tiling, and symmetry sector. These attributes can be accessed using get functions, for example
```
T.get_vertices(i)
T.get_angle(i)
T.get_layer(i)
```
which return these quantities for the \(i\)-th cell. A full reference of get-functions can be found in the package documentation. Note that the definition of polygon layers4, the output of get_layer, might differ among kernels and can be unavailable for some (such as "DUN07" and "DUN07X") due to algorithmic constraints.
Footnote 4: _Layers_ are also termed _coronas_ or _generations_ in the literature.
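As a short, purely illustrative usage sketch of this interface:

```
from hypertiling import HyperbolicTiling

T = HyperbolicTiling(7, 3, 3)
print(len(T))                     # total number of cells in the tiling

for i in range(len(T)):
    vertices = T.get_vertices(i)  # vertex coordinates of cell i
    layer = T.get_layer(i)        # generation in which cell i was created
```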
### Graphs
In addition to the kernels described in the previous section, we provide kernels which construct HyperbolicGraph objects. What is the difference between a HyperbolicTiling and a HyperbolicGraph? Broadly, in Section 3.1, cells are considered as individual entities, without knowledge of their surroundings. It is clear, however, that neighborhood relations are a crucial property in many applications, as detailed, e. g., in Section 1.2. For this reason, we provide methods for establishing adjacency between cells or, in other words, to access tilings as graphs. Such graphs capture the entire _geometric_ structure of the tiling and can be represented, for instance, as an array of adjacent cell indices, similar to a sparse representation of the corresponding adjacency matrix. Kernels, which construct _only_ this graph structure, independent of an actual representation of cells (in terms of their coordinates) are dedicated graph kernels and produce HyperbolicGraph objects. These leaner objects yield reduced features and functionality compared to a full HyperbolicTiling. Currently, two graph kernels are available in the package:
* GenerativeReflectionGraph or "GRG" Following the same algorithmic principles as the GR tiling kernel, this class constructs neighborhood relations already during the construction of the lattice. Only one symmetry sector is explicitly stored, whereas any information outside this sector is generated on demand. Geometric cell information, except for the center coordinates, is not stored.
* GenerativeReflectionGraphStatic or "GRGS" A static variant of the GRG kernel. Adjacency relations for all cells are explicitly computed, such that no sector construction and no on-demand generation is required. Hence the memory requirement is about a factor \(p\) larger compared to GRG. Nonetheless, GRGS is still very fast and therefore particularly suited for large-scale simulations of systems with local interactions.
Compared to tilings, hyperbolic graphs are invoked via a separate factory pattern, shown in the following code example:
```
from hypertiling import HyperbolicGraph
G = HyperbolicGraph(7, 3, 2, kernel="GRG")
```
### Neighbors
For kernels that do not establish adjacency relations already upon lattice construction, these relations can be computed in a separate second step, if required. The methods get_nbrs and get_nbrs_list, internally implemented as wrapper functions, offer various neighbor search algorithms, selectable using the method keyword, as shown in the following code example:
```
# construct tiling
T = HyperbolicTiling(7, 3, 5, kernel="SRS")

# select algorithm "radius optimized slice" (ROS)
T.get_nbrs_list(method="ROS")
```
The default method is kernel specific and generally the fastest available. Output of get_nbrs_list is a sparse nested List of length \(N\), where \(N\) is the total number of polygons in the tiling. Sublist \(i\) contains the indices of those cells which are neighbors of the cell with index \(i\). The function get_nbrs is invoked with an explicit index and yields only the list of neighbors of that particular cell. Note that, even though HyperbolicTiling and HyperbolicGraph are different objects, neighbors are accessed in the same way
```
# let T be a HyperbolicTiling or HyperbolicGraph

# return neighbours of cell i
T.get_nbrs(i)

# return neighbours of all cells
T.get_nbrs_list()
```
For the SRS kernel, the available neighbor search methods include method="ROS" (radius optimized slice), where the distance between any pair of cells in one symmetry sector is computed and compared against the lattice spacing. Also, a combinatorial algorithm method="EMO" (edge map optimized), where adjacency relations are obtained by identifying corresponding edges among polygons, is provided. A detailed discussion of all neighbor methods of the SR kernel family can be found in Section 5.2.4. The GR kernel provides several different algorithms as well, including a radius search method="radius" and a geometrical algorithm exploiting the lattice construction mechanics, method="geometrical". For a detailed list of all methods specific to the GR kernel, refer to Section 5.4.5.
Omitting the method keyword and hence resorting to the default algorithm is a solid choice in many use cases - unless an unusual definition of neighborhood is required. As illustrated in Figure 2, neighbors may for instance be defined by sharing either an edge or a vertex with the cell under consideration. But also broader neighborhoods are possible, e. g. by employing an appropriately tuned radius search. To ensure clarity regarding the specific definition employed by a particular method, refer to the package documentation.
In general, the computation of adjacency relations in an _existing_ tiling presents a non-trivial task. Given the fact that a two-dimensional manifold of constant negative curvature can not be isometrically embedded into three-dimensional Euclidean space, a natural ordering, which could be used to group or sort cells depending on their position, is lacking. Merely partitioning the Poincare disk into one or several rectangular grids, inspired by techniques like hierarchical multigrids, fails to achieve efficient ordering. In a loose sense, this issue arises due to the exponential growth of the manifold's volume in all directions. At the same time, the representation is heavily distorted towards the boundary as the unit circle is finite. As a result, neither the Poincare disk coordinates nor any Euclidean-style grid can be employed for efficient sorting purposes.
Figure 2: Adjacency can be defined by shared edges (left panel), shared vertices (middle panel) or within a radial region (right panel, black circle).

Given these difficulties, a conceptually straightforward way of identifying adjacent cells in _any_ regular geometry is by radius search, which is available as a standalone function in the neighbors module and used as the default algorithm behind get_nbrs_list in some kernels, such as DUN07. However, it should be emphasized that any neighbor search method that makes explicit use of Poincare disk coordinates risks becoming inaccurate close to the unit circle, where the Euclidean distance between adjacent cells vanishes. For this reason, we recommend performing additional consistency checks whenever very large lattices are required.
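The following self-contained sketch (purely illustrative, not part of the package API) captures the basic idea of such a radius search: cell centers are compared pairwise via the distance formula of Eq. (23), and two cells are flagged as adjacent whenever their center-to-center separation matches the expected spacing, which for a \((p,q)\) tiling is the dual lattice spacing \(h^{(q,p)}\):

```
import numpy as np

def geodesic_distance(z, w):
    """Hyperbolic distance between two points of the Poincare disk (Eq. 23, with l = 1)."""
    return 2 * np.arctanh(abs(z - w) / abs(1 - z * np.conj(w)))

def radius_neighbors(centers, spacing, eps=1e-5):
    """Brute-force O(N^2) adjacency search: cells i and j are neighbors if the
    geodesic distance of their centers agrees with the expected spacing.
    The tolerance eps may need to be adapted for very large lattices."""
    nbrs = [[] for _ in centers]
    for i, zi in enumerate(centers):
        for j in range(i + 1, len(centers)):
            if abs(geodesic_distance(zi, centers[j]) - spacing) < eps:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs
```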
### Refinements
Unlike in traditional Euclidean structures, in a regular hyperbolic lattice, the edge length of a cell (which is equivalent to the effective lattice spacing) can not be tuned freely. This is a direct consequence of a non-zero curvature radius, which introduces an intrinsic length scale to the system. As discussed in detail in Section 4.3, shrinking or enlarging a regular polygonal cell also changes its shape. In particular, the interior angles at the vertices are altered, resulting in overlapping or voids in the tessellation. As a consequence, a continuum limit, where the lattice spacing tends to zero, is not trivially achievable.
To a certain extent, this limitation can be circumvented by introducing _refinements_. As demonstrated in Figure 3, in a triangular tiling, each triangle can always be subdivided into four smaller ones. The midpoints of the edges of the original triangle are used as new vertices. In principle, this process can be applied repeatedly and returns more and more fine-grained refinement levels. If the original tiling is not a triangular one, the first refinement step needs to be adjusted. In this case, we first subdivide every \(p\)-gon (\(p>3\)) into \(p\) uniform triangles, employing the center (of mass) of the original cell as a new vertex, shared among the new triangles. Note that these new cells can be highly non-equilateral, depending on the parameters \(p\) and \(q\). After this first refinement step, we can proceed as described above.
It is important to emphasize that triangles generated in the refinement procedure are no longer isometric. Even when starting from an equilateral triangle (such as in the first refinement step of a \((3,q)\) tiling), the central triangle will in general differ from the three outer ones. Also, not all of the four refined triangles are equilateral. Hence it is clear that the refined tiling is no longer maximally symmetric. Stated differently, hyperbolic lattice refinements always break the discrete rotational symmetry of the lattice as well as its strict regularity. This is the price we pay for denser tessellations. Whether or not it is an acceptable trade-off clearly depends on the application. One way to quantify the amount of non-uniformity in the refined lattice is to monitor the cell areas as refinement steps are added. The package provides convenient formulae to calculate angles, edge lengths and areas (see Section 4). As a general rule of thumb, the smaller the cells in the original, unrefined tiling are with respect to the radius of curvature, the less pronounced the resulting non-uniformity will be. By repeated application of refinements, the distribution of triangle areas will eventually saturate, as elementary triangles become increasingly flat on scales smaller than the curvature radius.
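A standalone sketch of such a check (using the distance formula of Eq. (23) and the triangle area formula of Eq. (30), both introduced in Section 4, with \(\ell=1\); the helper functions are hypothetical and not part of the package) computes the hyperbolic area of a triangle directly from its vertex coordinates; applying it to every refined cell yields the area distribution:

```
import numpy as np

def dist(z, w):
    """Geodesic distance on the Poincare disk (Eq. 23, with l = 1)."""
    return 2 * np.arctanh(abs(z - w) / abs(1 - z * np.conj(w)))

def triangle_area(z1, z2, z3):
    """Hyperbolic area of the triangle (z1, z2, z3) via Eq. (30); the interior
    angles are recovered from the side lengths by the hyperbolic law of cosines."""
    a, b, c = dist(z2, z3), dist(z1, z3), dist(z1, z2)
    alpha = np.arccos((np.cosh(b) * np.cosh(c) - np.cosh(a)) / (np.sinh(b) * np.sinh(c)))
    beta  = np.arccos((np.cosh(a) * np.cosh(c) - np.cosh(b)) / (np.sinh(a) * np.sinh(c)))
    gamma = np.arccos((np.cosh(a) * np.cosh(b) - np.cosh(c)) / (np.sinh(a) * np.sinh(b)))
    return np.pi - alpha - beta - gamma
```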
In summary, hyperbolic lattice refinements provide a clever way to circumvent some of the limitations that result from the peculiarities of hyperbolic geometry and they offer a continuum limit where the effective length scale in the lattice approaches zero. This comes at the cost of losing symmetry properties as well as strict uniformity of the lattice cells. Even though recent research demonstrates that for example bulk-to-bulk propagators on a refined lattice agree remarkably well with their continuum counterparts [70], these points should be kept in mind.
Currently, in the hypertiling package, refinements are supported in the SRS kernel. They can be invoked as follows
```
from hypertiling import HyperbolicTiling

p, q, n = 6, 4, 3
T = HyperbolicTiling(p, q, n, kernel="SRS")

# add two refinement levels
T.refine(2)

# add three more
for i in range(3):
    T.refine()
```
The method refine takes the number of refinement levels as its only argument. One should be aware that the size of the tiling increases quickly with the number of refinement iterations. Specifically, the number of cells after \(r\) refinement steps is given by
\[N_{c}(r)=\begin{cases}4^{r}\,N_{0}&\text{for }p=3\\ 4^{r-1}\,p\,N_{0}&\text{for }p>3\end{cases} \tag{6}\]
where \(N_{0}\) represents the number of cells in the original, unrefined lattice.
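As a quick plausibility check of Eq. (6), a small standalone helper (not part of the package) tabulates the expected cell counts:

```
def refined_cell_count(r, p, n0):
    """Expected number of cells after r >= 1 refinement steps, Eq. (6);
    n0 is the number of cells of the unrefined lattice."""
    if r == 0:
        return n0
    return 4**r * n0 if p == 3 else 4**(r - 1) * p * n0

# hypothetical example: an unrefined (6,4) lattice with n0 = 121 cells
for r in range(1, 5):
    print(r, refined_cell_count(r, p=6, n0=121))
```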
### Dynamic Modification and Filters
The static rotational graph (SRG) kernel comes with a number of unique features compared to other kernels in hypertiling, as it enables flexible modifications of an existing lattice. This can be useful, for instance, in applications where the lattice serves not only as a static supporting structure but also undergoes dynamic changes. An example might be reaction-diffusion processes where particles traverse the lattice in either a random or rule-based manner. In this scenario, it can be advantageous to dynamically generate the required lattice cells around the current positions of the particles, rather than generating and storing a vast, complete lattice. Moreover, a lattice that expands according to a walker's movement may avoid boundary effects.
An illustration of the SRG kernel's ability to add and remove cells dynamically is given in Figure 4. We start by adding cells to an existing tiling using the add method. This function acts on _exposed_ cells, i. e. those with incomplete neighborhoods or, more precisely, cells that have less than \(q\) neighbors. When invoked without arguments, all vacant spaces around _exposed_ cells are filled with grid cells. Using an efficient container-based bookkeeping system (refer to Section 5.2.3 for an in-depth explanation) this can be accomplished without creating identical copies of already existing cells, so-called _duplicates_. Also, all absent cells which share a vertex with a currently exposed cell are created by add.
Considering that the SRG kernel gradually builds up the neighbor structure, it is important to note that exposed cells only know about their "inwards" neighbors, i. e. their parents, but not their siblings, until the next layer is generated. Naturally, cells located in the boundary layer of a tiling are always exposed.
Figure 3: A cell-centered (3,8) lattice before (left), after one (middle) and after two (right) refinement iterations.
When invoking add, the associated adjacency relations are also computed alongside the process of generating new cells. Newly created cells are always tagged as exposed, even in case they happen to close a gap or void within the lattice, as can be seen from Figure 4. Cells acted upon by add lose this attribute, as by design all possible neighbors are then present. In order to _un-expose_ all cells, i. e. to obtain the complete graph structure of the current tiling, the user might consider calling a neighbor search method which globally computes the adjacency structure (see Section 5.2.4).
Apart from adding an entire new "layer", the add function can also be applied to a subset of cells, using a list of their indices as an input argument. Those can be exposed cells as well as "bulk" cells. Clearly, when a cell is already surrounded by the full set of \(q\) neighbors, no new cells are created since any addition would be a duplicate. Newly added cells are assigned unique indices. The list of exposed cells can be queried using the get_exposed class method.
The SRG kernel also provides the functionality to remove cells in an existing tiling, using the method remove, which takes a list of integers containing indices of cells, that are to be removed. Cells can be removed anywhere in the lattice, as demonstrated in Figure 4. As for the addition process, all local adjacency relations are updated accordingly.
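A minimal usage sketch of these dynamic facilities (the cell indices are arbitrary example values) might look as follows:

```
from hypertiling import HyperbolicTiling

# the SRG kernel (default) supports dynamic lattice modification
T = HyperbolicTiling(7, 3, 2, kernel="SRG")

T.add()                    # fill all vacant spots around exposed cells
print(T.get_exposed())     # indices of cells with incomplete neighborhoods

T.remove([3, 4, 5])        # delete some cells (arbitrary example indices)
T.add([0])                 # rebuild the neighborhood around cell 0
```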
In the SRG kernel, both the (usual) one-step lattice construction, as well as the dynamic addition of cells offer the option to incorporate _filters_. Filters are small helper functions, that can be implemented by the user. They can be used, for instance, to limit the generation of the tiling to certain regions in the Poincare disk. Filters are expected to follow a simple syntax, namely they take a complex number (representing the central coordinate of a cell) as an input argument and return a boolean determining whether this cell is to be created or not. During the construction procedure, every candidate for a new cell is checked against that filter and only created if the associated condition is met. For example, in case one wants to grow a tiling only into the upper half of the complex plane, a filter like this might be suitable:
```
import numpy as np

def my_angular_filter(z):
    angle = np.angle(z, deg=True)
    return True if (0 < angle < 180) else False
```
Concluding this section, it is worth noting that an interactive demonstration notebook is provided in the examples directory of the package. This notebook might serve as a resource to familiarize oneself with the aforementioned features and explore their functionality.
Figure 4: Demonstration of the lattice modification capabilities of the SRG kernel. Starting from a (7,3) lattice with two layers, an additional layer is added. Then, we remove a number of cells and arrive at a disconnected lattice, where we reconstruct new cells around cells 0 and 36. Exposed cells carry red labels.
### Drawing
Working with non-standard computational lattices also demands suitable methods for data visualization. In the Poincare disk model of hyperbolic geometry, we are confronted with a number of challenges regarding graphical representations. First and foremost, lattice cells are always bounded by curved edges, a property that is not natively supported by many standard plot engines. Moreover, for larger lattices, the strong distortion of the stereographic projection towards the boundary of the unit circle results in a proliferation of cells near that boundary. This consumes substantial numerical resources, even though these cells typically can not be resolved.
To help the user deal with these challenges, hypertiling provides a selection of visualization routines for hyperbolic tilings and associated geometric objects. These routines are summarized in Table 1 and described in more detail below.
#### Matplotlib API
* quick_plot Focuses on simplicity and speed, offering the best performance, but the fewest extra options, namely, adding the Poincaré disk boundary (unit circle), setting the image resolution and the arguments from matplotlib's Polygon, such as linewidth or alpha.
* plot_tiling Offers more customizable plots: by internally using matplotlib's Patches instead of Polygons, the full extent of matplotlib's keyword arguments is available. Besides, _lazy plotting_ (see below) and individually colored cells are available.
* plot_geodesic Renders the polygon edges as proper geodesics, i. e. as circular arcs instead of straight lines - the cost of this trick, however, is the loss of the notion of cells or patches, which therefore can not, e. g., be filled with color. We intend to include this feature in a future release.
**Lazy plotting** Since cells near the boundary of the unit circle often can not be displayed properly, it can be useful to set a cutoff radius beyond which polygons are omitted. This feature is activated by the option _lazy plotting_, available for the plot_tiling and plot_geodesic routines, and results in lighter plots without slicing the lattice itself.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Feature** & quick\_plot & plot\_tiling & plot\_geodesic & svg module \\ \hline performance & fast & medium & medium & slow \\ geodesic lines & ✗ & ✗ & ✓ & ✓ \\ individual cell color & ✗ & ✓ & ✗ & ✓ \\ lazy plotting & ✗ & ✓ & ✓ & ✗ \\ matplotlib kwargs & ✗ & ✓ & ✓ & ✗ \\ unit circle & ✓ & ✓ & ✓ & ✓ \\ backend & matplotlib & matplotlib & matplotlib & custom \\ \hline \hline \end{tabular}
\end{table}
Table 1: Feature comparison of the current plotting capabilities of the hypertiling package. Here, _unit circle_ denotes the option of adding the boundary of the Poincaré disk; _matplotlib kwargs_, the availability of matplotlib’s keyword arguments; and _lazy plotting_, the possibility of setting a cutoff radius.
**Code extension** The modular design of the code simplifies the integration of specific features from the library into the user's custom routines. Two important internal plotting functions are given by convert_polygons_to_patches and convert_edges_to_arcs, which produce the corresponding matplotlib graphic objects from hyperbolic cells. The routines plot_tiling and plot_geodesic described above are essentially wrappers for these functions, which are useful building blocks for further plot scripts, as shown in our demo notebooks, available in the package repository.
#### SVG module
Many applications demand tessellations to be depicted in a geometrically accurate manner - e. g. with vertices connected by geodesics instead of Euclidean straight lines - and in a format that allows the usual graphical manipulations, such as filling areas with color. To make this possible we provide an SVG drawing extension module in hypertiling. The function make_svg renders the tiling as a vector graphic object where edges are drawn as geodesics and polygons are encoded as closed loops consisting of several edges. The SVG image can then be displayed by the interpreter using draw_svg or written to a file via write_svg.
The SVG module provides in many respects the most flexible drawing option in the hypertiling package since SVG images can be freely manipulated in any text editor or vector graphic program. The module's downside is its incompatibility with the matplotlib plot environment: The extension provides a few options, such as line color and thickness; however, production-quality images typically demand additional polishing using external programs.
### Animations
Besides its plotting facilities, hypertiling also comes with animation classes, which support dynamically adjustable viewpoints and cell colors. The resulting animations can be exported as video files. These classes are implemented as wrappers around matplotlib's built-in animation module and render particularly well in a Jupyter notebook environment, where interactive plotting can be activated using the %matplotlib notebook magic command. hypertiling's animation classes are briefly described below and a demonstration notebook with code examples is available in the package's repository.
**List Animations** Given a pre-computed array-like object of color states, the AnimatorList creates an animation in which the cells' colors cycle through this list. Standard matplotlib animation keyword arguments are also accepted, including interval, which sets the time between color changes in milliseconds, as well as repeat and repeat_delay.
**Live Animations** The AnimatorLive class has a very similar signature to that of AnimatorList, but instead of taking a pre-computed list of states, it allows the color values to be updated dynamically, according to a user-implemented function. This function must take the current state as its first argument and return a new state (an array-like object matching the number of cells in the tiling). Additional function arguments (such as physical parameters) can be passed as keyword arguments using stepargs. As with AnimatorList, matplotlib animation keyword arguments are passed using the animargs dictionary. Note that the argument frames in this case controls the duration of exported animations.
**Path Animations** The classes above vary only the colors on an otherwise static background tiling. The PathAnimation class introduces the possibility of translating the lattice during the animation. Colors can also be animated, according to a list, as in AnimatorList. The translation is defined by a path, which can be a sequence of polygon indices or of coordinates in the
complex plane: These path elements are sequentially moved to the origin, with the smoothness of the animation being controlled by the number of intermediate frames, path_frames, between every path element.
## 4 Mathematical Foundations
We now transition to the mathematical foundations of two-dimensional hyperbolic geometry. This section explains key concepts such as isometries, Mobius transformations, and the construction of geodesics and polygons in the Poincare representation.
### Isometries
As already hinted in the introduction, the hyperbolic 2-space in the Poincare disk representation is given by the interior region of the complex unit circle, \(\mathbb{D}_{2}=\{z\in\mathbb{C},\,|z|<1\}\), equipped with the metric
\[\mathrm{d}s^{2}=\frac{4\ell^{2}}{(1-z\bar{z})^{2}}\mathrm{d}z \mathrm{d}\bar{z}, \tag{7}\]
where \(K=-1/\ell^{2}\) represents the constant negative Gaussian curvature of the manifold. The set of orientation-preserving Mobius transformations \(f:\mathbb{C}\to\mathbb{C}\), defined as
\[f(z)=\frac{az+b}{cz+d};\quad a,b,c,d\in\mathbb{C};\quad ad-bc\neq 0 \tag{8}\]
establishes the Mobius group
\[\mathrm{M\ddot{ob}}_{+}\cong\mathrm{PGL}(2,\mathbb{C}), \tag{9}\]
which is isomorphic to the projective linear group of degree two with complex coefficients. These transformations take on a key role in hyperbolic geometry since certain subgroups act as conformal isometries on the Poincare disk \(\mathbb{D}_{2}\). Specifically, the subgroup of Mobius transformations which describes all orientation-preserving isometries of \(\mathbb{D}_{2}\) and contains all elements as in Eq. (8) can be written as \(2\times 2\) matrices
\[\begin{pmatrix}z\\ 1\end{pmatrix}\mapsto\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}z\\ 1\end{pmatrix}. \tag{10}\]
Through the unitarity requirement, we find \(|a|^{2}-|b|^{2}=1\) as well as \(d=\bar{a}\) and \(c=\bar{b}\), yielding the final form
\[\begin{pmatrix}z\\ 1\end{pmatrix}\mapsto\underbrace{\begin{pmatrix}a&b\\ \bar{b}&\bar{a}\end{pmatrix}}_{M}\begin{pmatrix}z\\ 1\end{pmatrix}, \tag{11}\]
with \(|M|=1\). This makes it manifest that the set of orientation-preserving isometries forms a projective special unitary group
\[\mathrm{PSU}(1,1)=\mathrm{SU}(1,1)/\{\pm\mathbb{1}\}\subset \mathrm{M\ddot{ob}}_{+}, \tag{12}\]
which is isomorphic to \(\mathrm{PSL}(2,\mathbb{R})\)[39]. Discrete subgroups of \(\mathrm{PSU}(1,1)\) are oftentimes referred to as the _Fuchsian groups_ in the literature [38].
Depending on the choice of the three independent coefficients, \(M\) represents elementary isometric transformations of the hyperbolic plane such as rotations and translations. Factoring out a complex phase, we arrive at
\[f(z)=\mathrm{e}^{i\phi}\,\frac{z-\alpha}{1-\tilde{\alpha}z},\quad 0\leq\phi<2 \pi,\quad\alpha\in\mathbb{D}_{2} \tag{13}\]
and setting \(\alpha=0\) we find rotations about the origin \(z\mapsto\mathrm{e}^{i\phi}z\), which in matrix form can be written as
\[R(\phi)=\begin{pmatrix}e^{i\phi/2}&0\\ 0&e^{-i\phi/2}\end{pmatrix}. \tag{14}\]
Translations on \(\mathbb{D}_{2}\) are proper Lorentz boosts in \(2+1\) dimensions, or in other words elements of \(\mathrm{SO}^{+}(1,2)\). Consequently, we may write translations as
\[\begin{pmatrix}z\\ 1\end{pmatrix}\mapsto\underbrace{\begin{pmatrix}\cosh\frac{\theta}{2}&\sinh\frac{\theta}{2}\\ \sinh\frac{\theta}{2}&\cosh\frac{\theta}{2}\end{pmatrix}}_{T_{x}(\theta)}\begin{pmatrix}z\\ 1\end{pmatrix} \tag{15}\]
for boosts along the real axis with rapidity \(\theta\) (not \(\theta/2\)!). Boosts in an arbitrary direction can readily be realized by adding suitable rotations before and after the actual translation, i. e.
\[T_{\phi}(\theta)=R(\phi)\,T_{x}(\theta)\,R(-\phi), \tag{16}\]
where \(\phi\) is the angle between a vector pointing in the translation direction and the positive \(x\)-axis. Likewise, a rotation around an arbitrary point can be accomplished through the proper composition of elementary transformations. Thus, in addition to rotation, we implement the general form of a translation of point \(z_{0}\) to the origin, given by
\[z\mapsto f_{T}(z)=\frac{z-z_{0}}{1-z\bar{z}_{0}} \tag{17}\]
where the inverse transformation is given by
\[z\mapsto f_{T^{-1}}(z)=\frac{z+z_{0}}{1+z\bar{z}_{0}} \tag{18}\]
which maps the origin back to \(z_{0}\).
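As an illustration of how these elementary maps compose, the following standalone sketch (independent of the package internals, with \(\ell=1\); the helper names are hypothetical) rotates a point \(z\) by an angle \(\phi\) about an arbitrary center \(z_{0}\), combining Eqs. (13)-(18):

```
import numpy as np

def translate_to_origin(z, z0):
    """Moebius translation mapping z0 to the origin, Eq. (17)."""
    return (z - z0) / (1 - z * np.conj(z0))

def translate_back(z, z0):
    """Inverse translation mapping the origin back to z0, Eq. (18)."""
    return (z + z0) / (1 + z * np.conj(z0))

def rotate_about(z, z0, phi):
    """Rotate the point z by the angle phi about the point z0."""
    w = translate_to_origin(z, z0)   # move the rotation center to the origin
    w = np.exp(1j * phi) * w         # ordinary rotation about the origin
    return translate_back(w, z0)     # move the rotation center back to z0
```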
### Geodesics
On the Poincare disk, straight lines are circular arcs which intersect orthogonally with the boundary, i. e. the unit circle \(\mathbb{S}^{1}\), as illustrated in Figure 5. Points on \(\mathbb{S}^{1}\) (compare \(u,v\) in the Figure) are located infinitely far away from the interior region of the disk in terms of geodesic distance. Geometrically, the construction of a geodesic line through two points \(z_{a},z_{b}\in\mathbb{D}_{2}\) requires a circle inversion on the set of complex numbers without the origin, i. e. \(f:\mathbb{C}\setminus\{0\}\rightarrow\mathbb{C}\setminus\{0\}\) with \(f(z)=1/\bar{z}=z/|z|^{2}\), applied to either of the two points, which gives us a point \(z_{c}\) outside the unit circle. We are left to construct a circle through the points \(z_{a}\), \(z_{b}\), \(z_{c}\) and suitably parametrize the segment from \(z_{a}\) to \(z_{b}\). Note that the circle inversion can not be applied if \(z_{a}\), \(z_{b}\) and the origin are collinear. In this case, the geodesic becomes a straight line in the projection.
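A compact sketch of this construction (illustrative only, using elementary Euclidean geometry; the function is hypothetical and not part of the package) returns the Euclidean center and radius of the corresponding arc:

```
def geodesic_circle(za, zb):
    """Euclidean center and radius of the circle carrying the geodesic through
    za and zb; assumes za != 0 and that za, zb and the origin are not collinear."""
    zc = za / abs(za)**2   # circle inversion of za at the unit circle
    ax, ay = za.real, za.imag
    bx, by = zb.real, zb.imag
    cx, cy = zc.real, zc.imag
    # circumcenter of the three points za, zb, zc
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = complex(ux, uy)
    return center, abs(center - za)
```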
One particular advantage of the Poincare disk representation of two-dimensional hyperbolic space is that it represents a conformal model. Therefore, angles on \(\mathbb{D}_{2}\) are measured exactly as the corresponding Euclidean angle in \(\mathbb{C}\) and hyperbolic circles are mapped to Euclidean circles, although hyperbolic and Euclidean circle centers and radii do not coincide, as illustrated in Figure 5.
We define the length of a parametrized path \(\gamma:[a,b]\to\mathbb{D}_{2}\) as a suitable integral over the hyperbolic line element
\[\operatorname{length}_{\mathbb{D}_{2}}(\gamma) =\int_{a}^{b}\left\lvert\left\lvert\frac{\mathrm{d}\gamma}{ \mathrm{d}t}\right\rvert\right\rvert_{\mathbb{D}_{2}}\mathrm{d}t\equiv\int_{a} ^{b}\frac{2\ell}{1-|\gamma(t)|^{2}}\left\lvert\left\lvert\gamma^{\prime}(t) \right\rvert\right\rvert_{2}\mathrm{d}t \tag{19}\] \[=\int_{\gamma}\frac{2\ell}{1-|z|^{2}}|\mathrm{d}z| \tag{20}\]
where \(\left\lvert\left\lvert\cdot\right\rvert\right\rvert_{2}\) represents the Euclidean norm. This allows us to compute geodesic distances between two points \(z_{a},z_{b}\) on the Poincare disk by evaluating the above integral along the corresponding shortest path
\[d(z_{a},z_{b})=\inf\left\{\operatorname{length}_{\mathbb{D}_{2}}(\gamma): \gamma\text{ with endpoints }z_{a},z_{b}\right\}. \tag{21}\]
The hyperbolic distance between a point \(z\in\mathbb{D}_{2}\) and the origin yields
\[d(0,z)=2\ell\tanh^{-1}(|z|)=\ell\ln\left(\frac{1+|z|}{1-|z|}\right), \tag{22}\]
whereas the general form of the distance between two points \(w,z\in\mathbb{D}_{2}\) is given by
\[d(w,z)=2\ell\tanh^{-1}\left(\frac{|z-w|}{|1-z\bar{w}|}\right). \tag{23}\]
### Polygons
We define a hyperbolic _polygon_ as a region confined by a set of geodesic line segments, so-called _edges_. The endpoints of these edges are called _vertices_. If all edges have the same length, the polygon is said to be regular. Unlike in Euclidean geometry, where the sum of the inner angles at the vertices is exactly \((p-2)\pi\), polygons in hyperbolic (spherical) spaces have angle sums of less (more) than \((p-2)\pi\), respectively.
Figure 5: Geodesic line segments (black arcs) and their continuation towards infinity (dashed lines). The boundary of the gray disk represents the complex unit circle. The point \(m\) denotes the center of a hyperbolic circle and has equal distance to all points on its boundary (red). Corresponding shortest paths are marked by dotted lines.

A _tiling_ or tessellation is a set of regular polygons which are isometric and cover the entire manifold without overlaps or voids. In the Euclidean case there exist exactly three such regular tilings, namely the well-known square, triangular and honeycomb lattices [93]. The corresponding characteristic lattice spacing \(h\) can be scaled freely. We encounter a substantially different behavior in a hyperbolic space, where the curvature radius introduces an additional length scale. As a result, the sum of inner angles depends on the size of a polygon. In Figure 6 we demonstrate the interplay between polygon areas and their vertex angles. From the picture, it becomes clear that as the triangle size decreases, the angle sum approaches \(\pi\). This can be intuitively understood as the space becoming more and more flat locally. Additionally, the angles meeting at any point of the manifold must sum up to \(2\pi\). These constraints result in the fact that the fundamental (and any other) polygon must possess specific dimensions to enable the tiling to cover the entire disk. Specifically, in order to accomplish a full covering, the interior angles at the vertices need to be exactly \(2\pi/q\), as shown, for instance, in the rightmost panel of Figure 6. Here, the angles are given by \(\alpha=\beta=\gamma=2\pi/7\).
Stated differently, the shape of cells in a regular tiling depends on the Schlafli parameters \(p\) and \(q\). In Figure 7 we show a more detailed illustration of all fundamental angles in a regular tessellation. In this example of a (6,4) lattice, the size of the hexagonal cells is determined by the restriction that the angle at the vertices is exactly \(2\beta=2\gamma=2\pi/q=90^{\circ}\). A larger size (and consequently smaller inner vertex angles) would result in gaps between adjacent hexagon cells along their edges. A smaller size (and consequently larger angle) would result in overlaps.
The fact that the choice of \(p\) and \(q\) fixes the polygon size is a crucial property to keep in mind when working in hyperbolic geometry. It means that the characteristic length of the system can not be tuned. We list the most important geometric lengths in Table 2 for a selection of \(p\) and \(q\) values. Functions that help to compute these quantities are provided in the hypertiling.util module. Their exact definition is discussed in the following:
The circumradius \(r_{0}\) of a polygon, which is the distance from its center to any of its vertices, is given by
\[r_{0}=\sqrt{\frac{\cos\left(\frac{\pi}{p}+\frac{\pi}{q}\right)}{\cos\left( \frac{\pi}{p}-\frac{\pi}{q}\right)}}. \tag{24}\]
Figure 6: Hyperbolic triangles of different size. The left panel depicts a so-called _ideal triangle_, where all three vertices are located at infinity (ideal points) and the sum of interior angles is zero. The middle panel shows a finite triangle and the right panel the fundamental cell of a regular (3,7) tiling, with equal edge lengths and an angle sum of exactly \(2\pi p/q\).

Furthermore, the radius \(r_{m}\) of the in-circle, i. e. the largest circle centered at the origin and fully inside the polygon, is implicitly given by the expression
\[\cos\left(\frac{\pi}{p}\right)=\frac{\tanh\left(2\,\tanh^{-1}\,r_{m}\right)}{\tanh \left(2\,\tanh^{-1}\,r_{0}\right)}. \tag{25}\]
Recall that \(r_{0}\) and \(r_{m}\) are distances in the complex plane. The true geodesic distance can easily be computed via Equation (22) as \(d(0,r_{0})\) and \(d(0,r_{m})\), respectively.
For computations involving triangles, hyperbolic geometry provides a particularly useful rule, the so-called law of sines.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \(\{p,q\}\) & \(h^{(q,p)}\) & \(h^{(p,q)}\) & \(h_{r}\) & \(r_{0}\) \\ \hline \(\{3,7\}\) & 0.566256 & 1.090550 & 0.620672 & 0.300743 \\ \(\{4,5\}\) & 1.061275 & 1.253739 & 0.842481 & 0.397975 \\ \(\{5,4\}\) & 1.253739 & 1.061275 & 0.842481 & 0.397975 \\ \(\{6,4\}\) & 1.762747 & 1.316958 & 1.146216 & 0.517638 \\ \(\{7,3\}\) & 1.090550 & 0.566256 & 0.620673 & 0.300743 \\ \(\{8,4\}\) & 2.448452 & 1.528571 & 1.528570 & 0.643594 \\ \(\{12,10\}\) & 3.951080 & 3.612418 & 3.132385 & 0.916418 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Characteristic geometric lengths for a selection of different regular tilings, including the lattice spacing \(h^{(p,q)}\) (i. e. the edge length), the lattice spacing of the dual lattice \(h^{(q,p)}\) and the cell radius \(h_{r}\) (i. e. the geodesic distance between the cell midpoint and its vertices), all three in units of \(\ell\). The last column displays the radius of the fundamental cell in the Poincare disk, given by \(r_{0}\in(0,1)\).
Figure 7: Example of the fundamental polygon (white), its subdivision into isometric triangles and associated angles in a (6,4) tiling. Angles are given by \(\alpha=2\pi/p\) and \(\beta=\gamma=\pi/q\). Moreover \(h=\overline{BC}=\overline{DC}/2\) represents the lattice spacing, i. e. the geodesic distance between any two adjacent vertices in the tiling.
Given a generic hyperbolic triangle, where \(a\), \(b\) and \(c\) represent the edge lengths opposite to the respective vertices \(A\), \(B\), \(C\), one finds
\[\frac{\sin\alpha}{\sinh(a)}=\frac{\sin\beta}{\sinh(b)}=\frac{\sin\gamma}{\sinh(c)}. \tag{26}\]
It is always possible to divide the fundamental polygon into \(2p\) isometric triangles (compare \(\triangle{ACD}\) in Figure 7). In this case, where the inner vertex angle at \(D\) is exactly \(\pi/2\), the relation reduces to
\[\sin\frac{\alpha}{2}=\frac{\sinh\overline{DC}}{\sinh\overline{AC}} \tag{27}\]
which is equivalent to
\[\sin\frac{\pi}{p}=\frac{\sinh(h/2)}{\sinh d_{0}}. \tag{28}\]
The letter \(h\) denotes the hyperbolic distance between vertices, which is nothing but the effective lattice spacing and can be computed as
\[h=h^{(p,q)}=2\ell\cosh^{-1}\left(\frac{\cos\left(\frac{\pi}{p}\right)}{\sin \left(\frac{\pi}{q}\right)}\right). \tag{29}\]
Finally, the area enclosed by a general triangle in the hyperbolic domain is given by
\[A_{\triangle}=(\pi-\alpha-\beta-\gamma)\,\ell^{2} \tag{30}\]
and allows straightforward computation of the polygon area according to Figure 7.
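The relations of this section translate directly into code. The following standalone sketch evaluates them for the \((7,3)\) tiling; equivalent helpers are shipped in the hypertiling.util module, possibly under different names, so the functions below are merely illustrative:

```
import numpy as np

def lattice_spacing(p, q, ell=1.0):
    """Edge length h^(p,q) of a regular (p,q) tiling, Eq. (29)."""
    return 2 * ell * np.arccosh(np.cos(np.pi / p) / np.sin(np.pi / q))

def circumradius(p, q):
    """Poincare-disk radius r0 of the fundamental cell, Eq. (24)."""
    return np.sqrt(np.cos(np.pi / p + np.pi / q) / np.cos(np.pi / p - np.pi / q))

def cell_area(p, q, ell=1.0):
    """Area of the fundamental p-gon, assembled from 2p copies of the
    right triangle ACD of Figure 7 via Eq. (30)."""
    return 2 * p * (np.pi - np.pi / p - np.pi / q - np.pi / 2) * ell**2

p, q = 7, 3
print(lattice_spacing(p, q))               # edge length, approx. 0.566256
print(lattice_spacing(q, p))               # dual lattice spacing, approx. 1.090550
print(circumradius(p, q))                  # approx. 0.300743
print(2 * np.arctanh(circumradius(p, q)))  # geodesic cell radius h_r, approx. 0.620673
```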
## 5 Architecture
### Overview
The hypertiling library offers a number of different algorithms for constructing hyperbolic tilings and graphs. At the heart of the package, these are implemented as different _kernels_, each of which contains its own construction algorithm, memory design, auxiliary functions and specific manipulation features. A property that is shared among all construction algorithms is that they work incrementally, with new polygons being generated from existing ones. Hence before we start to dive deeper into kernel-specific properties, it is useful to define commonly used family relations, relative to a given polygon:
* _self_: The polygon under consideration itself
* _parent_: The polygon from which _self_ is created
* _sibling_: A polygon also created by _parent_, but which is not _self_
* _child_: A polygon created by _self_
* _nibling_: A polygon created by a sibling
In what follows, the internal mechanics of all kernels currently available in the package are discussed in greater detail. We remark that this discussion is quite technical. If the reader does not require this level of detail at present, they might choose to skip Sections 5.2 - 5.4 and refer to them on an as-needed basis later.
### Static Rotational Kernels
The family of static rotational kernels comprises two distinct implementations, namely the _static rotational graph_ (SRG) kernel, as well as _static rotational sector_ (SRS) kernel. From the perspective of features, we have already discussed the unique lattice manipulation capabilities and immediate graph generation of SRG, as well as the ability to refine lattices in SRS, which moreover uses an optimized sector construction, in Section 3.1. In this chapter, we shed light on the internal algorithmic design and the data structures used in these kernels. In order to better understand the program workflow, we also include a UML-like diagram in Figure 8, which provides a visual representation of the relationships between classes within the SR kernel family.
#### 5.2.1 General Idea
Both the SRG and SRS kernel are built upon a common, rather simple principle: New cells (internally stored as HyperPolygon objects in an array structure) are generated via rotations about the vertices of existing ones. Building a lattice starts with the construction of a fundamental cell, whose edge length is determined by geometric properties of the hyperbolic space (compare Section 4.3). In the case of a cell-centered tiling (which is the default option, and can be manually set by adding the center="cell" keyword in the HyperbolicTiling factory function) this fundamental polygon represents the first layer of the tessellation, whereas for center="vertex" the innermost layer consists of \(q\) polygons (compare Figure 1). Next, the second layer is constructed by iterating over each vertex in the first layer and computing all adjacent polygons using successive rotations by an angle of \(2\pi/q\) about that vertex. Technically, this operation can be split into a sequence of Mobius transformations, as detailed in Section 4.1. First, a translation is carried out, which moves the vertex to the origin; there, the polygon is rotated by \(2\pi/q\) and the inverse translation brings the vertex back to its original position.
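Written out in plain complex arithmetic, such a vertex rotation takes the following form (a standalone sketch, independent of the transformation classes used internally by the package):

```python
import numpy as np

def moebius_translate(z, a):
    """Disk automorphism mapping the point a to the origin."""
    return (z - a) / (1.0 - np.conj(a) * z)

def moebius_translate_inv(w, a):
    """Inverse map, bringing the origin back to a."""
    return (w + a) / (1.0 + np.conj(a) * w)

def rotate_about_vertex(z, v, q):
    """Rotate the point(s) z by 2*pi/q about the vertex v (all complex
    numbers inside the unit disk), as used to generate adjacent cells."""
    w = moebius_translate(z, v)           # move the vertex to the origin
    w = np.exp(2j * np.pi / q) * w        # rotate about the origin
    return moebius_translate_inv(w, v)    # move the vertex back
```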
It is evident that the construction scheme presented above produces _duplicates_. For instance, adjacent parents create a number of identical children. As we aim to produce a proper duplicate-free tiling, a mechanism that avoids these identical copies is required. This is achieved by introducing an auxiliary data structure we call DuplicateContainer. In this container, the coordinates of the center of every cell already present in the computed tiling are stored. Newly constructed cells are compared against existing ones and those already present in the container will be discarded.
#### 5.2.2 Rotational Duplicates
As the name already indicates, the static rotational sector (SRS) kernel takes advantage of the \(p\)-fold rotational symmetry of a hyperbolic \((p,q)\) tiling. Only one sector of the lattice is explicitly constructed. In a second step, this fundamental sector is replicated \(p\) times, accompanied by a suitable shift of attributes (such as vertex coordinates) to obtain the full lattice. As only one sector of the tiling is constructed, newly generated polygons whose center coordinates end up outside the angle interval \(0\leq\phi<2\pi/p\) are immediately discarded5.
Footnote 5: For vertex-centered tilings \(2\pi/p\) is to be replaced by \(2\pi/q\) here and in the following.
However, due to the overall high degree of symmetry in the tiling, it can occur that a new cell is positioned exactly on a symmetry axis, with its center numerically close to one of the sector boundaries. As a consequence, so-called _rotational duplicates_ might be created. This issue can be illustrated using a specific example, as in Figure 9, where the boundaries between sectors are visualized by the dotted black lines. In the picture, the central cell constitutes layer 1 and the cell with index 1 is located in layer 2. Moving outwards, we recognize that both polygons 2 and 14 are positioned directly at opposite sector boundaries and that they are identical in terms of a \(p\)-fold discrete rotational lattice symmetry. Hence only one of them belongs in the fundamental sector. We always decide to adopt the one with the smaller angle in the complex plane, hence, in this case, polygon 2. In practice, however, due to the closeness of both polygons to their associated sector boundary, combined with the finite numerical precision, four cases can occur:
1. Both polygons end up in the sector
2. Neither polygon ends up in the sector
3. Only the one with the _smaller_ angle ends up in the sector
4. Only the one with the _greater_ angle ends up in the sector
In the first case, we end up with one _rotational duplicate_. Note that, in this example, polygons with indices 2 and 14 are not duplicates per se, i. e. considering only the sector they would not be recognized as duplicates due to their different center coordinates. However, upon the angular replication step, polygon 2 will be rotated by \(2\pi/p\) around the origin and then exactly coincide with polygon 14.
It is obvious that scenarios 1 and 2 must be avoided, and we select scenario 3 as the desired one. As a result of what is discussed above, a mechanism that reliably filters out potential rotational duplicates is required. To this end, we introduce a second instance of the DuplicateContainer data structure. Similar to the global duplicate management container, this second container stores the cells' center positions during the construction, however only those located within a tiny slice around the lower sector boundary, i. e. with angles \(|\phi|<\epsilon\), where \(\epsilon\ll 1^{\circ}\). Including these cells automatically ensures that options 2 and 4 cannot be realized, since the polygon with the smaller angle is always inside this "soft boundary". After the entire sector is generated and before the angular replication of the lattice is carried out, the auxiliary container can be used to implement a filtering step. We loop over all cells in the soft boundary around the _upper_ sector border, i. e. those with an angle of \(2\pi/p-\epsilon<\phi<2\pi/p+\epsilon\)
Figure 8: Class and program flow diagram for the SR kernel section of the package.
and perform a clockwise rotation by \(2\pi/p\). In Figure 9 this soft boundary region is indicated by the colored lines. If polygon 14 had accidentally been included into the fundamental sector, after this rotation its center position would be identical to that of polygon 2, which is already inside the large duplicate container. As a consequence, polygon 14 is deleted from the tiling, since it will be generated as a copy of polygon 2 at the replication stage of the algorithm.
#### 5.2.3 Implementation Details
As discussed above, one of the key ingredients of any kernel in the SR family is the reliable detection of duplicate cells in the lattice. For the implementation of this DuplicateContainer, one might naively consider Python's set data container, since, as a hashed data type, it allows membership tests in constant time on average. Nonetheless, due to the representation of cell center coordinates as floating-point numbers, it becomes necessary to round them to a specific number of decimal places. This rounding process is necessary to enable the hashing of coordinates but effectively leads to the creation of bins in the set. Now, for very large tilings, where cells will be located increasingly close to the unit circle, it might occur that two separate (i. e. non-duplicate) polygons are not readily distinguishable and could be mistakenly disregarded. Furthermore, and even more severely, the binning mechanism always has a chance to accidentally miss duplicates, if both coordinates are rounded to adjacent bins. Due to the lattice being constructed in an iterative fashion, minor computational inaccuracies might accumulate when proceeding outwards. This way, a more precise rounding could paradoxically lead to an increased occurrence of errors, since more bin boundaries are present.
In order to avoid binning issues altogether, we introduce a robust mechanism for managing duplicates. This is accomplished by a specialized data structure, the DuplicateContainer. When available on the system, we utilize the external sortedcontainers library, which
Figure 9: A hyperbolic lattice, divided into the fundamental symmetry sectors, separated by dotted lines and labeled by Roman numerals. Colored straight lines indicate the _soft_ sector boundary of \(\pm\epsilon\), which is used to filter out rotational duplicate cells as described in the text.
enables searching within the data structure in logarithmic time \(\mathcal{O}(\log(N))\), where \(N\) denotes the number of cells already constructed. If it is not available, then our own, slower, fallback implementation is used. Since the centers of the polygons are complex numbers, we first need to establish an ordering relation in order to utilize a binary tree-like data structure. In practice, we employ the complex _angles_ of the center coordinates for this purpose, hence allowing for efficient access into a sorted container type. As a result, binning can be avoided altogether, and we are able to efficiently decide whether a polygon has already been constructed, with an accuracy up to nearly the relative machine precision. As a double check, during the lattice construction, the typical Euclidean distance of adjacent cells in the complex plane is monitored and a warning is issued in case this distance comes close to machine precision. This sets a theoretical limit on the available size of lattices in terms of the radial distance from the origin.
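The idea can be illustrated by a toy version of such a container (the actual DuplicateContainer in the package is more elaborate, e.g. regarding the angular wrap-around at \(\pm\pi\) and the fallback used when sortedcontainers is not installed):

```python
import numpy as np
from sortedcontainers import SortedList

class SimpleDuplicateContainer:
    """Toy duplicate check: cell centers sorted by their complex angle."""

    def __init__(self, eps=1e-12):
        self.eps = eps
        self.data = SortedList()           # entries: (angle, Re z, Im z)

    def contains(self, z):
        phi = np.angle(z)
        # only entries inside a tiny angular window can possibly be duplicates
        for _, re, im in self.data.irange((phi - self.eps,), (phi + self.eps,)):
            if abs(complex(re, im) - z) < self.eps:
                return True
        return False

    def add(self, z):
        """Insert center z; returns False if it is already present."""
        if self.contains(z):
            return False
        self.data.add((np.angle(z), z.real, z.imag))
        return True
```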
Additionally to this "upper" error limit, we also need to account for the accumulation of errors due to repeated calculation of layers. In order to keep track of this type of uncertainty we take a sample of polygons from every newly constructed layer and compute the distances to their corresponding neighbors. In particular, we are interested in the _minimal geodesic_ distance within the sample, since this quantity can be interpreted as an approximate bound of the accumulated error, given the sample is large enough. We compare this minimal distance with the fundamental geodesic distance of the tiling (i. e. the lattice spacing, compare Section 4.3). In a scenario of error-free arithmetic, both values would be identical. However, due to the accumulation of gradually applied Mobius transformations, this discrepancy does not vanish but usually increases towards the outer layers. Once it reaches the order of magnitude the coordinates are rounded to (our binning size), the construction becomes unreliable and the user is duly warned.
#### 5.2.4 Neighbors
When it comes to adjacency (or neighbor relations) between polygons, SRG and SRS follow fundamentally different approaches. The static rotational graph (SRG) kernel is capable of constructing the local neighborhood of cells already during the generation of the tiling. This is accomplished by a suitable generalization of the DuplicateContainer class (compare Figure 8) which can hold tuples consisting of center coordinates and corresponding cell indices. This allows us not only to detect whether a candidate polygon is already in the lattice (compare discussion above) but also to return its index in case it is already present. Since, according to the SR construction principle, the algorithm attempts to create _all_ adjacent cells of the polygon under consideration (_self_), we can recover mutual neighbor relations in this way: either the candidate polygon already exists, in which case its associated index is added to the neighbor list of _self_, or the candidate is, in fact, a valid new polygon, in which case it is assigned a unique new index and _this_ index is added to the list of neighbors. Clearly, in both cases, the index of _self_ is added to the other polygon's neighbor list, to establish mutual adjacency.
The generalized duplicate container furthermore opens the opportunity to dynamically remove cells, as has been demonstrated in Section 3.5. In principle, this would be possible with the standard duplicate container as well, but the requirement to identify cells based on floating point comparisons of their center coordinates would make this approach prone to rounding errors. Instead, having a cell index available in the container allows to remove cells by their indices, and use the center coordinate as a helper structure to quickly pinpoint the corresponding position in the array. In summary, removing a polygon _self_ from the tiling requires the following steps:
1. Remove corresponding HyperPolygon from list of polygons
2. Remove neighbor list of _self_ from neighbor array
3. Remove occurrences of _self_'s index in the neighbor lists of other polygons
4. Remove corresponding entry from duplicate container
5. Remove index of _self_ from list of exposed cells
In order to perform these operations efficiently, it is necessary that the list of polygons and the list of neighbors in the SRG kernel are implemented as Python dictionaries. Only in this way can entries be added and removed by index in constant time. Overall, step 4 presents the bottleneck of the removal operation since, as detailed above, locating the correct entry in the duplicate container, in general, requires logarithmic time (binary search in a sorted container).
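In terms of such dictionaries, the five steps can be sketched as follows (container names are hypothetical; the actual SRG bookkeeping differs in detail):

```python
def remove_cell(idx, polygons, neighbors, centers, dupl_container, exposed):
    """Sketch of removing cell `idx` following steps 1-5 above.

    polygons, neighbors, centers : dicts keyed by cell index
    dupl_container : object supporting remove(center, index) in O(log N)
    exposed        : set of indices of exposed (boundary) cells
    """
    del polygons[idx]                             # step 1: drop the polygon itself
    for j in neighbors.pop(idx):                  # step 2: drop its neighbor list ...
        neighbors[j].remove(idx)                  # step 3: ... and unlink it elsewhere
    dupl_container.remove(centers.pop(idx), idx)  # step 4: duplicate container entry
    exposed.discard(idx)                          # step 5: no longer an exposed cell
```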
Turning to the SRS kernel, where neighborhood relations are not computed upon construction, we provide several different methods to obtain them in a separate step. All methods can be accessed by using the standard wrapper function get_nbrs_list via the parameter method, which can take on the following values (a short usage example is given after the list):
* get_nbrs_radius_brute_force or method="RBF" If the distance between any pair of polygons falls below a certain threshold, they are declared neighbors. The radius can be passed as a keyword argument. In case no radius is provided, the lattice spacing is used. Note that this algorithm is very reliable but too slow for large tilings due to the brute-force all-to-all comparison.
* get_nbrs_radius_optimized or method="RO" Similar to "RBF" but uses numpy to gain a significant performance improvement.
* get_nbrs_radius_optimized_slice or method="ROS" Variant of the "RO" scheme, which exploits the discrete rotational symmetry and applies the radius search only to a \(p\)-fold sector of the tiling. Neighbors outside the fundamental sector are obtained via suitable index shifts.
* get_nbrs_edge_map_optimized or method="EMO" Locating neighbors by identification of shared edges. Apart from the initialization of the edges, this is a coordinate-free, combinatorial algorithm.
* get_nbrs_edge_map_brute_force or method="EMBF" Brute-force variant of "EMO", available for testing and debugging purposes.
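A short usage sketch is given below; the positional arguments \((p,q,n)\) and the method keyword follow the text above, whereas the kernel selection keyword and the top-level import are assumptions based on the factory-function conventions of Section 3.1:

```python
from hypertiling import HyperbolicTiling

# (7,3) tiling with 5 layers, built with the SRS kernel (keyword assumed)
T = HyperbolicTiling(7, 3, 5, kernel="SRS")

# neighbor lists for all cells, using the sector-optimized radius search
nbrs = T.get_nbrs_list(method="ROS")
print(nbrs[0])   # indices of the cells adjacent to the fundamental cell
```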
### Dunham's Algorithm
An important milestone in the development of algorithms for the construction of hyperbolic tilings is the combinatorial method presented by D. Dunham and coworkers in a series of publications in the early 1980s [79, 94]. A hierarchical spatial tree structure is employed, where recursively new trees of cells are created at the vertices of existing ones, resembling a depth-first search algorithm. Similarly to our static rotational kernels (Section 5.2), new polygons are created locally by discrete rotations of existing ones around their vertices. The first polygon in each tree creates one child less, as can be seen in Figure 10. The algorithm uses hyperboloid (also called Weierstrass) coordinates. This allows to represent transformations as \(3\times 3\) matrices which can be incremented as the individual trees are processed. Every time a new cell is added to the tiling, this transformation translates and rotates a copy of the fundamental cell accordingly. Additionally, in each iteration step the parent polygon adjusts the transformation which created itself and passes it to the next polygon.
In order to generate tilings without duplicates, it is important that vertices are properly _closed_, meaning that no further copies are generated by rotations around a vertex once all \(q\) adjacent cells at this vertex already have been constructed. The first _child_ polygon of a _parent_
polygon closes the leftmost vertex of that _parent_ polygon. Hence, this polygon will create one _child_ less and we call it a _closing polygon_. This is accomplished using a variable called _exposure_. It denotes the number of polygons left to be created by a specific _parent_ polygon. In the regular case, it is given by \(N_{\text{exp}}=p-2\), where one potential child is already accounted for by the _parent_ polygon and another by the adjacent closing polygon, which will be created by a _sibling_. For closing polygons, the exposure is reduced to \(N_{\text{exp}}=p-3\), due to the connection to the formerly mentioned polygon.
In hypertiling we implement two variants of the improved version of Dunham's combinatorial algorithm, as outlined in Ref. [92] and a series of unpublished notes around the year 2007. The DUN07 kernel represents an almost literal Python implementation, which suffers from a number of obvious performance bottlenecks. We address these in an improved version, named DUN07X, where we are able to achieve a significant performance speed-up and furthermore make the kernel ready for numba just-in-time compilation. Due to the recursive structure and hence independent trees, this kernel also offers a great starting point for parallelization, which we leave to future work.
A particular strength of Dunham's algorithm is that the number of polygons per layer as well as the correct parent-child relations are deduced exactly. However, it does not provide a trivial way of determining the neighbor relations during construction since the individual trees grow independently. A coordinate representation of cell vertices is still required to store and work with the tiling, introducing floating-point numbers and limiting the accuracy in practice.
The shortcomings with respect to the determination of neighborhood relations, the performance loss due to the construction of an entire tiling, as well as the recursive structure, which in general provides inferior performance compared to iterative sequential approaches, inspire us to develop the _generative reflection_ (GR) kernel. It shares several strengths of Dunham's approach, such as an exact combinatorial scheme, which avoids duplicates and automatically determines when a layer is completed, but at the same time provides several substantial improvements. This includes a non-recursive algorithmic design, which exploits the discrete rotational symmetry of a regular tiling and allows to determine adjacency relations natively.
Figure 10: Sequence in which cells are created in the Dunham algorithm. First layer polygons {1,11,21,31,41} can be seen as initial nodes of independent branches, resembling the concept of a depth-first search. To enhance clarity, numerical labels are displayed only in the first branch.
### Generative Reflection Kernel
The _generative reflection_ kernel (GR), in contrast to the other kernels available, entirely focuses on edges instead of vertices. To be specific, polygons are reflected on their edges rather than rotated around their vertices. The reflection process can be expressed as a sequence of Mobius transformations (compare Section 4.1).
Moreover, compared to the SRS kernel, which only creates one symmetry sector of the tiling and replicates this sector, the GR algorithm goes one step further and stores only the fundamental sector in the first place. All remaining cells (outside of this sector) can readily be constructed on-demand using Python generators. This way both the construction time, as well as the memory footprint can be reduced dramatically, even compared to Dunham's algorithm, as will be compared in Section 5.5.
The GR kernel provides a number of methods that hide the generative nature from the user, satisfying the usual interfaces, as previously touched upon in Section 3.1. For instance, if a polygon is accessed, the kernel will determine whether it is stored inside the fundamental sector (and therefore physically stored in the memory) or whether it needs to be generated first. Either way, an array containing center and vertex coordinates is returned.
#### 5.4.1 General Concepts
While kernels such as SRS and SRG check for duplicate polygons globally, the GR kernel is a local combinatorial algorithm similar to DUN07X. In order to create a new cell, only the immediate neighborhood structure, but no knowledge of the remaining lattice is required. The creation of duplicates is avoided from the beginning. In the following, we sketch the algorithmic approach of how this is achieved:
A polygon is represented as an array \(\nu\) of vertices \(v_{i}\), stored in clock- or counterclockwise order, hence defining a new property: the _orientation_. In Figure 11, the orientation and start vertex (i. e. the first vertex in the array) are indicated by arrows. Given a specific _parent_ polygon (black square), for the _children_ (red, green, and yellow), which are created through edge reflections, the orientation changes, and the position of the starting points, relative to the parent, shifts. However, we require the relative orientation and starting point to match that of the parent polygon and hence adjust both properties. In Figure 11, on the left-hand side, the orientation is not corrected. Considering an arbitrary child (green, yellow, and red), the orientation (circular arrow) is different in relation to the parent. Moreover, considering all the children, the starting point, i. e. the first vertex in the array (indicated by the straight arrow) is different with respect to the parent for each but the first child. For the first child (green), the starting point and the following vertex are shared with the parent. For the next child (yellow) however, the starting point, as well as the following vertex, are not shared with the parent. The right-hand side of the figure shows the adjusted polygons. Each shares the
Figure 11: A parent polygon (black) and its children, constructed by edge reflections. Left: Orientations and starting points of the children are different from the parent. Right: Orientations and starting points of the children have been adjusted.
starting point and their last vertex with their parent. Moreover, their orientation is corrected to be clockwise. In summary, the orientation in each child is flipped and shifted by \(i\), where \(i\) denotes the edge, counted from the starting point, across which the child has been reflected.
Hence the orientation is now similar for every polygon and those edges which can create further children can be deduced. As shown in Figure 11, it is always the last edge (relative to the orientation pointer) which is the one that is shared with the parent. Moreover, polygons do not share an edge with their siblings, unless \(q=3\), where this is the case for the first and the second-to-last edge. Only those edges which are not shared with either the parent or a sibling can be used to generate another _child_ in the next layer. However, for certain polygons, which we will refer to as _filler polygons_, this picture is not sufficient. Filler polygons are the equivalent of closing polygons in Dunham's kernel (compare Section 5.3). However, for GR a distinction between two types of filler polygons needs to be made. In comparison to regular (i. e. non-filler) polygons, one additional edge is blocked by either a parent or a nibling. A further distinction is made between filler polygons of first and second kind, illustrated in the left and right panels of Figure 12, respectively. Filler polygons of the first kind are created through a child-nibling artifact. This describes the situation when a child of a polygon at a certain edge has already been created as a nibling. In other words, a _child_ of a _sibling_ can also be a _child_ of _self_. Filler polygons of the second kind are not created through this effect, but instead, a _child_ and a _nibling_ are neighbors. As filler polygons of the first kind can be created by two parents, in order to prevent duplicates, _self_ has to verify that the last created _nibling_ is not a neighbor of _self_. In this case, _self_ will create one child less, i. e. will not create the child of the first edge. For filler polygons of the second kind, the _child_ created first is compared to the last polygon created by the parent immediately before the current parent. If both share an edge, they are considered filler polygons of second kind, and the corresponding edges are blocked.
Depending on whether the lattice parameter \(q\) is odd or even, regular tilings feature either both or only one kind of filler polygon. This can be understood as follows. Naturally, the first set of vertices to be closed are those of the fundamental polygon. Specifically, for each vertex, \(q-1\) additional polygons are required. As the polygons are created pairwise on the two open edges connected to a vertex, for \(q\) being odd, a final _pair_ of polygons will close this vertex. However, the vertex in between them, i. e. the one on the edge they share, starts with two polygons instead of one. Therefore, it needs to be closed like the even-\(q\) case for the fundamental polygon. In this case, the final polygon can be created by both open edges. The polygon thus has two parents and is therefore a filler polygon of first kind. Its vertices behave like the vertices of the fundamental polygon. Hence, for \(q\) even, the tiling consists of regular and filler polygons of the first kind only. However, for \(q\) odd, the tiling consists of regular polygons and alternating types of filler polygons. For \(q=3\), a different scenario arises. In
Figure 12: Left: Filler polygons of first kind (green) and non-filler polygons (orange) in the second layer of a \((7,4)\) tiling. Right: Filler polygons of second kind (green) in a \((5,5)\) tiling.
this case, each polygon is connected to two of its siblings and thus every polygon is a filler of second kind. However, while for \(q\neq 3\) filler polygons of second kind are only connected to one single other filler polygon of second kind, for \(q=3\) the filler polygons of second kind are connected to two other filler polygons of second kind. Together these polygons act as two separate pairs whereas each pair creates a filler of first kind in the next layer. These first kind filler polygons are also connected to their siblings and thus can be interpreted as filler polygons of both types, i.e. first and second kind, at the same time.
As cells are created in strictly deterministic order, i. e. layer by layer and in counter-clockwise direction, all filler polygons involve the last created _nibling_ and the first _child_ of the next polygon. If the first created _child_ is equal to the last created _nibling_ the filler polygon is of first kind. If the last created _nibling_ is a neighbor to the first _child_, both are filler polygons of second kind. Therefore, the vertices of each first _child_ of a polygon are compared to the edges of the immediately preceding one. If two consecutive vertices match, the polygons share an edge and for both polygons, the corresponding edge is blocked for a _child_. Whether an edge is blocked or not is an important quantity in this algorithm and is encoded as a single bit.
#### 5.4.2 Algorithmic Details
In Algorithm 1, we present the core construction principle of the GR kernel as a pseudo code. Variables are highlighted using italic letters. In order to improve the readability of the code, certain passages have been simplified by means of helper functions. In the actual implementation, these sections are directly incorporated into the main function in order to avoid extra function calls and enhance runtime performance. The designated helper functions are:
* BLOCK_PARENTS_EDGES: For the array _edge_array_, which encodes whether edges of polygons are blocked or free, the entry for every polygon is an unsigned integer considered as a bitset, where each bit represents a certain edge of the respective polygon. For the initial representation, every bit except the bit for the parent's edge is set to 1. The corresponding integer is given as \(\eta=2^{p}-1\), where \(p\) is the number of polygon edges, as usual.
* CALCULATE_TARGET_LAYER_SIZE: Resort to analytical function in order to determine target layer size. In the actual implementation, this is done in advance of the function call in order to allocate a sufficient amount of memory.
* CHECK_FILLER_1ST_KIND: Checks whether the first input argument is a filler polygon of first kind by comparing vertices of the next parent and the most recently created polygon. This is done using a \(p\times p\) matrix \(\eta\), where the components are defined as \(\eta_{ij}=(v_{i}-\hat{v}_{j})\), with \(v\) and \(\hat{v}\) representing the respective polygon vertices. The row index indicates the vertex \(v_{i}\) and the column index the vertex \(\hat{v}_{j}\) for a certain component. Now, \(\eta_{ij}\) is compared against zero, with a threshold in order to account for numerical uncertainties. This way, matching vertices are identified. If two _consecutive_ vertices match, a filler polygon of first kind is found and the indices in the matrix \(\eta\) can be used to block the corresponding edge.
* CHECK_FILLER_2ND_KIND: Similar to CHECK_FILLER_1ST_KIND. Instead of the next parent, its first child is used.
* REFLECT: Computes vertex coordinates for a new polygon through reflection across a specific edge of the parent polygon. In practice, this is done by a series of Mobius transformations. To be more precise, the polygon is shifted in a manner that positions the initial vertex of the edge to be reflected upon at the center of the tiling, i. e. at \(0+0i\). Subsequently, a rotation ensures that the second vertex of the edge under consideration is also located on the real axis. The actual reflection is done by a complex conjugation, which inverts the imaginary values of the vertices. Finally, the polygon is rotated again and moved back to its original position (a standalone sketch of this sequence is shown after this list).
* CORRECT_ORIENTATION: Ensures the correct sequence of vertices for the polygon, as illustrated in Figure 11.
* RADIUS_OF: Calculates the distance of the polygon's center from the origin as \(r=\sqrt{z\overline{z}}\).
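The reflection across an edge can be written out explicitly in plain complex arithmetic. The following standalone sketch mirrors the sequence of transformations described for the REFLECT helper (it is not the package-internal implementation):

```python
import numpy as np

def reflect_across_edge(vertices, e1, e2):
    """Reflect a polygon (array of complex vertices) across the geodesic
    through the edge (e1, e2)."""
    def translate(z, a):                  # Moebius map sending a -> 0
        return (z - a) / (1.0 - np.conj(a) * z)

    def translate_inv(z, a):              # inverse map, 0 -> a
        return (z + a) / (1.0 + np.conj(a) * z)

    z = translate(vertices, e1)           # move the first edge vertex to the origin
    phi = np.angle(translate(e2, e1))     # angle of the second edge vertex
    z = z * np.exp(-1j * phi)             # align the edge with the real axis
    z = np.conj(z)                        # the actual reflection
    z = z * np.exp(1j * phi)              # undo the rotation
    return translate_inv(z, e1)           # and move everything back
```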
In the pseudo-code, input arguments are the lattice parameters \(p\) and \(q\), as well as the number of reflective layers to be constructed, \(n_{\text{layers}}\). Lastly, the _polygons_ variable represents an array that will serve as the storage for the polygons. It already contains the fundamental polygon. In the algorithm's progression, the actual offspring polygon is created in line 24 and inserted into the list of cells in line 27. The variables _poly_counter_ and _index_shift_ denote a global polygon number counter and a counter which accounts for the appropriate adjustment of indices when a new layer is initialized. They are initially set to 1 and 0, respectively.
#### 5.4.3 Layer Definition
With the novel generation scheme of the GR kernel comes an adjusted definition of a _layer_, which is different compared to the traditional definition used in the other kernels of this package - the so-called _reflective layer_. It represents the natural layer definition for this kernel and is defined as the accumulated total number of reflections that need to be executed to construct a specific polygon. To be specific, let a parent be located in layer \(n\), then its child, with whom the parent shares an edge, will be part of layer \(n+1\). This is different compared to the traditional layer definition where children are constructed by rotations about vertices and therefore might only share a vertex with the parent. We show examples for two different combinations of \((p,q)\) in Figure 15. In panel (a), cells are colored according to the traditional layer definition and in panel (b) according to their reflective layer attribute.
In general, it is important to remark that the parameter \(n\) in the input parameter set \((p,q,n)\) denotes the reflective rather than the traditional layer number. For given \(n\), a lattice generated by GR will therefore in general feature fewer cells than a lattice generated by other kernels. Due to the different definitions of layers (compare Section 5.4.3), also the overall shape of the tiling is generally different compared to other kernels. The layer definitions coincide for \(q=3\) as the vertices will be closed in every reflective layer, however for any \(q>3\), the behavior is different. In the traditional layer definition, boundary vertices are immediately closed once a new layer is constructed, i. e. all polygons which share this vertex are constructed and it becomes a bulk layer. In the GR, a boundary vertex in layer \(i\) will be closed only
\[\Delta i=\left\{\begin{array}{ll}\frac{q}{2}&\text{for $q$ even}\\ \frac{q-1}{2}&\text{for $q$ odd}\end{array}\right. \tag{31}\]
_reflective_ layers later. Therefore, the vertex remains at the lattice boundary until \(\Delta i\) further layers have been constructed. This holds for every vertex. Hence, as \(q\) is increased, the tiling becomes increasingly sparse and its surface more and more fractal. Two examples are depicted in Figure 13. The vertices appear as white dots due to the boundary lines of the polygons. In the left panel, we show a (3,20) lattice with ten layers. The vertices for the first layer are not yet closed. In contrast, the vertices of the first layer in the right panel are closed. The vertices of the second layer, however, are still open, as one further layer is required to close them. Consequently, by design, since in the GR a lattice is built until the target number of reflective layers is reached, the completeness of the last \((q-3)/2\) traditional layers can not be guaranteed.
For compatibility reasons we provide functions that return the layer of a polygon in the traditional definition, map_layers and get_layer. The former method performs the actual
Figure 13: Left: (3, 20) grid with ten layers. A vertex shared by two regular polygons needs \(\Delta i=q/2=10\) subsequent layers to be closed. Right: (3, 10) lattice with seven layers. Here, \(\Delta i=5\), hence only vertices of the first layer are closed.
computation, whereas the latter one represents a wrapper for convenient access. Note that upon the first call to get_layer, the mapping function is invoked internally, unless manually executed beforehand. Once mapped, subsequent calls are then simple memory access operations and hence fast.
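A brief usage sketch (cell indexing and the exact call signatures are assumptions; the method names are those given above):

```python
from hypertiling import HyperbolicTiling

T = HyperbolicTiling(7, 3, 5, kernel="GR")   # kernel keyword assumed, n = reflective layers

T.map_layers()           # optional: precompute traditional layer numbers once
print(T.get_layer(0))    # traditional layer of cell 0 (assumed to be the fundamental cell)
```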
It is clear that among the cells in a layer, the radial distances to the origin vary and, moreover, that the corresponding distance distribution of two adjacent layers in general overlaps. This is illustrated in Figure 14 for a (7,3) tiling (left panel), as well as for a (7,20) tiling (right panel). For \(q=3\), where the traditional and reflective layer definitions coincide, all open vertices of a layer are immediately closed by the subsequent one, which makes the overlap minimal. Instead, the overlap is considerably large for \(q=20\) since subsequent \(\Delta i=q/2=10\) layers are required to close a vertex. The filler polygons in a layer always represent the maximal radial extension into previous layers.
#### 5.4.4 Graph Kernels
One notable strength of the GR algorithm is its ability to deduce neighbor relations during the lattice creation process with only minor algorithmic adjustments. To this end, we provide specialized variants of the GR kernel, the _generative reflection graph_ (GRG) and the _generative reflection graph static_ (GRGS) kernel. For regular polygons and \(q\neq 3\), the adjacency relations are entirely given by parent-child relations, i. e. we need to keep track of which child polygon is created by which parent. If the polygon is a filler of first kind, the polygon is connected to the next _sibling_ of the _parent_. If the polygon is a filler of second kind, the polygon is connected to its next _nibling_. Due to the order of construction, these are either stored next to the _parent_ (first kind) or _self_ (second kind). For \(q=3\), the relations change such that the polygons in the array next to _self_ are also neighbors to _self_.
As mentioned above, both kernels are based on the GR construction principles. However, due to their nature as _graph kernels_, only the neighbor relations and the centers of polygons are returned instead of entire polygons, which allows the kernel to be particularly lightweight. However, note that during the construction process, the coordinates of the vertices are still required, resulting in a larger memory footprint during this step compared to the final graph. As another consequence, lattices constructed by GRG and GRGS are still limited by the available computational accuracy, as coordinate calculations involve floating point arithmetics.
Unlike GRG and GR, the GRGS kernel stores adjacency relations of the entire lattice explicitly. This can be useful in applications where a radial instead of sectorial ordering of the cell indices is more intuitive or preferred for other reasons, such as in simulations with radial boundary conditions.
Figure 14: Distributions of cell distances to the origin for a \((7,3)\) tiling (left panel) and a \((7,20)\) tiling (right panel).
#### 5.4.5 Neighbors
As mentioned above, the GRG and GRGS kernels construct adjacency relations automatically. For the original GR kernel, this is not the case, however, we provide a number of methods that allow to obtain the neighbors of a polygon in a separate second step. The methods can be accessed either directly or using the get_nbrs method. The latter wrapper function, which is shared by all kernels (compare Section 3.3), takes a keyword method which specifies the algorithm. In the following, we go through all methods available in the GR kernel in detail:
* get_nbrs_generative or method="GEN" Neighbors are determined in a brute-force manner, where the center of the cell is reflected across all edges, and resulting points are compared to the center coordinates of every other cell in the lattice. This is achieved by calling the helper routine find on these points, which, as detailed earlier, extracts the corresponding polygon index. As a consequence, similar to the find function itself, execution times for this method heavily depend on the structure and size of the lattice. However, at the same time, this method is therefore relatively robust and good for testing and debugging purposes.
* get_nbrs_radius or method="RAD" This method calculates the overall distance between the considered polygon and all other polygons in a brute-force manner. In case the distance matches the lattice spacing, the respective polygons are considered neighbors.
* get_nbrs_geometrical or method="GEO" In "GEO" we compute adjacency relations using the _relative_ positions of polygons inside their reflection layers, defined as \[\Gamma_{i}=\frac{i}{N_{j}},\] (32) where \(i\) is the index of the polygon and \(N_{j}\) is the number of polygons in its corresponding reflection layer \(j\). This measure can be interpreted as an _angular_ position in the layer. Let us consider an arbitrary child-parent pair, where the parent index is \(i\) and the child index \(k\). By design, we know that the child is created directly from its parent and therefore the relative positions in their respective layers must be similar. This allows us to provide a precise _guess_ for the index of neighbor cells, given by \[k\approx\Gamma_{i}\cdot n_{k},\] (33) where \(n_{k}\) denotes the number of polygons in the child's reflection layer.
Figure 15: Comparison of traditional (a) and reflective layers (b) for a (3,7) tiling. In both cases, \(n=5\), however the resulting lattice size (number of cells) is different. Only for \(q=3\), both definitions become identical (not shown).
It needs to be remarked that filler polygons create a minor additional shift. In practice, a small region around the guess index is scanned for children and as soon as one is found, the remaining ones can be determined straightforwardly. Only for \(q=3\), the _siblings_ of a polygon are its neighbors. They are always located at indices \(i-1\) and \(i+1\).
* get_nbrs_mapping or method="MAP" This method assumes that the neighbors of all cells in the fundamental sector have been mapped (i. e. explicitly stored) earlier and are hence immediately available. This can be achieved by invoking the function map_nbrs once. In case the region of interest is outside the fundamental sector, all relations are properly adjusted. In case the neighbors have not been mapped beforehand, get_nbrs_mapping will perform the mapping first, resulting in a significantly larger execution time for the initial call. The way map_nbrs works is that it computes the distance of the current polygon to all polygons in the previous layer in order to find the two closest points which are candidates for its parents. Analogously, the next layer is searched for the remaining neighbors. In the process, all candidates are compared against the lattice spacing in order to determine which of them are actual adjacent cells. Only in case of \(q=3\), _siblings_ are relevant, which can be readily identified as the immediately adjacent indices of _self_.
Additionally to these methods, which return neighbors of single cells, the GR kernel provides an implementation of get_nbrs_list, which returns the neighbors for the entire tiling. This function calls map_nbrs and generates a list format from the result. Be aware that compared to invoking map_nbrs, the memory requirement is larger by a factor of about \(p\), since get_nbrs_list is not restricted to the fundamental sector but explicitly returns all connections in the lattice.
#### 5.4.6 Integrity Check
One additional noteworthy feature of the GR kernel is the check_integrity routine, which, to some extent, allows monitoring the correctness and completeness of a grid. The test is carried out in two stages. At first, the tiling is checked for duplicates as well as for problems resulting from the available numerical resolution. To this end, a polygon is removed and the find method is employed. This method determines the index of the polygon a coordinate point is located in and returns False in case none is found. If _another_ polygon is found at that location, this either indicates the existence of a duplicate or a spatial displacement of adjacent polygons due to numerical accuracy. Once this first test is passed successfully, we proceed by testing for the correct number of neighbors using the get_nbrs_generative method, which has been explained earlier. This method is specifically suited for this purpose as it does not search for neighbors directly but creates the neighbors through reflections. Subsequently, the find method is used on these newly created polygons. If the number of found neighbors does not match \(p\), the tiling either has holes or the polygon under consideration is located in a boundary layer. In either case, the method will generate a warning message containing the index of the first polygon lacking an adequate number of neighbors, along with its corresponding layer information. A code example can be found in the package documentation.
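A minimal invocation might look as follows (assuming check_integrity is exposed as a method of the tiling object; the authoritative example is found in the package documentation):

```python
from hypertiling import HyperbolicTiling

T = HyperbolicTiling(5, 4, 6, kernel="GR")   # kernel keyword assumed
T.check_integrity()   # warns about duplicates, resolution issues or missing neighbors
```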
### Benchmarks
In this section, we provide detailed performance results for tiling construction and the generation of adjacency relations. These benchmarks should aid the selection of the appropriate kernel when computing resources are a constraint. All tests have been performed on a desktop workstation with an Intel Xeon W-1390p and 64 GB RAM. Besides these performance benchmarks, a feature-based comparison is presented in Section 5.6.
For a given \((p,q)\) lattice, we anticipate the kernels to generate polygons at a steady rate, with some additional algorithm-specific overhead which should be negligible for large tilings. This time per polygon is therefore a good indicator for the kernel performance. Results for different kernels are shown in the left panel of Figure 16. In the Figure, the heights of the individual bars represent the average time to construct a single polygon \(t_{\text{poly}}(p,q,n)\) for a specific tiling. This quantity is determined using linear regression of the time required to construct a full tiling, \(t(p,q,n)\) and the number of polygons \(N(p,q,n)\). Therefore, in the regression the function
\[f:N(p,q,n)\in\mathbb{N}\to t(p,q,n)\in\mathbb{R}. \tag{34}\]
is fitted. A similar approach is used for calculating the memory footprint per polygon, displayed in the right panel of Figure 16. The advantage of a linear regression over a simple arithmetic average is the compensation of constant shifts due to algorithm-specific overhead. Error bars are mostly smaller than 1% and not shown.
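Schematically, the regression reads as follows (the timing values below are placeholders, not measured data):

```python
import numpy as np

# measured construction times t[i] (seconds) for tilings with N[i] polygons
N = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
t = np.array([0.02, 0.06, 0.21, 0.61, 2.05])      # placeholder numbers

slope, offset = np.polyfit(N, t, deg=1)           # t(N) = slope * N + offset
print(f"time per polygon: {slope * 1e6:.2f} us, systematic overhead: {offset:.3f} s")
```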
In general, GR and GRG are the fastest algorithms in the package. Remarkably, DUN07X's runtime is of the same order of magnitude as those of GR and GRG, even though it creates an entire tiling instead of only one sector. It is more than two times faster than GRGS, which is a variant of GRG that constructs a full tiling. The main reason for its performance is that DUN07X dispenses with explicit coordinate comparisons. The SR kernels are, as expected, significantly slower, even when only one sector is created, such as for SRS. This stems from the internal duplicate management system of the SR kernels, which even with its optimized containers and efficient binary search, introduces a systematic performance overhead for large tilings. Finally, our analysis reveals that SRG offers the poorest performance, which is the expected trade-off for its dynamic manipulation capabilities.
In the analysis above we focus solely on the time required to generate the tiling. However, for many applications adjacency relations are essential and measurements encompassing the time both for tiling creation and for retrieving neighbor information become relevant. Considering graph kernels, GRG and GRGS are the fastest implementations in the package. As expected, GRGS is slower by roughly a factor of \(p\) compared to GRG. We also find that once neighbor relations are required, DUN07X turns out to be one of the slowest kernels for large lattices, as the lack of specialized neighbor detection methods leads us to resort to a brute-force radius search instead. As the majority of adjacency algorithms do not scale linearly with the number of polygons, there is not a simple time-per-polygon ratio as in Figure 16 and we present our benchmark results separately in Figure 17. Besides DUN07X, we evaluate SRS and GR with their default algorithms ROS (see Section 5.2.4) and get_nbrs_list (which calls MAP and expands the results from one sector to the whole tiling), respectively. Despite numerical optimizations, they all exhibit quadratic scaling for large tilings, owing to the all-to-all distance
Figure 16: Computing time and memory footprint comparison for different kernels and \((p,q)\) lattices. In the right panel, peak memory consumption is displayed for the construction process (solid bars) and neighbor search (transparent bars). Kernels that compute neighbors during lattice construction present only solid bars.
comparisons needed. The GR kernel displays a larger overhead, but benefits from its heavily optimized internal structure and is generally the fastest for large lattices.
Another important performance quantity is the required memory per polygon. It is best evaluated by monitoring the _peak_ memory consumption during the construction of tiling and neighbor structure, shown in the right panel of Figure 16 as solid and transparent bars, respectively. Again, linear regression is used to compensate for systematic offset. The results clearly show that GR and GRG heavily outperform every other kernel in this category, due to the fact that only one sector is stored and generators are used to construct remaining regions only on demand. As soon as neighbor relations are required, however, their memory footprint increases significantly, as an explicit, full list of neighbor indices is produced. The GR kernel nonetheless remains the most memory efficient option. Finally, SRS as well as SRG are the most expensive kernels in this category, stemming from the same memory requirement for the list output combined with algorithmic overhead due to the HyperPolygon class and the duplicate management mentioned earlier.
### Choosing a Kernel
For many use cases, any of the kernels available gets the job done, and the user does well to stick to the default options. For those that require specific features, however, selecting the most suitable kernel among the currently available options, each with its own distinct technical intricacies, may seem challenging. In this section, we give some guidance to help making this choice.
The first distinction to be made is between graph and tiling kernels. Graph kernels are more lightweight and contain mostly only the neighbor relations, whereas geometric information is reduced to a minimum. In contrast, tiling kernels store the entire tiling, including all vertex coordinates, which obviously requires additional memory capacity. The graph kernel category currently contains the GRG and GRGS kernels, both in the GR family. The key difference between these two is that GRG only creates one sector explicitly and thus requires less memory. On the other hand, GRGS generates a full grid, which leads to an increased generation time but offers certain advantages, such as a more natural, radial ordering of cell indices instead of sectorial ordering. This facilitates a simple integration of GRGS into simulations that require
Figure 17: Computing time required for detecting adjacent cells. Graph kernels generate neighboring relations automatically and are hence not included. Dashed lines indicate quadratic scaling.
radial boundary conditions, for instance.
The tiling kernel category also contains a member of the GR family, the GR kernel, which stands out in terms of performance and significantly improved memory efficiency compared to the family of SR kernels. The latter comprises the kernels SRS and SRG, both of which offer greater flexibility than the GR kernel, enabling dynamic modifications of a lattice in SRG and the use of refinements in SRS. Another difference between them is the calculation of neighbors, which is already done during the creation of the grid in the SRG kernel. Hence they are promptly available, without any additional function calls, as for the graph kernels mentioned above. With respect to overall performance, SRS benefits from a fast, sector-based construction approach, whereas the dynamic manipulation properties of SRG require more advanced internal bookkeeping, leading to some performance overhead.
For a concise overview, see the summary of the various kernel properties in Table 3. Note that our plotting and animation functions work with all kernels, since they share a common API. We also want to remark that the summary in the Table represents a current depiction, and as the library continues to evolve, some kernels may receive further updates and additional features in the future.
## 6 Examples
In this section we offer a selection of scientific uses of the hypertiling package.
### Epidemic Spreading
The spreading of infectious diseases can be modeled by simple dynamic rules, such as the so-called contact process [95], in more technical terms also referred to as the _asynchronous susceptible-infected-susceptible_ (SIS) model. On a lattice structure, each vertex represents an individual that can be in one of two states, _infected_ (active) or _healthy_ (susceptible). The temporal evolution of the system comprises two fundamental stochastic processes [96]. Denoting an active site by \(A\) and a susceptible site by \(\mathcal{O}\), these are infection of a neighbor, \(A\to A+A\), and spontaneous recovery, \(A\to\mathcal{O}\). Both processes run stochastically with rates determined by the infection probability \(\lambda\). Over time, a _cluster_ of infected sites evolves. If the infection spreads slowly (small \(\lambda\)), the system is dominated by the recovery process and eventually an _absorbing_ state is reached where the entire lattice is inactive and the disease is extinct. In contrast, if \(\lambda\) is large enough, the system steadily maintains an active cluster and is said to be in the _active phase_ where the disease persists indefinitely. Typically, at a critical parameter \(\lambda_{c}\) whose precise
| | SRS | SRG | DUN07X | GR | GRG | GRGS |
|---|---|---|---|---|---|---|
| Construction principle | sector | full | full | sector | sector | full |
| Neighbors built during construction | ✗ | ✓ | ✗ | ✗ | ✓ | ✓ |
| Vertex coordinates available | ✓ | ✓ | ✓ | ✓ | ✗ | ✗ |
| Dynamic modifications | ✓ | ✓ | ✗ | ✗ | ✗ | ✗ |
| Refinements | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Integrity checks | ✗ | ✗ | ✗ | ✓ | ✓ | ✓ |

Table 3: Feature comparison of the construction kernels in the package. Quantitative runtime and memory benchmarks are presented in Section 5.5 (Figures 16 and 17).
value depends on the microscopic details of the lattice, there is a transition between the active and the absorbing phase. Specifically, at \(\lambda=\lambda_{c}\) spatial and temporal correlation length scales, \(\xi_{\perp}\) and \(\xi_{\parallel}\), diverge and the emerging activity cluster becomes scale-invariant and strikingly self-similar [50, 97]. If the lattice structure is regular or - in a certain sense - only weakly disordered [98], the properties of this transition are _universal_ and fall into the class of _directed percolation_[99].
We conduct epidemic spreading simulations on a hyperbolic (5,4) tiling with 120 million cells, starting from a single infected seed particle in an otherwise inactive lattice. The seed is positioned at the center of the tiling. Throughout the time evolution of the contact process, we monitor the number of active sites \(N_{a}(t)\). Our goal is to locate the critical point, i. e. the probability threshold \(\lambda_{c}\) which separates the active from the inactive regime. Results are displayed in Figure 18. As configurations of individual runs typically vary greatly, we average each curve over up to \(25\,000\) independent realizations of the process, i. e. we measure the ensemble averages \(\langle N_{a}(t)\rangle\) as well as the survival probability \(\langle P_{s}(t)\rangle\), given by the fraction of runs which are still alive at time \(t\). The results indicate that the threshold probability is located between \(\lambda=0.625\) and \(\lambda=0.626\). The cluster size \(\langle N_{a}(t)\rangle\) for smaller values \(\lambda\) declines to zero, after an initial transient behavior, whereas for larger values of \(\lambda\), the curves display exponential growth. A similar picture emerges in the second panel of Figure 18, where the survival probability \(\langle P_{s}(t)\rangle\) decays to zero below the threshold and tends to constant values above, indicating the infection persists on the lattice, i. e. an active phase.
In all simulations we monitor the mean square geodesic radius of the spreading cluster and stop the simulation once this quantity reaches the order of magnitude of the lattice boundary region in order to avoid finite-size effects, since at this point, the process can not spread further. Moreover, we use random sequential updates, i. e. in every time step an active particle is randomly selected and updated according to the dynamic rules described above. Then, time is incremented by the inverse of the current cluster size, \(1/N_{a}\).
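A single random-sequential update of this process can be sketched as follows (a minimal illustration operating on a precomputed neighbor list, e.g. obtained via get_nbrs_list; this is not the optimized production code used for the runs above):

```python
import random

def contact_process_step(active, state, nbrs, lam):
    """One random-sequential SIS update.

    active : list of currently infected site indices (assumed non-empty)
    state  : list of booleans, True = infected
    nbrs   : neighbor lists of the lattice
    lam    : infection probability
    Returns the time increment dt = 1/N_a."""
    dt = 1.0 / len(active)
    i = random.randrange(len(active))
    site = active[i]
    if random.random() < lam:                # infection: A -> A + A
        target = random.choice(nbrs[site])
        if not state[target]:
            state[target] = True
            active.append(target)
    else:                                    # spontaneous recovery: A -> O
        state[site] = False
        active[i] = active[-1]               # O(1) removal from the active list
        active.pop()
    return dt
```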
Concluding this section, we want to emphasize that state-of-the-art high-precision simulations of reaction-diffusion processes, such as in the context of epidemic spreading from a localized seed, require very large lattices in order to avoid boundary effects and to extract the true asymptotic scaling behavior. Since the dynamics of continuous phase transitions in non-Euclidean geometry is generally a challenging field of research (see Section 1.2 and references therein), which requires substantial resources when addressed numerically, a more detailed study is beyond the scope of this paper and left to future work.
### Scalar Field Theory
In quantum field theory, one of the most fundamental nonlinear models is the \(\phi^{4}\)-model [100], a scalar field with fourth-order self-interaction, described by the Lagrangian
\[\mathcal{L}=\frac{1}{2}\partial_{\nu}\phi\,\partial^{\nu}\phi-\frac{1}{2}\mu^{2}\phi^{2}-\lambda\phi^{4} \tag{35}\]
in Minkowski spacetime, where the parameter \(\mu\) is associated to the mass and the partition function (or generating functional) is given by
\[\mathcal{Z}=\int\mathcal{D}\phi\,e^{iS}=\int\mathcal{D}\phi\,\exp\left(i\int \mathrm{d}^{n}x\mathcal{L}\right). \tag{36}\]
The phase structure of this model resembles that of an Ising model [101, 102], a well-known model in statistical mechanics and magnetic solids. In particular, a critical transition from a high-temperature disordered phase into an ordered low-temperature phase where the symmetry of the scalar field is spontaneously broken, can be observed. Simulations of this theory
on a lattice first require a suitable Wick rotation, transforming the path integral factor \(e^{iS}\) into a Boltzmann exponent \(e^{-S_{E}}\), thus resulting in a classical \(n\)-dimensional Euclidean theory. The actual lattice discretization is straightforward and described in many textbooks, such as [103, 104]. Eventually, one finds a lattice Hamiltonian of the form
\[\beta\mathcal{H}=S_{E}=-\sum_{\langle i,j\rangle}\phi_{i}\phi_{j}+m\sum_{i} \phi_{i}^{2}+\lambda\sum_{i}\left(\phi_{i}^{2}-1\right)^{2}, \tag{37}\]
where \(\phi_{i}\) is an \(N\)-component real variable and the angular brackets \(\langle i,j\rangle\) denote a summation over nearest neighbors. The inverse temperature is denoted by \(\beta\). For any positive \(\lambda\), this system undergoes a continuous phase transition which lies in the universality class of the classical \(O(N)\) model, which can be seen as the \(N\)-component vector-valued generalization of the Ising model. The parameter \(\lambda\) proves particularly useful in numerical analysis since it can often be adjusted in a way that leading scaling corrections approximately vanish. In this case, one speaks of an _improved_ Hamiltonian [105, 106]. For \(\lambda\to\infty\) the classical Ising, XY, Heisenberg, and higher-symmetry vector-models are recovered, as the field is effectively forced to unit-length \(\phi_{i}^{2}=1\).
In order to illustrate how a remarkably distinct behavior can arise on hyperbolic geometries, compared to flat Euclidean lattices, we simulate the classical scalar-field Hamiltonian on two different lattice structures, a (3,7) lattice, as well as a flat square grid, both confined to a circular region. We choose the couplings between adjacent cells ("spins") in a way that energetically favors anti-parallel local alignment. Both systems are initiated in a random hot configuration of positive (red) and negative (green) field values, which - in the magnetic picture - can be interpreted as uniaxial spins pointing upwards or downwards, respectively. By repeatedly applying the Metropolis update algorithm [107] the system eventually reaches thermal equilibrium. An animation of this process is available on our YouTube channel6. In Figure 19 we show examples of equilibrium configurations for both systems. In the traditional flat lattice, we find extended anti-ferromagnetic domains, separated by so-called anti-phase boundaries [108, 109], originating from the quench of the initially hot system towards a colder temperature. In the hyperbolic lattice, however, imposed by connectivity restrictions, local forces always compete with each other, and a true ground state (a perfect anti-parallel
Figure 18: Epidemic spreading on a large hyperbolic (5,4) tiling. The left panel shows the time evolution of the number of active sites, whereas the right panel displays the corresponding survival probability. Colors indicate the infection probability.
alignment) cannot exist. Instead one finds what is commonly referred to as geometrical frustration [110].
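As an illustration of the update scheme, the following sketch (our own, with illustrative names) performs one Metropolis sweep for the Ising limit of Hamiltonian (37), i. e. \(\phi_{i}=\pm 1\), on a tiling described by its neighbor lists; a negative coupling \(J\) favors the anti-parallel alignment discussed above.

```
import math
import random

def metropolis_sweep(spins, nbrs, beta, J=-1.0):
    # spins: list of +/-1 field values, one per cell
    # nbrs:  neighbor lists of the tiling (e.g. from get_nbrs_list())
    # J < 0 favors anti-parallel (antiferromagnetic) alignment
    n = len(spins)
    for _ in range(n):
        i = random.randrange(n)
        local_field = sum(spins[j] for j in nbrs[i])
        dE = 2.0 * J * spins[i] * local_field     # energy change of flipping spin i
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]
    return spins
```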
### Helmholtz Equation
The hyperbolic graph structure generated by hypertiling can be used to construct discrete lattice operators and solve partial differential equations. For example, the discretized Helmholtz operator for a general graph can formally be written as a matrix \(w(A-G)+mI\) acting on a vector of length \(N\), in our case corresponding to scalar function values of \(N\) cells in the tiling. Here, \(A\), \(G\) and \(I\) denote adjacency, degree and identity matrices, respectively, each of size \(N\times N\). While \(w\) denotes appropriate geometric weights as discussed in Ref. [78] for hyperbolic tilings, \(m\) represents the "mass" term, also referred to as eigenvalue \(\lambda\) or wave number \(k^{2}\) of the Helmholtz equation.
We provide interfaces to compute these matrix operators once the neighbor relations of a HyperbolicTiling or HyperbolicGraph have been obtained (compare Sections 3.1 and 3.2) as demonstrated in the following code fragment:
```
import hypertiling as ht

# create tiling and obtain neighbours in one step
nbrs = ht.HyperbolicTiling(7, 3, 6).get_nbrs_list()

A = ht.operators.adjacency(nbrs)   # Adjacency matrix
G = ht.operators.degree(nbrs)      # Degree matrix
M = ht.operators.identity(nbrs)    # Diagonal mass matrix
D = A - G                          # Laplacian
H = D - M                          # Helmholtzian
```
As hyperbolic tilings quickly can become very large, a memory-efficient sparse matrix format is used instead of a dense representation. In Figure 20(a) we display the Helmholtz operator for a \((7,3)\) tiling with \(3\) layers, where we set \(w=2\) and \(m=5\). As can be seen, the adjacency matrix is indeed sparse and furthermore symmetric due to the graph being undirected. The sevenfold pattern can be directly related to the construction kernel, in this case,
Figure 19: A quenched antiferromagnetic spin model exhibits geometrical frustration on a (3,7) tiling (left panel), whereas on a flat lattice, an ordered anti-parallel alignment can be observed (right panel). Colors indicate the field values.
SRS, where only one symmetry of the tiling is explicitly created and then replicated. Note that both constant and position-dependent weights can be set for all operators using the optional weights keyword (not shown in the code fragment above). Moreover, the operator functions accept an optional boundary keyword which takes a boolean array of length \(N\) (number of cells), encoding whether a vertex is considered a boundary vertex. For boundary points, the corresponding rows are left empty, so the resulting matrix is non-symmetric. This characteristic provides an approach to Dirichlet boundary value problems, since it ensures that boundary points act as _ghost cells_; they affect their bulk neighbors, but remain unchanged themselves. The right-hand side of the resulting linear system thus carries precisely the Dirichlet boundary values for those sites.
Now we are ready to solve the linear system \(Hx=y\), where \(H\) is the differential operator, \(x\) is the solution vector and \(y\) is the right-hand side. We use GMRES [111, 112], which is an established solver for non-symmetric and potentially indefinite linear systems. In Figure 20(b), we show an example where we solve an electrostatic boundary value problem in a (3,7) tiling with 6 layers and one refinement step (compare Section 3.4). The boundary has been fixed to values of either \(-1\) (red) or \(+1\) (blue). In this example, we set \(m=0\), hence reducing the Helmholtz equation to a Laplace problem. In the interior region of the lattice a smooth "electrostatic potential" is obtained, as expected, with interfaces between areas of positive and negative solutions roughly following hyperbolic geodesics. A more detailed exploration of Helmholtz and Laplace problems on hyperbolic tilings can be found in our example notebooks, provided in the hypertiling repository.
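A minimal sketch of this last step could look as follows (our own helper, not part of the package); it assumes that the sparse operator `H` has been assembled as in the fragment above with the `boundary` keyword set, and that `boundary` and `boundary_values` encode the Dirichlet data of Figure 20.

```
import numpy as np
from scipy.sparse.linalg import gmres

def solve_dirichlet(H, boundary, boundary_values):
    # right-hand side carries the prescribed boundary values, zero in the bulk
    y = np.zeros(H.shape[0])
    y[boundary] = boundary_values[boundary]
    x, info = gmres(H, y, atol=1e-10, maxiter=5000)
    if info != 0:
        raise RuntimeError("GMRES did not converge (info=%d)" % info)
    return x
```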
## 7 Road Map
With its wide range of scientific applications and potential for producing visually captivating artistic content, hyperbolic geometry remains an active and evolving field. The discretization of hyperbolic Riemann surfaces, in particular, grows in importance across a number of scientific disciplines. This continued interest motivates us to further extend hypertiling's capabilities and we already have some new features in the pipeline, some of which we outline below.
Figure 20: Left: Discretized Helmholtz operator for a (7,3) tiling with 3 layers. Right: Solution of an electrostatic problem on a refined (3,7) tiling, where boundary values have been fixed to either -1 (red) or +1 (blue).
### Triangle Groups
The hypertiling package focuses on _regular_ tilings, given by the \((p,q)\) Schlafli symbol. As explained above, these represent maximally symmetric spatial discretizations, which makes them particularly amenable to (large-scale) numerical simulations. However, using so-called triangle reflection groups [113, 32], tessellations of Riemann surfaces by polygons can be extended beyond regularity. A triangle group is defined by three rational numbers \((p,q,r)\) which reciprocally encode the inner angles at its vertices, given by \(\pi/p\), \(\pi/q\), and \(\pi/r\). Depending on whether the sum \(1/p+1/q+1/r\) of the fundamental _Schwarz triangle_ [114] is larger than, equal to, or smaller than one, the surface is respectively spherical, flat, or hyperbolic. From the fundamental triangle, we can build a tessellation through reflections on its edges. It is clear that there are \(2p\), \(2q\), and \(2r\) triangles meeting at the corresponding vertices. The smallest triangle group lattice is given by \((7,3,2)\). Note that \(p\), \(q\) and \(r\) can become formally infinite, representing tilings with triangles that have one or more ideal vertices (compare, e. g., Figure 6 in Section 4). Regular \((p,q)\) tilings can be recovered by coalescing \(2p\) triangles in a \((p,q,2)\) triangle group tiling.
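The curvature criterion can be checked with a few lines of exact rational arithmetic (a small illustrative helper, not part of the current package):

```
from fractions import Fraction

def schwarz_type(p, q, r):
    angle_sum = Fraction(1, p) + Fraction(1, q) + Fraction(1, r)
    if angle_sum > 1:
        return "spherical"
    if angle_sum == 1:
        return "flat"
    return "hyperbolic"

print(schwarz_type(7, 3, 2))   # smallest hyperbolic triangle group -> "hyperbolic"
```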
### Regular Maps
Given the generically large boundary of finite hyperbolic tilings, compactified periodic lattices can prove themselves useful in many applications, particularly in computational physics, where one often wants to keep the influence of simulation boundaries under control. Translationally invariant compact hyperbolic tessellations can be constructed using regular maps [45] and we intend to tackle the challenge of implementing this possibility in future releases. Currently, the user may, for instance, use the dynamic features of the SRG kernel (compare Section 5.2) to manually construct periodic tilings by identifying edges of a properly chosen fundamental domain, as outlined for example in References [43] and [48].
### Symbolic Kernels
As mentioned in Section 5.4, the GRG kernel comprises a combinatorial algorithm that is able to automatically compute neighborhood relations. This local construction principle makes it possible to move away entirely from coordinate representations of cells and to construct the graph relations solely from combinatorial principles. We are currently implementing this as a variant of the GR kernel family, where we also intend to encode all polygon states and transformations as elementary bit operations. The corresponding smaller memory demands and overall increase in efficiency should allow for the creation of even larger tilings. In fact, as tilings constructed through this symbolic approach are necessarily exact since no floating point arithmetic is involved, there is no demand for exponentially increasing numerical precision. Hence, lattices of arbitrary size can be generated, only restricted by available computing resources. This approach shows similarities with more group-theoretic symbolic algorithms, where tilings are processed using a representation of triangle group elements as words produced by finite-state automata [115, 116, 117, 118].
### Parallelization
So far we do not offer _parallel_ lattice construction in hypertiling. One reason for this is that most of our kernels are designed for efficient serial execution. In the GR kernel, for the generation of every new polygon, the precise state of the previously generated one is required. Therefore exactly only one polygon can be created in each iteration. The static rotational kernels SRS and SRG on the other hand use global duplicate management containers (compare Section 5.2.3) which need to be accessed frequently and hence would require heavy inter-node
communication in a distributed memory environment. Alternatively, a single node shared-memory parallelization, where the lattice is simultaneously expanded by several replication drivers, each working through a share of the currently outmost polygons to be replicated, would be possible. Also, the DUN07 algorithm offers potential for parallelization, given its tree-like structure, where independent sub-trees can be traversed in parallel. However, as the calculations inside a single tree do not contain many operations of the same type, it is questionable whether the use of a GPU is efficient here or whether bypassing the global interpreter lock by means of running several processes on the same CPU might be superior. A detailed consideration is reserved for future work.
Finally, we would like to note that, although there are potential performance gains from parallel implementations of some of the existing algorithms, this would only yield notable advantages for extremely large sizes, due to the already considerable single-core speed of the existing serial implementations. Moreover, in situations involving numerical simulations on large tilings, almost always the actual simulations are bound to consume substantially more resources than the lattice creation process, when done with the current version of hypertiling. Hence, parallelization is not one of our top priorities at the moment.
## 8 Conclusion
In this article, we present an open-source library for the construction, modification and visualization of regular hyperbolic tilings. We developed hypertiling to give researchers and visual artists access to hyperbolic lattices by using straightforward high-level Python commands. Our primary emphasis lies on high-performance computing, and our algorithms are optimized to deliver the highest possible speed and memory efficiency. To further support scientific applications, our library includes an extensive tool set of methods to establish adjacency relations, enabling the use of tilings as graphs. In this context, our generative reflection (GR) kernel family currently is, to the best of our knowledge, the fastest available algorithm to construct these geometric structures, substantially outperforming other standard algorithms. This performance can be achieved by employing a combinatorial design of the algorithm, bit-coded memory layout, generator functionality (to significantly decrease storage requirements) as well as Python optimizations such as numpy and numba's just-in-time compilation. Given these optimizations, the GR kernels are able to construct huge lattices - a crucial requirement for scientific applications where large hyperbolic bulk regions with minimal boundary effects are needed. For instance, a \((7,3)\) lattice with \(1.5\) billion cells (roughly equivalent to a \(35\,000^{2}\) square lattice) can be generated on a standard desktop workstation in less than one hour.
In addition to speed, the library also enables flexible and dynamic lattice manipulation by means of its static rotational (SR) kernels: A tiling can be created explicitly layer-wise or expanded around particular cells, and cells can be added or removed at any place in the lattice. This is made possible by a sophisticated bookkeeping system that avoids the creation of duplicate cells. Together with fast, optimized implementations of all required Mobius transformations provided in the package, this opens the way for the construction of very individual, even procedural and dynamically generated lattices. The SR kernels also provide triangle refinements, which can be particularly useful whenever a better spatial resolution beyond the geometrically fixed hyperbolic polygon edge length is required. Finally, a Python implementation of the well-known construction algorithm by D. Dunham is available, both in its original, as well as in a modern, performance optimized version.
All internal arithmetics and algorithmic details are hidden from the user. The interested developer can nevertheless profit from a hierarchy of class objects, which allows easy debugging, modification and, most importantly, extendability.
Our library provides considerable plotting and animation capabilities. We offer plotting methods that are both fast as well as accurate, and tilings can moreover be exported as SVG vector graphics for further manipulation and publication-ready plots. Finally, our animation classes facilitate the presentation of dynamic processes on a hyperbolic lattice as well as changes of the lattice itself, all realized via straightforward matplotlib animation wrappers.
The combination of all these resources renders hypertiling a uniquely capable package. It puts large hyperbolic lattices at the fingertips of researchers, lowering the threshold for exploration and opening up new avenues of application of numerical hyperbolic geometry.
## Acknowledgements
We thank P. Basteiro, J. Erdmenger, H. Hinrichsen, R. Meyer, A. Stegmeier and L. Upreti for fruitful discussions. We are also grateful to the Information Technology Center of Wurzburg University for providing computational resources through the JULIA cluster.
Author contributions: M.S. initiated and leads the project. M.S. and Y.T. conceived the core program design and implemented the main lattice construction algorithms. F.G. specialized in numerical optimization and HPC aspects of the code. F.D., F.G., D.H. and J.S.E.P. implemented specific modules and extensions and tested the code. All authors contributed to writing the documentation and the manuscript.
Funding information: M.S. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy through the Wurzburg-Dresden Cluster of Excellence on Complexity and Topology in Quantum Matter - ct.qmat (EXC 2147, project-id 390858490). F.G. furthermore acknowledges financial support through the German Research Foundation, project-id 258499086 - SFB 1170 'ToCoTronics'.
|
2309.11582 | Incorporating Singletons and Mention-based Features in Coreference
Resolution via Multi-task Learning for Better Generalization | Previous attempts to incorporate a mention detection step into end-to-end
neural coreference resolution for English have been hampered by the lack of
singleton mention span data as well as other entity information. This paper
presents a coreference model that learns singletons as well as features such as
entity type and information status via a multi-task learning-based approach.
This approach achieves new state-of-the-art scores on the OntoGUM benchmark
(+2.7 points) and increases robustness on multiple out-of-domain datasets (+2.3
points on average), likely due to greater generalizability for mention
detection and utilization of more data from singletons when compared to only
coreferent mention pair matching. | Yilun Zhu, Siyao Peng, Sameer Pradhan, Amir Zeldes | 2023-09-20T18:44:24Z | http://arxiv.org/abs/2309.11582v1 | Incorporating Singletons and Mention-based Features in Coreference Resolution via Multi-task Learning for Better Generalization
###### Abstract
Previous attempts to incorporate a mention detection step into end-to-end neural coreference resolution for English have been hampered by the lack of singleton mention span data as well as other entity information. This paper presents a coreference model that learns singletons as well as features such as entity type and information status via a multi-task learning-based approach. This approach achieves new state-of-the-art scores on the OntoGUM benchmark (+2.7 points) and increases robustness on multiple out-of-domain datasets (+2.3 points on average), likely due to greater generalizability for mention detection and utilization of more data from singletons when compared to only coreferent mention pair matching.1
Footnote 1: The code is publicly available at [https://github.com/yilunzhu/coref-mtl](https://github.com/yilunzhu/coref-mtl).
## 1 Introduction
Coreference is a linguistic phenomenon that occurs when two or more expressions in a text refer to the same entity (e.g. _the Vice President... She_). Conceptually, resolving coreference takes two steps: identifying all mention candidates from a text as opposed to non-referring expressions, and linking identified mentions into clusters. However, in a given document, some mentions are never referred back to: these are called singletons, i.e. mentions that, unlike non-referring expressions, could be referred back to in principle, but are not involved in any coreference relations in context. Singletons are important to coreference resolution since they represent true negatives in cluster linking (Kubler and Zhekova, 2011), but also to how humans understand discourse from a theoretical perspective (Grosz et al., 1995), since they also constitute mentioned entities (i.e. clusters of size 1).
However, due to the lack of singleton annotation in the most frequently used coreference dataset for English, i.e. OntoNotes V5.0 (Weischedel et al., 2011; Pradhan et al., 2013), previous attempts have either ignored singletons (Lee et al., 2017, 2018; Wu et al., 2020; Dobrovolskii, 2021) or incorporated pseudo-singletons into the model (Wu and Gardner, 2021; Toshniwal et al., 2021). The first approach is commonly used in contemporary end-to-end (e2e) systems which train directly on detecting coreferring mentions, but causes problems in that models cannot differentiate singleton spans from non-referring or random/meaningless spans, i.e. penalizing these two types equally. Though e2e has achieved significant progress on OntoNotes, it does not align with linguistic theories on how humans resolve the task. The second approach attempts to amend the model with pseudo-singletons by predicting non-coreferring mentions, but the accuracy gap between gold and generated singletons is unknown and ultimately leads to degradation.
Previous work has also shown that recent coreference models struggle with domain generalization (Moosavi and Strube, 2017; Zhu et al., 2021). To alleviate the problem, Moosavi and Strube (2018) proposed a novel algorithm to incorporate linguistic features and showed improvement in out-of-domain (OOD) data. Subramanian and Roth (2019) applied adversarial training to improve generalization. However, the first approach requires carefully designed linguistic features, and both papers evaluated generalization only on one single-genre dataset, limiting the validity of the results.
To tackle these challenges, we introduce a novel coreference model. Our contributions can be summarized as follows: First, we propose a multi-task learning (MTL) based neural coreference model with constrained mention detection, which jointly learns several mention-based tasks, including singleton detection, entity type recognition, and information status classification. Second, experiments demonstrate that the proposed model achieves new
state-of-the-art performance on the OntoGUM test set. Third, we show that our model outperforms strong baselines on two OOD datasets, showing it generalizes more reliably to unseen data than plain e2e. We release all code and provide a system that detects and links all mentions, including singletons, and outputs predicted entity types.
## 2 Related Work
MTL for coreferenceMultitask learning Caruana (1997); Collobert and Weston (2008) uses a single model with shared parameters trained to perform multiple tasks, with potential benefits arising from synergies between related objectives. Previous work has investigated the use of MTL for coreference by harnessing related pre-training tasks. Yu and Poesio (2020); Kobayashi et al. (2022) applied an MTL framework to a more specific bridging resolution problem, with standard coreference resolution as the additional task. Luan et al. (2018) used MTL with coreference resolution, entity recognition, and relation extraction for scientific knowledge graph construction. Lu and Ng (2021) used five MTL tasks for event coreference resolution.
Neural coreference resolutionThe e2e approach jointly learns mention detection and coreferent pair scoring Lee et al. (2017), and achieved SOTA scores on the OntoNotes test set before several extensions were proposed. Lee et al. (2018); Kantor and Globerson (2019) improved span representations to improve pair matching. Joshi et al. (2020) added better pre-trained language models to gain additional score boosting. Wu et al. (2020) adapted a question-answering framework into the task and improved both span detection and coreference matching scores. Dobrovolskii (2021) also improved performance by initially matching coreference links via words instead of spans.
## 3 Methods
### Model
Let \(N\) be the number of possible spans in a document \(D\). The coreference task can be formulated as assigning an antecedent span \(y_{i}\) for each span \(i\), where the set of possible antecedents for each span \(i\) contains a dummy antecedent \(\epsilon\) and all preceding spans: \(\mathcal{Y}(i)\) = \(\{\epsilon,1,...,i-1\}\).
\[s(i,j)=\begin{cases}0,&j\text{ = }\epsilon\\ s_{m}(i)+s_{m}(j)+s_{c}(i,j),&j\neq\epsilon\end{cases}\]
where \(s_{m}(i)\) and \(s_{m}(j)\) are the mention scores that determine how likely the selected text span is a mention candidate. Previous work utilizes a scoring function to measure how likely the span is a coreference markable. However, singletons in the training data are ignored and thus weaken the model's generalization capability. Therefore, our proposed model uses two scoring functions to represent the distributions of markables and mentions better. The mention scoring function uses two feed-forward networks fed by the representation of each span: one part is a markable score that calculates the score of the span being a coreferent markable in the document; the other is the mention candidate score that determines how likely a span is a mention candidate. The formula is represented as follows:
\[s_{m}(i)=\beta_{1}\cdot s_{\mathrm{markable}}(i)+\beta_{2}\cdot s_{\mathrm{mention}}(i)\]
\[s_{\mathrm{markable}}(i)=w_{\mathrm{markable}}\cdot\mathrm{FFNN}(g_{i})\]
\[s_{\mathrm{mention}}(i)=w_{\mathrm{mention}}\cdot\mathrm{FFNN}(g_{i})\]
where \(\cdot\) denotes a dot product, FFNN denotes a feed-forward neural network, \(\beta_{1}\) and \(\beta_{2}\) denote model parameters that adjust the weights of markable scores and mention candidate scores, and \(g_{i}\) denotes the vector representation of the span (we use the same span representation method as in Lee et al. (2017)). The two scoring functions are computed via two standard feed-forward neural networks. The purpose of this design is to prevent random text spans from being fed to the pair-matching step. Following the e2e approach Lee et al. (2017, 2018); Joshi et al. (2020), we concatenate the boundary representations, the soft head vector and an additional feature vector \(\phi\) containing speaker information, and feed the resulting vector into separate feed-forward neural networks to calculate markable scores and mention candidate scores.
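A compact sketch of these two scoring heads is shown below; it is our own illustration in PyTorch rather than the released implementation, and the hidden size and the treatment of \(\beta_{1},\beta_{2}\) as learnable scalars are assumptions.

```
import torch
import torch.nn as nn

class MentionScorer(nn.Module):
    def __init__(self, span_dim, hidden=150):
        super().__init__()
        self.markable_ffnn = nn.Sequential(
            nn.Linear(span_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.mention_ffnn = nn.Sequential(
            nn.Linear(span_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.beta = nn.Parameter(torch.ones(2))  # beta_1, beta_2

    def forward(self, g):                        # g: (num_spans, span_dim)
        s_markable = self.markable_ffnn(g).squeeze(-1)
        s_mention = self.mention_ffnn(g).squeeze(-1)
        return self.beta[0] * s_markable + self.beta[1] * s_mention
```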
In addition to the main pair-matching task, our model adds three mention-based tasks: a (possibly singleton) mention span detection task, entity type recognition, and information status classification (see below). For each task, the span vector is fed into a separate feed-forward network for classification. Each task is assigned a weight to calculate the total loss score:
\[\mathcal{L}_{total}=\sum_{c=1}^{C}\mathcal{W}_{c}\cdot\mathcal{L}_{c}\]
where \(\mathcal{W}_{c}\) is the weight for task \(c\). See Appendix A for an overview of the model architecture.
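In code, this weighted combination amounts to a few lines (a sketch with illustrative task names; the actual weights are hyperparameters, see Section 4.3):

```
def total_loss(losses, weights):
    # losses:  dict mapping task name -> scalar loss tensor
    # weights: dict mapping task name -> weight W_c
    return sum(weights[task] * losses[task] for task in weights)

# e.g. total_loss({"coref": l_coref, "singleton": l_sg, "entity_type": l_ent},
#                 {"coref": 0.4, "singleton": 0.2, "entity_type": 0.2})
```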
### Task Selection
Since OntoNotes does not contain singletons, we choose a corpus for which singleton information is available but follows the same annotation scheme as OntoNotes. The OntoGUM corpus Zhu et al. (2021) is an adapted version of the GUM corpus Zeldes (2017), a multi-layer corpus with a range of annotations at the word level (part-of-speech, morphology), phrase level (phrase trees, entity recognition, and linking), dependency level (Universal Dependencies syntax) and document level (discourse parses and coreference). Although OntoGUM uses the same singleton-free coreference scheme as OntoNotes, information about singletons can be recovered from the original GUM corpus. We therefore select three annotations from GUM and investigate whether they are helpful for coreference resolution on OntoGUM: nested mention span detection, entity type, and information status.
Mention detectionAs outlined in Section 1, we integrate gold nested mentions, including singletons (sg), into our model to improve mention detection and coreference. The task aims to recognize meaningful referential text spans and makes more information available to the model than the plain e2e approach that only trains on coreferring mentions (\(\sim\)39% of mentions in GUM are singletons).
Entity typeGUM assigns one of ten entity types (ent) to each mention - person, organization, etc. (see Figure 2 in Appendix D). Since a cluster usually has one entity type, this feature instructs the model regarding which mentions belong to the same semantic class.
Information statusInformation status (infs.) indicates how an entity was introduced into discourse, e.g. new, previously mentioned or inferrable from other mentions Prince (1981). Each mention is assigned one of six labels (see Appendix C). This task is expected to inform the model about the likelihood and how an entity was previously introduced.
## 4 Experiments
### Datasets
OntoGUM Zhu et al. (2021) is a coreference dataset following the same annotation scheme as OntoNotes. This paper adds other layers to the coreference annotation, such as mention spans (including singletons), aligned entity types, and information status, automatically extracted from the GUM corpus. We train the model with GUM v8.0, which includes 193 documents across 12 written and spoken genres with \(\sim\)180K tokens.
We also evaluate our model on two OOD datasets of the same annotation scheme: OntoNotes and WikiCoref. OntoNotes includes richly annotated documents with layers including syntax, propositions, named entities, word senses, and coreference, but no singleton mentions or aligned (non-named) entity types Pradhan et al. (2013). Its test set includes 348 documents with 170K tokens. WikiCoref Ghaddar and Langlais (2016) is a manually annotated corpus from English Wikipedia, containing 30 documents with \(\sim\)60K tokens.
### Baseline
Combining the e2e approach with a contextualized language model (LM) and span masking is one of the best models on OntoNotes. Following Joshi
\begin{table}
\begin{tabular}{l|c c|c c c c c c c c c|c} \hline \hline & \multicolumn{3}{c|}{Markble Detection} & \multicolumn{3}{c}{MUC} & \multicolumn{3}{c}{B\({}^{3}\)} & \multicolumn{3}{c|}{CEAF\({}_{\phi 4}\)} & \multirow{2}{*}{Avg. F1} \\ & P & R & F1 & P & R & F1 & P & R & F1 & P & R & F1 \\ \hline \multicolumn{13}{l}{**In-domain** - OntoGUM} \\ \hline Joshi et al. (2019) & **91.0** & 71.9 & 80.3 & **83.3** & 69.7 & 75.9 & 70.8 & 59.2 & 64.5 & 70.5 & 45.8 & 55.5 & 65.5 \\ MTL (sg) & 90.2 & 75.0 & **81.9** & 82.7 & 72.8 & 77.4 & 70.4 & 63.1 & 66.5 & 71.5 & 49.2 & 58.3 & 67.6 \\ MTL (sg+ent) & 90.0 & **75.1** & **81.9** & 82.8 & **72.9** & **77.6** & **71.2** & **63.6** & **67.2** & **71.9** & **50.2** & **59.1** & **68.2** \\ MTL (sg+ent+infs.) & 90.0 & 75.0 & 81.8 & 82.1 & 72.3 & 76.9 & 70.0 & 62.3 & 65.9 & 70.0 & 48.6 & 57.3 & 66.9 \\ \hline \hline \multicolumn{13}{l}{**Out-of-domain** - OntoNotes} \\ \hline Joshi et al. (2019) & **83.9** & 76.9 & 80.3 & **77.6** & 72.7 & 75.1 & 66.9 & 60.6 & 63.6 & **64.3** & 54.5 & 59.0 & 65.9 \\ MTL (sg+ent) & 82.2 & **80.2** & **81.2** & 77.0 & **76.1** & **76.5** & **67.1** & **64.0** & **65.5** & 63.6 & **59.5** & **61.5** & **67.8** \\ \hline \multicolumn{13}{l}{**Out-of-domain** - WikiCoref} \\ \hline Joshi et al. (2019) & 79.9 & 58.8 & 67.7 & 73.7 & 60.1 & 66.2 & 66.4 & 43.4 & 52.4 & 56.6 & 31.6 & 40.5 & 53.0 \\ MTL (sg+ent) & **80.4** & **60.0** & **68.7** & **74.5** & **61.8** & **67.5** & **67.8** & **45.3** & **54.4** & **59.0** & **33.0** & **42.4** & **55.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between Joshi et al. (2019) and our model on test sets of both in-domain (OntoGUM 8.0) and out-of-domain datasets (OntoNotes and WikiCoref). The overall F1 score is the average of F1s from three evaluation metrics MUC, B\({}^{3}\), and CEAF\({}_{\phi 4}\). All models are trained on OntoGUM.
et al. (2020), we use large SpanBERT embeddings as the LM and the improved coarse-to-fine (Lee et al., 2018) SOTA model as our baseline model (see Appendix B for implementation details).
### Task Weights
The task weights are a list of parameters that controls the relative importance of various tasks in our model, which are optimized via hyperparameter search on the OntoGUM dev set to achieve the best performance. In the optimal setting with 2 auxiliary tasks, the loss weight for the major task coreference relation identification is set to 0.4 and the weights for singleton detection and entity type recognition are set to 0.2 each. The weights are 0.15 for each auxiliary task when information status is added to training.
### Results
In-domain EvaluationWe train the model on OntoGUM and evaluate it in-domain. As shown in the first part of Table 1, our model with the best setting improves average F1 by 2.7 points and achieves new SOTA performance on the OntoGUM benchmark, indicating the benefit of the MTL tasks. We also note that recall scores of both mention detection and coreference matching show a significant increase by 3.2 and 4.0 points, respectively, which suggests that the MTL approach helps the model capture more non-trivial markable spans and coreference relations than the baseline model, with little or no precision cost. In addition, though information status contributes to the result as a sole auxiliary task (see Table 2), it is harmful when training with other tasks.
Out-of-domain EvaluationTo test the robustness of our model, we evaluate on two OOD datasets sharing the same annotation scheme with OntoGUM. The second part of Table 1 shows that our best in-domain model with mention detection and entity type as auxiliary tasks outperforms the baseline model on both datasets by 2.3 points on average. For OntoNotes, though our model has slightly lower precision, the recall results in substantially better performance; for WikiCoref, our model performs better on both precision and recall. These results indicate that the knowledge gained from the multiple mention-based tasks can be transferred to unseen text types, and is likely a combination of more training data (since singletons include instances not considered by the baseline training) and the learning of features distinguishing non-mentions from mentions and ones corresponding to semantic types.
### Ablation Study
To show the importance of each task in our model, we ablate each task in the architecture and report the average F1 on the OntoGUM development set. In Table 2, singleton scores and the mention detection task contribute 1.3 points to the final result, indicating that this feature is the most important one.
With the addition of the nested entity type recognition task, the model brings a smaller increase (0.4 points) to the final result. There could be several reasons for this: one is that the LM has already learned entity types latently, so giving this as an explicit feature is redundant; the other reason is that the baseline model rarely groups mentions with different entity types into clusters so that entity type features can only correct few errors.
When only integrating information status into the model, the result (avg. F1 67.6) outperforms the baseline model, showing the effectiveness of this type of information. However, when all three tasks are incorporated, the overall score (67.8) is lower than excluding information status classification (68.7), which shows that information status is redundant when other mention-based features are specified.
## 5 Error analysis
We conduct quantitative and qualitative error analyses to illustrate how our model differs from the baseline. Firstly we conduct a quantitative analysis following Lu and Ng (2020), who classify resolution errors into 13 classes. Following their approach, we merge coreference errors into 6 groups. Table 3 displays the distribution of errors observed in the OntoGUM development set. These errors are present in the baseline e2e model but correctly resolved by our proposed MTL model (e2e errors) or vice versa (mtl errors).
\begin{table}
\begin{tabular}{l c c} \hline \hline & Avg. F1 & \(\Delta\) \\ \hline Base model & 67.0 & \\ w/ singleton detection (=sg) & 68.3 & +1.3 \\ w/ sg + entity type (=et) & 68.7 & +0.4 \\ w/ sg + et + information status & 67.8 & -0.9 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of various tasks included in the coreference model on the OntoGUM development data.
The majority of mtl errors involve definite nominals, revealing the challenge of resolving cherry-picked cases that must be memorized within a multi-genre context. However, our proposed model demonstrates its ability to correctly identify relations when multiple clusters are involved. Furthermore, nearly 16% of resolved errors are associated with pronouns, indicating that our model is more capable of accurately identifying coreference relationships within the context of third-person pronouns and demonstrates a slight improvement in handling pronouns in dialogue, particularly first and second-person pronouns.
We also observe that our proposed model reduces errors across nearly all types compared to the baseline model, particularly in the case of third-person pronouns. This result suggests that integrating entity type recognition and mention detection in the MTL framework enables accurate recognition of noun-pronoun relations, particularly for pronouns that do not provide explicit entity type information, e.g., _it_. Additionally, the MTL model demonstrates improved error avoidance with definite nouns. These findings highlight the enhanced performance of our proposed model in identifying coreference relations within the local context.
We also identify several errors that illustrate the impact of singleton detection and entity type recognition. Examples in Table 4 demonstrate how including singletons and mention-based features improves the retrieval of accurate mention spans and enhances coreference relationships. The first three examples highlight how entity-type recognition contributes to resolution by avoiding type mismatches. In example (1), the pressure from entity type recognition likely aids in identifying _Harrow_ as a school (an organization). In example (2), the MTL model recognizes _it_ as an event, thereby correctly creating two distinct groups and avoiding coreference with _the grass_ (a plant entity). Similarly, example (3) presents pressure to recognize that _they_ is not an inanimate object, so it correctly prefers _noises_ as the antecedent. Examples (4) and (5) illustrate how mention detection identifies missing mentions in the baseline model or improves boundary recognition. These representative examples provide valuable insights into the significance of incorporating singletons and auxiliary mention-based tasks into a coreference model.
## 6 Conclusion
This paper presents a neural coreference model that connects singletons and other mention-based features to coreference relation matching via an MTL architecture, which (1) outperforms a strong baseline and achieves new SOTA results on OntoGUM and (2) beats the baseline model on two unseen datasets. The results show the effect of singletons and mention features and indicate improvements in model robustness when transferring to unseen data rather than overfitting distributions in the training data. In addition, our resulting system can output all mentions (incl. singletons) with entity types out-of-the-box, which benefits a series of downstream applications such as Entity Linking, Dialogue Systems, Machine Translation, Summarization, and more, since our single model already outputs typed spans for all entities mentioned in a text (see Figure 2b in Appendix D for an illustration).
\begin{table}
\begin{tabular}{l|c c c} \hline \hline Error type & \multicolumn{2}{c}{mtl errors} & \multicolumn{2}{c}{e2e errors} \\ \hline Pronouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ - 1st \& 2nd person pronouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ - 3rd person pronouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Definiteness & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ - Definite nouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ - Indefinite nouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Proper nouns & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ Others & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \hline Total & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 3: Number and percentage of errors by class that are produced by e2e but avoided by the MTL model (e2e errors) and produced by the MTL model but resolved by the e2e model (mtl errors).
\begin{table}
\begin{tabular}{l|l l} \hline \hline
**Entity type errors** & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
1 & he did represent [the school]\({}_{1}\) during the very first & Eton v [Harrow]\({}_{1}\) cricket match & \\
2 & Who cut [the grass]\({}_{1}\)? Marlena did [it]\({}_{2}\). Marlena did [it]\({}_{2}\) a long time ago, but [it]\({}_{1}\) hasn’t been watered. & [It]\({}_{1}\) s dying. & \\
3 & I made [noises]\({}_{1}\) with my heels but [they]\({}_{1}\) were too & \\ & loud so I stopped. & \\ \hline
**Singleton errors** & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\
4 & The main reason attributed for the pollution of Athens & \multicolumn{1}{c}{} \\ & is because the city is enclosed by mountains in [a & \multicolumn{1}{c}{} \\ & basin which does not let the smog leave]\({}_{1}\)... have & \\ & greatly contributed to better atmospheric conditions in & \\
5 & This means that if [the gov]\({}_{1}\) decided to print 1 & \\ & quadrillion dollars in the span of a week... we ’re & \\ & loaning [the US gov]\({}_{1}\) the very money it prints & \\ \hline \hline \end{tabular}
\end{table}
Table 4: A qualitative analysis of OntoGUM dev errors that appear in the e2e model but are avoided by our MTL model. MTL predictions (gold) are represented by [brackets]\({}_{x}\). E2e predictions (errors) are highlighted in colored text and each color in an example denotes a coreference cluster.
### Limitations
In this work, we have experimented with training our model on OntoGUM. Due to the lack of singletons and other mention-based annotations, we do not train the model on the most frequently used and one of the largest coreference datasets. Thus the proposed model has not been tested on a large-scale dataset and compared with other coreference models on OntoNotes.
We evaluate the model on two English OOD datasets to investigate the model generalization. Several coreference datasets in other languages share the same annotation scheme as OntoGUM, such as Arabic Pradhan et al. (2013), and Chinese Pradhan et al. (2013). The proposed model needs to be evaluated on datasets in other languages and demonstrate the model generalization across languages. However, this would require singleton annotated data in those languages as well. With recent releases such as CorefUD Nedoluzhko et al. (2022) promoting standardization of multilingual coreference annotations and singleton annotations, we are hopeful that such experiments will be possible in the near future.
|
2309.04831 | Global Convergence of Receding-Horizon Policy Search in Learning
Estimator Designs | We introduce the receding-horizon policy gradient (RHPG) algorithm, the first
PG algorithm with provable global convergence in learning the optimal linear
estimator designs, i.e., the Kalman filter (KF). Notably, the RHPG algorithm
does not require any prior knowledge of the system for initialization and does
not require the target system to be open-loop stable. The key of RHPG is that
we integrate vanilla PG (or any other policy search directions) into a dynamic
programming outer loop, which iteratively decomposes the infinite-horizon KF
problem that is constrained and non-convex in the policy parameter into a
sequence of static estimation problems that are unconstrained and
strongly-convex, thus enabling global convergence. We further provide
fine-grained analyses of the optimization landscape under RHPG and detail the
convergence and sample complexity guarantees of the algorithm. This work serves
as an initial attempt to develop reinforcement learning algorithms specifically
for control applications with performance guarantees by utilizing classic
control theory in both algorithmic design and theoretical analyses. Lastly, we
validate our theories by deploying the RHPG algorithm to learn the Kalman
filter design of a large-scale convection-diffusion model. We open-source the
code repository at \url{https://github.com/xiangyuan-zhang/LearningKF}. | Xiangyuan Zhang, Saviz Mowlavi, Mouhacine Benosman, Tamer Başar | 2023-09-09T16:03:49Z | http://arxiv.org/abs/2309.04831v1 | # Global Convergence of Receding-Horizon Policy Search
###### Abstract
We introduce the receding-horizon policy gradient (RHPG) algorithm, the first PG algorithm with provable global convergence in learning the optimal linear estimator designs, i.e., the Kalman filter (KF). Notably, the RHPG algorithm does not require any prior knowledge of the system for initialization and does not require the target system to be open-loop stable. The key of RHPG is that we integrate vanilla PG (or any other policy search directions) into a dynamic programming outer loop, which iteratively decomposes the infinite-horizon KF problem that is constrained and non-convex in the policy parameter into a sequence of static estimation problems that are unconstrained and strongly-convex, thus enabling global convergence. We further provide fine-grained analyses of the optimization landscape under RHPG and detail the convergence and sample complexity guarantees of the algorithm. This work serves as an initial attempt to develop reinforcement learning algorithms specifically for control applications with performance guarantees by utilizing classic control theory in both algorithmic design and theoretical analyses. Lastly, we validate our theories by deploying the RHPG algorithm to learn the Kalman filter design of a large-scale convection-diffusion model. We open-source the code repository at [https://github.com/xiangyuan-zhang/LearningKF](https://github.com/xiangyuan-zhang/LearningKF).
+
Footnote †: A preliminary version of this manuscript [1] appeared in the proceedings of the 2023 American Control Conference and was presented on June 2, 2023, in San Diego, CA.
## 1 Introduction
In recent years, policy-based reinforcement learning (RL) methods [2, 3, 4, 5] have gained increasing attention in continuous control applications [6, 7, 8]. While traditional model-based techniques synthesize controller designs in a case-by-case manner [9, 10], model-free policy gradient (PG) methods promise a universal framework that learns controller designs in an end-to-end fashion. The universality of model-free PG methods makes them desired candidates in complex control applications that involve nonlinear system dynamics and imperfect state measurements. Despite countless empirical successes, the theoretical properties of model-free PG methods still need to be thoroughly investigated in continuous control. Initiated by [11], a recent line of research has well-analyzed the sample complexity of zeroth-order PG methods in several linear state-feedback control benchmarks, including linear-quadratic regulator (LQR) [11, 12, 13, 14, 15, 16, 17], distributed/decentralized LQR [18, 19], and linear robust control [20, 21, 22]. However, the theoretical properties of PG methods remain elusive in the output-feedback control settings, where the state measurement process could be corrupted by statistical noises and/or other (possibly adversarial) disturbances.
In this work, we study the convergence and sample complexity of PG methods in the discrete-time infinite-horizon Kalman filtering (KF) problem [9, 23]. Recognized as one of the cornerstones of modern control theory [24], the KF problem aims to generate optimal estimates of the unknown system states over time by utilizing a sequence of observed measurements corrupted by statistical noises. Furthermore, in
the linear-quadratic Gaussian (LQG) problem, the separation principle [25] states that the optimal control law combines KF and LQR. Thus, KF is a fundamental benchmark for studying the sample complexity of model-free PG methods beyond state-feedback settings.
Despite being the dual problem to noise-less LQR [25, 26], the KF problem possesses a substantially more complicated optimization landscape from the model-free PG perspective since the KF itself is a dynamical system rather than a static matrix. Specifically, the optimization problem over dynamic filters might admit multiple suboptimal stationary points, and the optimal KF possesses a set of equivalent realizations up to similarity transformations [27, 28]. None of the above challenges appear when using model-free PG to learn a static LQR policy [11, 12, 13, 14, 15, 16, 17]. As a result of the challenging landscape the filtering problem presents, only a few papers have focused on dynamic output-feedback settings. In particular, [27] has analyzed the optimization landscape of LQG, and [28] has shown that an informativity-regularized PG method provably converges to an optimal filter in the continuous-time KF problem, assuming that the model is known. However, [28] has assumed that the target system is open-loop stable and assumed that the control engineer has prior knowledge of a filter that satisfies an informativity condition. It is also unclear if the techniques in [28] can be directly applied to the model-free setting and result in any sample complexity guarantees. Thus, obtaining sample complexity of model-free PG methods in the KF problem has remained a significant challenge.
This work addresses these challenges by introducing a receding-horizon PG (RHPG) algorithm and establishing its global convergence and sample complexity. In contrast to direct policy search, the RHPG algorithm integrates vanilla PG (or any other policy search directions) into a dynamic programming (DP) outer loop, which iteratively decomposes the infinite-horizon KF problem that is constrained and non-convex in the policy parameter into a sequence of static estimation problems that are unconstrained and strongly-convex. Then, we show that solving the sequence of static estimation problems results in the global convergence of RHPG toward the KF, which is the optimal linear filter. We further establish the total sample complexity of the RHPG to be \(\widetilde{\mathcal{O}}(\epsilon^{-2})\) for the learned filter to be \(\epsilon\)-close in policy distance to KF, which is the first sample complexity result of PG methods in the output-feedback control settings. Notably, the RHPG algorithm does not require any prior knowledge of the system to generate a valid initialization and does not require the target system to be open-loop stable. This removes two restrictive assumptions in the previous work [28]. We validate our theories by learning the KF design of a large-scale convection-diffusion model.
Compared to the preliminary results included in [1], this work presents a comprehensive study of RHPG as a model-free RL approach in learning estimator designs. In particular, our contributions, in addition to those listed in [1], are three-folded. First, we analyze the optimization landscape in Theorem 4.1, which clarifies the properties of quadratic programs in RHPG and provides a theoretical foundation for selecting the algorithmic parameters of PG methods. Second, we discuss the insights of the RHPG design in Sec. 3.2 and compare RHPG with standard PG methods regarding parametrization and landscape, computational efficiencies, and requirements on the simulation oracles. These discussions provide the intuitions behind the mathematical developments of RHPG and should benefit future research on improving the algorithm design and the theoretical analysis. Lastly, we open-source numerical experiments on the data-driven estimation of a large-scale dynamical system, which corroborates the theories and demonstrates both the effectiveness and scalability of RHPG.
This work attempts to develop RL algorithms specifically for control and estimation tasks with performance guarantees, by utilizing classic control theory in both algorithmic design and theoretical analyses. The dual theories and implementations of RHPG to the LQR problem have been presented in [17]. Through this line of work, we demonstrate the significant utilization of DP in overcoming the challenging optimization landscape and streamlining the analyses when deploying model-free PG methods to linear control and estimation tasks. Due to the separation principle [25], our results shed light on applying model-free PG methods in solving the LQG problem through a sequential design of controller and estimator.
The structure of the paper is as follows. In Sec. 2, we define the infinite- and finite-horizon settings of the KF problem and formulate them as policy optimization problems. In Sec. 3, we introduce the RHPG algorithm and provide the general theory and intuition that backs its design. In Sec. 4, we analyze the optimization landscape of solving the KF problem using RHPG and establish the global convergence and
sample complexity guarantees of the algorithm. Lastly, we present the numerical studies on a convection-diffusion model in Sec. 5. The paper ends with the concluding remarks of Sec. 6, and an appendix that contains formal proofs of the main results.
### Notations
For a square matrix \(X\), we denote its trace, spectral norm, condition number, and spectral radius by \(\mathrm{Tr}(X)\), \(\|X\|\), \(\kappa_{X}\), and \(\rho(X)\) resp. We define the \(W\)-induced norm of \(X\) as \(\|X\|_{W}^{2}:=\max_{z\neq 0}\frac{z^{\top}X^{\top}WXz}{z^{\top}Wz}\). If \(X\) is further symmetric, we use \(X>0\), \(X\geq 0\), \(X\leq 0\), and \(X<0\) to denote that \(X\) is positive definite (pd), positive semi-definite (psd), negative semi-definite (nsd), and negative definite (nd), resp. We use \(x\sim\mathcal{N}(\mu,\Sigma)\) to denote a Gaussian random vector with mean \(\mu\) and covariance \(\Sigma\). Lastly, we use \(\mathbf{I}\) and \(\mathbf{0}\) to denote the identity and zero matrices, resp., with appropriate dimensions.
## 2 Preliminaries
### Infinite-Horizon Kalman Filtering
Consider the discrete-time linear time-invariant system
\[x_{t+1}=Ax_{t}+w_{t},\quad y_{t}=Cx_{t}+v_{t}, \tag{2.1}\]
where \(x_{t}\in\mathbb{R}^{n}\) is the state, \(y_{t}\in\mathbb{R}^{m}\) is the output measurement, and \(w_{t}\sim\mathcal{N}(\mathbf{0},W)\), \(v_{t}\sim\mathcal{N}(\mathbf{0},V)\) are sequences of i.i.d. zero-mean Gaussian noises for some \(W,V>0\), also independent of each other. The initial state is also assumed to be a Gaussian random vector such that \(x_{0}\sim\mathcal{N}(\bar{x}_{0},X_{0})\), independent of \(\{w_{t},v_{t}\}\), with \(\bar{x}_{0}\neq\mathbf{0}\) and \(X_{0}>0\). Additionally, we assume that \((C,A)\) is observable and note that the condition \(W>0\) readily leads to controllability of \((A,W^{1/2})\), which is a standard condition in KF.
The KF problem aims to generate a sequence of estimated states, denoted by \(\hat{x}_{t}\) for each \(t\), that minimizes the infinite-horizon mean-square error (MSE):
\[\mathcal{J}_{\infty}:=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\bigg{\{}\sum_{t= 0}^{N}(x_{t}-\hat{x}_{t})^{\top}(x_{t}-\hat{x}_{t})\bigg{\}}. \tag{2.2}\]
Moreover, each \(\hat{x}_{t}\) can only depend on the history of output measurements up to, but not including, time \(t\), i.e., \(\{y_{0},\cdots,y_{t-1}\}\). The celebrated result of Kalman [23] showed that the \(\mathcal{J}_{\infty}\)-minimizing filter (which could also be called a 1-step predictor), which exists under the controllability and observability conditions, has the form of
\[\hat{x}_{t+1}^{*} =(A-L^{*}C)\hat{x}_{t}^{*}+L^{*}y_{t},\quad\hat{x}_{0}^{*}=\bar{x }_{0}, \tag{2.3}\] \[L^{*} =A\Sigma^{*}C^{\top}(V+C\Sigma^{*}C^{\top})^{-1}, \tag{2.4}\]
where \(L^{*}\) is the Kalman gain and \(\Sigma^{*}\) represents the unique pd solution to the filter algebraic Riccati equation (FARE):
\[\Sigma=A\Sigma A^{\top}-A\Sigma C^{\top}(V+C\Sigma C^{\top})^{-1}C\Sigma A^{ \top}+W. \tag{2.5}\]
Hence, without any loss of optimality, we can restrict the search to the class of filters of the form \(\hat{x}_{t+1}=A_{L}\hat{x}_{t}+B_{L}y_{t}\) and then parametrize the KF problem as a minimization problem over \(A_{L}\) and \(B_{L}\) subject to a stability constraint1
Footnote 1: Extending the results in this work to the setting with instantaneous feedback measurement (i.e., allowing \(\hat{x}_{t}\) to depend also on \(y_{t}\), and hence replacing \(y_{t}\) in (2.6) with \(y_{t+1}\)) would be straightforward.
\[\min_{A_{L},B_{L}}\quad\mathcal{J}_{\infty}(A_{L},B_{L})\quad\text{s.t.} \quad\hat{x}_{t+1}=A_{L}\hat{x}_{t}+B_{L}y_{t}\ \text{ and }\ \rho(A_{L})<1. \tag{2.6}\]
Note that by (2.3), there indeed exists a solution to (2.6) with \((A_{L}^{*},B_{L}^{*})=(A-L^{*}C,L^{*})\). Note also that when the pair \((A,C)\) is known, (2.6) involves an over-parametrization, since solving (2.6) is equivalent to optimizing a single variable \(B_{L}\). However, in the model-free setting where \((A,C)\) is unknown, which is the target setting of our paper, it is reasonable to parametrize the KF problem as in (2.6). Until now, obtaining sample complexity guarantees for model-free PG methods in solving the KF problem (2.6) has remained a major challenge.
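For concreteness, the Kalman gain above is straightforward to compute numerically when the model is known. The following is a minimal NumPy sketch (an illustration, not part of the paper's implementation) that obtains \(\Sigma^{*}\) by iterating the Riccati recursion to a fixed point of the FARE (2.5) and then forms \(L^{*}\) via (2.4); it serves as a model-based reference for the model-free results developed below.

```python
# Minimal NumPy sketch (not from the paper): model-based computation of the
# infinite-horizon Kalman gain L* by iterating the FARE (2.5) to a fixed point
# and applying (2.4). A, C, W, V are assumed to be given.
import numpy as np

def infinite_horizon_kf(A, C, W, V, tol=1e-12, max_iter=100_000):
    n = A.shape[0]
    Sigma = np.eye(n)  # any positive-definite initialization works under the stated assumptions
    for _ in range(max_iter):
        S = V + C @ Sigma @ C.T
        Sigma_next = A @ Sigma @ A.T - A @ Sigma @ C.T @ np.linalg.solve(S, C @ Sigma @ A.T) + W
        if np.linalg.norm(Sigma_next - Sigma) < tol:
            Sigma = Sigma_next
            break
        Sigma = Sigma_next
    L = A @ Sigma @ C.T @ np.linalg.inv(V + C @ Sigma @ C.T)
    return L, Sigma  # Kalman gain L* and FARE solution Sigma*
```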
### Finite-Horizon Kalman Filtering
We now discuss the finite-\(N\)-horizon KF problem, also described by the system dynamics (2.1). Adopting the same parametrization as in (2.6), but this time allowing time-dependence, and again without any loss of optimality, we represent the finite-horizon KF problem as a minimization problem over a sequence of time-varying filter parameters \(\{A_{L_{t}},B_{L_{t}}\}\), for all \(t\in\{0,\cdots,N-1\}\),
\[\min_{\{A_{L_{t}},B_{L_{t}}\}} \mathcal{J}\{(A_{L_{t}},B_{L_{t}})\}\!:=\!\mathbb{E}\bigg{\{}\sum _{t=0}^{N}(x_{t}-\hat{x}_{t})^{\top}(x_{t}-\hat{x}_{t})\bigg{\}} \tag{2.7}\] \[\text{s.t.}\quad\hat{x}_{t+1}=A_{L_{t}}\hat{x}_{t}+B_{L_{t}}y_{t},\quad\hat{x}_{0}=\bar{x}_{0}.\]
The minimum in (2.7) can be achieved by \((A_{L_{t}}^{*},B_{L_{t}}^{*})=(A-L_{t}^{*}C,L_{t}^{*})\), where \(L_{t}^{*}\) is the time-varying Kalman gain
\[L_{t}^{*}=A\Sigma_{t}^{*}C^{\top}(V+C\Sigma_{t}^{*}C^{\top})^{-1},\quad\Sigma_{0}^{*}=X_{0}. \tag{2.8}\] \[\Sigma_{t+1}^{*}=A\Sigma_{t}^{*}A^{\top}-A\Sigma_{t}^{*}C^{\top}(V +C\Sigma_{t}^{*}C^{\top})^{-1}C\Sigma_{t}^{*}A^{\top}\!+\!W. \tag{2.9}\]
The solutions \(\Sigma_{t}^{*}\), for all \(t\in\{0,\cdots,N-1\}\), generated by the filter Riccati difference equation (FRDE) (2.9) always exist and are unique and pd, due to \(V>0\), \(W>0\), and the iteration starts with \(\Sigma_{0}^{*}=X_{0}>0\).
## 3 Receding-Horizon Policy Gradient
### Kalman Filtering and Dynamic Programming
It is well known that the solution of the FRDE (2.9) converges monotonically to the stabilizing solution of the FARE (2.5) at an exponential rate [29, 30]. It then readily follows that the optimal time-varying filter \((A_{L_{t}}^{*},B_{L_{t}}^{*})\) of the finite-horizon KF problem (2.7) also converges monotonically to the time-invariant \((A_{L}^{*},B_{L}^{*})\) as \(N\to\infty\). We present this convergence result in the following theorem, which plays an important role in our algorithm design.
**Theorem 3.1**: _The finite-horizon Kalman gain as in (2.8) converges to the infinite-horizon Kalman gain defined in (2.4) exponentially fast as \(N\to\infty\). Specifically, using \(\|\cdot\|_{*}\) to denote the \(\Sigma^{*}\)-induced norm and letting_
\[N_{0}=\frac{1}{2}\cdot\frac{\log\big{(}\frac{\|X_{0}-\Sigma^{*}\|_{*}\kappa_{\Sigma^{*}}\cdot\|A_{L}^{*}\|\cdot\|C\|}{\epsilon\cdot\lambda_{\min}(V)}\big{)}}{\log\big{(}\frac{1}{\|A_{L}^{*}\|_{*}}\big{)}}+1, \tag{3.1}\]

_where \(\|A_{L}^{*}\|_{*}<1\), we have that \(\|L_{N-1}^{*}-L^{*}\|\leq\epsilon\) for all \(N\geq N_{0}\) and any \(\epsilon>0\). If additionally \(X_{0}>\Sigma\) holds, then \(L_{t}^{*}\) is stabilizing for all \(t\geq 0\) in the sense that \(\rho(A_{L_{t}^{*}})=\rho(A-L_{t}^{*}C)<1\)._

Figure 1: Algorithm 1, executed forward in time, constructs an \((h+1)\)-horizon KF problem from \(t=0\) to \(t=h+1\) at each iteration indexed by \(h\).
The proof of Theorem 3.1 is provided in §A. Theorem 3.1 quantifies how the different system parameters affect the non-asymptotic convergence rate of the time-varying filters to the time-invariant KF. It further demonstrates that if \(N\sim\mathcal{O}(\log(\epsilon^{-1}))\), then the filter \((A_{L_{N-1}}^{*},B_{L_{N-1}}^{*})\) will be \(\epsilon\)-close to the infinite-horizon KF \((A_{L}^{*},B_{L}^{*})\). Furthermore, if \(\epsilon\) is sufficiently small (i.e., smaller than the stability margin of \(A_{L}^{*}\)), then it holds that \(\rho(A_{L_{N-1}}^{*})<1\). When \(X_{0}>\Sigma\), the system is sufficiently excited, and as a result, the frozen filter at any \(t\geq 0\) is stable in the sense that \(\rho(A_{L_{t}^{*}})<1\).
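Theorem 3.1 can also be checked numerically. The sketch below (ours; it assumes the system matrices and the infinite-horizon gain \(L^{*}\) are available, e.g., from the routine above) runs the FRDE (2.8)-(2.9) forward and records the gap \(\|L_{t}^{*}-L^{*}\|\), which should decay exponentially in \(t\).

```python
# Sketch: run the FRDE (2.8)-(2.9) forward in time and record ||L_t* - L*||,
# which Theorem 3.1 predicts to decay exponentially. X0 is the initial covariance.
import numpy as np

def frde_gain_gaps(A, C, W, V, X0, L_star, N):
    Sigma_t, gaps = X0.copy(), []
    for _ in range(N):
        S = V + C @ Sigma_t @ C.T
        L_t = A @ Sigma_t @ C.T @ np.linalg.inv(S)      # time-varying Kalman gain (2.8)
        gaps.append(np.linalg.norm(L_t - L_star))
        Sigma_t = A @ Sigma_t @ A.T - A @ Sigma_t @ C.T @ np.linalg.solve(S, C @ Sigma_t @ A.T) + W  # (2.9)
    return gaps
```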
### Algorithm Design
Instead of solving the infinite-horizon KF problem (2.6) directly, we introduce the RHPG algorithm, which first selects a sufficiently large problem horizon \(N\) according to Theorem 3.1, and then constructs and solves \(N\) static estimation problems sequentially (see Figure 1 for an illustration) using PG methods. We describe the procedure of the RHPG algorithm below.
```
Require: Problem horizon \(N\)
1: Initialize \(A_{L_{0}},B_{L_{0}}\leftarrow\mathbf{0}_{n\times n},\mathbf{0}_{n\times m}\)
2: for \(h\in\{0,\cdots,N-1\}\) do
3:  Solve (3.2) using PG methods until convergence
4:  Use the convergent filter \(A_{L_{h}},B_{L_{h}}\) to warm-start PG updates for the next iteration
5: end for
6: return \(A_{L_{N-1}},B_{L_{N-1}}\)
```
**Algorithm 1** Receding-Horizon Policy Gradient (RHPG)
We provide detailed implementations of zeroth-order and first-order RHPG in Algorithms 2 and 3, respectively.
The RHPG algorithm is executed forward in time: in the first iteration, it learns the optimal filter for a one-step static estimation problem. Every subsequent RHPG iteration then extends the problem horizon by adding one additional time step after the initial one. Formally, at each iteration indexed by \(h\), the RHPG algorithm constructs an \((h+1)\)-horizon KF problem from \(t=0\) to \(t=h+1\), but fixes the filter parameters for all \(t\in\{0,\cdots,h-1\}\) to those generated in earlier iterations and only optimizes over the latest filter parameters \((A_{L_{h}},B_{L_{h}})\). This renders each iteration of the RHPG algorithm a static estimation problem that is quadratic in \((A_{L_{h}},B_{L_{h}})\).
Mathematically, for every \(h\in\{0,\cdots,N-1\}\), the RHPG algorithm solves the following minimization problems
\[\min_{A_{L_{h}},B_{L_{h}}}\mathcal{J}_{h} \mathrel{\mathop{:}}=\mathbb{E}_{x_{0},w_{t},v_{t},\theta_{0}} \Big{\{}\sum_{t=0}^{h+1}(x_{t}-\hat{x}_{t})^{\!\top}(x_{t}-\hat{x}_{t})\Big{\}} \tag{3.2}\] \[\mathrm{s.t.} \hat{x}_{t+1}=A_{L_{t}}^{*}\hat{x}_{t}+B_{L_{t}}^{*}y_{t},\ \forall t\in\{0,\cdots,h-2\},\ \hat{x}_{0}=\bar{x}_{0}\] \[x_{t+1}=Ax_{t}+w_{t},\ \forall t\in\{0,\cdots,h-2\},\ x_{0} \sim\mathcal{N}(\bar{x}_{0},X_{0})\] \[\hat{x}_{h}=A_{L_{h-1}}^{*}\hat{x}_{h-1}+B_{L_{h-1}}^{*}y_{h-1}+ \theta_{0}\] (3.3) \[x_{h}=Ax_{h-1}+w_{h-1}+\theta_{0}. \tag{3.4}\]
where \(\theta_{0}\sim\mathcal{N}(\mathbf{0},\Theta)\in\mathbb{R}^{n}\) is sampled independently of \(x_{0},w_{t},v_{t}\) and satisfies \(\Theta>0\). The purpose of injecting this additional "small" noise \(\theta_{0}\) in (3.3)-(3.4) is to ensure the strict convexity of the quadratic program (3.2); we formally justify this in Sec. 4.
Before presenting the theoretical analyses, we provide a few comments and discussions regarding the algorithmic designs of RHPG. From the optimization landscape perspective and compared to the vanilla
PG update that is trapped by suboptimal first-order stationary points, RHPG provides a new policy search direction that points toward the global optimum by learning the KF design step by step, where each subproblem admits a unique global optimum due to the quadratic landscape. See Figure 2 for an illustration. Our algorithmic design shares a similar flavor with the curriculum learning literature [31, 32, 33], where an agent (controller/estimator) evolves by mastering the simplest tasks first and then gradually conquering tasks that are more and more challenging. In our case, the RHPG algorithm first learns, from scratch, a filter capable of predicting only the immediate next state, but then keeps adapting/evolving to handle new filtering tasks with longer and longer problem horizons (cf. the warm-start step in Line 4 of Algorithm 1). When the problem horizon becomes sufficiently large (as characterized by Theorem 3.1), the filter converges globally, and the behaviors of the infinite-horizon KF, such as closed-loop stability, begin to emerge. Lastly, since RHPG starts by learning the simplest static estimation task and every subproblem is unconstrained, RHPG does not require any specific filter for initialization. In other words, an arbitrary initialization suffices when searching for a static estimator.
On the computational side, it may seem at first glance that RHPG is less efficient than vanilla PG since it solves \(N\) optimization problems instead of one. This, however, turns out not to be the case. When applying (sample-based) vanilla PG to the infinite-horizon objective \(\mathcal{J}_{\infty}\) directly, the rollout length must typically be a very large finite number so that an accurate PG estimate can be obtained. The rollout length for RHPG is 1 in its first iteration, which coincides with the iteration that needs the largest number of PG steps since it learns an optimal one-step static estimator _from scratch_. The rollout length remains very small in the first few iterations, where the time-varying filters (2.8) are distinct across time. When the rollout length becomes moderate, only a few PG updates are needed to fine-tune the filter; see Figure 2. Due to the much shorter rollout trajectories, the computational efficiency of RHPG is comparable to, if not better than, that of vanilla PG. We provide the complexity analysis in Sec. 4.
Lastly, we discuss the requirements for the simulation oracle. To sample the gradients, we require the standard assumption that the user has access to a simulator such that for any input filter \((A_{L_{h}},B_{L_{h}})\), the simulator can return an empirical value of the objective function (3.2). This requires the simulator to generate exact state trajectories of the simulated model, but it only reveals a noisy scalar objective value to the learning algorithm. The requirement is reasonable for the offline learning setting since the algorithm does not use any system information directly. However, building a simulator naturally requires knowledge of the system model, which could be exact, approximate, or simplified. Transferring the simulated policies to a real system (a.k.a., Sim2Real) that might exhibit different dynamics requires a _provable_ robustness
Figure 2: Illustration of the different optimization landscapes and search directions under vanilla PG and RHPG. *Note that this graph does not imply that the cost values of \(\mathcal{J}_{h}\) are higher than that of \(\mathcal{J}_{\infty}\).
guarantee of the learned controller/estimator, which is beyond the scope of the present paper, but it is an important current and future research topic [21, 22, 34, 35, 36, 37].
### Bias of Model-Free Receding-Horizon Filtering
The RHPG algorithm is backed by Bellman's principle of optimality, which requires solving each iteration exactly. However, iterative algorithms such as PG can only return an \(\epsilon\)-accurate solution in a finite time. We generalize the dynamic programming principle in the following theorem to analyze how computational errors accumulate in the forward DP process and provide an optimality guarantee for the filter that the RHPG algorithm returns.
**Theorem 3.2**: _Choose the problem horizon \(N\) following Theorem 3.1 and assume that one can compute, for all \(h\in\{0,\cdots,N-1\}\) and some \(\epsilon>0\), filter \((\widetilde{A}_{L_{h}},\widetilde{B}_{L_{h}})\) that satisfies_
\[\big{\|}\widetilde{A}_{L_{h}}-\widetilde{A}_{L_{h}}^{*}\big{\|},\big{\|} \widetilde{B}_{L_{h}}-\widetilde{B}_{L_{h}}^{*}\big{\|}\sim\mathcal{O}( \epsilon\cdot\texttt{poly}(\text{system parameters})),\]
_where \((\widetilde{A}_{L_{h}}^{*},\widetilde{B}_{L_{h}}^{*})\) is the unique minimizing solution of \(\mathcal{J}_{h}\) in (3.2), after setting the filters for all \(t\in\{0,\cdots,h-2\}\) to be those computed in the previous iterations and are \(\epsilon\)-close to the minimum of \(\mathcal{J}_{0},\cdots,\mathcal{J}_{h-1}\), respectively. Then, the RHPG algorithm outputs \((\widetilde{A}_{L_{N-1}},\widetilde{B}_{L_{N-1}})\) that satisfies \(\big{\|}[\widetilde{A}_{L_{N-1}}\;\widetilde{B}_{L_{N-1}}]-[A_{L}^{*}\;B_{L} ^{*}]\big{\|}\leq\epsilon\), where \((A_{L}^{*},B_{L}^{*})\) represents the infinite-horizon KF. If further \(\epsilon\) is sufficiently small such that \(\epsilon<1-\|A_{L}^{*}\|_{*}\), then \(\widetilde{A}_{L_{N-1}}\) satisfies \(\rho(\widetilde{A}_{L_{N-1}})<1\)._
We illustrate Theorem 3.2 in Figure 3 and defer its proof to §B. Theorem 3.2 guarantees that if every iteration of the DP is solved to \(\mathcal{O}(\epsilon)\)-accuracy, then the convergent filter after completing the \(N\)-step DP procedure is at most \(\epsilon\)-away from the exact infinite-horizon KF. We note that the RHPG algorithm utilizes two layers of approximation. First, the solution to the infinite-horizon KF problem (2.6) is approximated by the solution to a finite-horizon KF problem (2.7), where we choose \(N\sim\mathcal{O}(\log(\epsilon^{-1}))\) due to the exponential attraction of the Riccati equation. Then, we solve the finite-horizon KF problem (2.7) by integrating forward DP with model-free policy search. Combining the two steps addresses the infinite-horizon KF task with a provable global convergence guarantee using only samples of system trajectories.
## 4 Optimization Landscape, Convergence, and Sample Complexity
We first present the optimization landscape of the static estimation problem (3.2) in the following theorem.
Figure 3: As illustrated, Theorem 3.1 bounds the distance between \(L_{N-1}^{*}\) and \(L^{*}\), and Theorem 3.2 analyzes the forward propagation of the optimization errors from each iteration of RHPG. Combining the two theorems upper-bounds the total policy gap to the infinite-horizon KF.
**Theorem 4.1**: _For every \(h\in\{0,\cdots,N-1\}\), the quadratic objective \(\mathcal{J}_{h}\) defined in (3.2) is twice continuously differentiable, and its Hessian matrix can be represented as_
\[H_{h} =\mathbb{E}_{x_{h},\hat{x}_{h},v_{h}}\begin{bmatrix}\hat{x}_{h} \hat{x}_{h}^{\top}&\hat{x}_{h}y_{h}^{\top}\\ y_{h}\hat{x}_{h}^{\top}&y_{h}y_{h}^{\top}\end{bmatrix}\] \[=\begin{bmatrix}\mu_{\hat{x}_{h}}\mu_{\hat{x}_{h}}^{\top}+\Theta& (\mu_{\hat{x}_{h}}\mu_{x_{h}}^{\top}+\Theta)C^{\top}\\ C(\mu_{x_{h}}\mu_{\hat{x}_{h}}^{\top}+\Theta)&C(\mu_{x_{h}}\mu_{x_{h}}^{\top}+ \Theta)C^{\top}+V\end{bmatrix}>0,\]
_where we have used \(\mu_{\hat{x}_{h}}\) and \(\mu_{x_{h}}\) to denote \(\mathbb{E}[\hat{x}_{h}]\) and \(\mathbb{E}[x_{h}]\), respectively, and \(\Theta>0\) is the covariance matrix of the zero-mean Gaussian random vector \(\theta_{0}\) in (3.3)-(3.4). Moreover, the objective function \(\mathcal{J}_{h}\) is strongly convex with constant \(\lambda_{\min}(H_{h})\) and smooth with constant \(\lambda_{\max}(H_{h})\). Lastly, introducing the additional random vector \(\theta_{0}\) is without any loss of optimality in the sense that the time-varying KF characterized by (2.8)-(2.9) represents the unique minimum of (3.2)._
The proof of Theorem 4.1 is deferred to Sec. C, where an extended discussion on the effect of \(\theta_{0}\) is also provided. In short, introducing an additional "small" \(\theta_{0}\) in (3.3)-(3.4) ensures the strict convexity of the quadratic objective \(\mathcal{J}_{h}\) with respect to \(A_{L_{h}}\), while \(\mathcal{J}_{h}\) is strictly convex in \(B_{L_{h}}\) with or without \(\theta_{0}\) due to the condition \(V>0\).
Denote \(\pi_{t}=\big{[}A_{L_{t}}\mid B_{L_{t}}\big{]}\). We define the analytic vanilla PG of \(\mathcal{J}_{h}\) for every \(h\in\{0,\cdots,N-1\}\) to be2
Footnote 2: Note that one can use any other policy search directions such as natural PG [3] or least-squares policy iteration [38] to replace vanilla PG in RHPG.
\[\nabla_{\pi_{h}}\mathcal{J}_{h}(\pi_{h})=2\Big{[}\pi_{h}(\Psi_{h}+ \Delta)-(G_{h}+\Xi)\Big{]}, \tag{4.1}\]
where
\[\Delta=\begin{bmatrix}\Theta&\Theta C^{\top}\\ \hline C\Theta&C\Theta C^{\top}\end{bmatrix},\ \Xi=\begin{bmatrix}\ A\Theta&A \Theta C^{\top}\end{bmatrix} \tag{4.2}\] \[\Psi_{t}=\begin{bmatrix}\ \ \mathrm{Var}(\hat{x}_{t})& \ \mathrm{Cov}(x_{t},\hat{x}_{t})^{\top}C^{\top}\\ \hline C\,\mathrm{Cov}(x_{t},\hat{x}_{t})&C\,\mathrm{Var}(x_{t})C^{\top}+V \end{bmatrix}\] (4.3) \[G_{t}=\begin{bmatrix}\ A\,\mathrm{Cov}(x_{t},\hat{x}_{t})&A\, \mathrm{Var}(x_{t})C^{\top}\end{bmatrix}\] (4.4) \[\mathrm{Var}(\hat{x}_{t+1})=\pi_{t}\Psi_{t}\pi_{t}^{\top},\ \mathrm{Var}(x_{t+1})=A\, \mathrm{Var}(x_{t})A^{\top}+W\] \[\mathrm{Cov}(x_{t+1},\hat{x}_{t+1})=A\big{[}\,\mathrm{Cov}(x_{t},\hat{x}_{t})\mid\mathrm{Var}(x_{t})C^{\top}\big{]}\pi_{t}^{\top}\] \[\mathrm{Var}(\hat{x}_{0})=\mathrm{Cov}(x_{0},\hat{x}_{0})=\bar{x }_{0}\bar{x}_{0}^{\top},\ \mathrm{Var}(x_{0})=\bar{x}_{0}\bar{x}_{0}^{\top}+X_{0}.\]
We next define the vanilla PG update as
\[\pi_{h}^{\prime}=\pi_{h}-\eta_{h}\cdot\nabla_{\pi_{h}}\mathcal{J}_{h}(\pi_{h}). \tag{4.5}\]
where \(\eta_{h}>0\) is a constant stepsize. When the exact PG in (4.1) is not available, it can be estimated from samples of system trajectories using (two-point) zeroth-order optimization techniques as described in Algorithm 2. Due to the landscape properties listed in Theorem 4.1, global convergence and sample complexity of the PG update (4.5) and its zeroth-order implementation naturally follow. We present them in the following propositions.
**Proposition 4.2**: _For all \(h\in\{0,\cdots,N-1\}\) and a fixed \(\epsilon>0\), choose a constant stepsize \(\eta_{h}\leq 1/\lambda_{\max}(H_{h})\). Then, the PG update (4.5) converges linearly to the unique minimum of (3.2); that is, \(\|\pi_{h}^{T_{h}}-\widetilde{\pi}_{h}^{*}\|\leq\epsilon\) after a total number of \(T_{h}\sim\mathcal{O}(\log(\epsilon^{-1}))\) iterations, where \(\widetilde{\pi}_{h}^{*}\) is the unique minimizing solution of (3.2) after setting the filters for all \(t\in\{0,\cdots,h-2\}\) to be those computed in the previous iterations, which are \(\epsilon\)-close to the respective unique minima of \(\mathcal{J}_{0},\cdots,\mathcal{J}_{h-1}\)._
**Proposition 4.3**: _Choose the smoothing radius of the zeroth-order PG to satisfy \(r_{h}\sim\mathcal{O}(\epsilon)\) and the stepsize \(\eta_{h}\sim\mathcal{O}(\epsilon^{2})\). Then, the zeroth-order PG update in Algorithm 2 converges after \(T_{h}\sim\widetilde{\mathcal{O}}(\epsilon^{-2}\log(\frac{1}{\delta\epsilon^{2}}))\) iterations, in the sense that \(\left\|\pi_{h}^{T_{h}}-\widetilde{\pi}_{h}^{*}\right\|\leq\epsilon\) with a probability of at least \(1-\delta\)._
Proposition 4.2 is standard, and Proposition 4.3 follows from the proof of Proposition 3.3 in [17]. Combining Theorem 3.2 with Proposition 4.3, we conclude that if we spend \(\widetilde{O}(\epsilon^{-2})\) samples in solving every one-step KF problem to \(\mathcal{O}(\epsilon)\)-accuracy with a probability of \(1-\delta\), for all \(h\in\{0,\cdots,N-1\}\), then Algorithm 1 is guaranteed to output \(\pi_{N-1}\) that is \(\epsilon\)-close to the infinite-horizon KF with a probability of at least \(1-N\delta\). The total sample complexity of the RHPG algorithm is thus \(\widetilde{O}(\epsilon^{-2})\cdot O(\log(\epsilon^{-1}))\sim\widetilde{O}( \epsilon^{-2})\). This complexity result matches the complexity of applying RHPG to the LQR task [17].
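For concreteness, the (two-point) zeroth-order PG estimate used above can be sketched as follows. This is a generic spherical-smoothing estimator rather than a verbatim transcription of Algorithm 2; `J_hat` stands for the simulation oracle that returns a noisy scalar value of the objective (3.2) for a flattened filter parameter `pi`.

```python
# Generic two-point zeroth-order gradient estimator (a sketch, not Algorithm 2).
# J_hat: noisy scalar oracle for the objective (3.2); pi: flattened filter [A_Lh | B_Lh];
# r: smoothing radius. The estimate averages d/(2r) * (J(pi + r*u) - J(pi - r*u)) * u.
import numpy as np

def zeroth_order_gradient(J_hat, pi, r, num_samples=100, seed=0):
    rng = np.random.default_rng(seed)
    d = pi.size
    grad = np.zeros(d)
    for _ in range(num_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                  # uniform direction on the unit sphere
        grad += (J_hat(pi + r * u) - J_hat(pi - r * u)) * u * d / (2.0 * r)
    return grad / num_samples
```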
## 5 Numerical Experiment: Estimation of the Convection-Diffusion Model
We conducted numerical experiments to design state estimators for the one-dimensional convection-diffusion linear PDE3. The convection-diffusion equation models physical phenomena involving the transfer of particles, energy, or other quantities within a system due to convection and diffusion. These quantities are described by a continuous concentration function \(c(\mathsf{x},t):\Omega\times\mathbb{R}^{+}\to\mathbb{R}\), where \(\mathsf{x}\) and \(t\) represent spatial and temporal coordinates, respectively, and \(\Omega\subset\mathbb{R}\) is the spatial domain of interest. The one-dimensional convection-diffusion equation can then be expressed as
Footnote 3: We open-source the code repository at [https://github.com/xiangyuan-zhang/LearningKF](https://github.com/xiangyuan-zhang/LearningKF)
\[\frac{\partial c}{\partial t}=\nu\frac{\partial^{2}c}{\partial\mathsf{x}^{2} }-v\frac{\partial c}{\partial\mathsf{x}}, \tag{5.1}\]
where \(\nu\) is the diffusion coefficient and \(v\) is the convection velocity; these scalar physical parameters characterize the strength of convection and diffusion, respectively. When \(v=0\), the convection-diffusion equation (5.1) reduces to the heat equation. As with any PDE, the convection-diffusion equation must be accompanied by initial and boundary conditions. Here, we consider the domain \(\Omega=[0,1]\) with periodic boundary conditions, while the initial condition will be defined shortly.
The convection-diffusion equation can be solved numerically by discretizing space and time, resulting in a linear state-space model of the form (2.1). To do so, we define a state vector \(x_{t}\in\mathbb{R}^{n}\) that contains the values of \(c\) at \(n\) equally-spaced points in \(\Omega\) and at time \(t=k\Delta t\), where \(n\) is even, \(k=0,1,\dots,\) and \(\Delta t\) is the discrete time step. The dynamics governed by the convection-diffusion equation can then be approximated by a state-space model \(x_{t+1}=Ax_{t}\) with
\[A=\frac{1}{n}\cdot D^{\dagger}\mathrm{diag}(e^{(-ivk_{\mathsf{x}}-\nu k_{\mathsf{x}}^{2})\Delta t})D, \tag{5.2}\]
where \(k_{\mathsf{x}}=2\pi[0,\dots,n/2-1,0,-n/2+1,\dots,-1]\) is the vector of spatial wavenumbers, \(i\) is the imaginary unit, \(D\) is the discrete Fourier transform (DFT) matrix defined by \(D_{pq}=e^{-2\pi i(p-1)(q-1)/n}\), and its scaled conjugate transpose \(D^{\dagger}/n\) is the inverse discrete Fourier transform (IDFT) matrix [39]. The matrix \(A\) combines a spectral discretization of the spatial derivatives in the convection-diffusion equation, which takes
advantage of the periodicity of the spatial domain with an exact temporal integration of its continuous-time dynamics. Such spectral evaluation of the derivatives enjoys exponential convergence properties [40]. As a result, even a small state dimension \(n\) yields a state-space model that faithfully reproduces the dynamics of the convection-diffusion equation.
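As an illustration, the matrix \(A\) in (5.2) can be assembled directly with the FFT. The sketch below is ours, with default values matching the experimental setup described next; it is not taken from the released repository.

```python
# Sketch (not the authors' code): assemble the state matrix A of (5.2) with the DFT.
import numpy as np

def convection_diffusion_A(n=200, dt=0.05, nu=2e-3, v=5e-2):
    # wavenumbers k_x = 2*pi*[0, ..., n/2-1, 0, -n/2+1, ..., -1]
    k = 2 * np.pi * np.concatenate([np.arange(n // 2), [0], np.arange(-n // 2 + 1, 0)])
    D = np.fft.fft(np.eye(n))                        # DFT matrix, D[p, q] = exp(-2*pi*i*p*q/n)
    multipliers = np.exp((-1j * v * k - nu * k**2) * dt)
    A = (D.conj().T / n) @ np.diag(multipliers) @ D  # (1/n) * D^dagger * diag(...) * D
    return A.real                                    # imaginary part vanishes up to round-off
```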
To set up the numerical experiments, we chose \(\Delta t=0.05\), and set the dimensions of the state vector \(x_{t}\) and the measurement/observation vector to \(n=200\) and \(m=5\), respectively. The five sensors were evenly distributed across the physical domain \(\Omega\), with each sensor measuring the (unscaled) value of the state at the corresponding location, subject to additive zero-mean Gaussian white noise. Moreover, we set the diffusion coefficient to \(\nu=2\times 10^{-3}\), the convection velocity to \(v=5\times 10^{-2}\), and the distribution of \(x_{0}\) to be
\[x_{0}\sim\mathcal{N}\Big{(}c(\mathsf{x},t=0),\frac{1}{16}\cdot \sin(2\pi\mathsf{x})(\sin(2\pi\mathsf{x}))^{\top}\Big{)} \tag{5.3}\] \[c(\mathsf{x},t=0)=\operatorname{sech}(10(\mathsf{x}-1/2)). \tag{5.4}\]
Furthermore, we set the covariance matrix of the measurement noise to \(V=10^{-1}\cdot\mathbf{I}\) and the covariance matrix of the process noise to \(W=10^{-9}\cdot\mathbf{I}\). Lastly, we set the covariance matrix of the additional noise \(\theta_{0}\) in (3.3)-(3.4) to \(\Theta=10^{-2}\cdot\mathbf{I}\).
After setting up the PDE environment, we applied the RHPG algorithm to learn the KF design with several different problem horizons ranging from \(N=1\) to \(N=101\). We implemented the inner loop of the RHPG algorithm using the first-order vanilla PG update (4.5) with the stepsize selected based on the Adam rule [41]. We detail the procedure of the first-order RHPG algorithm in Algorithm 3.
In the first experiment, we generated a ground-truth system trajectory with a length of 700 discrete time steps and with a deterministic initial condition \(x_{0}=c(\mathsf{x},t=0)\) as in (5.4). Then, we visualized the estimated state trajectories computed using the convergent filters from RHPG with several different input horizons \(N\in\{1,2,3,6,11,21,31,51,101\}\), where for all filters the initial state estimate was set to \(\hat{x}_{0}=c(\mathsf{x},t=0)\). In Figure 5, we compare the trajectories estimated by the RL-based filters with the ground-truth trajectory and with the estimated trajectory generated by the infinite-horizon KF (2.3)-(2.4). One can observe from the first row of Figure 5 that when \(N\) is small, the convergent filters of RHPG are myopic and can only predict the system state over a short period, since the convergence of RHPG in the initial iterations is toward the time-varying KFs in (2.8). As we increase the number of RHPG iterations, the filter adapts to new tasks with longer and longer problem horizons (cf. the illustration in Figure 2). Lastly, one can observe from the third row of Figure 5 that when the problem horizon becomes sufficiently large, the RHPG algorithm converges globally to the infinite-horizon KF, which corroborates the theories developed in this paper.
In the second experiment, we generated 100 random system trajectories of the convection-diffusion equation for 700 time steps, each starting from a randomly sampled initial condition according to (5.3). We then applied the convergent filters of RHPG as well as the model-based KF to estimate the 100 system trajectories and plotted the average estimation error (i.e., \(\|x_{t}-\hat{x}_{t}\|_{2}\)) over time. As shown in Figure 6, the RHPG filters with a small problem horizon \(N\) are myopic, resulting in lower estimation costs over the
Figure 4: Eigen-spectrum of \(A\) constructed in (5.2), where \(\rho(A)=1\) since the integrated concentration over the domain is conserved by the convection-diffusion equation.
short time period. However, these myopic filters are not able to regulate the state estimation error in the asymptotic regime. When we run RHPG for a sufficiently large number of steps (e.g., 51 and 101), the learned filter performs well in the asymptotic regime due to the global convergence of RHPG to the infinite-horizon KF.
## 6 Conclusion
We have introduced the RHPG algorithm and provided rigorous analyses for its convergence and sample complexity in learning the infinite-horizon KF. RHPG is the first model-free PG algorithm with provable global convergence in learning the KF design, and it does not require any prior knowledge of the system for initialization and does not require the target system to be open-loop stable. We have validated our theories in extensive numerical experiments on a large-scale convection-diffusion model. On a higher level, our work has proposed developing RL algorithms specifically for control applications by utilizing classic control theory in the algorithmic design. The proposed paradigm enables certifying provable performance guarantees in a purely model-free setting, overcoming the nonconvex optimization landscape. Following this work and the dual theory to LQR [17], several ongoing and future research directions include designing and analyzing RHPG-type algorithms for LQG and \(\mathcal{H}_{\infty}\)-robust filtering.
## Acknowledgment
X. Zhang and T. Basar were supported in part by the US Army Research Laboratory (ARL) Cooperative Agreement W911NF-17-2-0181, and in part by the Army Research Office (ARO) MURI Grant AG285. S. Mowlavi and M. Benosman were supported solely by Mitsubishi Electric Research Laboratories. X. Zhang acknowledges helpful discussions with Bin Hu of UIUC in the early stage of the project and with Arvind Raghunathan of MERL. X. Zhang and T. Basar acknowledge anonymous reviewers of ACC '23 for their helpful comments. |
2309.04825 | Few-Shot Medical Image Segmentation via a Region-enhanced Prototypical
Transformer | Automated segmentation of large volumes of medical images is often plagued by
the limited availability of fully annotated data and the diversity of organ
surface properties resulting from the use of different acquisition protocols
for different patients. In this paper, we introduce a more promising few-shot
learning-based method named Region-enhanced Prototypical Transformer (RPT) to
mitigate the effects of large intra-class diversity/bias. First, a subdivision
strategy is introduced to produce a collection of regional prototypes from the
foreground of the support prototype. Second, a self-selection mechanism is
proposed to incorporate into the Bias-alleviated Transformer (BaT) block to
suppress or remove interferences present in the query prototype and regional
support prototypes. By stacking BaT blocks, the proposed RPT can iteratively
optimize the generated regional prototypes and finally produce rectified and
more accurate global prototypes for Few-Shot Medical Image Segmentation (FSMS).
Extensive experiments are conducted on three publicly available medical image
datasets, and the obtained results show consistent improvements compared to
state-of-the-art FSMS methods. The source code is available at:
https://github.com/YazhouZhu19/RPT. | Yazhou Zhu, Shidong Wang, Tong Xin, Haofeng Zhang | 2023-09-09T15:39:38Z | http://arxiv.org/abs/2309.04825v1 | # Few-Shot Medical Image Segmentation via a Region-enhanced Prototypical Transformer
###### Abstract
Automated segmentation of large volumes of medical images is often plagued by the limited availability of fully annotated data and the diversity of organ surface properties resulting from the use of different acquisition protocols for different patients. In this paper, we introduce a more promising few-shot learning-based method named Region-enhanced Prototypical Transformer (RPT) to mitigate the effects of large intra-class diversity/bias. First, a subdivision strategy is introduced to produce a collection of regional prototypes from the foreground of the support prototype. Second, a self-selection mechanism is proposed to incorporate into the Bias-alleviated Transformer (BaT) block to suppress or remove interferences present in the query prototype and regional support prototypes. By stacking BaT blocks, the proposed RPT can iteratively optimize the generated regional prototypes and finally produce rectified and more accurate global prototypes for Few-Shot Medical Image Segmentation (FSMS). Extensive experiments are conducted on three publicly available medical image datasets, and the obtained results show consistent improvements compared to state-of-the-art FSMS methods. The source code is available at: [https://github.com/YazhouZhu19/RPT](https://github.com/YazhouZhu19/RPT).
Keywords:Few-Shot Learning Medical Image Segmentation Bias Alleviation Transformer
## 1 Introduction
Automatic medical image segmentation is the implementation of data-driven image segmentation concepts to identify a specific anatomical structure's surface or volume in a medical image ranging from X-ray and ultrasonography to CT and MRI scans. Deep learning algorithms are exquisitely suited for this task because they can generate measurements and segmentations from medical images without the time-consuming manual work as in traditional methods. However, the performance of deep learning algorithms depends heavily on the availability of large-scale, high-quality, fully pixel-wise annotations, which are often expensive to acquire. To this end, few-shot learning is considered as a more promising approach and introduced into the medical image segmentation by [13].
Existing FSMS algorithms [3, 4, 5, 17, 16, 19] can be grouped into two categories: interactive methods originating from SE-Net [15] (shown in Fig. 1(a)) and prototypical networks [18, 20] (demonstrated in Fig. 1(b)). In the interaction-based approach, the ideas of _attention_[19] and _contrastive learning_[22] are introduced to work interactively between parallel support and query arms. In contrast, the prototypical network-based approach largely dominates FSMS research, with methods such as SSL-ALPNet [13], ADNet [5] and SR&CL [21], whose core idea is to obtain semantic-level prototypes by compressing support features and then to make predictions by matching these prototypes with query features. However, the problem of how to obtain an accurate and representative prototype remains.
The main reason affecting the representativeness of the prototype is the significant discrepancy between support and query. In general, different acquisition protocols are used for different patients, which results in a variety of superficial organ appearances, including the _size_, _shape_, and _contour_ of features. In this case, the prototype generated from the support features may not accurately represent the key attributes of the target organ in the query image. In addition, it is also challenging to extract useful information (prototypes of novel classes) from the cluttered background due to the extremely heterogeneous texture between the target and its surroundings, which may contain information belonging to some novel classes or redundant information [19].
To mitigate the impact of intra-class diversity, we subdivide the foreground of the support prototype to produce several regional prototypes, which are then rectified to suppress or exclude areas inconsistent with the query targets, as illustrated in Fig. 1(c). Concretely, in the prototype learning stage, multiple subdivided regional prototypes are enhanced with a more accurate class center, derived from the newly designed Regional Prototype Generation (RPG) and Query Prototype Generation (QPG) modules. Then, a Region-enhanced Prototypical Transformer (RPT), mainly composed of stacked Bias-alleviated Transformer (BaT) blocks, each of which contains a core debiasing Search and Filter (S&F) module, is designed to filter out undesirable prototypes. The overall architecture is shown in Fig. 2. Our contributions are summarized as follows:
Figure 1: Comparison between previous FSMS models and our model. (a) Interactive model. (b) Prototypical network based model. (c) Our proposed model.
* A Region-enhanced Prototypical Transformer (RPT) consisting of stacked Bias-alleviated Transformer (BaT) blocks is proposed to mitigate the effects of large intra-class variations present in FSMS through Search and Filter (S&F) modules devised based on the self-selection mechanism.
* A subdivision strategy is proposed to perform in the foreground of the support prototype to generate multiple regional prototypes, which can be further iteratively optimized by the RPT to produce the optimal prototype.
* The proposed method can achieve state-of-the-art performance on three experimental datasets commonly used in medical image segmentation.
## 2 Methodology
### Overall Architecture
Before introducing the overall architecture, it is necessary to briefly explain how the data are processed. Specifically, the 3D supervoxel clustering method [5] is employed to generate pseudo-masks as supervision, so that the model is trained in a self-supervised manner without any manual annotations. Meta-learning-based episodic tasks can then be constructed using the generated pseudo-masks. Notably, the pseudo-masks obtained by the 3D clustering method are more consistent with the volumetric properties of medical images than those from the 2D superpixel clustering method adopted in [13].
As depicted in Fig. 2, the overall architecture includes three main components: the Regional Prototype Generation (RPG) module, the Query Prototype Generation (QPG) module and the Region-enhanced Prototypical Transformer (RPT) consisting of three Bias-alleviated Transformer (BaT) blocks. The pipeline first extracts features from support and query images using a weight-shared ResNet-101 [6] as a backbone, which has been pretrained on the MS-COCO dataset [10]. We employ the ResNet-101 pretrained on MS-COCO for optimal performance, and a comparison with ResNet-50 pretrained on the ImageNet dataset [2] is also included in the appendix. The extracted features are then taken as the input of the RPG and QPG modules to generate multiple regional prototypes, which will be rectified by the following RPT to produce the optimal prototype.
### Regional Prototype Generation
The core problem considered in this paper is what causes prototype bias. By examining the input data, it can be observed that images of both healthy and diseased organs may serve as either support or query. This means that if some areas of the support images contain lesioned or edematous regions, they will be regarded as biased information that in reality cannot be accurately transferred to query images containing only healthy organs. When such prototypes, which carry the natural heterogeneity of the input images, are processed by the Masked Average Pooling (MAP) operation, they inevitably lead to significant intra-class biases.
To cope with the above problems, we propose a Region Prototype Generation (RPG) module to generate multi-region prototypes by performing subdivisions in the foreground of the support images. Given an input support image \(\mathbf{I}_{s}\) and the corresponding foreground mask \(\mathcal{M}^{f}\), the foreground of this image can be obtained by calculating their product. The foreground image then can be partitioned into \(N_{f}\) regions, where \(N_{f}\) is set to 10 by default. By using the Voronoi-based partition method [1, 23], a set of regional masks \(\left\{\mathcal{V}_{n}\right\}_{n=1}^{N_{f}}\) can be derived for subsequent use of Masked Average Pooling (MAP) to generate a set of coarse regional prototypes \(\hat{\mathcal{P}}_{s}=\left\{\hat{p}_{n}\right\}_{n=1}^{N_{f}},\hat{p}_{n}\in \mathbb{R}^{C}\). Formally,
\[\hat{p}_{n}=\text{MAP}(\mathbf{F}_{s},\mathcal{V}_{n})=\frac{1}{|\mathcal{V}_{ n}|}\sum_{i=1}^{HW}\mathbf{F}_{s,i}\mathcal{V}_{n,i}, \tag{1}\]
where \(\mathbf{F}_{s}\in\mathbb{R}^{C\times H\times W}\) is the feature extracted from the support images and \(\mathcal{V}_{n}\) denotes the regional masks.
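A minimal PyTorch-style sketch of Eq. (1) (an illustration, not the released implementation) computes all regional prototypes at once by flattening the spatial dimensions:

```python
# Sketch of Eq. (1): masked average pooling of a support feature map over N_f
# regional masks, producing the coarse regional prototypes.
import torch

def regional_prototypes(F_s, V):
    # F_s: [C, H, W] support feature; V: [N_f, H, W] binary regional masks
    C, H, W = F_s.shape
    F_flat = F_s.reshape(C, H * W)                       # [C, HW]
    V_flat = V.reshape(V.shape[0], H * W).float()        # [N_f, HW]
    summed = V_flat @ F_flat.t()                         # [N_f, C]
    counts = V_flat.sum(dim=1, keepdim=True).clamp(min=1.0)
    return summed / counts                               # coarse regional prototypes, [N_f, C]
```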
### Query Prototype Generation
Once a set of coarse regional prototypes \(\hat{\mathcal{P}}_{s}\) have been generated for the support images, we can employ the method introduced in [11] to learn the coarse query prototype \(\hat{\mathbf{P}}_{q}\in\mathbb{R}^{1\times C}\). Concretely, it first uses the \(\text{MAP}(\cdot)\) operator as introduced in Eq. (1) to learn a global support prototype \(\mathbf{P}_{g}=\text{MAP}(\mathbf{F}_{s},\mathcal{M}_{s})\)
Figure 2: Overview of the proposed Region-enhanced Prototypical Transformer.
with \(\mathbf{P}_{g}\in\mathbb{R}^{1\times C}\), whose output can then be used to calculate the coarse query foreground mask \(\hat{\mathcal{M}}_{q}^{f}\). Considering that the empirically designed threshold described in [11] may affect the quality of the \(\hat{\mathcal{M}}_{q}^{f}\), we hereby introduce a learnable threshold \(\tau\). This process can be denoted as
\[\hat{\mathcal{M}}_{q}^{f}=1-\sigma(S(\mathbf{F}_{q},\mathbf{P}_{g})-\tau), \tag{2}\]
where \(\mathbf{F}_{q}\in\mathbb{R}^{C\times H\times W}\) is the feature extracted from the query images, \(S(a,b)=-\alpha cos(a,b)\) is the negative cosine similarity with a fixed scaling factor \(\alpha=20\), \(\sigma\) denotes the Sigmoid activation, and \(\tau\) is obtained by applying one average-pooling and two fully-connected (FC) layers to the query feature, expressed as \(\tau=\text{FC}(\mathbf{F}_{q})\). After this, the coarse query foreground prototype can be achieved by using \(\hat{\mathbf{P}}_{q}=\text{MAP}(\mathbf{F}_{q,i},\hat{\mathcal{M}}_{q,i}^{f})\).
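The computation of Eq. (2) can be sketched as follows (again an illustration; \(\alpha=20\) as stated above, and the threshold \(\tau\) is assumed to be produced by the pooling and FC layers described in the text):

```python
# Sketch of Eq. (2): coarse query foreground mask from the negative scaled cosine
# similarity between the query features and the global support prototype.
import torch
import torch.nn.functional as F

def coarse_query_mask(F_q, P_g, tau, alpha=20.0):
    # F_q: [C, H, W] query feature; P_g: [C] global support prototype; tau: scalar threshold
    cos = F.cosine_similarity(F_q, P_g.view(-1, 1, 1).expand_as(F_q), dim=0)  # [H, W]
    S = -alpha * cos                        # negative cosine similarity S(F_q, P_g)
    return 1.0 - torch.sigmoid(S - tau)     # coarse query foreground mask
```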
### Region-enhanced Prototypical Transformer
The above received prototypes \(\hat{\mathcal{P}}_{s}\) and \(\hat{\mathbf{P}}_{q}\) are taken as input to the proposed Region-enhanced Prototypical Transformer (RPT) to rectify and regenerate the optimal global prototype \(\mathbf{P}_{s}\). As shown in Fig. 2, our RPT mainly consists of \(L\) stacked Bias-alleviated Transformer (BaT) blocks each of which contains a Search and Filter (S&F) module, and QPG modules that maintain the query prototypes continuously updated. Taking the first BaT block as an example, it calculates an affinity map \(\mathcal{A}=\hat{\mathbf{P}}_{s}\hat{\mathbf{P}}_{q}^{\top}\in\mathbb{R}^{N_{ f}\times 1}\) to reveal the correspondence between the query and \(N_{f}\) support regional prototypes by taking an input containing the query prototype \(\hat{\mathbf{P}}_{q}\) and the support prototype \(\hat{\mathbf{P}}_{s}\in\mathbb{R}^{N_{f}\times C}\) obtained by concatenating all elements in \(\hat{\mathcal{P}}_{s}\). Then, a selective map \(\mathcal{S}\in\mathbb{R}^{N_{f}\times 1}\) can be derived from the proposed self-selection based S&F module by
\[\mathcal{S}_{i}(\mathcal{A}_{i})=\begin{cases}0&\text{if }\mathcal{A}_{i}>= \xi\\ -\infty&otherwise\end{cases},i\in\left\{0,1,...,N_{f}\right\}, \tag{3}\]
where \(\xi\) is the selection threshold computed as \(\xi=(min(\mathcal{A})+mean(\mathcal{A}))/2\), and \(\mathcal{S}\) indicates the regions of the support image that are compatible with the query at the prototype level. The heterogeneous or disturbing regions of the support foreground are then weeded out by the \(\text{softmax}(\cdot)\) function. The preliminary rectified prototypes \(\hat{\mathbf{P}}_{s}^{o}\in\mathbb{R}^{N_{f}\times C}\) are aggregated as:
\[\hat{\mathbf{P}}_{s}^{o}=\text{softmax}(\hat{\mathbf{P}}_{s}\hat{\mathbf{P}}_ {q}^{\top}+\mathcal{S})\hat{\mathbf{P}}_{q}. \tag{4}\]
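To make the Search and Filter step concrete, the following sketch implements Eqs. (3)-(4) as written, with the softmax taken over the \(N_{f}\) regions (our reading of the equations, not the released code):

```python
# Sketch of the S&F step, Eqs. (3)-(4): incompatible regional prototypes receive a
# -inf entry in the selective map and are therefore excluded by the softmax.
import torch

def search_and_filter(P_s, P_q):
    # P_s: [N_f, C] regional support prototypes; P_q: [1, C] query prototype
    A = P_s @ P_q.t()                                    # affinity map, [N_f, 1]
    xi = 0.5 * (A.min() + A.mean())                      # selection threshold
    S = torch.where(A >= xi, torch.zeros_like(A), torch.full_like(A, float("-inf")))
    weights = torch.softmax(A + S, dim=0)                # softmax over the N_f regions
    return weights @ P_q                                 # rectified prototypes, [N_f, C]
```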
The refined \(\hat{\mathbf{P}}_{s}^{o}\) will be fed into the following components designed based on the self-attention mechanism to produce the output \(\mathbf{P}_{s}^{1}\in\mathbb{R}^{N_{f}\times C}\). Formally,
\[\hat{\mathbf{P}}_{s}^{o+1}=\text{LN}(\text{MHA}(\hat{\mathbf{P}}_{s}^{o})+\hat {\mathbf{P}}_{s}^{o}),\qquad\mathbf{P}_{s}^{1}=\text{LN}(\text{MLP}(\hat{ \mathbf{P}}_{s}^{o+1})+\hat{\mathbf{P}}_{s}^{o+1}), \tag{5}\]
where \(\hat{\mathbf{P}}_{s}^{o+1}\in\mathbb{R}^{N_{f}\times C}\) is the intermediate generated prototype, \(\text{LN}(\cdot)\) denotes the layer normalization, \(\text{MHA}(\cdot)\) represents the standard multi-head attention module and \(\text{MLP}(\cdot)\) is the multilayer perception.
By stacking multiple BaT blocks, our RPT can iteratively rectify and update all coarse support and the query prototype. Given the prototypes \(\mathbf{P}_{s}^{l-1}\) and \(\mathbf{P}_{q}^{l-1}\) from the previous BaT block, the updates for the current BaT block are computed by:
\[\mathbf{P}_{s}^{l}=\text{BaT}(\mathbf{P}_{s}^{l-1},\mathbf{P}_{q}^{l-1}),\qquad \mathbf{P}_{q}^{l}=\text{QPG}(\text{GAP}(\mathbf{P}_{s}^{l}),\mathbf{F}_{q}), \tag{6}\]
where \(\mathbf{P}_{s}^{l}\in\mathbb{R}^{N_{f}\times C}\) and \(\mathbf{P}_{q}^{l}\in\mathbb{R}^{1\times C}\) (\(l=1,2,...,L\)) are the updated prototypes, and \(\text{GAP}(\cdot)\) denotes the global average pooling operation. The final output prototypes \(\mathbf{P}_{s}\) optimized by the RPT can be used to predict the foreground of the query image via Eq. (2): \(\tilde{\mathcal{M}}_{q}^{f}=1-\sigma(S(\mathbf{F}_{q},\text{GAP}(\mathbf{P}_{s}^{3}))-\tau)\), while its background is obtained as \(\tilde{\mathcal{M}}_{q}^{b}=1-\tilde{\mathcal{M}}_{q}^{f}\).
### Objective Function
The binary cross-entropy loss \(\mathcal{L}_{ce}\) is adopted to determine the error between the predict masks \(\tilde{\mathcal{M}}_{q}\) and the given ground-truth \(\mathcal{M}_{q}\). Formally,
\[\mathcal{L}_{ce}=-\frac{1}{HW}\sum_{h}^{H}\sum_{w}^{W}\mathcal{M}_{q}^{f}(x,y) log(\tilde{\mathcal{M}}_{q}^{f}(x,y))+\mathcal{M}_{q}^{b}(x,y)log(\tilde{ \mathcal{M}}_{q}^{b}(x,y)). \tag{7}\]
Considering the prevalent class imbalance problem in medical image segmentation, the boundary loss [8]\(\mathcal{L}_{B}\) is also adopted and it is written as
\[\mathcal{L}_{B}(\theta)=\int_{\Omega}\phi_{G}(q)s_{\theta}(q)\,dq, \tag{8}\]

where \(\theta\) denotes the network parameters, \(\Omega\) denotes the spatial domain, \(\phi_{G}:\Omega\rightarrow\mathbb{R}\) denotes the _level set_ representation of the ground-truth boundary, with \(\phi_{G}(q)=-D_{G}(q)\) if \(q\in G\) and \(\phi_{G}(q)=D_{G}(q)\) otherwise, \(D_{G}\) is the distance map between the boundaries of the prediction and the ground-truth, and \(s_{\theta}(q):\Omega\rightarrow[0,1]\) denotes the softmax(\(\cdot\)) output.
Overall, the loss used for training our RPT is defined as \(\mathcal{L}=\mathcal{L}_{ce}+\eta\mathcal{L}_{dice}+(1-\eta)\mathcal{L}_{B}\), where \(\mathcal{L}_{dice}\) is the Dice loss [12], \(\eta\) is initially set to 1 and decreased by 0.01 every epoch.
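Written out, the weighting schedule amounts to the following sketch (the individual loss terms are assumed to be computed elsewhere):

```python
# Sketch of the overall training objective: L = L_ce + eta * L_dice + (1 - eta) * L_B,
# with eta initialized to 1 and decreased by 0.01 every epoch.
def total_loss(l_ce, l_dice, l_boundary, epoch):
    eta = max(0.0, 1.0 - 0.01 * epoch)
    return l_ce + eta * l_dice + (1.0 - eta) * l_boundary
```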
## 3 Experiments
**Experimental Datasets:** The proposed method is comprehensively evaluated on three publicly available datasets, including **Abd-MRI**, **Abd-CT** and **Card-MRI**. Concretely, **Abd-MRI**[7] is an abdominal MRI dataset used in the ISBI 2019 Combined Healthy Abdominal Organ Segmentation Challenge. **Abd-CT**[9] is an abdominal CT dataset from MICCAI 2015 Multi-Atlas Abdomen Labeling Challenge. **Card-MRI[24]** is a cardiac MRI dataset from MICCAI 2019 Multi-Sequence Cardiac MRI Segmentation Challenge. All 3D scans are
reformatted into 2D axial and 2D short-axis slices. The abdominal datasets **Abd-MRI** and **Abd-CT** share the same categories of labels which includes the liver, spleen, left kidney (LK) and right kidney (RK). The labels for **Card-MRI** include left ventricular myocardium (LV-MYO), right ventricular myocardium (RV), and blood pool (LV-BP).
**Experiment Setup:** The model is trained for 30k iterations with batch size set to 1. During training, the initial learning rate is set to \(1\times 10^{-3}\) with a step decay of 0.8 every 1000 iterations. The values of \(N_{f}\) and iterations \(L\) are set to 10 and 3, respectively. To simulate the scarcity of labeled data in medical scenarios, all experiments embrace a 1-way 1-shot setting, and 5-fold cross-validation is also carried out in the experiments, where we only record the mean value.
**Evaluation:** For a fair comparison, the metric used to evaluate the performance of 2D slices on 3D volumetric ground-truth is the Dice score used in [13]. Furthermore, two different supervision settings are used to evaluate the generalization ability of the proposed method: in Setting 1, the test classes may appear in the background of the training slices, while in Setting 2, the training slices containing the test classes are removed from the dataset to ensure that the test classes are unseen. Note that Setting 2 is impractical for Card-MRI scans, since all classes typically co-occur on one 2D slice, making label exclusion impossible. In addition, as in [13], abdominal organs are categorized into _upper_ abdomen (liver, spleen) and _lower_ abdomen (left, right kidney) to demonstrate whether the learned representations can encode spatial concepts.
### Quantitative and Qualitative Results
Table 1 shows the performance comparison of the proposed method with state-of-the-art methods, including the vanilla PA-Net [20], SE-Net [15], ADNet [5], CRAPNet [3], SSL-ALPNet [13, 14], AAS-DCL [22] and SR&CL [21] under two experimental settings. From Tab. 1, it can be seen that the proposed method outperforms all listed methods in terms of the Mean values obtained under two different settings. Especially, the Mean value on Abd-CT dataset under Setting 1 reaches 77.83, which is 3.31 higher than the best result achieved by AAS-DCL.
\begin{table}
\begin{tabular}{c|l|l|c c c c c|c c c c} \hline \multirow{2}{*}{Setting} & \multirow{2}{*}{Method} & \multirow{2}{*}{Reference} & \multicolumn{4}{c|}{Abd-MRI} & \multicolumn{4}{c}{Abd-CT} \\ & & & \multicolumn{2}{c}{Lower} & \multicolumn{2}{c}{Upper} & \multicolumn{2}{c|}{Mean} & \multicolumn{2}{c}{Lower} & \multicolumn{2}{c}{Upper} & \multicolumn{2}{c}{Mean} \\ & & & & LK & RK & Spleen & Liver & \multicolumn{2}{c}{LK} & RK & Spleen & Liver & Mean \\ \hline \multirow{8}{*}{1} & ADNet [5] & MIA’22 & 73.86 & 85.80 & 72.29 & 82.11 & 78.51 & 72.13 & 79.06 & 63.48 & 77.24 & 72.97 \\ & AAS-DCL [22] & ECCV’22 & 80.37 & 86.11 & 76.24 & 72.33 & 78.76 & 74.58 & 73.19 & 72.30 & 78.04 & 74.52 \\ & SR\&CL [21] & MICCAI’22 & 79.34 & 87.42 & 76.01 & 80.23 & 80.77 & 73.45 & 71.22 & **73.41** & 76.06 & 73.53 \\ & CRAPNet [3] & WACV’23 & **81.95** & 86.42 & 74.32 & 76.46 & 79.79 & 74.69 & 74.18 & 70.37 & 75.41 & 73.66 \\ & **Ours (RPT)** & — & 80.72 & **89.82** & **76.37** & **82.86** & **82.44** & **77.05** & **79.13** & 72.58 & **82.57** & **77.83** \\ \hline \multirow{8}{*}{2} & ADNet [5] & MIA’22 & 59.64 & 56.68 & 59.47 & **77.03** & 63.20 & 48.41 & 40.52 & 50.97 & 70.63 & 52.63 \\ & AAS-DCL [22] & ECCV’22 & 76.90 & 83.75 & 74.86 & 69.94 & 76.36 & 64.71 & **69.95** & 66.36 & 71.61 & 68.16 \\ \cline{1-1} & SR\&CL [21] & MICCAI’22 & 77.07 & 84.24 & 73.73 & 75.55 & 77.65 & 63.37 & 63.37 & 76.36 & 73.63 & 67.94 \\ \cline{1-1} & CRAPNet [3] & WACV’23 & 74.66 & 82.77 & 70.82 & 73.82 & 75.52 & 70.91 & 67.33 & 70.17 & 70.45 & 69.72 \\ \cline{1-1} & **Ours (RPT)** & — & **78.33** & **86.01** & **75.46** & 76.37 & **79.04** & **72.99** & 67.73 & **70.80** & **75.24** & **71.69** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative Comparison (in Dice score %) of different methods on abdominal datasets under _Setting 1_ and _Setting 2_.
Consistent improvements are also indicated for Card-MRI dataset and can be found in the Appendix. In addition to the quantitative comparisons, qualitative results of our model and the other model on Abd-MRI and Abd-CT are shown in Fig. 3 (See Appendix for CMR dataset). It is not difficult to see that our model shows considerable bound-preserving and generalization capabilities.
### Ablation Studies
The ablation studies were conducted on the Abd-MRI dataset under Setting 2. As can be seen from Fig. 4, the use of three stacked BaT blocks is suggested to obtain the best Dice score. From Tab. 2, using the combination of boundary and Dice losses gives a 0.61 increase in the Dice score compared to using only the cross-entropy loss. More ablation study results can be found in the Appendix.
## 4 Conclusion
In this paper, we introduced a Region-enhanced Prototypical Transformer (RPT) to mitigate the impact of large intra-class variations present in medical image segmentation. The model is mainly beneficial from a subdivision-based strategy used for generating a set of regional support prototypes and a self-selection mechanism introduced to the Bias-alleviated Transformer (BaT) blocks. The proposed RPT can iteratively optimize the generated regional prototypes and output a more precise global prototype for predictions. The results of extensive experiments and ablation studies can demonstrate the advancement and effectiveness of the proposed method.
\begin{table}
\begin{tabular}{c c c|c} \hline \(\mathcal{L}_{ce}\) & \(\mathcal{L}_{B}\) & \(\mathcal{L}_{dice}\) & Dice score \\ \hline \(\checkmark\) & & & 78.43 \\ \(\checkmark\) & \(\checkmark\) & & 78.81 \\ \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & **79.04** \\ \hline \end{tabular}
\end{table}
Table 2: Ablation study of the three loss functions.
Figure 4: Analysis of the number of BaT blocks.
Figure 3: Qualitative results of our model on Abd-MRI and Abd-CT. |
2309.10688 | On the different regimes of Stochastic Gradient Descent | Modern deep networks are trained with stochastic gradient descent (SGD) whose
key hyperparameters are the number of data considered at each step or batch
size $B$, and the step size or learning rate $\eta$. For small $B$ and large
$\eta$, SGD corresponds to a stochastic evolution of the parameters, whose
noise amplitude is governed by the ''temperature'' $T\equiv \eta/B$. Yet this
description is observed to break down for sufficiently large batches $B\geq
B^*$, or simplifies to gradient descent (GD) when the temperature is
sufficiently small. Understanding where these cross-overs take place remains a
central challenge. Here, we resolve these questions for a teacher-student
perceptron classification model and show empirically that our key predictions
still apply to deep networks. Specifically, we obtain a phase diagram in the
$B$-$\eta$ plane that separates three dynamical phases: (i) a noise-dominated
SGD governed by temperature, (ii) a large-first-step-dominated SGD and (iii)
GD. These different phases also correspond to different regimes of
generalization error. Remarkably, our analysis reveals that the batch size
$B^*$ separating regimes (i) and (ii) scale with the size $P$ of the training
set, with an exponent that characterizes the hardness of the classification
problem. | Antonio Sclocchi, Matthieu Wyart | 2023-09-19T15:23:07Z | http://arxiv.org/abs/2309.10688v4 | # On the different regimes of Stochastic Gradient Descent
###### Abstract
Modern deep networks are trained with stochastic gradient descent (SGD) whose key parameters are the number of data considered at each step or batch size \(B\), and the step size or learning rate \(\eta\). For small \(B\) and large \(\eta\), SGD corresponds to a stochastic evolution of the parameters, whose noise amplitude is governed by the 'temperature' \(T\equiv\eta/B\). Yet this description is observed to break down for sufficiently large batches \(B\geq B^{*}\), or simplifies to gradient descent (GD) when the temperature is sufficiently small. Understanding where these cross-overs take place remains a central challenge. Here, we resolve these questions for a teacher-student perceptron classification model and show empirically that our key predictions still apply to deep networks. Specifically, we obtain a phase diagram in the \(B\)-\(\eta\) plane that separates three dynamical phases: _(i)_ a noise-dominated SGD governed by temperature, _(ii)_ a large-first-step-dominated SGD and _(iii)_ GD. These different phases also correspond to different regimes of generalization error. Remarkably, our analysis reveals that the batch size \(B^{*}\) separating regimes _(i)_ and _(ii)_ scale with the size \(P\) of the training set, with an exponent that characterizes the hardness of the classification problem.
Stochastic gradient descent \(|\) Phase diagram \(|\) Critical batch size \(|\) Implicit bias

Stochastic gradient descent (SGD), with its variations, has been the algorithm of choice to minimize the loss and train neural networks since the introduction of backpropagation [1; 2; 3]. When minimizing an empirical loss on a training set of size \(P\), SGD consists in estimating the loss gradient using a mini-batch of the data selected randomly at each step. When the size \(B\) of the batch is small, SGD becomes a noisy version of gradient descent (GD) and the magnitude of this noise is controlled by the 'temperature' \(T=\eta/B\), with \(\eta\) the learning rate [4]. However, understanding the scale of parameters where this description holds and SGD noise matters has remained a challenge. Specific questions include _(i)_ below which temperature \(T_{c}\) noise becomes irrelevant and the dynamics corresponds to gradient descent? _(ii)_ What determines the critical batch size \(B^{*}\), beyond which SGD is observed not to be controlled by temperature in a variety of settings [5; 6]? This question is of practical importance: after searching for an optimal temperature, practitioners can maximize the batch size up to \(B^{*}\) while keeping temperature fixed, as large batches can lead to faster computations in practice [7; 8]. _(iii)_ It was observed that the variation of the network weights during training increases as power laws of both \(T\) and \(P\), both for deep nets and for simple models like the perceptron [9]. Yet, a quantitative explanation of this phenomenon is lacking.
SGD noise has attracted significant research interest, in particular for its connections to loss landscape properties and performance. Recent works describing SGD via a stochastic differential equation (SDE) have emphasized several effects of this noise, including its ability to escape saddles [10; 11; 12; 13; 14; 15; 16], to bias the dynamics toward broader minima [17; 18; 19; 20; 21] or toward sparser distributions of the network parameters [22; 23; 24; 25; 26]. Yet, most of these works assume that fresh samples are considered at each time step, and are thus unable to explain the dependence of SGD on the finite size of the training set \(P\). Here, we study this dependence on classification problems and we identify three regimes of SGD in deep learning. Following [9], we first consider a perceptron setting where classes are given by a teacher \(y(\mathbf{x})=\mathrm{sign}(\mathbf{w}^{*}\cdot\mathbf{x})\), where \(\mathbf{w}^{*}\) is a unit vector, and are learnt by a student \(f(\mathbf{x})\propto\mathbf{w}\cdot\mathbf{x}\). We solve this problem analytically for the hinge loss, which vanishes if all data are fitted by some margin \(\kappa\). Our central results, summarized in Fig.1.A and the phase diagrams therein, are as follows.
_Noise-dominated SGD._ For small batches and large learning rates, the dynamics is well-described by an SDE with a noise scale \(T=\eta/B\). We show that this noise controls the components \(\mathbf{w}_{\perp}\) of the student weights orthogonal to the teacher, such that at the end of training \(\|\mathbf{w}_{\perp}\|\sim T\). This result, together with considerations on the angle of the predictor that can fit the entire training set, implies that the weight magnitude after training indeed depends both on \(T\) and \(P\), as \(\|\mathbf{w}\|\sim TP^{\gamma}\), thus explaining the observation of [9]. The exponent \(\gamma\) characterizes the difficulty of the classification problem.
_Gradient Descent._ This regime breaks down when the temperature, and therefore the magnitude of orthogonal weights \(\|\mathbf{w}_{\perp}\|\), is too small. Then, fitting all data no longer corresponds to a constraint on the angle of the predictor \(\|\mathbf{w}_{\perp}\|/\|\mathbf{w}\|\), but instead to a constraint on \(\|\mathbf{w}\|\) to satisfy the margin. This latter constraint is the most stringent one for \(T\leq T_{c}\sim\kappa\). In that regime, temperature can be neglected and the dynamics corresponds to gradient descent, thus answering point _(i)_ above.
_First-step-dominated SGD._ The noise-dominated SGD regime also breaks down when the batch size and learning rate are increased at fixed \(T\). Indeed, the first step will increase the weight magnitude to \(\|\mathbf{w}\|\sim\eta\), which is
larger than the noise-dominated prediction \(\|\mathbf{w}\|\sim TP^{\gamma}\) if \(B\geq B^{*}\equiv P^{\gamma}\), answering _(ii)_ above.
Our second central result is empirical: we find this phase diagram also in deep neural networks. We train using the hinge loss as in the perceptron case (using the cross-entropy loss with early stopping leads to similar performance and a similar increase of the weights [27; 9]). In this case, it is useful to introduce the quantity \(\left\langle y(\mathbf{x})f(\mathbf{x})\right\rangle_{\mathbf{x}}\) characterizing the magnitude with which the output aligns to the task (for the perceptron, it simply corresponds to the quantity \(\mathbf{w}\cdot\mathbf{w}^{*}\)). Fig.1 shows this alignment in the diagram \(\eta,B\) for fully-connected (B) and Convolutional nets (C), revealing in each case the three regimes also obtained for the perceptron. As shown below, in both cases \(B^{*}\) grows as a power law of the training set size, as predicted.
### Related works
_SGD descriptions._ In the high-dimensional setting, several works have analysed online-SGD in teacher-student problems [28; 29; 30; 31; 32]. It has been rigorously shown [15] that, for online-SGD in high dimensions, the effective dynamics of summary statistics obey an ordinary differential equation with correction terms given by SGD stochasticity. Our analysis of the perceptron falls into this regime, with fixed points of the dynamics depending explicitly on \(T=\eta/B\). We further introduce the effects of a finite training set \(P\) by studying when this online description breaks down. Other studies [33; 34] have analysed the correlations of SGD noise with the training set using dynamical mean-field theory. They consider the regime where the batch size \(B\) is extensive in the training set size \(P\), while our online description considers the regime where \(B\) is much smaller than \(P\).
In finite-dimensional descriptions of SGD, theoretical justification for approximating SGD with an SDE has generally relied upon vanishing learning rates [35; 13]. This condition is too restrictive for standard deep learning settings and [36] has shown empirically that the applicability of the SDE approximation can be extended to finite learning rates. Other SDE approximations assume isotropic and homogeneous noise [37; 38; 10; 39] or a noise covariance proportional to the Hessian of the loss [4; 14; 19; 40], while in the online limit of the perceptron model we can compute the exact dependence of the noise covariance on the model weights. Other studies have observed a link between the effects of SGD noise and the size of the training set [41]. In our work, we show that these effects do not depend on a direct increase of the noise scale with the training set size, but rather on the fact that a larger \(P\) implies that larger weights are needed to fit the data, a process that depends on SGD noise.
_Critical batch size._ An empirical study at fixed training set size of large-batch training has observed in several learning settings that \(B^{*}\) is inversely proportional to the signal-to-noise ratio of loss gradients across training samples [8]. Our work is consistent with these findings, but most importantly further predicts and tests a non-trivial power law dependence of \(B^{*}\) with \(P\).
## I Noise-dominated regime
### Perceptron model
We consider data \(\mathbf{x}\in\mathbb{R}^{d}\) with \(d\gg 1\) that are linearly separable. Without loss of generality, we choose the class label \(y(\mathbf{x})=\pm 1\) to be given by the sign of the first component, \(y(\mathbf{x})=\text{sign}(x_{1})\). \(\{\mathbf{e}_{i}\}_{i=1,\dots,d}\) is the canonical basis and \(\mathbf{e}_{1}\) corresponds to the teacher direction. The informative component \(x_{1}\) of each datum is independently sampled from the probability distribution
\[\rho(x_{1})=|x_{1}|^{\chi}e^{-x_{1}^{2}/2}/Z, \tag{1}\]
where \(Z\) is a normalization constant and \(\chi\geq 0\) [42]. The other \(d-1\) components \(\mathbf{x}_{\perp}=[x_{i}]_{i=2,\dots,d}\) are distributed as standard multivariate Gaussian numbers, i.e. \(\mathbf{x}_{\perp}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{d-1})\), where \(\mathbf{I}_{d-1}\) is the \((d-1)\)-dimensional identity matrix. The parameter \(\chi\) controls the data distribution near the decision boundary \(x_{1}=0\) and how 'hard' the classification problem is. Smaller \(\chi\) corresponds to harder problems, as more points lie close to the boundary. The case \(\chi=0\) corresponds to the Gaussian case.
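As a concrete illustration (a sketch we add here; the reduction of the law of \(x_{1}\) to a Gamma distribution for \(x_{1}^{2}/2\), and all names below, are our own and not taken from the original text), the data model of Eq. 1 can be sampled as follows:

```python
import numpy as np

def sample_data(P, d, chi, rng):
    """Draw P points of the teacher-student data model: the informative
    coordinate x_1 has density rho(x_1) ~ |x_1|^chi exp(-x_1^2/2), the
    remaining d-1 coordinates are standard Gaussians, and the label is
    y = sign(x_1).  With u = x_1^2/2, the density of u is a Gamma law
    with shape (chi+1)/2, which gives a simple exact sampler."""
    u = rng.gamma(shape=(chi + 1) / 2.0, scale=1.0, size=P)
    x1 = rng.choice([-1.0, 1.0], size=P) * np.sqrt(2.0 * u)
    x_perp = rng.standard_normal((P, d - 1))
    X = np.concatenate([x1[:, None], x_perp], axis=1)
    return X, np.sign(x1)
```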
As an architecture, we chose the perceptron \(f(\mathbf{w},\mathbf{x})=\mathbf{w}\cdot\mathbf{x}/\sqrt{d}\). The weights are trained by minimizing the hinge loss \(L(\mathbf{w})=\frac{1}{P}\sum_{\mu=1}^{P}(\kappa-y^{\mu}f(\mathbf{w},\mathbf{x }^{\mu}))^{+}\), where \((x)^{+}=\max(0,x)\), \(\kappa>0\) is the margin, \(\{(\mathbf{x}^{\mu},y^{\mu}=y(\mathbf{x}^{\mu}))\}_{\mu=1,\dots,P}\) is the training set with size \(P\gg d\). We denote \(\mathbf{w}^{t}\) the weights obtained at time \(t\), defined as the number of training steps times the learning rate, starting from the initialization \(\mathbf{w}^{0}=\mathbf{0}\).
The hinge loss is minimized with a SGD algorithm, in which the batch is randomly selected at each time step among all the \(P\) data. The learning rate \(\eta\) is kept constant during training. The end of training is reached at time \(t^{*}\) when \(L(\mathbf{w}^{t^{*}})=n(\mathbf{w}^{t^{*}})=0\), where \(n(\mathbf{w}^{t})\) indicates the fraction of data not fitted at time \(t\): \(n(\mathbf{w})=\frac{1}{P}\sum_{\mu=1}^{P}\theta(\kappa-y^{\mu}f(\mathbf{w}, \mathbf{x}^{\mu}))\), with \(\theta(\dots)\) the Heaviside step function.
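A minimal sketch of this training loop (our own illustrative code, assuming batches drawn with replacement for simplicity; it is not the authors' implementation) could read:

```python
import numpy as np

def train_perceptron_sgd(X, y, eta, B, kappa, rng, max_steps=10**7):
    """SGD on the hinge loss for the perceptron f(x) = w.x/sqrt(d),
    with batches of size B resampled among the P data at every step and
    a constant learning rate eta.  Training stops at the first step where
    all points are fitted with margin kappa (zero training loss)."""
    P, d = X.shape
    w = np.zeros(d)
    for step in range(max_steps):
        if np.all(y * (X @ w) / np.sqrt(d) >= kappa):
            return w, step                        # L(w) = n(w) = 0 reached
        idx = rng.integers(0, P, size=B)          # random mini-batch
        xb, yb = X[idx], y[idx]
        active = yb * (xb @ w) / np.sqrt(d) < kappa
        grad = -(yb[active, None] * xb[active]).sum(axis=0) / (B * np.sqrt(d))
        w -= eta * grad
    return w, max_steps
```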
### Stochastic Differential Equation
In the limit of small batches \(B\ll P\), SGD can be described with a stochastic differential equation (SDE) [35]:
\[d\mathbf{w}^{t}=-dt\nabla L(\mathbf{w}^{t})+\sqrt{T}\sqrt{\mathbf{\Sigma}\left( \mathbf{w}^{t}\right)}d\mathbf{W}^{t} \tag{2}\]
where \(\mathbf{W}^{t}\) is a \(d\)-dimensional Wiener process (Ito's convention) and \(\mathbf{\Sigma}\left(\mathbf{w}\right)/B\) is the covariance of the mini-batch gradients [43], given in Appendix A.1. We first
Figure 1: **SGD phase diagrams for different data and architectures.** The different data sets considered are a teacher perceptron model with \(P=8192\), dimension \(d=128\) and data distribution of Eq. 1 with \(\chi=1\)_(A.1)_, \(P=32768\) images of MNIST _(B.1)_ and \(P=16384\) images of CIFAR 10_(C.1)_. The different neural network architectures trained on these datasets correspond respectively to _(A.2)_ a perceptron, for which the output is linear in both the input \(\mathbf{x}\) and the weights \(\mathbf{w}\), and trained with hinge loss margins \(\kappa=2^{-7}\); _(B.2)_ a fully-connected network with 5 hidden layers, 128 hidden neurons per layer and margin \(\kappa=2^{-15}\); _(C.2)_ a CNN made by several blocks composed by depth-wise, point-wise and standard convolutions plus residual connections (more details in Appendix A), with margin \(\kappa=2^{-15}\). Panels _(A.3)_,_(B.3)_,_(C.3)_ display the alignment after training in the \(\eta,B\) phase diagram. The black dots correspond to diverging trainings where the algorithm does not converge. We can distinguish the noise-dominated SGD regime, for which the alignment is constant along the diagonals \(\frac{\eta}{B}=T\). Within the first-step-dominated SGD, instead, the alignment is constant at constant \(\eta\). For small \(\eta\), one enters in the gradient descent (GD) regime where the alignment does not depend on \(\eta\) and \(B\). Taking this value \(m_{GD}\) of the alignment as a reference, the black dashed line \(\eta_{c}(B)\) delimiting the GD region corresponds to the alignment taking value \(2m_{GD}\). The vertical black dashed line guides the eye to indicate the critical batch size \(B^{*}\). Panels _(A.4)_,_(B.4)_,_(C.4)_ display the test error again as a function of \(\eta,B\). As expected, the test error is constant along the diagonals \(\frac{\eta}{B}=T\) for noise-dominated SGD and constant in the GD regime. For first-step-dominated SGD, the test error can be affected by both \(\eta\) and \(B\), and can improve at very large batches (see discussion below).
make the approximation of substituting the empirical gradient \(\nabla L(\mathbf{w}^{t})\) and this covariance with their population averages \(\nabla L(\mathbf{w}^{t})\approx\mathbf{g}^{t}=\mathbb{E}_{\mathbf{x}}\left[- \nabla_{\mathbf{w}}l(\mathbf{w}^{t},\mathbf{x})\right]\) and \(\mathbf{\Sigma}^{t}\approx\mathbb{E}_{\mathbf{x}}\left[\nabla_{\mathbf{w}}l( \mathbf{w}^{t},\mathbf{x})\otimes\nabla_{\mathbf{w}}l(\mathbf{w}^{t},\mathbf{ x})\right]-\mathbf{g}^{t}\otimes\mathbf{g}^{t}\). This approximation does not include finite training set effects. Our strategy below is to estimate the time where such a simplified description breaks down; the solution of the SDE at this time provides the correct asymptotic behaviors for the network at the end of training, as we observe experimentally.
#### ii.2.1 Asymptotic online-SDE dynamics
We decompose the student weights as \(\mathbf{w}=w_{1}\mathbf{e}_{1}+\mathbf{w}_{\perp}\) and we study the dynamics of both the scalar variables \(w_{1}^{t}\) (the 'overlap' between the student and teacher [28]), and the magnitude of the weights in the perpendicular directions \(\|\mathbf{w}_{\perp}^{t}\|\). Given that we consider a data distribution that is radially symmetric in \(\mathbf{x}_{\perp}\), but not in \(\mathbf{x}\), \(w_{1}\) and \(\|\mathbf{w}_{\perp}\|\) are the natural order parameters in our problem. Therefore, the \(d\)-dimensional dynamics of \(\mathbf{w}\) can be studied through a 2-dimensional summary statistics.
One finds that the online-SDE dynamics of \(w_{1}^{t}\) and \(\|\mathbf{w}_{\perp}^{t}\|\), using Ito's lemma (Appendix E.2), can be written as:
\[\begin{cases}dw_{1}^{t}&=dt\ g_{1}^{t}+\sqrt{\frac{T}{d}}\sigma_{1}^{t}d \tilde{W}_{1}^{t}\\ d\|\mathbf{w}_{\perp}^{t}\|&=dt\ \left[g_{\perp}^{t}+\frac{T}{2\|\mathbf{w}_{ \perp}^{t}\|}n^{t}\right]+\sqrt{\frac{T}{d}}\sigma_{2}^{t}d\tilde{W}_{2}^{t} \end{cases} \tag{3}\]
where \(\tilde{W}_{1}^{t}\) and \(\tilde{W}_{2}^{t}\) are Wiener processes and the expressions for \(g_{1}^{t}\), \(g_{2}^{t}\), \(n^{t}\), \(\sigma_{1}^{t}\), \(\sigma_{2}^{t}\), reported in Appendix E.1, are functions of \(w_{1}^{t}\) and \(\|\mathbf{w}_{\perp}^{t}\|\) through the time-dependent ratios:
\[\lambda=\frac{w_{1}^{t}}{\|\mathbf{w}_{\perp}^{t}\|},\quad r=\frac{\kappa \sqrt{d}}{\|\mathbf{w}_{\perp}^{t}\|}. \tag{4}\]
The quantity \(\lambda\) measures the angle \(\theta\) between the student and the teacher directions, since \(\theta=\arctan\lambda^{-1}\). The ratio \(r\) compares the hinge loss margin \(\kappa\) with the magnitude of the orthogonal components \(\|\mathbf{w}_{\perp}^{t}\|\).
In the limit \(d\gg 1\), the stochastic part of Eq. 3 is negligible, as we show in Appendix C.2. The variables \(w_{1}^{t}\) and \(\|\mathbf{w}_{\perp}^{t}\|\), therefore, have a deterministic time evolution given by the deterministic part of Eq. 3. This result has been proved more generally for the summary statistics of different learning tasks in high dimensions [15]. Note that, even if the stochastic fluctuations are negligible, the SGD noise affects the dynamics through the term \(\frac{T}{2\|\mathbf{w}_{\perp}^{t}\|}n^{t}\) in the evolution of \(\|\mathbf{w}_{\perp}^{t}\|\).
The term \(g_{1}^{t}\) (Appendix E.1) is always positive and vanishes in the limit \(\lambda\to\infty\), which determines the fixed point of Eq. 3. Therefore, we consider its vicinity
\[\lambda=\frac{w_{1}^{t}}{\|\mathbf{w}_{\perp}^{t}\|}\to\infty, \tag{5}\]
corresponding to a vanishing angle \(\theta=\arctan\lambda^{-1}\sim\lambda^{-1}\) between the student and teacher directions. In addition, we consider the limit
\[r=\frac{\kappa\sqrt{d}}{\|\mathbf{w}_{\perp}^{t}\|}\to 0, \tag{6}\]
and argue below why this limit corresponds to the noise-dominated regime of SGD.
Under the conditions of Eqs. 5-6, the deterministic part of Eq. 3 reads:
\[\begin{cases}dw_{1}^{t}&=dt\ \lambda^{-\chi-2}\ \frac{c_{1}}{\sqrt{d}}\\ d\|\mathbf{w}_{\perp}^{t}\|&=dt\ \lambda^{-\chi-1}\left[-\frac{1}{\sqrt{d} \sqrt{2\pi}}+\frac{T}{2\|\mathbf{w}_{\perp}^{t}\|}c_{n}\right],\end{cases} \tag{7}\]
with constants \(c_{1}\), \(c_{n}\) (Appendix E.1). Solving Eq. 7 gives:
\[\|\mathbf{w}_{\perp}^{t}\|\propto T\sqrt{d},\quad w_{1}^{t}\sim T\sqrt{d} \left(\frac{t}{Td}\right)^{\frac{1}{3+\chi}}. \tag{8}\]
Therefore the orthogonal component \(\|\mathbf{w}_{\perp}^{t}\|\) tends to a steady state proportional to the SGD temperature \(T\), while the informative one \(w_{1}^{t}\) grows as a power law of time. Eq. 8 implies that \(\lambda\gg 1\) is obtained in the large time limit \(t\gg Td\), while \(r\ll 1\) corresponds to \(T\gg\kappa\), and therefore holds at sufficiently high temperatures.
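These asymptotic behaviours can be checked by integrating the deterministic system of Eq. 7 directly; the sketch below does so with the order-one constants \(c_{1},c_{n}\) of Appendix E.1 replaced by placeholder values of 1 (an assumption made only for illustration, as is the choice of initial condition):

```python
import numpy as np
from scipy.integrate import solve_ivp

def asymptotic_ode(t, s, T, d, chi, c1=1.0, cn=1.0):
    """Deterministic reduced dynamics of Eq. 7 for s = (w_1, ||w_perp||).
    c1 and cn stand in for the appendix constants."""
    w1, wp = s
    lam = w1 / wp
    dw1 = lam ** (-chi - 2) * c1 / np.sqrt(d)
    dwp = lam ** (-chi - 1) * (-1.0 / np.sqrt(2.0 * np.pi * d)
                               + T * cn / (2.0 * wp))
    return [dw1, dwp]

# Example: ||w_perp|| should saturate at a value proportional to T*sqrt(d),
# while w_1 keeps growing as (t/(T d))^(1/(3+chi)), cf. Eq. 8.
T, d, chi = 0.5, 128, 1.0
sol = solve_ivp(asymptotic_ode, [0.0, 1.0e6], [1.0, 0.1],
                args=(T, d, chi), rtol=1e-8, atol=1e-10)
```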
#### ii.2.2 SDE breakdown and solution obtained at the end of training
Due to the finiteness of \(P\), the online solution of Eq. 8 is expected to be no longer valid at some time \(\hat{t}<t^{*}\). In Appendix D, we provide an argument, confirmed empirically, predicting that \(\hat{t}\) is reached when the number of training points contributing to the hinge loss gradient is of order \(\mathcal{O}(d)\). This is obtained by assuming that the student perceptron follows the dynamics Eq. 8 and applying the central limit theorem to the empirical gradient \(\nabla L(\mathbf{w}^{t})\). One finds that the magnitude of its population average follows \(\|\mathbb{E}_{\mathbf{x}}\left[\nabla L(\mathbf{w}^{t})\right]\|\sim\frac{1}{\sqrt{d}}n(\mathbf{w}^{t})\) while that of its finite-\(P\) fluctuations is given by \(\frac{1}{\sqrt{P}}\sqrt{n(\mathbf{w}^{t})}\). For \(P\) finite but much larger than \(d\), the average gradient is much larger than its fluctuations as long as the fraction of unfitted training points \(n(\mathbf{w}^{t})\) satisfies \(n(\mathbf{w}^{t})\gg\frac{d}{P}\). As the training progresses, more data points get fitted and \(n(\mathbf{w}^{t})\) decreases until reaching the condition
\[n(\mathbf{w}^{\hat{t}})\sim\mathcal{O}\left(\frac{d}{P}\right). \tag{9}\]
After this point, the empirical gradient and the population one greatly differ, and the online dynamics is no longer a valid description of the training dynamics. This is the time beyond which test and train errors start to differ. At leading order in \(\lambda\to\infty\) as \(r\to 0\), we have that \(n(\mathbf{w}^{t})\sim\lambda^{-\chi-1}\). In fact, for a density of points
\(\rho(x_{1})\sim x_{1}^{\chi}\) at small \(x_{1}\), \(n(\mathbf{w}^{t})\sim\theta^{\chi+1}\sim\lambda^{-\chi-1}\) is the fraction of points with coordinate \(x_{1}\) smaller than \(\theta\sim\lambda^{-1}\). 1
Footnote 1: A training point \((\mathbf{x},y)\) has a non-zero hinge loss and contributes to \(n(\mathbf{w})\) if \(yf(\mathbf{x})<\kappa\), that is \(|x_{1}|<\frac{\|\mathbf{w}_{\perp}\|}{w_{1}}\left(c+\frac{\kappa\sqrt{d}}{\|\mathbf{w}_{\perp}\|}\right)\), with \(c=-y\frac{\mathbf{w}_{\perp}\cdot\mathbf{x}_{\perp}}{\|\mathbf{w}_{\perp}\|}\), which becomes \(|x_{1}|<c\lambda^{-1}\) for \(r=\frac{\kappa\sqrt{d}}{\|\mathbf{w}_{\perp}\|}\to 0\). In the limit \(\lambda\to\infty\), the condition \(|x_{1}|<\lambda^{-1}\) corresponds to the scaling relationship \(n(\mathbf{w})\sim\int_{0}^{\lambda^{-1}}dx_{1}\rho(x_{1})\sim\lambda^{-\chi-1}\), since \(\rho(x_{1})\sim x_{1}^{\chi}\) for \(x_{1}\to 0\).
Therefore, using Eq. 8,
\[n(\mathbf{w}^{t})\sim\lambda^{-\chi-1}\sim\left(\frac{t}{Td}\right)^{-\frac{ \chi+1}{\chi+3}}. \tag{10}\]
Note that, for \(t\ll\hat{t}\), \(n(\mathbf{w}^{t})\) corresponds to the test error. Neglecting the dependence in \(d\), we finally obtain:
\[\hat{t}\sim T\ P^{b},\quad\|\mathbf{w}^{\hat{t}}\|\sim w_{1}^{\hat{t}}\sim TP^{ \gamma}, \tag{11}\]
where \(b=1+\frac{2}{1+\chi}\) and \(\gamma=\frac{1}{1+\chi}\).
In Figure 10, we show experimentally that the asymptotic solution of the online SDE is a valid description of SGD up to \(\hat{t}\), as predicted by Eq. 9. In Figures 11-12, we observe empirically that the power law scalings in \(T\) and \(P\) of the stopping time \(t^{*}\) and of \(\|\mathbf{w}^{t^{*}}\|\) are the same as those of \(\hat{t}\) and \(\|\mathbf{w}^{\hat{t}}\|\), therefore Eqs. 11 also hold to characterize the end of training.
### Condition on \(T\) for the noise-dominated regime
To fit a data point \((\mathbf{x}^{\mu},y^{\mu})\), the perceptron weights must satisfy the condition \(y^{\mu}\frac{w_{1}x_{1}^{\mu}}{\sqrt{d}}+y^{\mu}\frac{\mathbf{w}_{\perp}\cdot\mathbf{x}_{\perp}^{\mu}}{\sqrt{d}}\geq\kappa\), which corresponds to [9]:
\[\frac{w_{1}}{\|\mathbf{w}_{\perp}\|}\geq\frac{1}{|x_{1}^{\mu}|}\left(\frac{ \kappa\sqrt{d}}{\|\mathbf{w}_{\perp}\|}+c^{\mu}\right), \tag{12}\]
where we have defined the random quantity \(c^{\mu}=-y^{\mu}\frac{\mathbf{w}_{\perp}}{\|\mathbf{w}_{\perp}\|}\cdot\mathbf{ x}_{\perp}^{\mu}=\mathcal{O}(1)\), and where \(\frac{\kappa\sqrt{d}}{\|\mathbf{w}_{\perp}\|}\propto\frac{\kappa}{T}\) in the noise-dominated regime where Eq. 8 applies.
For \(T\gg\kappa\), \(\frac{\kappa\sqrt{d}}{\|\mathbf{w}_{\perp}\|}\) is negligible with respect to \(c^{\mu}=\mathcal{O}(1)\) and Eq. 12 now becomes \(\frac{w_{1}}{\|\mathbf{w}_{\perp}\|}\geq\frac{c^{\mu}}{|x_{1}^{\mu}|}\), independent of the margin \(\kappa\). In this case, therefore, fitting \((\mathbf{x}^{\mu},y^{\mu})\) is constrained by the SGD noise which inflates the non-informative weight component \(\|\mathbf{w}_{\perp}\|\). For \(T\ll\kappa\), instead, fitting \((\mathbf{x}^{\mu},y^{\mu})\) is constrained by the margin \(\kappa\) and the SGD noise is negligible in Eq. 12, implying that the temperature delimiting the noise-dominated regime of SGD follows:
\[T_{c}\propto\kappa. \tag{13}\]
For \(T\ll T_{c}\), the final magnitude of \(\|\mathbf{w}_{\perp}\|\) is independent of \(T\) as expected. Instead, one finds that it is proportional to \(\kappa\) (Fig. 12-(b)).
## II Critical batch size \(B^{*}\)
In the small batch regime \(B\ll B^{*}\), for a given temperature \(T=\eta/B\), different SGD simulations converge to the asymptotic trajectory given by the online-SDE, as observed in Fig. 2 showing SGD trajectories in the \((w_{1},w_{\perp})\) plane, at fixed \(T\) and varying batch size. This is not the case in the large batch regime \(B\gg B^{*}\). In fact, for fixed \(T\), a larger batch size corresponds to a larger learning rate and therefore a larger initial step. If the initial step is too large, SGD would converge to a zero-loss solution before being able to converge to the online-SDE asymptotic trajectory. This situation is vividly apparent in Fig. 2. Therefore, we can estimate the scale of the critical batch-size \(B^{*}\) by comparing the first step \(w_{1}^{\eta}\propto\eta\) with the final value \(w_{1}^{t^{*}}\sim\frac{\eta}{B}P^{\gamma}\) in the noise
Figure 2: **Dynamical trajectories of SGD in the \((w_{1},w_{\perp})\) plane, at fixed \(T\) and varying batch size as indicated in caption.** Black circles indicate the first step of SGD, black stars indicate the last one. For small enough batches (and therefore small learning rates), trajectories converge to the online SDE solution (black dashed line). For large batches, this is not true anymore, and the final magnitude of the weights increases with batch size. The location of stopping weights corresponds to zero loss, which can be approximately determined by measuring the hinge loss values \(L_{train}(w_{1},w_{\perp})\) (shown in color) computed as a function of the perceptron weights \(\mathbf{w}=w_{1}\mathbf{e}_{1}+w_{\perp}\boldsymbol{\xi}\). Here, \(\boldsymbol{\xi}\) is a \((d-1)\)-dimensional Gaussian random vector. The white area corresponds to interpolating solutions \(L_{train}=0\) in this simplified set-up. For full-batch, we observe that \(\mathbf{w}\) can land directly in the white area and therefore fit the data with at most few steps. This behaviour affects the test error when \(\eta\) is large (Fig. 1-A4). Data correspond to \(P=16384\), \(d=128\), \(\kappa=0.01\), \(\chi=1\), \(T=2\).
dominated regime. When \(w_{1}^{\eta}\ll w_{1}^{t^{*}}\), SGD converges to the asymptotic online-SDE and the final minimum depends on \(T=\eta/B\). When \(w_{1}^{\eta}\gg w_{1}^{t^{*}}\), SGD does not converge to the asymptotic online-SDE and the final weight magnitude depends only on \(w_{1}^{\eta}\), as shown in Fig.3. The condition \(w_{1}^{\eta}\sim w_{1}^{t^{*}}\), that is \(\eta\sim\frac{\eta}{B^{*}}P^{\gamma}\), gives:
\[B^{*}\sim P^{\gamma}, \tag{14}\]
with \(\gamma=\frac{1}{1+\chi}\) depending on the data distribution. The relationship of Eq. 14 is well verified from the empirical data of Fig. 4-(a).
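A direct numerical probe of this crossover, reusing the sampling and training sketches above (again our own illustration, not the authors' code), is to train at fixed temperature for increasing batch sizes and record the final informative weight:

```python
import numpy as np

def final_w1_vs_batch(P, d, chi, kappa, T, batch_sizes, seed=0):
    """At fixed T = eta/B, the final w_1 is B-independent in the
    noise-dominated regime (it scales as T*P^gamma), while beyond
    B* ~ P^gamma it is set by the first step and grows with eta = T*B.
    Relies on sample_data and train_perceptron_sgd defined above."""
    rng = np.random.default_rng(seed)
    X, y = sample_data(P, d, chi, rng)
    return {B: train_perceptron_sgd(X, y, eta=T * B, B=B,
                                    kappa=kappa, rng=rng)[0][0]
            for B in batch_sizes}
```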
This finding is consistent with \(B^{*}\) being inversely proportional to the signal-to-noise ratio of loss gradients across training samples as proposed in [8]. In fact, in our setting, increasing the gradient noise scale by a factor \(\sigma\) corresponds to the substitution \(T\to\sigma T\), which transforms Eq. 14 into \(B^{*}\sim\sigma P^{\gamma}\). Moreover, at large \(P\), increasing \(\chi\) reduces the exponent \(\gamma\) and \(B^{*}\). In fact, larger \(\chi\) corresponds to fewer training points close to the decision boundary and therefore a larger gradients' signal-to-noise ratio.
_Performance in large-step SGD._ The first training step leads to \(w_{1}^{\eta}=\frac{\eta}{B}\sum_{\mu\in\mathbb{B}_{1}}\frac{|x_{1}^{\mu}|}{\sqrt{d}}\) and \(\mathbf{w}_{\perp}^{\eta}=\frac{\eta}{B}\sum_{\mu\in\mathbb{B}_{1}}y^{\mu}\frac{\mathbf{x}_{\perp}^{\mu}}{\sqrt{d}}\), where \(\mathbb{B}_{1}\) is the first sampled batch. \(\mathbf{w}_{\perp}^{\eta}\) is a zero-mean random vector of norm \(\|\mathbf{w}_{\perp}^{\eta}\|=\mathcal{O}(\frac{\eta}{\sqrt{B}})\), while \(w_{1}^{\eta}=\mathcal{O}(\eta)\). We can distinguish several regimes:
(i) If \(w_{1}^{\eta}|x_{1}^{\mu}|\sim\eta|x_{1}^{\mu}|\gg\kappa\ \forall\mu\), then the margin \(\kappa\) can be neglected. From extreme value statistics \(\min_{\mu}|x_{1}^{\mu}|=\mathcal{O}(P^{-1/(1+\chi)})\), thus this condition is saturated when \(\eta\sim\kappa P^{1/(1+\chi)}\). It corresponds to the horizontal dashed line in the diagrams of Fig.1.
(ii) Above this line in the regime \(\eta\gg\kappa P^{1/(1+\chi)}\), the fraction of unfitted points as well as the test error after one step are given by the angle of the predictor \(\lambda^{\eta}=\frac{w_{1}^{\eta}}{\|\mathbf{w}_{\perp}^{\eta}\|}\sim\mathcal{ O}(\sqrt{B})\). Following Eq. 10, \(n(\mathbf{w}^{\eta})\sim(\lambda^{\eta})^{-\chi-1}\sim B^{-(\chi+1)/2}\). If \(n(\mathbf{w}^{\eta})\ll 1/P\) or equivalently \(B\gg P^{2/(1+\chi)}\), then with high probability all points are fitted after one step and the dynamics stops; the final test error is then of order \(\epsilon\sim B^{-(\chi+1)/2}\). Since \(B\leq P\), this condition can only occur for \(\chi\geq 1\). Note that the error is smaller than in the noise-dominated regime where \(\epsilon\sim 1/P\).
For the marginal case \(\chi=1\) in Fig. 2, we observe that the full batch case \(B=16384\) reaches (nearly) zero training loss after the first training step. Correspondingly, in the phase diagram of Fig. 1-A4, the lowest test error is achieved for full batch in the first-step-dominated regime.
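The first-step quantities entering this discussion are straightforward to reproduce numerically; the following sketch (ours, using the same conventions as the training loop above) returns \(w_{1}^{\eta}\), \(\|\mathbf{w}_{\perp}^{\eta}\|\) and the resulting angle variable \(\lambda^{\eta}\):

```python
import numpy as np

def first_sgd_step(X, y, eta, B, rng):
    """Single SGD step from w = 0: every batch point violates the margin,
    so w = (eta/B) * sum_mu y^mu x^mu / sqrt(d).  Its informative component
    is w_1^eta = O(eta) and its orthogonal norm is O(eta/sqrt(B))."""
    d = X.shape[1]
    idx = rng.integers(0, X.shape[0], size=B)
    w = (eta / B) * (y[idx, None] * X[idx]).sum(axis=0) / np.sqrt(d)
    w_perp_norm = np.linalg.norm(w[1:])
    return w[0], w_perp_norm, w[0] / w_perp_norm   # last entry is lambda^eta
```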
## III Experiments for deep networks
We consider a 5-hidden-layer fully-connected architecture with GELU activation functions [44] that classifies the parity MNIST dataset (even vs odd digits) and a deep convolutional architecture (MobileNet) which classifies the CIFAR10 images (animals vs the rest). As before, training corresponds to minimizing the hinge loss with SGD at constant \(\eta\), until reaching zero training loss, see Appendix A for further details.
For deep neural networks, the alignment between the network output \(f(\mathbf{x})\) and the data labels \(y(\mathbf{x})\in\{+1,-1\}\) cannot simply be written as a projection \(w_{1}\) of the weights in some direction. Instead we define it as:
\[\langle y(\mathbf{x})f(\mathbf{x})\rangle_{\mathbf{x}}=\frac{1}{|S_{test}|} \sum_{\mathbf{x}^{\nu}\in S_{test}}y(\mathbf{x}^{\nu})f(\mathbf{x}^{\nu}). \tag{15}\]
where the average is made over the test set \(S_{test}\). For the perceptron, \(w_{1}\) and \(\langle y(\mathbf{x})f(\mathbf{x})\rangle_{\mathbf{x}}\) are proportional.
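In code, this observable is simply the test-set average of \(y\,f\); a short sketch (ours, for any callable network output) reads:

```python
import numpy as np

def alignment(f, X_test, y_test):
    """Empirical alignment <y(x) f(x)> of Eq. 15 over a test set; `f` is
    any callable mapping a batch of inputs to scalar network outputs."""
    return float(np.mean(y_test * f(X_test)))
```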
_Small margin \(\kappa\)._ An interesting parameter to vary is the margin \(\kappa\), which is equivalent to changing the initialization scale of the network. For very small \(\kappa\) and gradient flow, tiny changes of weights can fit the data, corresponding to the lazy regime where neural nets behave as kernel methods [45, 46, 47]. However, by cranking up the learning rate one escapes this regime [48], which is the case in our set-up (Fig. 5 in Appendix B). Our central results are as follows:
_(i)_ The phase diagram predicted for the perceptron holds true, as shown in Fig. 1-B3,C3 representing the alignment in the (\(B\), \(\eta\)) plane. We observe a noise-dominated SGD where the alignment depends on the ratio \(T=\eta/B\); a first-step-dominated SGD where the alignment is con
Figure 3: **For large learning rates, the dynamics of the alignment is different in the small batch and large batch regimes.** (Main panel) Perceptron: in this case, the alignment is proportional to the student component \(w_{1}\); data for fixed \(\eta=512\), same setting as Fig. 1. For small \(B\), \(w_{1}\) grows during the training dynamics, while, for large \(B\), its final value is reached after a single training step. (Inset) Fully-connected network on MNIST, small margin (\(\kappa=2^{-15}\)), fixed \(\eta=16\), same setting as Fig. 1: for small and large batch, the alignment shows a similar dynamics to the perceptron case, although for large batch it reaches its final value after some training steps (and not just a single step).
stant at a given \(\eta\); and a GD regime at small \(\eta\) where the alignment does not depend on \(\eta\) and \(B\).
_(ii)_ These three regimes also delineate different behaviors of the test error, as shown in Fig. 1-B4,C4.
_(iii)_ Fig. 3-inset confirms the prediction that in the first-step-dominated SGD regime, the alignment builds up in a few steps. By contrast, in the noise-dominated SGD regime, it builds up slowly over time.
_(iv)_ The critical batch size \(B^{*}\) separating these two regimes indeed depends as a power law on the training set size \(P\). This result is revealed in Fig. 4-(b), reporting the alignment at the end of training as a function of the batch size \(B\). The inset shows that the alignment depends on the batch size for small \(B\), while it only depends on \(\eta\) for large \(B\). The cross-over \(B^{*}\) between the two regimes depends on \(P\). It is estimated in the main panel by rescaling the x-axis by \(P^{-0.2}\) so as to align these cross-overs, indicating a dependence \(B^{*}\sim P^{0.2}\). The same phenomenology is observed for the perceptron in Fig. 4-(a), with the relationship \(B^{*}\sim P^{\frac{1}{1+\chi}}\) given by Eq. 14.
_Large margin \(\kappa\)._ In the alternative case where the margin is large, the predictor has to grow in time or 'inflate' in order to become of the order of the margin and fit the data [45, 47, 49]. As a result, the weights and the magnitude of the gradients initially grow in time. In our set-up, this regime can be obtained by choosing \(\kappa=1\), because at initialization the output function is small. 1 Such an inflation is not captured by the perceptron model. As a consequence, reasoning on the magnitude of the first gradient step may not be appropriate, and the finding _(iii)_ above is not observed, as shown in Fig. 9-(a). Yet, after inflation occurs, the output still needs to grow even further in the good direction to overcome the noise from SGD, as for the perceptron, and the more so the larger the training set. This effect is observed both for CNNs and fully-connected nets on different data sets, and in the latter case strongly correlates with performance [9].
Footnote 1: We chose the NTK initialization [45], for which the output at initialization behaves as \(1/\sqrt{h}\) where \(h\) is the width. \(h\) here is \(128\) for the fully-connected and the CNN architectures (Appendix A).
For fully-connected nets learning MNIST, observations are indeed similar to the small margin case: _(i,ii)_ a phase diagram suggesting three regimes affecting performance is shown in Fig. 7 and _(iv)_ a critical batch size \(B^{*}\) that again appears to follow a power law of the training set size, as \(B^{*}\sim P^{0.4}\) as shown in Fig. 9-(b). Interestingly, using early-stopping or not does not affect performance, as shown in Fig. 7-(d). It is consistent with our analysis of weight changes, which for the perceptron are similar at early stopping or at the end of training. For CNNs learning CIFAR10, the picture is different. Although three learning regimes may be identifiable in Fig. 8, most of the dependence of performance with \(\eta,B\) is gone when using early stopping as shown in Fig. 8-(d). It suggests that effects other than the growth of weights induced by
SGD control performance in this example, as discussed below.
## IV Conclusion
In deep nets, the effects of SGD on learning are known to depend on the size of the training set. We used a simple toy model, the perceptron, to explain these observations and relate them to the difficulty of the task. SGD noise increases the dependence of the predictor along incorrect input directions, a phenomenon that must be compensated by aligning more strongly toward the true function. As a result, alignment and weight changes depend on both \(T\) and \(P\). If temperature is too small, this alignment is instead fixed by overcoming the margin and SGD is equivalent to GD. If the batch size is larger than a \(P\)-dependent \(B^{*}\), the weight changes are instead governed by the first few steps of gradient descent.
As one would expect, the alignment magnitude correlates with performance. This was observed in several cases in [9], and is also reflected by our observation that different alignment regimes correspond to different regimes of performance. In Appendix B we discuss why it is so in the simple example of a teacher perceptron learnt by a multi-layer net. As shown in Fig. 6, as weights align more strongly in the true direction when SGD noise increases, the network becomes less sensitive to irrelevant directions in input space, thus performing better. Yet, depending on the data structure, a strong alignment could be beneficial or not, a versatility of outcome that is observed [50; 5; 9; 40].
Obviously, other effects of SGD independent of the training set size may further affect performance, such as escaping saddles [10; 11; 12; 13; 14; 15; 16], biasing the dynamics toward broader minima [17; 18; 19; 20; 21; 43; 51; 52; 53; 54; 55; 56; 57; 58] or finding sparser solutions [22; 23; 24; 25; 26]. We have shown examples where the alignment effects appear to dominate, and others where SGD instead is akin to a regularization similar to early stopping, a situation predicted in some theoretical approaches, see e.g. [24]. Determining which effect of SGD most strongly affects performance given the structure of the task and the architecture is an important practical question for the future. Our work illustrates that subtle aspects enter in this equation, such as the size of the training set or the density of points near the decision boundary.
###### Acknowledgements.
We thank F. Cagnetta, A. Favero, M. Geiger, B. Goransson, F. Krzakala, L. Petrini, U. Tomasini and L. Zdeborova for discussion. M.W. acknowledges support from the Simons Foundation Grant (No. 454953 Matthieu Wyart).
|
2306.17762 | Enhanced plateau effect at resonance in realistic non-integrable EMRIs | When an EMRI in a perturbed integrable gravitational field, such as a
deformed Kerr black hole, undergoes a prolonged resonance, the frequencies that
engage in resonance retain a fixed rational ratio, despite experiencing
adiabatic changes due to radiation reaction. In the past this plateau effect in
the evolution of the ratio of frequencies has been investigated by studying the
orbital evolution through kludge models, which provide approximate average
losses of energy and angular momentum experienced by a test particle in this
field. By employing a Newtonian gravitational field that closely resembles a
pure Kerr or a perturbed Kerr relativistic field, we demonstrate that the
actual adiabatic evolution of an orbit driven by an artificial ``self-force''
results in more prolonged periods of resonance crossings compared to those
obtained by imposing a predetermined rate of energy and angular momentum change
throughout the orbital progression. | Areti Eleni, Theocharis A. Apostolatos | 2023-06-30T16:13:28Z | http://arxiv.org/abs/2306.17762v1 | # Enhanced plateau effect at resonance
###### Abstract
When an EMRI in a perturbed integrable gravitational field, such as a deformed Kerr black hole, undergoes a prolonged resonance, the frequencies that engage in resonance retain a fixed rational ratio, despite experiencing adiabatic changes due to radiation reaction. In the past this plateau effect in the evolution of the ratio of frequencies has been investigated by studying the orbital evolution through kludge models, which provide approximate average losses of energy and angular momentum experienced by a test particle in this field. By employing a Newtonian gravitational field that closely resembles a pure Kerr or a perturbed Kerr relativistic field, we demonstrate that the actual adiabatic evolution of an orbit driven by an artificial "self-force" results in more prolonged periods of resonance crossings compared to those obtained by imposing a predetermined rate of energy and angular momentum change throughout the orbital progression.
## 1 Introduction
Extreme mass ratio inspirals (EMRIs) are prominent sources of gravitational waves (GWs) for the future space-based detector Laser Interferometer Space Antenna (LISA) [1]. EMRIs are binaries consisting of a stellar mass compact object, i.e., a black hole (BH) or a neutron star (NS) of mass \(m\), inspiraling around a supermassive BH of mass \(M\), with mass ratio \(\epsilon=m/M\leq 10^{-4}\).
Since the lighter compact object of an EMRI spends the last few years of its inspiral tracing out the strong-gravity region of the supermassive BH, EMRIs offer us the opportunity to test the theory of General Relativity (GR) and its astrophysical implications concerning the formation of black holes. The last hundreds of thousands of GW cycles of such a system encode the details of the spacetime geometry of the massive object; thus by analysing these waves, one could read out its multipole moments [2].
According to GR the gravitational field of an astrophysical BH is described by the Kerr metric [3], the multipole moments of which are determined only by its mass and spin [4], [5]. Since the Kerr metric is characterized by a few symmetries, the equations governing the geodesic motion around a Kerr BH form an integrable system. The conservation of the energy and \(z\)-angular momentum along the axis of symmetry is associated with a time-translation and a rotational Killing vector, respectively, while the existence of the Carter constant [6] is linked to a hidden symmetry of a rank-two Killing tensor. As a consequence, a bound geodesic Kerr orbit in the spatial part of the phase-space is confined to lie on a compact torus characterized by three fundamental frequencies [7]. Trajectories ergodically fill the phase-space tori, unless two or more fundamental frequencies form a rational ratio (resonant orbits).
However, the actual orbit of the small compact object around the massive BH is not exactly geodesic, due to the gravitational self-force (SF) which arises from the object's interaction with the time-dependent gravitational field [8], [9]. The dissipative part of SF drives the object to a gradual inspiral towards the massive BH, following an adiabatic evolution of geodesics, while it radiates away energy and angular momentum in the form of GWs. The orbital motion is obtained from BH perturbation theory with the small mass ratio \(\epsilon\) as an expansion parameter [10], [11]. The SF for a non-spinning particle on a generic orbit around a Kerr BH to first order in the mass ratio \(\epsilon\) has been obtained recently [12].
During the inspiral the three orbital fundamental frequencies change slowly, thus a resonance will occur when two of them form a rational ratio. Usually, the methods used for computing the orbital evolution, and the corresponding waveforms, become inadequate at resonance, where the "constants" of motion change rapidly, leading to large shifts in the waveform's phase [13].
An important characteristic of resonances is that they can be used to discriminate if the background spacetime is not an integrable Kerr one, because either the central BH is not described by GR or the environment of the BH is not vacuum. Such spacetimes probably do not possess all the special symmetries of Kerr that lead to a third integral of motion and form a completely integrable system. These cases could be described as appropriate deformations of the Kerr metric. However, when an integrable Hamiltonian is slightly perturbed, its phase-space tori undergo changes. The Poincare-Birkhoff theorem [14], [15] states that the resonant tori disintegrate and form islands of stability (Birkhoff islands), occupying a phase-space volume of non-zero measure, inside which the ratio of frequencies remains locked to a constant rational value. Birkhoff islands are characteristic features of non-integrable dynamical systems.
Ref. [16] investigated the evolution of the ratio of the orbital frequencies of a particle orbiting around a non-Kerr object described by the Manko-Novikov (MN) metric [17], when its trajectory crosses a Birkhoff island. Due to the lack of an expression for the radiation reaction SF for non-Kerr spacetimes, the numerical integration of an orbit was performed by combining the equations of geodesic motion for the MN metric with the hybrid approximative method [18], which provides the average losses of energy and \(z\)-angular momentum. Assuming constant rates of energy and \(z\)-angular momentum losses, the time interval within which the orbit remains at a prolonged resonance (i.e., stays in a Birkhoff island) was computed. During that time both frequencies change while their ratio remains constant. Whenever such a plateau in the evolution of the ratio of frequencies is observed one could conclude that the central object is not a Kerr BH. Also, in [19], following a similar procedure, Destounis et al. found that when the orbit crosses a prolonged resonance the GW frequency exhibits a rapid but short-lived "glitch".
In the present work we would like to address the question whether the assumption of constant rates of change of energy and \(z\)-angular momentum leads to wrong estimates of the time interval of resonance crossings. Lacking a SF formula for a non-Kerr spacetime, we will resort to a Newtonian analogue problem.
In Ref. [20] it has been shown that the Euler gravitational field of a pair of spatially-fixed point masses at an imaginary distance from each other is a very good analogue of the Kerr relativistic field. Moreover this particular field can be modified so as to transform the system from an integrable to a slightly non-integrable one. By incorporating an additional small external dissipative force, we could drive adiabatically an orbit, in a similar fashion to how a geodesic orbit is driven in a given background spacetime by the radiation reaction caused by a self-force. At the same time, the average losses of energy and \(z\)-angular momentum in the adiabatic limit for such a dissipative force are computed. Once again, the orbit is evolved by a new integration scheme, based on imposing the corresponding time-dependent "constants" of motion, but without any direct dissipative force applied. Finally the two distinct numerical schemes were compared with respect to the total resonance crossing time. There was a systematic enhancement of the crossing time, by a factor of at least 2, when the instantaneous dissipative force was employed.
The rest of the article is organized as follows: In Section 2 an overall description of the oblate Euler problem is given. In Section 3 we describe the perturbed version of this problem, constructed by introducing a small mass at the midpoint between the two fixed masses. In Section 4 we give a
brief description of some theoretical features of slightly non-integrable problems. In Section 5, we introduce the dissipative force that is used and explain the two different integration schemes followed to drive an orbit in the perturbed Euler field. The scheme based on average losses is further analysed in Section 6. Finally, in Section 7 we present our results and discuss their implications.
## 2 The oblate Euler problem
The Euler problem of two fixed centers [21] describes the gravitational field of two static point masses \(m_{1}\) and \(m_{2}\) at a fixed distance \(2a\) between them. We assume that the \(z-\)axis is the axis along which the two masses are located at \(z_{1}=a\hat{z}\) and \(z_{2}=-a\hat{z}\), respectively, with \(a\) being constant and real. By setting the two masses equal to each other, i.e., \(m_{1}=m_{2}=M/2\), and their distance imaginary, i.e., \(a\to ia\), the potential becomes oblate (with negative quadrupole moment) and can be considered as the Newtonian analogue of the relativistic Kerr black hole [3], [20], [22]-[24]. We need the symmetric case with equal masses because only then the gravitational potential of each mass is the complex conjugate to the potential of the other mass, allowing the combined potential field of the two masses to be real. The resulting gravitational field of the oblate Euler (also known as Vinti potential in astronomy, used to describe the gravitational field around oblate planets [25]), is stationary, axisymmetric along the \(z-\)axis, and reflection symmetric along the equatorial plane described by the following form:
\[V_{0}=-\frac{G(M/2)}{|\mathbf{r}-ia\mathbf{\hat{z}}|}-\frac{G(M/2)}{|\mathbf{ r}+ia\mathbf{\hat{z}}|}, \tag{1}\]
where \(\mathbf{r}\) is the position vector with respect to the origin of the axes, and by \(|\mathbf{k}|\) we mean \(\sqrt{\mathbf{k}\cdot\mathbf{k}}\). The latter vector product is a complex number and in order to keep the square root single-valued we should adopt a branch cut. We have chosen the negative real axis of the vector product as the branch-cut of our potential function so that the two denominators in (1) will be conjugate to each other, leading to a real potential. Henceforth, when we mention the Euler field, we shall exclusively refer to the oblate Euler field, and later on to its perturbed version.
A general stationary, axisymmetric and reflection symmetric along the equatorial plane Newtonian potential that vanishes at infinity can be fully decomposed in multipole moments \(M_{l}\) through the relation [24], [26]:
\[V=-\sum_{l=0}^{\infty}\frac{M_{2l}}{r^{2l+1}}P_{2l}(z/r),\]
where \(P_{l}\) are the Legendre polynomials. It turns out that the multipole moments of the Euler potential (1) are given by [20], [26]:
\[M_{2l}=M(-a^{2})^{l},\]
which is the same as the "no-hair" relation obeyed by the mass multipole moments of the Kerr metric [4], [5] with the length parameter \(a\) of the Euler field, playing the role of the spin of a Kerr black hole [20], [24].
A more appropriate coordinate system to study the motion in this field is that of oblate spheroidal coordinates, \((\xi,\eta,\phi)\), where \(\phi\) is the usual spherical azimuthal angle, \(\xi\in[0,+\infty)\) and \(\eta\in[-1,1]\). These new coordinates are related to the Cartesian coordinates \((x,y,z)\) by:
\[x =a\sqrt{(1+\xi^{2})(1-\eta^{2})}\cos\phi,\] \[y =a\sqrt{(1+\xi^{2})(1-\eta^{2})}\sin\phi,\] \[z =a\xi\eta,\]
and to the spherical coordinates \((r,\theta)\) by:
\[r =a\sqrt{1+\xi^{2}-\eta^{2}},\] \[\cos\theta =\frac{\xi\eta}{\sqrt{1+\xi^{2}-\eta^{2}}}.\]
In terms of oblate spheroidal coordinates the Euler potential (1) assumes the following simple form:
\[V_{0}(\xi,\eta)=-\frac{GM_{0}\xi}{a(\xi^{2}+\eta^{2})}. \tag{2}\]
From the above multipole expansion, \(M_{0}=M\). It should be noted that the field \(V_{0}(\xi,\eta)\) is defined everywhere except when \(\xi=0\) and \(\eta=0\), which corresponds to the equatorial focal circle (\(r=a,\theta=\pi/2\)), where the potential becomes singular. This singularity corresponds to Kerr's ring singularity.
The motion of a test particle in the Newtonian Euler potential is independent of its mass, so the Hamiltonian (per unit test-particle mass \(\mu\)) is:
\[H_{0}=\frac{1}{2a^{2}}\left[p_{\xi}^{2}\frac{\xi^{2}+1}{\xi^{2}+\eta^{2}}+p_{ \eta}^{2}\frac{1-\eta^{2}}{\xi^{2}+\eta^{2}}+\frac{p_{\phi}^{2}}{(\xi^{2}+1)( 1-\eta^{2})}\right]+V_{0}(\xi,\eta). \tag{3}\]
The conjugate momenta to \(\xi,\eta,\phi\) are defined as:
\[p_{\xi} =a^{2}\frac{\xi^{2}+\eta^{2}}{\xi^{2}+1}\dot{\xi}, \tag{4}\] \[p_{\eta} =a^{2}\frac{\xi^{2}+\eta^{2}}{1-\eta^{2}}\dot{\eta},\] (5) \[p_{\phi} =a^{2}(\xi^{2}+1)(1-\eta^{2})\dot{\phi}, \tag{6}\]
where \(``\cdot"\) denotes time derivative.
The stationarity and axisymmetry of the system are obvious in the Hamiltonian expression \(H_{0}\). The time and azimuthal coordinate are cyclic, leading to conservation of the energy \(E=H_{0}\) and the angular momentum along the axis of symmetry \(L_{z}=p_{\phi}\), respectively. Furthermore, the Hamilton-Jacobi equation is separable in oblate spheroidal coordinates, leading to a third nontrivial constant of motion, \(\beta\), which is quadratic in momenta [27]. By substituting \(\beta\) by \(-Q-L_{z}^{2}-2a^{2}E\), the quantity \(Q\) can be considered as the Newtonian analogue of Kerr's Carter constant [6], obtaining either one of the following forms [20]:
\[Q =(1-\eta^{2})p_{\eta}^{2}+\eta^{2}\left(-2Ea^{2}+\frac{L_{z}^{2}} {1-\eta^{2}}\right) \tag{7}\] \[=-p_{\xi}^{2}(\xi^{2}+1)+2a^{2}E\xi^{2}+2GMa\xi-\frac{L_{z}^{2} \xi^{2}}{\xi^{2}+1}. \tag{8}\]
The existence of a third integral of motion renders the Euler problem completely integrable in terms of quadratures; as there are three independent and in involution (i.e., \(\{H_{0},L_{z}\}=\{H_{0},Q\}=\{L_{z},Q\}=0\)) integrals of motion as the number of the degrees of freedom of the Euler Hamiltonian system. The expressions (7) and (8) are quite similar to the corresponding expressions relating the Carter constant in Kerr with either \(p_{\theta}\) and \(\theta\), or \(p_{r}\) and \(r\).
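In practice, Eqs. (7) and (8) offer a convenient accuracy check for numerical orbit integrations in the unperturbed field: both expressions must return the same, constant value of \(Q\) along an orbit. A minimal sketch of such helper functions (our own illustration, with symbols named after the text) is:

```python
import numpy as np

def carter_Q_eta(eta, p_eta, E, Lz, a):
    """Newtonian Carter-like constant, Eq. (7)."""
    return (1 - eta**2) * p_eta**2 + eta**2 * (-2 * E * a**2
                                               + Lz**2 / (1 - eta**2))

def carter_Q_xi(xi, p_xi, E, Lz, a, GM):
    """Same constant expressed through xi and p_xi, Eq. (8)."""
    return (-p_xi**2 * (xi**2 + 1) + 2 * a**2 * E * xi**2
            + 2 * GM * a * xi - Lz**2 * xi**2 / (xi**2 + 1))
```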
An extensive list of key similarities that the Euler potential shares with the gravitational field of the relativistic Kerr black hole can be found in Ref. [20]. The analogy between the two problems is better revealed by replacing \(a\xi\) and \(\eta\) of the Euler field with \(r\) and \(\cos\theta\), respectively, mimicking the Boyer-Lindquist coordinates of Kerr. Actually, the equations of motion in a Kerr metric at 1st Post-Newtonian order and at large \(r\)-values reduce to the equations of motion in the Euler field [28].
## 3 The perturbed Euler
We perturb the Euler field in order to find a Newtonian analogue of a slightly perturbed Kerr spacetime, by adding a small point mass \(m\) (\(m<<M\)) at the origin of the axes. In this case the expression of the quadrupole moment and all higher mass moments are different from those of the unperturbed Euler, now obeying the following relation:
\[M_{0} =M+m, \tag{9}\] \[M_{2l} =(-a^{2})^{l}M, \tag{10}\]
with \(l=1,2...\). The multipole moments \(M_{k}\) with odd \(k\) vanish due to the reflection symmetry along the equatorial plane. The new potential in oblate spheroidal coordinates takes the form:
\[V(\xi,\eta)=-\frac{GM\xi}{a(\xi^{2}+\eta^{2})}-\frac{Gm}{a\sqrt{1+\xi^{2}-\eta^{ 2}}}. \tag{11}\]
We will rewrite the potential in such a way that the unperturbed and the perturbed fields correspond to the same total mass \(M_{0}\), so that both fields will be comparable with respect to their asymptotic limit at infinity:
\[V(\xi,\eta)=-\frac{GM_{0}\xi}{a(\xi^{2}+\eta^{2})}+\frac{Gm}{a}\left(\frac{\xi }{\xi^{2}+\eta^{2}}-\frac{1}{\sqrt{1+\xi^{2}-\eta^{2}}}\right). \tag{12}\]
Thus when \(m=0\) the system degenerates into the integrable Euler problem.
When an integrable Hamiltonian system becomes slightly perturbed, the new Hamiltonian can be written in terms of the old integrable Hamiltonian \(H_{0}\) plus a perturbation term \(H_{1}\):
\[H=H_{0}+\epsilon H_{1}. \tag{13}\]
In our case \(H_{0}\) is the Hamiltonian given exactly by Eq. (3). We assume that the mass \(m\) is small enough, compared to \(M_{0}\), to apply classical perturbation theory. The term \(H_{1}\) is given by:
\[H_{1}=\frac{GM_{0}}{a}\left(\frac{\xi}{\xi^{2}+\eta^{2}}-\frac{1}{\sqrt{1+ \xi^{2}-\eta^{2}}}\right), \tag{14}\]
while the perturbative parameter is defined by \(\epsilon=m/M_{0}\).
The new Hamiltonian has no dependence either on the time variable \(t\) or the azimuthal angle \(\phi\), due to the stationarity and axisymmetry of the new potential. As a result, there are two constants of motion; the energy:
\[E=H=\frac{a^{2}}{2}(\xi^{2}+\eta^{2})\left[\frac{\dot{\xi}^{2}}{1+\xi^{2}}+ \frac{\dot{\eta}^{2}}{1-\eta^{2}}\right]+\frac{a^{2}}{2}(1+\xi^{2})(1-\eta^{2} )\dot{\phi}^{2}+V(\xi,\eta), \tag{15}\]
and the component of angular momentum along the axis of symmetry:
\[L_{z}=p_{\phi}=a^{2}(\xi^{2}+1)(1-\eta^{2})\dot{\phi}. \tag{16}\]
However, the Hamilton-Jacobi equation is not separable anymore; there is no third integral of motion which is independent of, and in involution with, the energy \(E\) and the \(z\)-angular momentum \(L_{z}\). In the next sections we will numerically confirm this by investigating the Poincare maps of orbits in the potential (12) and by finding properties related to non-integrability, such as chaotic motion and Birkhoff chains, as long as \(m\neq 0\).
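For such numerical explorations, the orbits can be integrated directly from Hamilton's equations of the perturbed Hamiltonian. The sketch below is our own illustration (units with \(G=1\); it evaluates the configuration-space gradients by central finite differences rather than analytically, which is sufficient for an exploratory integration) and is reused by the later sketches:

```python
import numpy as np

G = 1.0  # units with G = 1

def potential(xi, eta, a, M0, eps):
    """Perturbed Euler potential of Eq. (12); eps = m/M0, and eps = 0
    recovers the integrable oblate Euler field."""
    return (-G * M0 * xi / (a * (xi**2 + eta**2))
            + G * eps * M0 / a * (xi / (xi**2 + eta**2)
                                  - 1.0 / np.sqrt(1 + xi**2 - eta**2)))

def hamiltonian(xi, eta, pxi, peta, Lz, a, M0, eps):
    """Hamiltonian per unit test mass, Eqs. (3) and (13)."""
    kin = (pxi**2 * (xi**2 + 1) / (xi**2 + eta**2)
           + peta**2 * (1 - eta**2) / (xi**2 + eta**2)
           + Lz**2 / ((xi**2 + 1) * (1 - eta**2))) / (2 * a**2)
    return kin + potential(xi, eta, a, M0, eps)

def rhs(t, s, Lz, a, M0, eps, h=1e-7):
    """Hamilton's equations for s = (xi, eta, p_xi, p_eta) at fixed Lz."""
    xi, eta, pxi, peta = s
    dxi = pxi * (xi**2 + 1) / (a**2 * (xi**2 + eta**2))
    deta = peta * (1 - eta**2) / (a**2 * (xi**2 + eta**2))
    dH_dxi = (hamiltonian(xi + h, eta, pxi, peta, Lz, a, M0, eps)
              - hamiltonian(xi - h, eta, pxi, peta, Lz, a, M0, eps)) / (2 * h)
    dH_deta = (hamiltonian(xi, eta + h, pxi, peta, Lz, a, M0, eps)
               - hamiltonian(xi, eta - h, pxi, peta, Lz, a, M0, eps)) / (2 * h)
    return [dxi, deta, -dH_dxi, -dH_deta]
```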
As long as we are interested in bound orbits, we could define an effective potential \(V_{\rm eff}\) to rewrite (15) as
\[0=\frac{1}{2}a^{2}(\xi^{2}+\eta^{2})\left(\frac{\dot{\xi}^{2}}{\xi^{2}+1}+ \frac{\dot{\eta}^{2}}{1-\eta^{2}}\right)+V_{\rm eff}(\xi,\eta), \tag{17}\]
with
\[V_{\rm eff}(\xi,\eta)=\frac{L_{z}^{2}}{2a^{2}(\xi^{2}+1)(1-\eta^{2})}+V(\xi, \eta)-E, \tag{18}\]
where (16) has been used to replace the centrifugal part of the kinetic energy.
From Eq. (17), it is obvious that the motion is allowed only for \(V_{\rm eff}\leq 0\). When an orbit reaches the curve \(V_{\rm eff}=0\), the velocities \(\dot{\xi}\) and \(\dot{\eta}\) become zero (turning points); thus the curve \(V_{\rm eff}=0\) is known as Curve of Zero Velocity (CZV) [16]. Bound orbits are allowed in the interior of a closed CZV where the effective potential is negative. Additionally bound orbits are characterized by \(E<0\), since orbits with \(E\geq 0\) have CZVs that are not closed but are extended to infinity. The number and
the size of the distinct allowed regions on the poloidal plane \((\xi,\eta)\), within which a bound orbit could evolve, depend on the values of \(E\) and \(L_{z}\) of the orbit itself.
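A short helper evaluating Eq. (18) (our own sketch) makes it straightforward to map out these allowed regions on a \((\xi,\eta)\) grid:

```python
import numpy as np

def V_eff(xi, eta, E, Lz, a, V):
    """Effective potential of Eq. (18) for an axisymmetric potential
    V(xi, eta); motion is allowed where V_eff <= 0, and V_eff = 0 is the
    curve of zero velocity (CZV)."""
    return Lz**2 / (2 * a**2 * (xi**2 + 1) * (1 - eta**2)) + V(xi, eta) - E

# Example (using the `potential` helper sketched above):
# grid = V_eff(xi[:, None], eta[None, :], E, Lz, a,
#              lambda x, e: potential(x, e, a, M0, eps))
# allowed = grid <= 0   # boolean map of the bound region(s)
```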
On the equatorial plane, i.e. at \(\eta=0\), the effective potential reads:
\[V_{\rm eff,eq}=-E+\frac{L_{z}^{2}}{2a^{2}(\xi^{2}+1)}-\frac{GM_{0}}{a\xi} \left[1-\epsilon\left(1-\frac{\xi}{\sqrt{1+\xi^{2}}}\right)\right]. \tag{19}\]
Especially a circular equatorial orbit (CEO) at \(\xi=\xi_{0}\) satisfies:
\[V_{\rm eff,eq}(\xi_{0})=\left.\frac{\partial V_{\rm eff,eq}}{\partial\xi} \right|_{\xi_{0}}=0. \tag{20}\]
Solving the system of the last two equations we obtain the constants of motion for a CEO:
\[L_{z}= \pm\sqrt{\frac{GM_{0}a\,(\xi_{0}^{2}+1)^{2}}{\xi_{0}^{3}}\left[1-\epsilon\left(1-\frac{\xi_{0}^{3}}{(1+\xi_{0}^{2})^{3/2}}\right)\right]}, \tag{21}\] \[E= -\frac{GM_{0}}{2a\xi_{0}^{3}}(\xi_{0}^{2}-1)\left[1-\epsilon\left(1-\frac{\xi_{0}^{3}}{(\xi_{0}^{2}-1)\sqrt{1+\xi_{0}^{2}}}\right)\right]. \tag{22}\]
For stable circular equatorial orbits we should have \(\left.\frac{\partial^{2}V_{\rm eff,eq}}{\partial\xi^{2}}\right|_{\xi_{0}}\geq 0\). An innermost stable circular orbit (ISCO) exists when \(\left.\frac{\partial^{2}V_{\rm eff,eq}}{\partial\xi^{2}}\right|_{\xi_{\rm ISCO }}=0\). The perturbed Euler, as well as the corresponding unperturbed one, has an ISCO with the corresponding value of \(\xi_{\rm ISCO}\) depending only on the perturbative parameter \(\epsilon\). For \(\epsilon=0\), \(\xi_{\rm ISCO}=\sqrt{3}\), see [20].
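Since \(V_{\rm eff,eq}\) is linear in \(E\) and \(L_{z}^{2}\), the two conditions (20) fix the constants of a CEO uniquely, and the ISCO can then be located as the root of the second \(\xi\)-derivative. A numerical sketch (ours; finite differences, units with \(G=1\)) that reproduces \(\xi_{\rm ISCO}=\sqrt{3}\) for \(\epsilon=0\) is:

```python
import numpy as np
from scipy.optimize import brentq

def ceo_constants(xi0, a, M0, eps, h=1e-6):
    """E and Lz^2 of the circular equatorial orbit at xi = xi0, from the
    two conditions of Eq. (20); derivatives via central differences."""
    V = lambda x: -M0 / (a * x) + eps * M0 / a * (1.0 / x
                                                  - 1.0 / np.sqrt(1 + x**2))
    A = lambda x: 1.0 / (2 * a**2 * (x**2 + 1))
    dV = (V(xi0 + h) - V(xi0 - h)) / (2 * h)
    dA = (A(xi0 + h) - A(xi0 - h)) / (2 * h)
    Lz2 = -dV / dA
    return Lz2 * A(xi0) + V(xi0), Lz2

def isco_xi(a=1.0, M0=1.0, eps=0.0, h=1e-4):
    """xi of the ISCO: the second xi-derivative of V_eff,eq vanishes
    along the family of circular equatorial orbits."""
    def d2Veff(xi0):
        E, Lz2 = ceo_constants(xi0, a, M0, eps)
        Veff = lambda x: (Lz2 / (2 * a**2 * (x**2 + 1)) - M0 / (a * x)
                          + eps * M0 / a * (1.0 / x - 1.0 / np.sqrt(1 + x**2))
                          - E)
        return (Veff(xi0 + h) - 2 * Veff(xi0) + Veff(xi0 - h)) / h**2
    return brentq(d2Veff, 1.0, 3.0)   # returns ~1.732 for eps = 0
```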
## 4 KAM tori and resonant tori
Due to the integrability of the Euler problem, bound orbits lie on two-dimensional tori in the 4-dimensional phase-space \((\xi,\dot{\xi},\eta,\dot{\eta})\), characterized by the three integrals of motion. Tori corresponding to orbits that are characterised by the same \(E\) and \(L_{z}\) but different \(Q\) are nested within each other. Using action-angle variables one can define the orbit's characteristic frequencies [20] of libration type \((\Omega_{\xi},\Omega_{\eta})\) associated with \(\xi\) and \(\eta\) oscillations. If the ratio of frequencies \(\Omega_{\xi}/\Omega_{\eta}\) is an irrational number, the motion will never repeat itself and it will gradually cover the whole torus (quasi-periodic orbit). When the ratio of frequencies is a rational number (resonance), instead, the orbit repeats itself after an integer number of windings on the corresponding resonant torus (periodic orbit).
The Poincare surface of section is a two-dimensional surface that intersects transversely the foliage of tori [29]. In our case, we have chosen as a surface of section the plane \((\xi,\dot{\xi})\), when the orbit pierces the equatorial plane, \(\eta=0\), with positive \(\dot{\eta}\). The Poincare surface of section of each torus forms an invariant closed curve, which is either covered densely when the orbit is quasi-periodic, or consisting of finite fixed points when the orbit is periodic.
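Numerically, such a section can be collected with an event detector on \(\eta=0\); the sketch below (ours, reusing the `rhs` vector field defined earlier) records the \((\xi,\dot{\xi})\) coordinates of the upward crossings:

```python
import numpy as np
from scipy.integrate import solve_ivp

def poincare_section(rhs, s0, Lz, a, M0, eps, t_max=2.0e4):
    """Crossings of the surface of section eta = 0 with eta_dot > 0 for a
    single orbit with state s = (xi, eta, p_xi, p_eta)."""
    def cross(t, s, *args):
        return s[1]                      # the section is eta = 0
    cross.direction = 1                  # keep only crossings with eta_dot > 0

    sol = solve_ivp(rhs, [0.0, t_max], s0, args=(Lz, a, M0, eps),
                    events=cross, rtol=1e-10, atol=1e-12)
    xi, eta, pxi, _ = sol.y_events[0].T
    xi_dot = pxi * (xi**2 + 1) / (a**2 * (xi**2 + eta**2))   # invert Eq. (4)
    return xi, xi_dot
```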
When an integrable Hamiltonian system becomes slightly perturbed, the Kolmogorov-Arnold-Moser (KAM) theorem [30]-[32] states that almost all tori (the non-resonant ones) become slightly deformed. Thus the quasi-periodic orbits survive under a sufficiently small perturbation. They are confined on a 2-torus (KAM torus) which deviates slightly from the unperturbed one. Consequently the corresponding surface of section resembles the surface of section of the initially integrable system, but with a slightly deformed shape; these are the invariant KAM curves of the perturbed system.
The resonant tori, instead, are destroyed when the system is slightly perturbed, according to the Poincaré-Birkhoff theorem [14], [15], forming Birkhoff chains of islands on the Poincaré section. These islands are built around the fixed points of the initial unperturbed (integrable) system. The interior of these islands consists of a new family of KAM curves, all sharing the same rational ratio of fundamental frequencies as the corresponding resonant torus of the unperturbed system.
The Birkhoff islands of stability are very thin and their detection on a Poincare section could be quite tedious. A useful method to study nonintegrable systems and numerically detect the location of
a chain of islands is the so-called rotation number, which actually gives the ratio of the fundamental frequencies [33]. The rotation number \(\nu_{\theta}\) is defined by:
\[\nu_{\theta}=\lim_{N\to\infty}\frac{1}{2\pi N}\sum_{i=1}^{N}\theta_{i}, \tag{23}\]
with \(N\) denoting the number of crossings of the Poincare section by a phase-space trajectory. The angles of rotation \(\theta_{i}\) are calculated as follows: at first one finds on the surface of section the fixed central point \(\mathbf{u}_{0}\), which corresponds to the spherical orbit, \(\xi=const\), and around which all KAM curves of quasi-periodic orbits are formed. The position \(\mathbf{R}_{i}=\mathbf{u}_{i}-\mathbf{u}_{0}\) of each crossing point \(\mathbf{u}_{i}\) of a phase space trajectory on the surface of section with respect to \(\mathbf{u}_{0}\) is defined. Finally the angles \(\theta_{i}=angle(\mathbf{R}_{i+1},\mathbf{R}_{i})\) between two successive positions of \(\mathbf{R}_{i}\) on the surface of section are calculated.
The rotation number is intimately related to the ratio of fundamental frequencies of the orbit itself. On KAM tori the ratio of frequencies is irrational and varies from one curve to the other, so the rotation number changes continuously and monotonically as a function of the distance from the center \(\mathbf{u}_{0}\) of the Poincaré section. Its monotonic evolution is interrupted though, creating a plateau, by the islands of the periodic orbits, where the value of \(\nu_{\theta}\) is rational and fixed for all orbits belonging to the same chain of islands: within an island of stability the ratio of frequencies remains constant, regardless of the specific KAM curve to which each orbit belongs, even though the frequencies themselves change from one KAM curve to another.
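As a concrete illustration of this procedure, the following sketch (not the authors' implementation) estimates \(\nu_{\theta}\) from a sequence of section crossings. It assumes that the rotation angle per crossing around \(\mathbf{u}_{0}\) stays below \(\pi\), which is sufficient for detecting the plateaus associated with resonant islands.

```python
# Sketch (not the authors' code) of the rotation-number estimate of Eq. (23):
# average the angles between successive position vectors R_i = u_i - u0 of the
# section crossings, relative to the central fixed point u0.  Each step angle
# is taken in (-pi, pi], i.e. the per-crossing rotation is assumed below pi.
import numpy as np

def rotation_number(points, u0):
    """points: (N, 2) array of successive section crossings; u0: central fixed point."""
    R = np.asarray(points, dtype=float) - np.asarray(u0, dtype=float)
    cross = R[:-1, 0] * R[1:, 1] - R[:-1, 1] * R[1:, 0]   # z-component of R_i x R_{i+1}
    dot = np.einsum('ij,ij->i', R[:-1], R[1:])
    theta = np.arctan2(cross, dot)                        # signed step angles
    return abs(theta.sum()) / (2.0 * np.pi * len(theta))

# Self-test on synthetic crossings that advance by a fixed phase per iterate
# around u0 = (0, 0): the estimate should recover that phase step / 2*pi.
step = 2.0 * np.pi * 0.137
phis = step * np.arange(500)
pts = np.column_stack([1.2 * np.cos(phis), 0.9 * np.sin(phis)])
print(rotation_number(pts, (0.0, 0.0)))   # ~ 0.137
```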
### The Poincaré section of the perturbed Euler problem
In order to demonstrate that the perturbed Euler problem is non-integrable we have constructed a Poincaré section and searched for Birkhoff islands. The physical parameters \(M_{0},a\), as well as the orbital parameters \(E,L_{z}\) of the perturbed system with \(\epsilon=10^{-2}\), were initially chosen so that there are bound orbits. For such a fixed set of parameters, we evolved numerically a set of orbits with different initial conditions \((\xi(0),\dot{\xi}(0)=0,\eta(0)=0)\), while the initial velocity \(\dot{\eta}(0)\) was calculated directly from Eqs. (15, 16), apart from its sign, which was chosen to be positive; see Figs. 1a and 1b.
Then we formed the Poincaré section of all these orbits (see Fig. 2) and measured the rotation number of each one. Most of them formed KAM curves on the Poincaré section.
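The construction of such a section can be automated with standard event detection during the numerical integration. The sketch below is not the authors' code and, to stay self-contained, it uses the well-known Henon-Heiles system as a stand-in dynamical system; for the actual problem one would replace the right-hand side with the equations of motion of Appendix A and record the \(\eta=0\), \(\dot{\eta}>0\) crossings in the \((\xi,\dot{\xi})\) plane.

```python
# Sketch (not the authors' code): building a Poincare surface of section with
# scipy's event detection.  The Henon-Heiles system is used here purely as a
# stand-in, to illustrate recording transversal crossings of a chosen plane
# in a fixed direction.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

def rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x * x + y * y]

def crossing(t, s):      # section: x = 0
    return s[0]
crossing.direction = 1   # keep only crossings with xdot > 0

E = 1.0 / 8.0            # fixed energy of the stand-in system (bound motion)
for y0 in np.linspace(-0.25, 0.35, 8):
    px0 = np.sqrt(2.0 * E - y0**2 + (2.0 / 3.0) * y0**3)   # x0 = 0, py0 = 0
    sol = solve_ivp(rhs, (0.0, 1500.0), [0.0, y0, px0, 0.0],
                    events=crossing, rtol=1e-9, atol=1e-12)
    yc, pyc = sol.y_events[0][:, 1], sol.y_events[0][:, 3]
    plt.plot(yc, pyc, '.', ms=1)
plt.xlabel('y'); plt.ylabel('p_y'); plt.show()
```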
By choosing the particular initial condition \(\xi(0)\) that had led to three single fixed points on the Poincaré section of the unperturbed problem, corresponding to the \(2:3\) resonance, and assuming the same parameters \(M_{0},a,E,L_{z}\), we managed to locate the chain of Birkhoff islands of the corresponding non-integrable system. By varying the initial condition around this value we found a whole set of resonant orbits belonging to the same chain of Birkhoff islands.
We have also drawn the rotation curve (Fig. 3) of all orbits evolved. The strictly monotonic function \(\nu_{\theta}(\xi(0))\) is interrupted by a narrow plateau corresponding to all orbits at resonance \(2:3\); see Fig. 3b.
The width of the islands is intimately related to the magnitude of the perturbation \(\epsilon\). More specifically, for sufficiently small perturbation parameter, the width should scale as \(\sqrt{\epsilon}\)[34], [35]. We have confirmed this theoretical relation by measuring the width of the leftmost island of resonance \(2:3\) along the \(\xi\)-axis, for a few values of \(\epsilon\) in the range \(10^{-5}\) to \(10^{-2}\).
## 5 Inspirals
In the previous section we studied the evolution of orbits in the perturbed Euler gravitational field alone, that is without any other external force. This can be regarded as the analogue of the GR geodesic orbits in a specifically perturbed Kerr metric, like the Manko-Novikov metric [17]. The orbit of a compact object in a realistic EMRI though is not exactly geodesic, due to radiation reaction self-force. As long as the ratio of masses of the binary is sufficiently small, the orbits could be considered almost geodesics, but with adiabatically varying orbital parameters. This is true not only for EMRIs with a Kerr black hole as the central object, but with a non-Kerr supermassive central object as well.
In order to probe into the effect of resonance-crossing due to an unknown self-force in a perturbed Kerr metric, we have used instead the perturbed Euler problem, endowed with an artificial dissipative "self-force", as a trustworthy toy model. Usually, the study of such crossings in various perturbed Kerr background spacetimes is carried out by imposing the average energy and angular momentum losses on the corresponding geodesic equations of motion [16], [18], [19]. Although this method in general leads to a crude, though sufficiently accurate, adiabatic evolution of orbits, the approximation becomes unreliable when the orbit passes through a resonance. The evolution of the orbit through a resonance under the instantaneous self-force itself could then be quite different.
We have studied the evolution of orbits in the perturbed Euler field, following two different schemes: (i) By numerically integrating the second-order Euler-Lagrange equations of a test body under the specific Newtonian gravitational force, with a given external dissipative force, and (ii) by numerically integrating a new version of the equations of motion of the Newtonian field alone, suitably parametrized by the usual integrals of motion, \(E,L_{z}\), and imposing a prescribed time-dependence in \(E,L_{z}\), caused by the same dissipative force. In Section 6 we will further explain the new set of equations used under the second scheme.
The first scheme describes, up to numerical errors, the true evolution of the orbit, while the second scheme gives an approximate evolution. When the orbit is not at resonance, the two schemes are expected to lead to approximately equivalent adiabatic evolutions in the limit of zero "self-force". Since the corresponding torus in phase space is then densely covered, one should not anticipate any difference in the estimation of the average losses, whether these are measured along a "geodesic" orbit (as in the second scheme) or along the actual orbit under the tiny "self-force".
In order to check how generic our results are with respect to the differences arising from the two schemes described above, we have used two different dissipative forces as analogues of the relativistic self-force. The general formula assumed for both external forces is
\[\mathbf{F}_{\mathrm{ext}}=-\delta\mu f(\xi,\eta)\mathbf{v}, \tag{24}\]
where \(\mu\) is the mass of the test particle, \(\mathbf{v}\) is its velocity vector in oblate spheroidal coordinates, \(\delta\ll 1\) measures the magnitude of the "self-force", and the function \(f(\xi,\eta)\) determines how the strength of
Figure 1: The CZV (blue boundary) of orbits in a perturbed Euler field with \(M_{0}=1,a=0.7,\epsilon=10^{-2}\). The orbits are characterized by orbital parameters \(E=-0.156393\), \(L_{z}=1.32878\). The left panel corresponds to an orbit with \(\xi(0)=1.800\) (which leads to a KAM torus in phase space), while the right panel corresponds to a fine-tuned orbit with \(\xi(0)=1.257\) (which leads to a resonant KAM curve enclosed in a Birkhoff island on the Poincaré surface of section). Both orbits are evolved for the same total time: \(T=500\).
this force depends on the actual position of the particle. The two cases investigated were
\[f_{1}(\xi,\eta)=1, \tag{25}\] \[f_{2}(\xi,\eta)=\frac{\sqrt{1-\eta^{2}}}{\xi}. \tag{26}\]
The first function \(f_{1}\) corresponds to the usual atmospheric drag force, while the second one, \(f_{2}\), has been constructed so as to lead to a loss of energy and angular momentum, with a strength that is enhanced at lower \(\xi\) values, where the field is stronger, and that depends on the \(\eta\)-coordinate in a simple but physical, reflection-symmetric way.
The components of the velocity \(\mathbf{v}\) in spheroidal coordinates are (see Appendix A of [20]):
\[v_{\xi} = a\dot{\xi}\sqrt{\frac{\xi^{2}+\eta^{2}}{\xi^{2}+1}}, \tag{27}\] \[v_{\eta} = a\dot{\eta}\sqrt{\frac{\xi^{2}+\eta^{2}}{1-\eta^{2}}},\] (28) \[v_{\phi} = a\dot{\phi}\sqrt{(1-\eta^{2})(\xi^{2}+1)}, \tag{29}\]
The instantaneous energy and \(z\)-angular momentum losses per unit mass are given by:
\[\left(\frac{dE}{dt}\right)_{i}=\mathbf{v}\cdot\mathbf{a}_{\rm ext} = -\delta\ a^{2}f_{i}(\xi,\eta)\left[(\xi^{2}+\eta^{2})\left(\frac{ \dot{\xi}^{2}}{1+\xi^{2}}+\frac{\dot{\eta}^{2}}{1-\eta^{2}}\right)+(1+\xi^{2} )(1-\eta^{2})\dot{\phi}^{2}\right], \tag{30}\] \[\left(\frac{dL_{z}}{dt}\right)_{i}=\hat{\mathbf{z}}\cdot(\mathbf{ v}\times\mathbf{a}_{\rm ext}) = -\delta\ a^{2}f_{i}(\xi,\eta)(1+\xi^{2})(1-\eta^{2})\dot{\phi}, \tag{31}\]
where \(\mathbf{a}_{\rm ext}=\mathbf{F}_{\rm ext}/\mu\) and \(i\)-index denotes the type of "self-force" used; see Eqs. (25,26). The averaged loss of either \(E\) or \(L_{z}\) at each orbital point is computed by
\[\left\langle\frac{dK}{dt}\right\rangle=\lim_{T\to\infty}\frac{1}{T}\int_{0}^ {T}\frac{dK}{dt}dt, \tag{32}\]
Figure 2: On the left panel, the Poincaré sections of a number of orbits, all characterized by \(E=-0.156393\), and \(L_{z}=1.32878\) (the same as for the previous Figure 1), are drawn. Each orbit is evolved starting from a different initial condition \(\xi(0)\). Most of the orbits lead to KAM curves (among them is the green KAM curve of the orbit shown in Fig. 1a). Even the apparent dashed curve is a normal KAM curve that needs longer evolution time to fill the whole invariant curve. Also shown is the (purple) chain of Birkhoff islands that correspond to an orbit with resonance \(\Omega_{\xi}:\Omega_{\eta}=2:3\). This is exactly the orbit of Fig. 1b. On the right panel a detail of the Poincaré section of Fig. 2a is drawn around the purple leftmost island. A few other Poincaré sections are shown, all corresponding to the same Birkhoff island of resonance \(2:3\).
where \(K\) stands for \(E\) or \(L_{z}\), and the integrand is computed along a "geodesic" orbit, i.e., an orbit on which no external force is applied, so that \(E,L_{z}\) remain constant. The integration time \(T\) needs to be infinite so that the "geodesic" orbit has fully covered the whole available phase space for that orbit. In practice, we have integrated for a time long enough that the average converges to a finite value. Of course \(T\) should be much longer than the timescale of the \(\xi\) and \(\eta\) oscillations.
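For reference, the instantaneous loss rates of Eqs. (30)-(31) are straightforward to evaluate numerically; averaging such samples along a "geodesic" orbit yields the quantity of Eq. (32). The sketch below is a direct transcription of those two expressions (per unit mass, as in the text) and is not the authors' code; the function names and the sample state are illustrative choices.

```python
# Sketch (not the authors' code): instantaneous loss rates of Eqs. (30)-(31)
# for the drag-like force of Eq. (24), per unit mass.  `f` is one of the two
# weighting functions f1, f2 of Eqs. (25)-(26).
import numpy as np

def f1(xi, eta):
    return 1.0

def f2(xi, eta):
    return np.sqrt(1.0 - eta**2) / xi

def instantaneous_losses(xi, eta, xidot, etadot, phidot, delta, f=f2, a=1.0):
    """Return (dE/dt, dLz/dt) as given by Eqs. (30) and (31)."""
    w = delta * a**2 * f(xi, eta)
    v2_poloidal = (xi**2 + eta**2) * (xidot**2 / (1.0 + xi**2)
                                      + etadot**2 / (1.0 - eta**2))
    v2_azimuthal = (1.0 + xi**2) * (1.0 - eta**2) * phidot**2
    dEdt = -w * (v2_poloidal + v2_azimuthal)
    dLzdt = -w * (1.0 + xi**2) * (1.0 - eta**2) * phidot
    return dEdt, dLzdt

# Example evaluation at an arbitrary (illustrative) phase-space point.
print(instantaneous_losses(xi=1.8, eta=0.1, xidot=0.02, etadot=0.05,
                           phidot=0.3, delta=1e-5))
```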
## 6 Orbital evolution from averaged energy and momentum losses
In contrast to the Newtonian evolution of an orbit under a given instantaneous dissipative "self-force", which is straightforward in the case of an orbit in a pure or perturbed Euler potential, the evolution due to the corresponding average losses of energy and \(z\)-angular momentum is considerably more complicated. The situation is exactly the opposite in the evolution of an orbit in a perturbed Kerr metric; in that case the self-force itself is not known (actually a complete analytic form is not known for a generic orbit, even in pure Kerr). However, one could easily evolve a geodesic orbit, assuming the energy and the \(z\)-angular momentum are given in analytic forms through a hybrid model [18] for orbits in Kerr, suitably adjusted to accommodate the non-Kerr mass-quadrupole moment of the specific metric [36].
In order to evolve an orbit in a perturbed Euler potential, with a given average loss of energy and \(z\)-angular momentum, we cannot rely on the Hamiltonian formalism, since there is no straightforward way to turn a Hamiltonian problem into a dissipative one whose equations of motion lead to a given time-dependence of the integrals of motion of its non-dissipative counterpart. We have overcome this issue by transforming the equations of motion into a Hamiltonian-like form (that is, into first-order differential equations), suitably parametrized by quantities that are equivalent to the integrals of motion when the "self-force" is absent.
The new set of equations of motion describing the orbit on the polar plane (the azimuthal angle \(\phi\) can be straightforwardly integrated once the angular momentum is given and the polar position is
Figure 3: On the left plot, the rotation number \(\nu_{\theta}\) as a function of \(\xi(0)\) is drawn for orbits with the same physical parameters as the ones presented in the two previous Figures. The horizontal axis spans almost the whole range of allowed \(\xi(0)\)’s up to the fixed central point \(\mathbf{u_{0}}\). Apart from the anticipated monotonic character of \(\nu_{\theta}(\xi(0))\), it is clear that around \(\xi(0)=1.25\) there is a narrow plateau corresponding to the particular resonance of \(2:3\). A detail of this plateau is shown on the right panel. The tiny “glitch” on the left side of the plateau is an indication that the Birkhoff island is surrounded by a very narrow chaotic strip.
known as a function of time) are differential equations for \(\xi,\eta\) and an additional angle \(\theta\) defined as
\[\frac{\dot{\xi}}{\sqrt{1+\xi^{2}}} =A\sin\theta \tag{33}\] \[\frac{\dot{\eta}}{\sqrt{1-\eta^{2}}} =A\cos\theta, \tag{34}\]
with \(A\) being the positive quantity that measures the kinetic energy along the polar plane, defined (consistently with Eqs. (33), (34) and (46)) through
\[A^{2}=\frac{\dot{\xi}^{2}}{\xi^{2}+1}+\frac{\dot{\eta}^{2}}{1-\eta^{2}}. \tag{35}\]
The angle \(\theta\) is a well-defined quantity, related to the ratio of the \(\dot{\xi}\) and \(\dot{\eta}\) terms entering \(A\), as long as \(A\) is non-vanishing.
The new set of the equations of motion for \(\xi,\eta,\theta\) (assuming the mass of the test particle is unity) then reads:
\[\dot{\xi} =A\sqrt{1+\xi^{2}}\sin\theta, \tag{36}\] \[\dot{\eta} =A\sqrt{1-\eta^{2}}\cos\theta,\] (37) \[\dot{\theta} =-A\sin\theta\cos\theta H_{1}+\frac{A}{\xi^{2}+\eta^{2}}\times\] \[\bigg{(}H_{2}H_{3}-2\sin\theta\cos\theta H_{4}+\frac{1}{A^{2}} \left[\frac{L_{z}^{2}H_{5}}{a^{4}}-\frac{GM_{0}}{a^{3}}\left((1-\epsilon)H_{6 }+\epsilon H_{7}\right)\right]\bigg{)}, \tag{38}\]
Figure 4: The width \(w\) of the leftmost Birkhoff island of resonance \(2:3\) has been computed for a few cases (shown as points) of the perturbative parameter \(\epsilon=m/M_{0}\). All points represent orbits with the same orbital parameters \(E,L_{z}\) as in Fig. 1. The best-fit straight line is \(\log(w)=-1.7641+0.507812\log(\epsilon)\), which is in accordance with the expected theoretical slope of \(1/2\) (see [35]).
where
\[H_{1} =\frac{\eta\sqrt{1+\xi^{2}}\cos\theta+\xi\sqrt{1-\eta^{2}}\sin\theta }{\sqrt{(1+\xi^{2})(1-\eta^{2})}}, \tag{39}\] \[H_{2} =-(1-\eta^{2})\sin^{2}\theta+(1+\xi^{2})\cos^{2}\theta,\] (40) \[H_{3} =\frac{\xi\cos\theta}{\sqrt{1+\xi^{2}}}+\frac{\eta\sin\theta}{ \sqrt{1-\eta^{2}}},\] (41) \[H_{4} =\eta\sqrt{1-\eta^{2}}\cos\theta-\xi\sqrt{1+\xi^{2}}\sin\theta,\] (42) \[H_{5} =\frac{\xi\sqrt{1-\eta^{2}}\cos\theta+\eta\sqrt{1+\xi^{2}}\sin \theta}{((1-\eta^{2})(1+\xi^{2}))^{3/2}},\] (43) \[H_{6} =\frac{\sqrt{1+\xi^{2}}(\xi^{2}-\eta^{2})\cos\theta-2\xi\eta \sqrt{1-\eta^{2}}\sin\theta}{(\xi^{2}+\eta^{2})^{2}},\] (44) \[H_{7} =\frac{\xi\sqrt{1+\xi^{2}}\cos\theta+\eta\sqrt{1-\eta^{2}}\sin \theta}{(1+\xi^{2}-\eta^{2})^{3/2}}. \tag{45}\]
The \(A\) term in the set of equations above is simply a function of the total energy \(E\) and \(z\)-angular momentum \(L_{z}\), as well as of the coordinates \(\xi,\eta\), through
\[A=\sqrt{\frac{2}{a^{2}(\xi^{2}+\eta^{2})}\left[E-\frac{L_{z}^{2}}{2a^{2}(1+\xi^ {2})(1-\eta^{2})}-V(\xi,\eta)\right]}. \tag{46}\]
The equation (38) for \(\theta\) has been derived by computing the time derivative of the ratio between the first two velocities (36,37), in order to eliminate \(A\), and then introducing the expressions for \(\ddot{\xi}\) and \(\ddot{\eta}\) from the Euler-Lagrange equations of the perturbed Euler field without any induced "self-force", which are given in Appendix A.
Now Eqs. (36,37,38) form a set of three first-order differential equations that describe the evolution of the system under the constraint of constant energy and \(z\)-angular momentum. As long as the \(A\) term is non-vanishing, the evolution is equivalent to that of Hamilton's equations. However, if the \(A\) term goes to zero, the set of the above equations becomes indeterminate and one cannot use them to evolve the system. The vanishing of the \(A\) term, though, corresponds to a very special set of initial conditions: when both \(\dot{\xi}\) and \(\dot{\eta}\) become zero simultaneously along the evolution. This situation arises when the orbit touches the zero-velocity curve (CZV), which could only be achieved by extreme fine-tuning of the initial conditions, corresponding to a set of zero measure. Therefore we do not expect this singular case to arise when arbitrary initial conditions are evolved for a finite time. An orbit, though, might actually come very close to the CZV. However, this is not a concern as long as \(A\) does not drop below a given threshold, which keeps the numerical errors in the evolution of the set of first-order differential equations given above under control.
The advantage of the new set of equations is that they give us the opportunity to evolve the orbit with a predetermined time-varying law for \(E\) and \(L_{z}\). This is what we will exploit to compare the evolution of an orbit under a "self-force" with the evolution of the orbit under the corresponding constant rate of change of energy and \(z\)-angular momentum.
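To make the structure of this second scheme concrete, the sketch below (not the authors' MATHEMATICA implementation) transcribes Eqs. (36)-(46) into a form that can be integrated with a prescribed, slowly varying \(E(t)\) and \(L_{z}(t)\). Two ingredients are assumptions made here rather than quotations from the text: units with \(G=M_{0}=a=1\), and the explicit potential \(V(\xi,\eta)=-(1-\epsilon)\,\xi/(\xi^{2}+\eta^{2})-\epsilon/\sqrt{1+\xi^{2}-\eta^{2}}\), which is inferred from the equatorial limit of Eq. (19) and the force terms of Eq. (48).

```python
# Sketch (not the authors' code): the reparametrized equations of motion
# (36)-(38), with the auxiliary quantities of Eqs. (39)-(46), integrated with
# a prescribed, slowly varying E(t) and Lz(t).
# Assumptions: units G = M0 = a = 1, and the explicit V(xi, eta) below, which
# is an inference consistent with Eq. (19) at the equator, not a quoted formula.
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-2
E0, Lz0 = -0.156393, 1.32878          # orbital parameters quoted in Fig. 1
dEdt, dLzdt = 0.0, 0.0                # set to the averaged losses of Eq. (32)

def V(xi, eta):
    return -(1.0 - eps) * xi / (xi**2 + eta**2) - eps / np.sqrt(1.0 + xi**2 - eta**2)

def A_of(xi, eta, E, Lz):             # Eq. (46)
    arg = 2.0 / (xi**2 + eta**2) * (E - Lz**2 / (2.0 * (1.0 + xi**2) * (1.0 - eta**2))
                                    - V(xi, eta))
    return np.sqrt(arg)

def rhs(t, s):
    xi, eta, th = s
    E, Lz = E0 + dEdt * t, Lz0 + dLzdt * t
    A = A_of(xi, eta, E, Lz)
    sq_xi, sq_eta = np.sqrt(1.0 + xi**2), np.sqrt(1.0 - eta**2)
    sth, cth = np.sin(th), np.cos(th)
    H1 = (eta * sq_xi * cth + xi * sq_eta * sth) / (sq_xi * sq_eta)        # Eq. (39)
    H2 = -(1.0 - eta**2) * sth**2 + (1.0 + xi**2) * cth**2                 # Eq. (40)
    H3 = xi * cth / sq_xi + eta * sth / sq_eta                             # Eq. (41)
    H4 = eta * sq_eta * cth - xi * sq_xi * sth                             # Eq. (42)
    H5 = (xi * sq_eta * cth + eta * sq_xi * sth) / ((1.0 - eta**2) * (1.0 + xi**2))**1.5
    H6 = (sq_xi * (xi**2 - eta**2) * cth - 2.0 * xi * eta * sq_eta * sth) / (xi**2 + eta**2)**2
    H7 = (xi * sq_xi * cth + eta * sq_eta * sth) / (1.0 + xi**2 - eta**2)**1.5
    xidot = A * sq_xi * sth                                                # Eq. (36)
    etadot = A * sq_eta * cth                                              # Eq. (37)
    thdot = (-A * sth * cth * H1                                           # Eq. (38)
             + A / (xi**2 + eta**2) * (H2 * H3 - 2.0 * sth * cth * H4
               + (Lz**2 * H5 - (1.0 - eps) * H6 - eps * H7) / A**2))
    return [xidot, etadot, thdot]

# Initial conditions of the Fig. 1a orbit: xi(0)=1.8, eta(0)=0, xidot(0)=0,
# etadot(0)>0, i.e. theta(0)=0.
sol = solve_ivp(rhs, (0.0, 500.0), [1.8, 0.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])
```

Setting `dEdt` and `dLzdt` to the averaged losses of Eq. (32) reproduces the second evolution scheme; with both set to zero the integration should conserve \(E\), which is the accuracy check described in the next subsection.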
### Accuracy Tests
We have used MATHEMATICA in order to integrate numerically either scheme of orbital evolution. In order to test the numerical accuracy of the integration we first ran the equations of motion (48) for a few initial conditions without imposing any "self-force". We also ran the system of equations (36,37,38) for the same initial conditions, with a constant value of the parameters \(E,L_{z}\), equal to the energy and \(z\)-angular momentum corresponding to the initial conditions. Then we measured the orbital deviations between the two schemes. What we found (see Fig. 5) is that there is some secular increase in the deviations of \(\xi(t)\) and \(\eta(t)\), caused by numerical errors, which are of order \({\cal O}(10^{-3})\) for a total time of \(5000\). As a comparison, the actual oscillations of \(\xi\) and \(\eta\) are of order \(\sim 7\) and \({\cal O}(10^{-1})\), respectively, with oscillation periods of order \(\sim 30\) and \(\sim 70\), respectively. Moreover, we have tested the invariance
Figure 5: The plots demonstrate the typical level of numerical accuracy in orbital evolution under the two integration schemes. In the upper two plots we have drawn the deviation of \(\xi(t)\) and \(\eta(t)\) between the two integration schemes. In the bottom-left plot (c) the evolution of the deviation of the energy \(E\), under the second scheme is presented. Finally, in the bottom-right plot (d) we have drawn the evolution of the parameter \(A\) used in the second scheme (described in Sec. 6). The orbital evolution in all these diagrams refers to an orbit with the same physical and orbital parameters used in Fig. 1, and with initial conditions \(\xi(0)=1.270,\eta(0)=\dot{\xi}(0)=0\).
of the conserved quantity \(E\) under the second scheme of integration. The deviations of \(E\) did not exceed \(10^{-6}\) for the same total time of evolution. Also, we have monitored the evolution of the parameter \(A\) along the integration, to ensure that the new set of equations does not lead to erroneous orbital evolution due to indeterminacy of the equations themselves. In all cases we investigated, the value of \(A\) did not drop below \(10^{-7}\), which is quite safe for the numerical accuracy of MATHEMATICA.
## 7 Comparisons between the two schemes and Conclusions
The gravitational waves emitted by an EMRI, the central source of which is not a pure Kerr black hole, are expected to demonstrate a peculiar behavior when a resonance is met [16], [19]. The ratio of the fundamental frequencies encoded in the signal will remain constant while the system crosses a Birkhoff chain of islands. The duration of this crossing is essential for discerning whether the corresponding background spacetime is described by such a non-integrable system.
In order to study the differences in crossing-times of a given resonance arising from the evolution of the two different schemes described in the Sec. 5, we used a sequence of initial conditions quite close to the concave side of the leftmost Birkhoff island of the \(2:3\) resonance (see Fig. 2b), and evolved them directly with the instantaneous "self-force" scheme up to the point where the particular resonance is hit. Subsequently, we followed two different ways to further evolve the orbit: (i) using the same scheme, up to the point where the orbit exits the corresponding Birkhoff island, and (ii) computing the average losses of \(E\) and \(L_{z}\) at the specific phase-space coordinates when the orbit first enters the Birkhoff island and imposing these losses in Eqs. (36,37,38) of the second scheme until the orbit, again, exits the island. The \(E\) and \(L_{z}\) parameters introduced in these equations, through \(A\) and \(L_{z}\), are assumed to vary linearly with time, with corresponding time-derivatives given by the losses mentioned above.
During the orbital inspiral, we periodically examined whether the orbit is at resonance. This involved pausing the evolution under either integration scheme, then progressing the system along a "geodesic", as if there were no "self-force", and plotting its Poincaré section. The orbit is at resonance if a chain of Birkhoff islands forms on the Poincaré section.
For each unique evolution, we recorded the total time that the orbit spends within the island. The obtained results are presented in Fig. 6, illustrating the outcomes for the two types (25,26) of "self-force" employed in our analysis. Depending on the entrance-point, the evolution of an orbit inside a chain of Birkhoff islands varies significantly: the orbit may get trapped at resonance for quite a long time, or pass the resonance in a very short period. This explains the recurrent ups and downs shown in the diagram, for both integration schemes in either type of "self-force" assumed. This feature is reminiscent of the time intervals shown in Figure 11 of [16], where the crossing time of the resonance \(2:3\) for the relativistic non-integrable case of Manko-Novikov was studied.
It is clear that the scheme based on average losses leads to systematically and significantly lower values of the crossing times, compared to the crossing times under the instantaneous action of the "self-force" itself. The crossing time due to the actual evolution of the orbit is on average \(2\) to \(3.5\) times larger than what one would get by imposing the constant \(E\) and \(L_{z}\) loss rates during the evolution.
Several distinct orbital evolutions were conducted using different types of "self-force", different magnitudes of \(\delta\), and different orbital parameters \(E,L_{z}\). The crossing time, when the actual "self-force" was employed to evolve the orbit, was boosted in all cases by a factor similar to, if not greater than, that of the case analyzed above.
The Newtonian analogue used in this paper is indicative of the differences arising in the evolution of an orbit through a resonance of a slightly non-integrable system under the two different integration schemes. Moreover the similarity of the Kerr metric with the Euler problem suggests that these results are expected in a generically perturbed Kerr system. Therefore, all estimations of the duration of the plateau effect in a slightly perturbed relativistic integrable system presented in the literature up to now [16], [19], might be suppressed, compared to the actual duration of this effect in realistic EMRIs.
## Acknowledgements
Research was supported by the project of bilateral collaboration of scientists in Germany and Greece IKYDA 2022.
## Appendix A Equations of motion
The Lagrangian \(L\) per unit mass of the perturbed Euler field is given by:
\[L=p_{\mu}\dot{q}_{\mu}-H, \tag{47}\]
where \(H\) is the Hamiltonian of Eq. (13), \(p_{\mu}\) are the conjugate momenta given in Eqs. (4,5,6) and \(\dot{q}_{\mu}=(\dot{\xi},\dot{\eta},\dot{\phi})\). The equations of motion, that we solve numerically, are given by Euler-Lagrange equations:
\[\begin{split}\ddot{\xi}=&\frac{\xi}{\xi^{2}+\eta^{2}}\left(-\dot{\xi}^{2}\frac{1-\eta^{2}}{\xi^{2}+1}+\dot{\eta}^{2}\frac{\xi^{2}+1}{1-\eta^{2}}\right)-\frac{2\eta\dot{\eta}\dot{\xi}}{\xi^{2}+\eta^{2}}+\frac{\xi(\xi^{2}+1)(1-\eta^{2})}{\xi^{2}+\eta^{2}}\dot{\phi}^{2}\\ &-\frac{G(M_{0}-m)}{a^{3}}\frac{(\xi^{2}+1)(\xi^{2}-\eta^{2})}{(\xi^{2}+\eta^{2})^{3}}-\frac{Gm}{a^{3}}\frac{\xi(\xi^{2}+1)}{(\xi^{2}+\eta^{2})(1+\xi^{2}-\eta^{2})^{3/2}},\\ \ddot{\eta}=&-\frac{\eta}{\xi^{2}+\eta^{2}}\left(-\dot{\xi}^{2}\frac{1-\eta^{2}}{\xi^{2}+1}+\dot{\eta}^{2}\frac{\xi^{2}+1}{1-\eta^{2}}\right)-\frac{2\xi\dot{\eta}\dot{\xi}}{\xi^{2}+\eta^{2}}-\frac{\eta(\xi^{2}+1)(1-\eta^{2})}{\xi^{2}+\eta^{2}}\dot{\phi}^{2}\\ &-\frac{G(M_{0}-m)}{a^{3}}\frac{2\xi\eta(1-\eta^{2})}{(\xi^{2}+\eta^{2})^{3}}-\frac{Gm}{a^{3}}\frac{\eta(1-\eta^{2})}{(\xi^{2}+\eta^{2})(1+\xi^{2}-\eta^{2})^{3/2}},\\ \ddot{\phi}=&\left(-\frac{2\xi\dot{\xi}}{\xi^{2}+1}+\frac{2\eta\dot{\eta}}{1-\eta^{2}}\right)\dot{\phi}.\end{split} \tag{48}\]
|
2309.10850 | Evolution of solar wind sources and coronal rotation driven by the
cyclic variation of the Sun's large-scale magnetic field | The strength and morphology of the Sun's magnetic field evolves significantly
during the solar cycle, with the overall polarity of the Sun's magnetic field
reversing during the maximum of solar activity. Long-term changes are also
observed in sunspot and geomagnetic records, however systematic magnetic field
observations are limited to the last four cycles. We investigate the long-term
evolution of the Sun's magnetic field, and the influence this has on the
topology and rotation of the solar corona. The Sun's photospheric magnetic
field was decomposed into spherical harmonics using synoptic Carrington
magnetograms from 1) WSO, 2) MDI onboard the SOHO, and 3) HMI onboard the SDO.
The time-evolution of the spherical harmonic coefficients was used to explore
the variation of the Sun's magnetic field, focusing on the large-scale modes.
PFSS extrapolations of the photospheric field were computed to follow
topological changes in the corona. The footpoints of the Sun's open magnetic
field vary between the polar coronal holes and activity driven features such as
active regions, and equatorial coronal holes. Consequently, the mean rotation
rate of the solar wind is modulated during each cycle by the latitudinal
variation of open field footpoints, with slower rotation during minima and
faster (Carrington-like) rotation during maxima. This variation is sensitive
to cycle to cycle differences in the polar field strengths and hemispherical
flux emergence rates, with the ratio of quadrupole to dipole energy following a
similar variation. Cycle 23 maintained a larger fraction of quadrupolar energy
in the declining phase, which kept the sources of open magnetic flux closer to
the equator, extending the period of faster equator-ward connectivity. The
ratio of quadrupole to dipole energy could be a useful proxy when examining the
impact of differential rotation on the coronae of other Sun-like stars. | Adam J. Finley, Allan Sacha Brun | 2023-09-19T18:00:33Z | http://arxiv.org/abs/2309.10850v1 | Evolution of solar wind sources and coronal rotation driven by the cyclic variation of the Sun's large-scale magnetic field
###### Abstract
Context:The strength and morphology of the Sun's magnetic field evolves significantly during the solar cycle, with the overall polarity of the Sun's magnetic field reversing during the maximum of solar activity. Long-term changes are also observed in sunspot and geomagnetic records, however systematic magnetic field observations are limited to the last four cycles.
Aims:Here, we investigate the long-term evolution of the Sun's magnetic field, and the influence this has on the topology and rotation of the solar corona.
Methods:The Sun's photospheric magnetic field was decomposed into spherical harmonics using synoptic Carrington magnetograms from 1) the Wilcox Solar Observatory, 2) the Michelson Doppler Imager onboard the Solar and Heliospheric Observatory, and 3) the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. The time-evolution of the spherical harmonic coefficients was used to explore the variation of the Sun's magnetic field, focusing on the large-scale modes. Potential field source surface extrapolations of the photospheric field were computed to follow topological changes in the corona.
Results:The footpoints of the Sun's open magnetic field vary between the polar coronal holes and activity driven features such as active regions, and equatorial coronal holes. Consequently, the mean rotation rate of the solar wind is modulated during each cycle by the latitudinal variation of open field footpoints, with slower rotation during minima and faster (Carrington-like) rotation during maxima.
Conclusions:This variation is sensitive to cycle to cycle differences in the polar field strengths and hemispherical flux emergence rates, with the ratio of quadrupole to dipole energy following a similar variation. Cycle 23 maintained a larger fraction of quadrupolar energy in the declining phase, which kept the sources of open magnetic flux closer to the equator, extending the period of faster equator-ward connectivity. The ratio of quadrupole to dipole energy could be a useful proxy when examining the impact of differential rotation on the coronae of other Sun-like stars.
## 1 Introduction
The Sun's magnetic field undergoes an approximately 11 year cycle of activity (see review of Hathaway 2015). Beginning with a mostly axisymmetric dipolar magnetic field at solar minima (DeRosa et al. 2012), the Sun's magnetic field becomes increasingly complex as the cycle progresses due to the emergence of active regions (the properties of which are reviewed in van Driel-Gesztelyi & Green 2015). These regions are typically bipolar in nature, following Hale's law for leading-trailing polarity and Joy's law for tilt angle (summarised in Hale et al. 1919, and more recently Dasi-Espuig et al. 2010). The presence of strong magnetic fields in these regions suppresses surface convection, creating dark patches on the solar disk, called'sunspots', which have been well-documented throughout the last four centuries (see Clette et al. 2014). Flux emergence typically begins at mid-latitudes, steadily moving down towards the Sun's equator as activity increases (Carrington 1958; Sporer 1880; Hathaway et al. 2003). The emerging field interacts with pre-existing structures in the low-corona, causing magnetic energy to build-up, then be released in the form of flares (Toriumi & Wang 2019), and coronal mass ejections (Forbes 2000). The Sun's magnetic field reverts back to an axisymmetric dipole at the end of the activity cycle, however the magnetic polarity is reversed (Mordvinov & Kitchatinov 2019). After two activity cycles (\(\sim\)22 years), the original magnetic field polarity is restored. Long-term trends in solar activity (on timescales of hundreds to thousands of years) are also observed when studying cosmogenic radionuclides stored in natural archives (e.g. Usoskin et al. 2021).
Features embedded in the photosphere are observed to differentially rotate (see review of Beck 2000), with the Sun's equator rotating faster than the poles. Typically one full revolution at the equator takes 24.5 days, in contrast to 33.4 days near the poles. Helioseismic inversions of the solar interior confirm that this differential rotation pattern permeates the entire convective zone, down to around 0.7\(R_{\odot}\)(Thompson et al. 1996; Schou et al. 1998; Larson & Schou 2018). Below which there is a transition to rigid-body rotation in the radiative zone (Howe 2009). Active regions are broken apart by differential rotation over time (Gibashvili et al. 2013; Imada & Fujiyama 2018), with the exception of some very strong magnetic field regions which maintain a degree of cohesion (Yan et al. 2018). Typically, active regions rotate near the Carrington rotation rate of 25.4 days (27.38 days as viewed from Earth), corresponding to the mean rotation rate at active solar latitudes.
The evolution of the Sun's magnetic field during each solar cycle leads to a cyclic variation in solar wind sources (Wang
2009). This was explored in a range of previous theoretical works, using Potential Field Source Surface (PFSS) models (Stansby et al., 2021), magnetofrictional models (Yeates, 2014), and full magnetohydrodynamic simulations (Reville & Brun, 2017). During solar minima, the coronal magnetic field is mostly dipolar, with the solar wind emerging along open field at the rotational poles (see also remote-sensing and in-situ observations Wilhelm et al., 1998; Harvey & Recely, 2002; McComas et al., 2008). As the Sun becomes more active, emerging active regions increase the complexity of the coronal field (van Driel-Gesztelyi et al., 2012), allowing the solar wind to emerge from a broader range of sources, from around the active regions themselves (recently investigated with Solar Orbiter by Yardley et al., 2023), to equatorial coronal holes, and ephemeral regions. Finley & Brun (2023) proposed that the evolving distribution of source regions drives a variation in the mean rotation rate of the corona during the solar cycle.
In this study, the methodology of Finley & Brun (2023) is applied to a larger range of observations, spanning more than four solar cycles. Synoptic Carrington magnetograms were used to drive PFSS modelling, with each magnetogram consisting of data assimilated during one Carrington rotation (CR) of the Sun (as viewed from Earth). The data products used in this work are summarised in Section 2. The decomposition of the magnetograms into a series of spherical harmonics is detailed in Section 3, from which trends in mode strengths were extracted (continued in Appendix B). Section 4 presents the resulting latitudinal variation of solar wind sources from the PFSS modelling along with the impact on coronal rotation. The cycle to cycle variation of these quantities is discussed in Section 5.
## 2 Observations
Magnetograms were taken from the Wilcox Solar Observatory (WSO) and both the Michelson Doppler Imager (MDI), onboard the Solar and Heliospheric Observatory (SOHO), and the Helioseismic and Magnetic Imager (HMI), onboard the Solar Dynamics Observatory (SDO). Each individual Carrington magnetogram combines multiple full-disk line-of-sight magnetic field observations spanning a full CR of 27.38 days. The radial magnetic field component was derived from each line-of-sight measurement before assimilation, and the polar field strengths are corrected to account for poor visibility and high inclination.
It has been well-established that the magnetic field measurements from different instruments tend to disagree with one another (e.g. Riley et al., 2014). This often includes scale-dependent discrepancies that affect the energy recovered in each spherical harmonic mode (as shown in Virtanen & Mursula, 2017). This is further complicated when comparing ground-based to space-based telescopes where the visibilities and spatial resolutions can vary significantly. For the period of overlap between WSO, SOHO/MDI and SDO/HMI (CR 2100 - 2107), a comparison of their relative field strengths was undertaken, detailed in Appendix A. During this period, WSO field strengths were systematically smaller than their space-based counterparts1, and SOHO/MDI field strengths were slightly larger than SDO/HMI (see also the comparison in Liu et al., 2012). In order to produce a consistent timeseries, the WSO magnetograms were multiplied by a factor of 3.5 and the SOHO/MDI magnetograms by a factor of 0.85. These factors were included throughout our analysis, bringing the field strengths of the timeseries into agreement with magnetograms from SDO/HMI. This normalisation does not influence the spherical harmonic decomposition of the magnetograms, nor the connectivity of the PFSS models, used in this study.
Footnote 1: Previous works have shown that the WSO field strengths are around a factor 1.8 too low due to instrumental effects (Svalgaard et al., 1978). This is included in the factor of 3.5 used to match the space-based observations.
### Wilcox Solar Observatory
Magnetograms from the WSO (Scherrer et al., 1977), used in this study2, span from 1976 to mid-2022 (CR 1641 - 2258). The WSO timeseries covers 46 years (617 CRs), over four sunspot cycles (21 - 24), and includes the rising phase of cycle 25. WSO magnetograms are available with a resolution in latitude and longitude of 36 and 72 points, respectively. Each data point therefore spans an area of 5\({}^{\circ}\) x 5\({}^{\circ}\) (60 Mm x 60 Mm). The evolution of the azimuthally averaged magnetic field from this timeseries is shown in the top panel of Figure 1.
Figure 1: Evolution of the Sun’s radial magnetic field from WSO and SOHO/MDI & SDO/HMI timeseries. The top two panels show azimuthally averaged Carrington magnetograms from WSO (1976 - 2022) and from SOHO/MDI & SDO/HMI (1996 - present). The following two panels show the latitude of the solar dipole (positive pole) deduced from the spherical harmonic-decomposition of the magnetograms from WSO, and SOHO/MDI & SDO/HMI, respectively. The Carrington longitude of the dipole (positive pole) is then indicated in colour. Epochs where the dipole reversal appears to stall are highlighted with black horizontal bars. The bottom row displays the monthly sunspot number during this time, taken from WDC-SILSO, along with the contributions from the northern and southern hemispheres individually.
### SOHO/MDI and SDO/HMI
Magnetograms from SOHO/MDI (Scherrer et al. 1995) are available from 1996 to early 2011 (CR 1908 - 2107), with some data gaps, like the temporary loss of SOHO in 1998. Magnetograms from SDO/HMI (Scherrer et al. 2012) are available from 2010 to present (CR 2100 - 2270). Both data products include a polar field correction (Sun et al. 2011; Sun 2018). For this study, SOHO/MDI magnetograms3 are used from CR 1908 - 2099, and SDO/HMI, from CR 2100- 2270. The combined SOHO/MDI and SDO/HMI timeseries covers 27 years (361 CRs), just over half of the WSO timeseries, encompassing two sunspot cycles (23 and 24), and cycle 25 to present. SOHO/MDI & SDO/HMI magnetograms are available with a resolution in latitude and longitude of 1800 and 3600 points, respectively. Each data point therefore spans an area of \(1^{\circ}\) x \(1^{\circ}\) (6 Mm x 6 Mm). The evolution of the azimuthally averaged magnetic field from this timeseries is shown in the second panel of Figure 1.
Footnote 3: Data accessed May 2023: [http://hmi.stanford.edu/data/synoptic.html](http://hmi.stanford.edu/data/synoptic.html)
### Sunspot cycle lengths and hemispherical offsets
Despite magnetogram records being limited to roughly the last four cycles, historical records of the sunspot number are available for the last four centuries (e.g. Clette et al. 2014). Of interest to this study are the hemispherical sunspot number records; here, Veronig et al. (2021) is used. This record spans from May 1874 to present, from which cycle lengths and hemispherical asymmetry were extracted. Table 1 contains the length of each sunspot cycle, magnetic cycle (defined as the current cycle length plus the previous one), timing of the maxima of activity in each hemisphere after solar minimum, and the relative lag time between each hemisphere's maximum.
The mean sunspot cycle, and magnetic cycle, during this period are \(10.85\pm 0.82\) and \(21.99\pm 0.85\) years respectively, in line with the standard quoted values; however, individual cycles can have a significant deviation from this mean (see also Wilson 1987; Hathaway et al. 1994; Hathaway 2015). For example, cycle 22 had a length of 9.8 years whereas cycle 23 was much longer with 12.5 years. This in turn allows for significant variation in consecutive magnetic cycles (20.92 to 23.67 years). Hemispherical maxima were extracted from the monthly sunspot number records after smoothing with a 13-month window, as shown in Figure 2 along with a depiction of the hemispherical lag times for each cycle. The maxima of the northern and southern hemispheres can have as much as 2.6 years of difference. Cycles 21-24, which are the focus of this study, have activity in the northern hemisphere systematically peaking before the southern hemisphere (see also Deng et al. 2016). This is due to a significant quadrupolar mode, previously discussed in DeRosa et al. (2012), see their appendix on dynamo symmetries. The current under
| Cycle No. | Start [Decimal yr] | Sunspot Cycle Length [yr] | Magnetic Cycle Length [yr] | North Max. [yr after Min] | South Max. [yr after Min] | North-South Lag [yr] |
|---|---|---|---|---|---|---|
| 12 | 1878.92 | 11.17 | – | 2.75 | 5.0 | -2.25 |
| 13 | 1890.08 | 11.83 | 23.0 | 2.41 | 3.5 | -1.08 |
| 14 | 1901.92 | 11.58 | 22.75 | 4.08 | 5.33 | -1.25 |
| 15 | 1913.5 | 9.67 | 20.84 | 4.08 | 4.08 | 0.0 |
| 16 | 1923.16 | 10.5 | 21.67 | 5.34 | 4.84 | 0.5 |
| 17 | 1933.67 | 10.5 | 21.67 | 3.83 | 4.83 | -1.0 |
| 18 | 1944.16 | 10.08 | 21.25 | 5.67 | 3.08 | 2.59 |
| 19 | 1954.25 | 10.34 | 21.51 | 5.0 | 3.42 | 1.58 |
| 20 | 1964.58 | 11.58 | 22.75 | 4.58 | 5.5 | -0.92 |
| 21 | 1976.16 | 10.5 | 21.67 | 3.5 | 4.08 | -0.58 |
| 22 | 1986.67 | 9.75 | 20.92 | 3.0 | 4.83 | -1.83 |
| 23 | 1996.42 | 12.5 | 23.67 | 4.08 | 5.67 | -1.59 |
| 24 | 2008.92 | 11.01 | 22.18 | 2.83 | 5.17 | -2.34 |
| 25 | 2019.92 | – | – | – | – | – |
| Range of Values | | 9.67 to 12.5 | 20.84 to 23.67 | 2.41 to 5.67 | 3.08 to 5.67 | -2.34 to 2.59 |
| Mean | | 10.85 | 21.99 | 3.94 | 4.56 | -0.62 |

Table 1: Cycle lengths and lag times between maxima in each hemisphere.
Figure 2: Smoothed 13-month total and hemispherical sunspot numbers. Maxima in each hemisphere are identified with triangular symbols. The lag times between hemispherical maxima are plotted in the lower panel for each cycle. Cycles 21-24, under investigation in this study, all had activity peaking in the northern hemisphere before the southern hemisphere.
standing is that the quadrupolar (symmetric) dynamo mode is playing a key role in off-setting the dipolar (antisymmetric) dynamo mode, leading to one hemisphere to be ahead or behind the other one depending on the relative signs of the two modes. The nonlinear coupling of the two dynamo modes make it difficult to predict which hemisphere will be ahead. The evolution of these two modes is discussed in Section 3.2.
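A minimal sketch of the smoothing and peak extraction used to build Table 1 is given below. It is not the authors' pipeline, the input arrays are placeholders for the WDC-SILSO / Veronig et al. (2021) hemispheric records, and the half-weight endpoints follow the conventional 13-month running mean.

```python
# Sketch (not the authors' pipeline): 13-month smoothing and per-hemisphere
# maximum extraction from monthly hemispheric sunspot numbers.
import numpy as np

def smooth_13month(monthly):
    w = np.ones(13)
    w[0] = w[-1] = 0.5          # conventional half-weight endpoints
    w /= w.sum()
    return np.convolve(monthly, w, mode='valid')   # 6 months trimmed at each end

def time_of_maximum(t, monthly):
    s = smooth_13month(monthly)
    return t[6:-6][np.argmax(s)]

# Placeholder arrays (replace with the real hemispheric series); the Gaussian
# shapes below are synthetic and serve only to exercise the functions.
t = 1976.0 + np.arange(240) / 12.0
ssn_north = 80.0 * np.exp(-0.5 * ((t - 1979.5) / 1.5) ** 2)
ssn_south = 90.0 * np.exp(-0.5 * ((t - 1980.1) / 1.7) ** 2)
print(time_of_maximum(t, ssn_north), time_of_maximum(t, ssn_south))
```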
## 3 Component analysis
Each magnetogram in our timeseries, provided on a latitude \(\theta\) versus Carrington longitude \(\phi\) grid, was decomposed into spherical harmonics4 \(Y_{lm}=c_{lm}P_{lm}(\cos\theta)e^{im\phi}\), with degree \(l\) and order \(m\), based on the associated Legendre functions \(P_{lm}(\cos\theta)\). The radial field is written as,
Footnote 4: To compute these coefficients the pySHTOOLS python package is used, which provides access to the Fortran-95 SHTOOLS library.
\[B_{r}(\theta,\phi)=\sum_{l=1}^{l_{max}}\sum_{m=l}^{l}B_{lm}Y_{lm}(\theta,\phi), \tag{1}\]
with the normalisation,
\[c_{lm}=\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}, \tag{2}\]
where the coefficients \(B_{lm}\) denote the strength of each component. The maximum spherical harmonic degree recovered, \(l_{max}\), was 30 for the WSO magnetograms and 90 for the SOHO/MDI & SDO/HMI magnetograms, based on the available resolution. The variation of the dipolar (\(l=1\)), quadrupolar (\(l=2\)), and octupolar (\(l=3\)) energies recovered from the magnetogram time-series is shown in Figure 3, along with the monthly sunspot number5.
Footnote 5: Data accessed May 2023: [https://www.sidc.be/SILSO/datafiles](https://www.sidc.be/SILSO/datafiles)
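The projection in Eqs. (1)-(2) can be written compactly as \(B_{lm}=\int B_{r}\,Y^{*}_{lm}\,{\rm d}\Omega\). The authors compute the coefficients with pySHTOOLS; the sketch below is a schematic alternative (not their pipeline) using direct quadrature, and it assumes a regular colatitude-longitude grid, whereas WSO and HMI synoptic maps are commonly provided on a sine-latitude grid, which changes the quadrature weights.

```python
# Sketch (not the authors' pipeline): direct projection of a radial-field map
# onto orthonormal spherical harmonics, B_lm = Integral Br Y_lm* dOmega.
import numpy as np
from scipy.special import sph_harm

def decompose(br, lmax):
    """br: 2D array sampled on colatitude (rows) x longitude (columns)."""
    ntheta, nphi = br.shape
    theta = (np.arange(ntheta) + 0.5) * np.pi / ntheta       # colatitude
    phi = (np.arange(nphi) + 0.5) * 2.0 * np.pi / nphi       # Carrington longitude
    PHI, THETA = np.meshgrid(phi, theta)
    dOmega = np.sin(THETA) * (np.pi / ntheta) * (2.0 * np.pi / nphi)
    coeffs = {}
    for l in range(1, lmax + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm(m, l, azimuth, colatitude) matches the c_lm
            # normalisation of Eq. (2), up to the Condon-Shortley phase convention.
            Y = sph_harm(m, l, PHI, THETA)
            coeffs[(l, m)] = np.sum(br * np.conj(Y) * dOmega)
    return coeffs

# Round trip on a synthetic map: a pure (l, m) = (2, 1) real field should
# project onto the (2, +1) and (2, -1) modes with equal magnitude.
theta = (np.arange(90) + 0.5) * np.pi / 90
phi = (np.arange(180) + 0.5) * 2.0 * np.pi / 180
PHI, THETA = np.meshgrid(phi, theta)
br = np.real(sph_harm(1, 2, PHI, THETA))
c = decompose(br, lmax=3)
print(abs(c[(2, 1)]), abs(c[(2, -1)]))   # both ~ 0.5; other modes ~ 0
```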
### Dipole axis evolution
As the dipole mode has the slowest radial decay (in comparison to the quadrupole, octupole, etc), it has a significant role in shaping the coronal field. At large distances, the coronal field often appears dipolar, despite the complexity visible in the low
Figure 4: Comparison of energy in the axisymmetric dipole (\(l=1,m=0\)) and non-axisymmetric quadrupole components (\(l=2,m>0\)) versus solar cycle. The top row shows the energy in the axisymmetric dipole component. The second row shows the energy in the quadrupole \(m=\pm 1\) and \(m=\pm 2\) components quadratically summed. The third row shows the ratio between the two energies, with a dashed horizontal line marking unity. The bottom row displays the monthly sunspot number during this time, taken from WDC-SILSO.
Figure 3: Comparison of energy in the dipole (\(l=1\)), quadrupole (\(l=2\)), and octupole (\(l=3\)) modes versus solar cycle. The top row shows the energy in the dipole mode (quadratic sum of the \(m=0\) and \(m=\pm 1\) components). The second row shows the energy in the quadrupole (\(m=0,m=\pm 1\), and \(m=\pm 2\)) mode. The third row shows the energy in the octupole (\(m=0,m=\pm 1\), \(m=\pm 2\), and \(m=\pm 3\)) mode. The bottom row displays the monthly sunspot number during this time, taken from WDC-SILSO.
corona (seen during some solar eclipses Mikic et al., 2018). From the spherical harmonic decomposition of the magnetogram time-series, the evolution of the dipole axis is shown in the third and fourth panels of Figure 1. Values for latitude and longitude are given for the positive pole. There are some differences between the two timeseries, but overall the agreement is good. The cyclic reversals of the large-scale magnetic field during the solar cycle are easily identifiable, with the dipole reversing quickly during the onset of the cycle, reaching a fully inclined position around three years after sunspot minimum. The dipole then slowly completes its reversal during the remainder of the \(\sim 11\) year cycle, returning back to the same overall polarity after \(\sim 22\) years.
The dipole reverses during activity maxima once the pre-existing polar fields have been cancelled by newly emerged flux transported towards the poles (Mordvinov and Kitchatinov, 2019). However, each reversal is not smooth. Several epochs of stalling are identified with black horizontal bars in Figure 1. In addition, during these stalling epochs, the dipole axis slowly drifts in Carrington longitude (shown by the evolving colour of the points). This drift relates either to the local rotation rate of the magnetic flux comprising the dipole mode, or the emergence/decay of strong active regions and ephemeral regions contributing to the dipole mode. A more systematic analysis is left for future works.
### Quadrupole versus dipole ratio
The ratio of quadrupole to dipole energy varies with solar activity (see DeRosa et al., 2012), with the quadrupolar energy overwhelming the dipolar energy during solar maxima, and vice versa during solar minima. The time-evolution of the dipole and quadrupole energies derived from the magnetogram timeseries are shown in Figure 3 (see also Appendix B). Each mode has an axisymmetric component (\(m=0\)), along with non-axisymmetric
Figure 5: Solar cycle comparison of the monthly sunspot number, the dipole inclination angle, the ratio of non-axisymmetric quadrupole to axisymmetric dipole, and the hemispherical asymmetry in both sunspot number and unsigned magnetic flux, from both the WSO and SOHO/MDI & SDO/HMI timeseries. Each cycle is given a different line colour and style; cycle 25 is highlighted with a solid black line. Quantities are plotted with respect to the smoothed sunspot minima of each cycle.
component(s). For the quadrupole mode, the axisymmetric field strengths are significantly weaker than the non-axisymmetric components (see Figure 2), unlike the dipole whose components have similar strengths but appear with a phase lag (see Figure 1). As both dipolar and quadrupolar energies are largest during the maximum of activity, they are both sensitive to the strength of active regions. During solar minima, the dipolar energy scales with the polar field strengths. As the polar fields weaken with rising activity, the dipole then becomes more sensitive to the contributions from active regions. In contrast, the quadrupolar energy is mostly linked to the emergence of active regions.
The ratio of non-axisymmetric quadrupolar (\(m>0\)) and axisymmetric dipolar (\(m=0\)) energies, therefore reflects the competition between the polar fields and active regions in sculpting the large-scale coronal field. The ratio between these two terms is shown in Figure 4. The selection of these components, in comparison to using the total dipolar and quadrupolar energies, as was done in DeRosa et al. (2012), accentuates the underlying trend. A smaller quadrupolar to dipolar ratio signifies more open field concentrated at the rotational poles, and a larger value means more field emerging at active solar latitudes. This is discussed further in Section 5.
### Cycle to cycle variation
Figure 5 shows the monthly sunspot number, dipole latitude, quadrupole to dipole ratio, along with the north-south asymmetry in monthly sunspot number and unsigned magnetic flux, for the WSO and SOHO/MDI & SDO/HMI timeseries respectively. Each quantity is plotted in time with respect to the start of each solar cycle, as defined by the World Data Center SILSO6, at the Royal Observatory of Belgium (see Table 1). The dipole latitude follows whichever polarity is in the northern hemisphere at cycle minima, such that each reversal progresses from north to south. The reversal of the dipole mode progresses faster during the rising phase, with the dipole mode fully inclined around three years after sunspot minimum. Consequently, the quadrupole to dipole ratio, as defined here using the axisymmetric dipole, also changes regime around this time. Despite significant differences in solar cycle strengths (up to a factor of two in sunspot number),
Figure 6: Solar cycle evolution of the solar wind footpoints. The top two panels show the fraction of open field lines, in each Carrington rotation, that trace down through the PFSS from a uniform distribution in latitude-longitude at the source surface to a given latitude bin at the surface (width \(\sim 2^{\circ}\)) using WSO magnetograms (1976-2022) and SOHO/MDI & SDO/HMI magnetograms (1996-present), respectively (the longitudinal information is ignored, and the connectivity sums to 100% for each Carrington rotation). The third panel displays a combined radial magnetic field butterfly diagram, in which the pole-ward surges of magnetic flux are easily distinguished. The bottom row displays the monthly sunspot number during this time, taken from WDC-SILSO, along with the contributions from the northern and southern hemispheres individually.
the evolution of the dipole inclination angle and quadrupole to dipole ratio is very similar from cycle to cycle. Notable exceptions are the declining phase of cycle 23 and the onset of cycle 24. In both cases, the dipole is more inclined than the other cycles, and the quadrupole to dipole ratio is elevated.
From the lower panels of Figure 5, the north-south asymmetry in sunspot number and unsigned magnetic flux (however noisy) follows the same trend shown by the hemispherical lag times in Figure 2, peaking for the northern hemisphere before the southern hemisphere. The asymmetric variation in sunspot number and unsigned flux are correlated, as they principally measure the same phenomena, except that the unsigned flux is also sensitive to the polar fields and ephemeral regions. The north-south asymmetry is generally limited to a difference of around 60 sunspots or \(8\times 10^{21}\)Mx of unsigned flux between hemispheres.
## 4 Solar wind sources and coronal rotation
The coronal magnetic field topology was reconstructed using a PFSS model (Altschuler and Newkirk, 1969; Schrijver and DeRosa, 2003). Coronal magnetic fields are computed efficiently due to the simplicity of the PFSS model, and so it can be applied to large datasets of magnetograms. The PFSS models were driven by the spherical harmonic coefficients extracted in Section 3 from the radial field at the photosphere. The magnetic field in the model was constrained to be current-free (\(\nabla\times B=0\)) and purely radial at the source surface (fixed at \(2.5R_{\odot}\) in this study). The coronal magnetic field is described by,
\[B_{r}(r,\theta,\phi)=\sum_{l=1}^{l_{max}}\sum_{m=-l}^{l}\alpha_{lm}(r)Y_{lm}( \theta,\phi), \tag{3}\]
\[B_{\theta}(r,\theta,\phi)=\sum_{l=1}^{l_{max}}\sum_{m=-l}^{l}\beta_{lm}(r)Z_{lm}(\theta,\phi), \tag{4}\]
\[B_{\phi}(r,\theta,\phi)=\sum_{l=1}^{l_{max}}\sum_{m=-l}^{l}\beta_{lm}(r)X_{lm}(\theta,\phi), \tag{5}\]
with,
\[Y_{lm} = c_{lm}P_{lm}(\cos\theta)e^{im\phi}, \tag{6}\] \[Z_{lm} = \frac{c_{lm}}{l+1}\frac{dP_{lm}(\cos\theta)}{d\theta}e^{im\phi}, \tag{7}\] \[X_{lm} = \frac{c_{lm}}{l+1}P_{lm}(\cos\theta)\frac{im}{\sin\theta}e^{im\phi}, \tag{8}\]
where \(r\) denotes the radial distance from the origin, \(\alpha_{lm}(r)\) and \(\beta_{lm}(r)\) are functions denoting the radial dependence of each spherical harmonic component (see Finley & Brun, 2023, and references therein), and the normalisation \(c_{lm}\) follows equation (2). In this study, an \(l_{max}\) of 30 was used in all the PFSS reconstructions.
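The radial dependence encoded in \(\alpha_{lm}(r)\) follows from the potential-field condition together with the purely radial field at the source surface. The sketch below evaluates the standard PFSS radial profile for the \(B_{r}\) amplitudes obtained from those boundary conditions; matching the exact normalisation used by Finley & Brun (2023) is an assumption, but the \(l\)-dependent decay towards the source surface is the generic result.

```python
# Sketch (not the authors' code): standard PFSS radial attenuation of the B_r
# harmonic amplitudes, from Laplace's equation with a purely radial field at
# the source surface r_ss (here 2.5 solar radii, as in the text).
import numpy as np

def br_attenuation(l, r, r_ss=2.5, r_sun=1.0):
    """alpha_lm(r) / alpha_lm(r_sun) for the radial field of a PFSS model."""
    num = (l + 1.0) + l * (r / r_ss) ** (2 * l + 1)
    den = (l + 1.0) + l * (r_sun / r_ss) ** (2 * l + 1)
    return (num / den) * (r_sun / r) ** (l + 2)

r = np.linspace(1.0, 2.5, 4)
for l in (1, 2, 3, 5):
    print(l, np.round(br_attenuation(l, r), 4))
# Higher-l modes decay faster with radius, which is why the field at the
# source surface is dominated by the lowest-order (dipolar) contributions.
```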
### Latitudinal connectivity
PFSS models were computed for every magnetogram in each timeseries. The distribution of solar wind source latitudes was evaluated by tracing field lines down from the source surface to the photosphere. The resulting distribution was summed in longitude and stacked in time in order to produce the connectivity butterfly diagrams in Figure 6. The top panel contains the WSO timeseries, and the second panel the SOHO/MDI & SDO/HMI timeseries. Differences between the two timeseries arise primarily from the relative resolutions of the underlying magnetogram timeseries. The WSO connectivity butterfly diagram is less sharp, but in general shows the same patterns and trends as the SOHO/MDI & SDO/HMI timeseries.
During each activity cycle, the sources of the open coronal magnetic field, and by extension the solar wind, evolve from the polar field at minimum to the active latitudes as solar activity increases (explored in Stansby et al., 2021). The emergence of active regions distorts the coronal field and provides new source regions, be it the active regions themselves, equatorial coronal holes, or ephemeral regions. Easily identified are the pole-ward surges of magnetic flux (see discussion in Finley and Brun, 2023), which also account for a significant fraction of the open field during the decaying phase of the activity cycle, as the polar fields begin to regenerate.
### Coronal rotation
From the cyclic variation of footpoint latitudes identified with the PFSS modelling, the impact on coronal rotation was estimated by extrapolating the photospheric rotation rate along open magnetic field lines. The conservation of effective rotation rate is assumed, i.e. the balance between rotational flows and magnetic stresses in the field lines is maintained with height, as in Finley and Brun (2023). This does not account for the torques exerted between neighbouring field lines anchored at different latitudes, or time-dependent changes to the coronal field. Despite these caveats, the mean effective rotation rate is thought to be a reasonable constraint on the solid-body rotation rate required to match the angular momentum-loss rate of the solar wind for a given epoch (see Ireland et al., 2022). The Sun's photospheric rotation rate was parameterised as,
\[\Omega_{\star}(\theta)=\Omega_{eq}+\alpha_{2}\cos^{2}\theta+\alpha_{4}\cos^{4}\theta, \tag{9}\]
where \(\Omega_{eq}\) is the equatorial rotation rate, and the values of \(\alpha_{2}\) and \(\alpha_{4}\) describe the north-south symmetric differential rotation profile. Values of \(\Omega_{eq}=472.6\) nHz, \(\alpha_{2}=-73.9\) nHz, and \(\alpha_{4}=-52.1\) nHz were adopted from Snodgrass (1983), which are consistent with Finley and Brun (2023). A version of Figure 6 colouring the connectivity butterfly diagram with the photospheric rotation rate is available in Appendix C.
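For reference, equation (9) with the quoted Snodgrass (1983) coefficients is straightforward to evaluate; the short sketch below simply tabulates this profile (values in nHz, colatitude in radians) and is included only as a worked example of the parameterisation.

```python
import numpy as np

# Snodgrass (1983) coefficients quoted above, in nHz
OMEGA_EQ, ALPHA_2, ALPHA_4 = 472.6, -73.9, -52.1

def photospheric_rotation(theta):
    """Equation (9): rotation rate (nHz) at colatitude theta (radians)."""
    c2 = np.cos(theta) ** 2
    return OMEGA_EQ + ALPHA_2 * c2 + ALPHA_4 * c2 ** 2

# Example: the equator rotates at 472.6 nHz, while at 60 degrees latitude
# (colatitude 30 degrees) the rate drops to roughly 388 nHz.
print(photospheric_rotation(np.radians(90.0)), photospheric_rotation(np.radians(30.0)))
```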
Snapshots from the PFSS modelling are shown in Figure 7, with open field lines coloured by their photospheric rotation rates. For each activity cycle, five PFSS models have been selected, one at minimum of activity, then after \(\sim 1.5\) years (rising phase), \(\sim 3\) years (reversal), \(\sim 5\) years (high activity), and finally \(\sim 7.5\) years (declining phase). These epochs are indicated with coloured arrows in the lower panel with the monthly sunspot number. During each cycle, the overall trend is the same. As activity increases, field lines connect to more rapidly rotating latitudes at the photosphere, and so the open field lines are more frequently red-orange tones, in comparison to the slowly rotating blue-green of solar minima. The declining phase of each cycle typically contains a few decaying active regions which maintain areas of enhanced rotation, mostly absent at solar minima.
The azimuthally averaged rotation rate in the corona was extracted from the PFSS models using the rotation rates traced along open magnetic field lines. This avoids the degeneracy from closed field lines whose footpoints are anchored at different latitudes. If no values were available for a given latitude and radius, i.e. the location contained only closed field, a value was linearly interpolated from neighbouring regions. Figure 8 contrasts the result of this process for two PFSS models, one from solar minimum and one from solar maximum (both from the SDO/HMI |
2309.06954 | Limit-closed Profiles | Tangle-tree theorems are an important tool in structural graph theory, and
abstract separation systems are a very general setting in which tangle-tree
theorems can still be formulated and proven. For infinite abstract separation
systems, so far tangle-tree theorems have only been shown for special cases of
separation systems, in particular when the separation system arises from a
(locally finite) infinite graph. We present a tangle-tree theorem for infinite
separation systems where we do not place restrictions on the separation system
itself but on the tangles to be arranged in a tree. | Ann-Kathrin Elm, Hendrik Heine | 2023-09-13T13:41:07Z | http://arxiv.org/abs/2309.06954v1 | # Limit-closed Profiles
###### Abstract.
Tangle-tree theorems are an important tool in structural graph theory, and abstract separation systems are a very general setting in which tangle-tree theorems can still be formulated and proven. For infinite abstract separation systems, so far tangle-tree theorems have only been shown for special cases of separation systems, in particular when the separation system arises from a (locally finite) infinite graph. We present a tangle-tree theorem for infinite separation systems where we do not place restrictions on the separation system itself but on the tangles to be arranged in a tree.
Key words and phrases: infinite abstract separation system, tree set, tangle-tree theorem, profile. 2020 Mathematics Subject Classification: 06-XX (Primary), 05C63, 05C05 (Secondary).
## 1. Introduction
Tangles were first introduced by Robertson and Seymour in [11] as an obstruction to high tree-width. One of the important ingredients in their graph minor project was a tangle-tree theorem, that is, a theorem giving a tree decomposition separating all the tangles of a graph.
As it turns out these features of tangles, both being an obstruction and admitting a tangle-tree theorem, can be formulated more abstractly. This is the core of the theory of abstract separation systems formulated in [5]. In this setting tangle-tree theorems are now usually formulated more generally in terms of profiles instead of tangles and tree sets instead of tree decompositions, as seen for instance in [7]. While that article gives a very general such theorem for finite separation systems, for infinite separation systems there is no general tangle-tree theorem. But there are several tangle-tree theorems for special cases of infinite separation systems: for separation systems that come from a (locally finite) graph (see for example [3]), are the inverse limit of finite separation systems [8], or in which every separation crosses only finitely many others [10]. In this article we consider a property not of the infinite separation system but of the profiles to be distinguished in order to obtain a tangle-tree theorem. In particular we ask that the profiles to be distinguished are closed under taking limits (the formal definition follows in Section 3).
In that same section we find, for a separation system and a set of regular closed profiles, a tree set distinguishing the profiles in a two-step process. Here, a regular profile is a profile that does not contain certain separations that behave very counter-intuitively. First, we consider a set of equivalence classes of separations, show that these form a separation system with special properties, and find a tree set of such equivalence classes. Then we choose representatives for these equivalence classes that form a tree set. In Section 4 we show that the regularity can be dropped from the requirements on the profiles by slightly adjusting the separation system such that it does not contain separations with counter-intuitive behaviour.
## 2. Preliminaries
Throughout we use the standard terminology for abstract separation systems, universes, order functions and tree sets from [5], and only recall the notions that are used repeatedly below. A separation system \(S\) inside a universe \(U\) is _structurally submodular_ if for all \(s\)
and \(t\) in \(S\) at least one of \(s\lor t\) and \(s\wedge t\) is contained in \(S\). A _profile of \(S\)_ is a consistent orientation of \(S\) such that for any elements \(s\) and \(t\) of the profile the separation \((s\lor t)^{*}\) is not contained in the profile. In particular, if \(s\lor t\) is contained in \(S\) then it is also contained in the profile, and a separation system containing a degenerate separation does not have a profile. Note that the definition of a profile thus not only depends on the separation system but also on the surrounding universe. If \(U\) is submodular, then a _\(k\)-profile_ of \(U\) is a profile of \(S_{k}\), the subsystem of \(U\) consisting of the separations of order less than \(k\). A _profile of \(U\)_ is a \(k\)-profile of \(U\) for some \(k\in\mathbb{N}\), and \(k\) is its _order_.
Given a \(k\)-profile \(P\) of \(U\) and \(l\in\mathbb{N}\) with \(l\leq k\), the set \(P\cap S_{l}\) is the \(l\)-profile _induced_ by \(P\). Two profiles \(P\) and \(Q\) of \(U\) are _distinguished_ by a separation \(s\) if \(s\neq s^{*}\) and one of \(s\) and \(s^{*}\) is contained in \(P\) and the other one in \(Q\). If \(U\) is submodular and the order of \(s\) is minimal among all separations distinguishing \(P\) and \(Q\), then \(s\) distinguishes them _efficiently_. A set of separations \(T\) distinguishes a set \(\mathcal{P}\) of profiles of \(U\) (efficiently) if for every pair of distinguishable profiles in \(\mathcal{P}\) (that is, profiles for which there is a separation distinguishing them) there is a separation in \(T\) distinguishing them (efficiently). A profile \(P\) of \(U\) is _robust_ if for every \(r\in P\) and every \(l\)-separation \(s\) the following holds: If the orders of both \(r^{*}\wedge s\) and \(r^{*}\wedge s^{*}\) are less than the order of \(r\), then \(P\) does not contain both \(r^{*}\wedge s\) and \(r^{*}\wedge s^{*}\). There are also other, slightly weaker definitions of robustness. But that is not relevant, as robustness is never used directly, only the following consequence [7, Lemma 3.6]: In a submodular universe, if \(r\) efficiently distinguishes two robust profiles \(P\) and \(P^{\prime}\) and \(s\) efficiently distinguishes two robust profiles \(Q\) and \(Q^{\prime}\) and the order of \(r\) is less than the order of \(s\), then one of \(r\wedge s\), \(r\wedge s^{*}\), \(r^{*}\wedge s\) and \(r^{*}\wedge s^{*}\) efficiently distinguishes \(Q\) and \(Q^{\prime}\).
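As an illustrative aside (not part of the original paper), the following toy Python sketch makes the notions of distinguishing and of distinguishing efficiently concrete: profiles are encoded as sets of oriented separations written as strings, with `*` marking the involution, and an order function is a plain dictionary; both the encoding and the example values are inventions for illustration.

```python
def inverse(s):
    """Toy involution on oriented separations encoded as strings: 's' <-> 's*'."""
    return s[:-1] if s.endswith("*") else s + "*"

def distinguishes(s, P, Q):
    """s distinguishes the profiles P and Q if they orient it differently."""
    return s != inverse(s) and (
        (s in P and inverse(s) in Q) or (inverse(s) in P and s in Q))

def efficient_distinguisher(P, Q, order):
    """A separation of minimal order distinguishing P and Q, or None.

    order maps one orientation of each separation to its order; both
    orientations are assumed to have the same order.
    """
    candidates = [s for s in order if distinguishes(s, P, Q)]
    return min(candidates, key=order.get) if candidates else None

# Toy example: r (order 1) and s (order 2) both distinguish P and Q,
# but only r does so efficiently.
P = frozenset({"r", "s"})
Q = frozenset({"r*", "s*"})
print(efficient_distinguisher(P, Q, {"r": 1, "s": 2}))  # prints: r
```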
## 3. Equivalence classes
Let \(S\) be a separation system in some universe \(U\) and \(\mathcal{P}\) a set of regular profiles of \(S\). Then we can define a map \(f_{\mathcal{P}}:S\to 2^{\mathcal{P}}\) via \(f_{\mathcal{P}}(s)=\{P\in\mathcal{P}|s^{*}\in P\}\). Intuitively, the image of \(f_{\mathcal{P}}\) condenses from \(S\) the information how it distinguishes the elements of \(\mathcal{P}\). We consider \(2^{\mathcal{P}}\) as a separation system with inclusion as the order.
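Using the same toy string encoding of oriented separations as in the sketch above (an invention for illustration, not code from the paper), the map \(f_{\mathcal{P}}\) can be written down in a couple of lines.

```python
def inverse(s):
    """Toy involution: 's' <-> 's*'."""
    return s[:-1] if s.endswith("*") else s + "*"

def f_P(s, profiles):
    """Image of the oriented separation s under f_P: the profiles containing s*."""
    return frozenset(P for P in profiles if inverse(s) in P)

# Two toy profiles that disagree only on t:
P1 = frozenset({"s", "t"})
P2 = frozenset({"s", "t*"})
profiles = {P1, P2}
print(f_P("t", profiles) == frozenset({P2}))  # True: P2 is the only profile containing t*
print(f_P("s", profiles) == frozenset())      # True: s distinguishes nothing here
```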
**Lemma 3.1**.: _The map \(f_{\mathcal{P}}\) is a homomorphism of separation systems which respects \(\vee\) and \(\wedge\)._
Proof.: Since profiles orient every separation, \(f_{\mathcal{P}}\) respects the involution. Furthermore if \(s\leq t\) and \(s^{*}\in P\) for some profile \(P\), then we cannot have \(t\in P\) by consistency, so \(f_{\mathcal{P}}(s)\leq f_{\mathcal{P}}(t)\). Now it suffices to prove that for \(s,t\in S\) with \(s\lor t\in S\) an arbitrary \(P\in\mathcal{P}\) contains \(s\lor t\) if and only if it contains both \(s\) and \(t\). The forward implication follows by consistency, the backwards one by the profile property.
The fibers of \(f_{\mathcal{P}}\) are exactly the equivalence classes obtained by regarding two separations as equivalent if they are oriented the same way by every profile in \(\mathcal{P}\). Assuming a structurally submodular \(S\), comparing these equivalence classes via their images under \(f_{\mathcal{P}}\) gives the same partial order as comparing them via their elements. In fact, a slightly weaker condition suffices. Call \(S\)_weakly \(\mathcal{P}\)-submodular (with respect to \(\mathcal{P}\)_) if whenever \(f_{\mathcal{P}}(s)\leq f_{\mathcal{P}}(t)\) at least one of \(s\lor t\) and \(s\wedge t\) is contained in \(S\).
**Proposition 3.2**.: _Let \(S\) be weakly \(\mathcal{P}\)-submodular with respect to \(\mathcal{P}\). Then for \(A,B\in\operatorname{im}(f_{\mathcal{P}})\) we have \(A\leq B\) if and only if there are \(a\in f_{\mathcal{P}}^{-1}(A),b\in f_{\mathcal{P}}^{-1}(B)\) with \(a\leq b\)._
Proof.: The backward direction is immediate by Lemma 3.1. For the forward direction choose \(a\in f_{\mathcal{P}}^{-1}(A),b\in f_{\mathcal{P}}^{-1}(B)\). At least one of \(a\wedge b\) and \(a\lor b\) is contained in \(S\) by weak submodularity. If \(a\wedge b\in S\), then by Lemma 3.1 it is contained in \(f_{\mathcal{P}}^{-1}(B)\) and thus forms the required pair together with \(b\). Similarly, if \(a\lor b\in S\), then \(a\lor b\in f_{\mathcal{P}}^{-1}(A)\) and it forms the required pair with \(a\).
We want to use the transformation \(f_{\mathcal{P}}\) to find a distinguishing set for \(\mathcal{P}\). We will proceed in two steps: First we will look for an abstract distinguishing set in the image and then look for separations of \(S\) to represent them.
Let us start by stating our objective for the first step more formally. We will say that \(A\in\operatorname{im}(f_{\mathcal{P}})\)_separates_\(P,Q\in\mathcal{P}\) if \(P\in A\) and \(Q\notin A\) or vice versa. Then we are looking for some tree set \(T\subseteq\operatorname{im}(f_{\mathcal{P}})\) such that any two different elements of \(\mathcal{P}\) are separated by some element of \(T\). Since structural submodularity of \(S\) translates to \(\operatorname{im}(f_{\mathcal{P}})\), for finite \(\mathcal{P}\) standard techniques easily show that this condition is enough to reach our goal. If \(\mathcal{P}\) is infinite these standard methods are not sufficient, but there is a useful idea, seen for instance in [1] or [4], which may help. That idea is taking only the _good_ separations, that is those not crossed by any other separation, for our tree set.
In the following we will show that, given certain conditions, the set \(T(S,\mathcal{P})\) of good separations of \(\operatorname{im}(f_{\mathcal{P}})\) (except \(\emptyset\) and \(\mathcal{P}\)) does indeed meet our demands. Since \(T(S,\mathcal{P})\) is a tree set by definition, all that needs to be shown is that \(T(S,\mathcal{P})\) separates any two distinct elements of \(\mathcal{P}\). When dealing with finite separation systems, it is sometimes useful to consider maximal separations. If we want to use this trick in the infinite case, we encounter some difficulties. First of all, profiles may not even have maximal elements, which would render our strategy impossible. Thus we require our profiles to be _closed_, meaning that any chain in the profile has a supremum in the universe which is contained in the profile. This ensures that each profile has a maximal element, but even these maximal elements may not have the nice properties which we are used to, say when we have an order function.
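In \(2^{\mathcal{P}}\), ordered by inclusion, the separations \(\{A,\mathcal{P}\setminus A\}\) and \(\{B,\mathcal{P}\setminus B\}\) are nested precisely when one of \(A\subseteq B\), \(B\subseteq A\), \(A\cap B=\emptyset\) or \(A\cup B=\mathcal{P}\) holds. The following toy sketch (again only an illustration, with profiles reduced to plain labels; none of it is code from the paper) computes the good separations \(T(S,\mathcal{P})\) from a given finite image of \(f_{\mathcal{P}}\).

```python
def nested(A, B, all_profiles):
    """Nestedness of the separations {A, complement} and {B, complement} in 2^P."""
    return A <= B or B <= A or not (A & B) or (A | B) == all_profiles

def good_separations(image, all_profiles):
    """T(S, P): elements of im(f_P) that are nested with every other element,
    excluding the two improper separations (the empty set and all of P)."""
    return {A for A in image
            if A not in (frozenset(), all_profiles)
            and all(nested(A, B, all_profiles) for B in image)}

# Toy image over four profiles p, q, r, s (profiles are just labels here):
allP = frozenset("pqrs")
image = {frozenset("p"), frozenset("pq"), frozenset("qr"), frozenset("pqr")}
print(good_separations(image, allP))
# {'p'} and {'p','q','r'} are good; {'p','q'} and {'q','r'} cross each other.
```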
Thus we need one more condition, which emulates some of the additional structure provided by an order function. We call \(S\)_orderly_ (with respect to \(\mathcal{P}\)) if for any \(s,t\in S\) such that both \(\{s,t\}\) and \(\{s^{*},t^{*}\}\) are subsets of (possibly different) elements of \(\mathcal{P}\) we have \(s\lor t\in S\) and \(s\wedge t\in S\). To give an example, it will follow from Lemma 5.3 that the set of proper \(k\)-separations in a \(k\)-connected graph is orderly.
**Lemma 3.3**.: _If \(S\) is orderly and every \(P\in\mathcal{P}\) is closed, all different \(P,Q\in\mathcal{P}\) are separated by some \(t\in T(S,\mathcal{P})\)._
Proof.: Let \(C\) be a chain in \(S\), maximal with the property that each element of \(C\) is contained in \(P\) and not in \(Q\). Since \(P\) is closed, \(C\) has a supremum \(s\), which is contained in \(P\). Furthermore, by consistency, we have \(s\notin Q\). Now it suffices to prove that \(f_{\mathcal{P}}(s)\) is good. If not, there exists some \(x\in S\) such that the images of \(s\) and \(x\) cross, without loss of generality \(x\in P\). But since \(S\) is orderly, this would imply \(s\lor x\in S\) and hence \(s\lor x\in P\), and this separation could have been added to \(C\), contradicting the maximality of \(C\).
Now let us consider the second step. Starting with a regular tree set \(T\) in \(\operatorname{im}(f_{\mathcal{P}})\), we want to find a nested set of preimages (or equivalently one isomorphic to \(T\)). Once again we want to use maximal separations and so need the same conditions as before. Closure guarantees that each equivalence class has a maximal separation and orderliness that it is even greatest.
**Lemma 3.4**.: _If every \(P\in\mathcal{P}\) is closed, for every \(t\in T\) the set \(f_{\mathcal{P}}^{-1}(t)\) has a greatest element._
Proof.: Let \(X=f_{\mathcal{P}}^{-1}(t)\) and let \(C\) be a nonempty chain in \(X\). Since \(\mathcal{P}\setminus t\) is nonempty and each profile in that set is closed, \(C\) has a supremum \(s\) with \(f_{\mathcal{P}}(s)\supseteq t\). Conversely there cannot be any \(Q\in t\) with \(s\in Q\), since we would then have \(C\subseteq Q\) by consistency, a contradiction. Thus \(s\in X\). This implies that any element of \(X\) lies below some maximal element.
Now let \(a\) and \(b\) be two maximal elements of \(X\). Since \(S\) is orderly, \(a\lor b\in S\) and then also \(a\lor b\in X\). By maximality of \(a\) and \(b\) these must then both be equal to \(a\lor b\) and we have \(a=b\). Thus \(X\) only has a single maximal element which must then be greater than all other separations in \(X\).
Let \(m\) be the function mapping each \(t\in T\) to the greatest element of \(f_{\mathcal{P}}^{-1}(t)\). We would like to use \(m\) to choose the representatives, however, this is not quite possible, since for \(t\in T\) the separations \(m(t)\) and \(m(t^{*})\) are usually not inverses. Fortunately, this is no great obstacle. If we simply choose one of these two possible unoriented separations, the only thing that could go wrong is that for \(s\leq t\) we choose \(m(s)\) and \(m(t^{*})^{*}\) as the representing separations. It thus suffices to choose exactly opposite to a consistent orientation, which always exists. So we fix a consistent orientation \(o\) of \(T\) and define the function \(m_{o}\) by mapping \(s\in T\) to \(m(s^{*})^{*}\) for \(s\in o\) and to \(m(s)\) for \(s\notin o\). Let \(\hat{T}\) be the image of \(T\) under \(m_{o}\).
**Corollary 3.5**.: _Let \(S\) be orderly and every \(P\in\mathcal{P}\) closed. Then \(f_{\mathcal{P}}\) restricts to an isomorphism between \(\hat{T}\) and \(T\)._
Proof.: Clearly, \(m_{o}\) is the inverse map of \(f_{\mathcal{P}}\) restricted to the image of \(m_{o}\). Since \(f_{\mathcal{P}}\) is a homomorphism by Lemma 3.1, it suffices to show that \(m_{o}\) is, too. So let \(s,t\in T\) be such that \(s\leq t\). We need to show that \(m_{o}(s)\leq m_{o}(t)\). If \(t\notin o\), we have \(m_{o}(t)=m(t)\). Since \(S\) is orderly, we have \(m(t)\lor m_{o}(s)\in S\). By Lemma 3.1, \(f_{\mathcal{P}}(m(t)\lor m_{o}(s))=f_{\mathcal{P}}(m(t))\cup f_{\mathcal{P}}(m_{o}(s))=t\cup s=t\). But since \(m(t)\) is a greatest element, we must have \(m_{o}(s)\leq m(t)=m_{o}(t)\). So we may assume \(t\in o\) and by consistency also \(s\in o\). Then we have \(m_{o}(s)=m(s^{*})^{*}\) and \(m_{o}(t)=m(t^{*})^{*}\). Since \(S\) is orderly, we have \(m(s^{*})\lor m(t^{*})\in S\) and calculating with Lemma 3.1 as before we get \(f_{\mathcal{P}}(m(s^{*})\lor m(t^{*}))=s^{*}\). Again by choice of \(m(s^{*})\) we must have \(m(s^{*})\geq m(t^{*})\) and thus \(m_{o}(s)\leq m_{o}(t)\).
This completes the second step. Overall, we have now proven the following theorem.
**Theorem 3.6**.: _Let \(S\) be a regular separation system orderly with respect to a set \(\mathcal{P}\) of closed profiles. Then there is a tree set \(T\) with the following properties:_
1. _Any two different elements of_ \(\mathcal{P}\) _are distinguished by some element of_ \(T\)_._
2. _Any element of_ \(T\) _distinguishes some elements of_ \(\mathcal{P}\)_._
3. _In the set_ \(P\cap T\) _for every_ \(P\in\mathcal{P}\) _every separation lies below some maximal separation._
## 4. Non-regular profiles
In this section we want to show that in Theorem 3.6, the regularity condition can be dropped. In order to do so, we will slightly adjust \(S\) and \(\mathcal{P}\) to a regular separation system with a modified set of profiles. Then the modified profiles are necessarily regular, and theory for regular profiles can be applied to them and the adjusted separation system. As a last step we will show that any tree set distinguishing the modified profiles also distinguishes \(\mathcal{P}\).
There is a procedure to make a separation system regular in [5]: first taking its _essential core_, i.e. deleting all trivial, co-trivial and degenerate elements, and then taking the _regularization_ of that essential core, that is, dropping relations of the form \(s\leq s^{*}\) from the partial order. This two-step procedure is necessary because dropping relations of the form \(s\leq s^{*}\) from the partial order need not yield another partial order if there are degenerate or trivial elements present. But if \(S\) is a subsystem of a surrounding universe, it is in general not possible to just delete trivial elements from the universe, and then it is not possible to adjust the surrounding universe to also be a surrounding universe for the regularization. In order to overcome this problem, we will have to relax the notion of corners in a separation system once more so that we can cope without a surrounding universe.
**Definition 4.1**.: Let \((S,{}^{*},\leq)\) be a separation system. A _corner map_ is a map \(\vee\) from a subset of \(S\times S\) to \(S\) such that
* if \(s\leq t\), then \(\vee(s,t)\) is defined, and
* if \(\vee(s,t)\) is defined, then also \(\vee(t,s)\) is defined and it is the supremum of \(s\) and \(t\) in the partial order \(\leq\).
For a separation system with corner map, in analogy to a separation system that is a subsystem of a universe, we say that a corner \(s\lor t\)_exists_ in \(S\) or _is contained in \(S\)_ if \(\vee(s,t)\) is defined. Also, \(\vee(s,t)\) will be denoted by \(s\lor t\), and \(s\wedge t\) is a shorthand for \((s^{*}\lor t^{*})^{*}\).
**Example 4.2**.: If \(S\) is a separation system that is a subsystem of some universe \(U\), then \(S\) naturally comes with a corner map where \(\vee(s,t)\) is defined if \(s\lor t\), which exists in the surrounding universe, is contained in \(S\). Another example of a corner map, which can be defined without taking a surrounding universe into account, is to define \(\vee(s,t)\) to exist whenever \(s\) and \(t\) have a supremum in the partial order \(\leq\). As every separation system can be embedded into a universe in a way that preserves suprema of \(\leq\) (see [9, Theorem 3.1]), this example is actually a special case of the previous example for a corner map. As a third example, one can define a corner map where \(\vee(s,t)\) is defined only when \(s\leq t\) or \(t\leq s\).
Formally, we have to redefine our terminology for separation systems with corner map. But of course all these definitions will be the same as before and just as expected.
**Definition 4.3**.: Let \((S,\leq,{}^{*},\vee)\) be a separation system with a corner map. A _profile_ of the separation system with corner map is a consistent orientation \(P\) of \((S,\leq,{}^{*})\) such that for all \(s,t\) in \(P\), if \(s\lor t\) is defined then \((s\lor t)^{*}\) is not contained in \(P\). Given a set of profiles \(\mathcal{P}\), the separation system with corner map is _orderly_ with respect to \(\mathcal{P}\) if for all separations \(s\) and \(t\) such that some profile in \(\mathcal{P}\) contains \(s\) and \(t\) and some profile in \(\mathcal{P}\) contains \(s^{*}\) and \(t^{*}\), both \(s\lor t\) and \(s^{*}\lor t^{*}\) are defined.
A profile \(P\) is _closed_ in \(S\) if for all \(\leq\)-chains in \(P\) the supremum with respect to \(\leq\) exists in \(S\) and is contained in \(P\).
Note that if \(S\) is a separation system that is a subsystem of a universe \(U\), then a consistent orientation of \(S\) is a profile of \(S\subseteq U\) if and only if it is a profile of \(S\) with the induced corner map, and a profile of \(S\) that is closed with respect to \(S\) as a subsystem of \(U\) is also closed in \(S\) itself. Furthermore, if \(\mathcal{P}\) is a set of profiles of \(S\), then \(S\subseteq U\) is orderly with respect to \(\mathcal{P}\) if and only if \(S\) with the induced corner map is orderly with respect to \(\mathcal{P}\).
Now we want to apply regularization to a separation system with corner map and a set of profiles. Through this process we can essentially keep the corner map and the set of profiles, and we preserve orderliness and closedness.
**Definition 4.4**.: Let \((S,\leq,^{*},\vee)\) be a separation system with a corner map. The _essential core_ of \((S,\leq,^{*},\vee)\) is the set \(S^{\prime}\) of separations in \(S\) that are neither degenerate nor trivial nor co-trivial, together with the restrictions of \(\leq,^{*}\) and \(\vee\) to \(S^{\prime}\). The _regularization_ of the essential core is the tuple \((S^{\prime},\leq^{\prime},^{*^{\prime}},\vee^{\prime})\) where \(s\leq^{\prime}t\) for elements \(s\) and \(t\) of \(S^{\prime}\) if and only if \(s\leq t\) and \(s\neq t^{*}\), \({}^{*^{\prime}}\) is the restriction of \({}^{*}\) to \(S^{\prime}\) and \(\vee^{\prime}\) is defined for all those elements \(s\) and \(t\) of \(S^{\prime}\) for which \(s\lor t\) is defined, contained in \(S^{\prime}\), and the supremum of \(s\) and \(t\) in \(\leq^{\prime}\).
**Lemma 4.5**.: _Let \(\mathcal{P}\) be a set of profiles of a separation system with corner map \((S,\leq,^{*},\vee)\). Then the regularization \((S^{\prime},\leq^{\prime},^{*^{\prime}},\vee^{\prime})\) of the essential core is a regular separation system with corner map of which \(\mathcal{P}^{\prime}:=\{P\cap S^{\prime}\colon P\in\mathcal{P}\}\) is a set of profiles. Moreover, if \(S\) is orderly with respect to \(\mathcal{P}\), then \(S^{\prime}\) is orderly with respect to \(\mathcal{P}^{\prime}\), and if the elements of \(\mathcal{P}\) are closed with respect to \(\leq\) then the elements of \(\mathcal{P}^{\prime}\) are closed with respect to \(\leq^{\prime}\)._
Proof.: It has already been shown in [5] that \((S^{\prime},\leq^{\prime},^{*^{\prime}})\) is a regular separation system, and \(\vee^{\prime}\) is clearly a corner map of that separation system.
Let \(P\in\mathcal{P}\). By the definition of profile of a separation system with corner map, \(P\cap S^{\prime}\) is a profile of \((S^{\prime},\leq^{\prime},^{*^{\prime}},\vee^{\prime})\). We will now show that if \(P\) is closed with respect to \(\leq\), then \(P\cap S^{\prime}\) is closed with respect to \(\leq^{\prime}\). So assume \(P\) is closed with respect to \(\leq\) and let \((s_{i})_{i\in I}\) be a \(\leq^{\prime}\)-chain of elements of \(P\cap S^{\prime}\). Then \((s_{i})_{i\in I}\) is also a \(\leq\)-chain of elements of \(P\), so it has a supremum \(s\) with respect to \(\leq\) and that supremum is contained in \(P\). If \(s=s_{i}\) for some index \(i\), then \(s_{i}\) is contained in \(P\cap S^{\prime}\) and is the supremum of the \(s_{i}\) with respect to \(\leq^{\prime}\). We will show that \(s\) is also the supremum of the \(s_{i}\) with respect to \(\leq^{\prime}\) in the case that \(s\neq s_{i}\) for all indices \(i\). For that, let \(i\) be some index. As \(s_{i}<s\), and \(s_{i}\) is contained in \(S^{\prime}\) and thus neither trivial nor degenerate, also \(s\) is neither trivial nor degenerate. Furthermore, \(s\) is contained in \(P\) and hence cannot be cotrivial. Thus \(s\) is contained in \(S^{\prime}\) and hence \(s\in P\cap S^{\prime}\). In order to show that \(s_{i}\leq^{\prime}s\), let \(j\) be another index such that \(s_{i}<^{\prime}s_{j}\). Then \(s_{i}<s_{j}<s\), and as \(s_{j}\) is not trivial, \(s_{i}\neq s^{*}\). Thus \(s_{i}\leq^{\prime}s\). Let \(s^{\prime}\) be another upper bound of the \(s_{i}\) with respect to \(\leq^{\prime}\). Then \(s_{i}<s<s^{\prime}\), and as \(s_{i}\) is not trivial, \(s^{*}\neq s^{\prime}\). Hence \(s\leq^{\prime}s^{\prime}\), and \(s\) is indeed the supremum of the \(s_{i}\) with respect to \(\leq^{\prime}\).
We will now show that if \(S\) is orderly with respect to \(\mathcal{P}\), then \(S^{\prime}\) is orderly with respect to \(\mathcal{P}^{\prime}\). So let \(s\) and \(t\) be elements of \(S^{\prime}\) and let \(P\) and \(Q\) be elements of \(\mathcal{P}^{\prime}\) such that \(P\cap S^{\prime}\) contains both \(s\) and \(t\) and \(Q\) contains both \(s^{*}\) and \(t^{*}\). We will only show that \(s\vee^{\prime}t\) is defined, the fact that \(s^{*}\vee^{\prime}t^{*}\) is also defined then follows from swapping the roles of \(P\) and \(Q\). As \(S\) is orderly with respect to \(\mathcal{P}\), \(s\lor t\) is defined and thus contained in \(P\). In particular \(s\lor t\) is not co-trivial. Consider the
case that \(s\lor t=s\). In that case \(t\leq s\). As both \(s\) and \(t\) are contained in \(P\) and are non-degenerate, \(t\neq s^{*}\) and thus \(t\leq^{\prime}s\). Hence \(s\lor^{\prime}t\) is defined and we are done. Similarly if \(s\lor t=t\) then we are done. So we may assume that \(s<s\lor t\) and \(t<s\lor t\). As \(s\) is not trivial, \(s\lor t\) is neither degenerate nor trivial, and thus \(s\lor t\) is contained in \(P\cap S^{\prime}\). Again, as \(s\) and \(s\lor t\) are both contained in \(P\) and non-degenerate, \(s\neq(s\lor t)^{*}\) and thus \(s\leq^{\prime}s\lor t\). Similarly \(t\leq^{\prime}s\lor t\). Finally, let \(p\) be another upper bound of \(s\) and \(t\) in \(\leq^{\prime}\) with \(s\lor t\neq p\). Then \(s<s\lor t<p\), so if \(s\lor t=p^{*}\) then \(s\) is trivial. But \(s\) is not trivial, and thus \(s\lor t\neq p^{*}\) so \(s\lor t\leq^{\prime}p\). Hence \(s\lor t\) is the supremum of \(s\) and \(t\) with respect to \(\leq^{\prime}\) and so \(s\lor^{\prime}t\) is defined.
So we have shown that we can make a separation system with corner map and a set of profiles \(\mathcal{P}\) regular. Now we have to show that if we find a tree set of the regularization that distinguishes \(\mathcal{P}^{\prime}\), then we can translate it back to a nested subset of the original separation system distinguishing \(\mathcal{P}\).
**Lemma 4.6**.: _Let \((S,\leq,^{*},\lor)\) be a separation system with a corner map and let \(\mathcal{P}\) be a set of profiles of that separation system. Denote the regularization of the essential core of the separation system by \((S^{\prime},\leq^{\prime},^{*^{\prime}},\lor^{\prime})\). If \(T\) is a tree set contained in \(S^{\prime}\) distinguishing \(\{P\cap S^{\prime}\colon P\in\mathcal{P}\}\), then \(T\) is also a tree set of \(S\) and distinguishes \(\mathcal{P}\)._
Proof.: Let \(T\) be a tree set contained in \(S^{\prime}\) that distinguishes \(\{P\cap S^{\prime}\colon P\in\mathcal{P}\}\). Then \(T\) is also a nested subset of \(S\) and a tree set of \(\leq\). Every profile contains every trivial and degenerate separation, so every separation in \(S\) distinguishing two profiles in \(\mathcal{P}\) is also contained in \(S^{\prime}\). Hence, if \(P\) and \(Q\) are distinct elements of \(\mathcal{P}\), then \(P\cap S^{\prime}\) and \(Q\cap S^{\prime}\) are distinct profiles of \(S^{\prime}\) and thus \(P\cap S^{\prime}\) and \(Q\cap S^{\prime}\) are distinguished by \(T\). So \(T\) distinguishes all elements of \(\mathcal{P}\).
Let us now summarise the results of this section so far.
**Lemma 4.7**.: _Let \((S,\leq,^{*},\lor)\) be a separation system with corner map, and let \(\mathcal{P}\) be a set of profiles of this separation system. Then there is a regular separation system with corner map \((S^{\prime},\leq^{\prime},^{*^{\prime}},\lor^{\prime})\) and set of profiles \(\mathcal{P}^{\prime}\) such that every tree set distinguishing \(\mathcal{P}^{\prime}\) in \(S^{\prime}\) also distinguishes \(\mathcal{P}\) in \(S\). Moreover, if \(S\) is orderly with respect to \(\mathcal{P}\) then \(S^{\prime}\) is orderly with respect to \(\mathcal{P}^{\prime}\), and if the elements of \(\mathcal{P}\) are closed with respect to \(\leq\) then the elements of \(\mathcal{P}^{\prime}\) are closed with respect to \(\leq^{\prime}\)._
So now we can show that in Theorem 3.6 the regularity condition can be dropped. Note that the only properties of the universe surrounding \(S\) that are used in the proof are those encoded by the induced corner map. Therefore, Theorem 3.6 also holds (with the same proof) if \(S\) is not a subsystem of a universe but instead has a corner map.
So in particular we can prove the following generalisation of Theorem 3.6.
**Corollary 4.8**.: _Let \((S,\leq,^{*},\lor)\) be a separation system with a corner map that is orderly with respect to a set \(\mathcal{P}\) of closed profiles. Then there is a tree set \(T\) with the following properties:_
1. _Any two different elements of_ \(\mathcal{P}\) _are distinguished by some element of_ \(T\)_._
2. _Any element of_ \(T\) _distinguishes some elements of_ \(\mathcal{P}\)_._
3. _In the set_ \(P\cap T\) _for every_ \(P\in\mathcal{P}\) _every separation lies below some maximal separation._
Proof.: By Lemma 4.5, the regularization \((S^{\prime},\leq^{\prime},^{*^{\prime}},\lor^{\prime})\) is orderly with respect to the set of profiles \(\mathcal{P}^{\prime}:=\{P\cap S^{\prime}\colon P\in\mathcal{P}\}\) and \(\mathcal{P}^{\prime}\) is closed in \(S^{\prime}\). We can now obtain
by Theorem 3.6 a tree set \(T\) in \(S^{\prime}\) distinguishing the elements of \(\mathcal{P}^{\prime}\); by Lemma 4.6, \(T\) is also a tree set of \(S\) distinguishing \(\mathcal{P}\), as required.
## 5. A tangle-tree theorem for submodular separation systems
In this section, we will prove the following theorem. Recall that a profile of a submodular universe is defined to be a \(k\)-profile of the universe for some \(k\in\mathbb{N}\).
**Theorem 5.1** (main theorem).: _Let \(U\) be a submodular universe and let \(\mathcal{P}\) be a collection of distinguishable regular robust closed profiles. Then there is a tree set \(T\) that efficiently distinguishes \(\mathcal{P}\) such that every \(t\in T\) efficiently distinguishes two profiles in \(\mathcal{P}\)._
**Remark 5.2**.: The tree set \(T\) constructed in the proof of Theorem 5.1 additionally has the property that for every \(P\in\mathcal{P}\), every element of \(T\cap P\) is less than or equal to a maximal element of \(T\cap P\).
We are going to recursively construct tree sets \(T_{k}\) whose union then efficiently distinguishes \(\mathcal{P}\). To do this, let \(\mathcal{P}_{k}\) be the set of profiles of order at most \(k\) that are induced by elements of \(\mathcal{P}\). For the construction of a tree set \(T_{k+1}\) distinguishing the elements of \(\mathcal{P}_{k+1}\) we will employ the following two-step proof strategy, which is common in proofs of tangle-tree theorems, for example in [7]. First, for a \(k\)-profile \(P\in\mathcal{P}_{k}\), Theorem 3.6 is applied to the set of all \((k+1)\)-profiles in \(\mathcal{P}_{k+1}\) whose induced \(k\)-profile is \(P\) and to a carefully chosen subuniverse of \(U\) to obtain a tree set \(T_{P}\). Second, it will be shown that \(T_{k}\) together with all the tree sets \(T_{P}\) is a tree set that efficiently distinguishes \(\mathcal{P}_{k+1}\).
More precisely, we are going to show by induction that for every \(k\in\mathbb{N}\) there is a tree set \(T_{k}\) with the following properties:
* \(T_{k}\) is a subset of \(S_{k}\), and if \(T_{k-1}\) exists then \(T_{k-1}\subseteq T_{k}\).
* Every element of \(T_{k}\) distinguishes two profiles in \(\mathcal{P}_{k}\) efficiently.
* \(T_{k}\) distinguishes \(\mathcal{P}_{k}\) efficiently.
* For every \(Q\in\mathcal{P}_{k}\), every element of \(Q\cap T_{k}\) is less than or equal to a maximal element of \(Q\cap T_{k}\).
As the only \(0\)-profile is the empty set we define \(T_{0}\) to be the empty set. Assume that \(T_{k}\) is already defined and we now want to define \(T_{k+1}\). For each \(k\)-profile \(P\in\mathcal{P}_{k}\) let \(\mathcal{Q}_{P}\) be the set of all \((k+1)\)-profiles in \(\mathcal{P}_{k+1}\) whose induced \(k\)-profile is \(P\), \(N_{P}\) the set of maximal elements of \(P\cap T_{k}\) and \(U_{P}\) the set of all separations in \(U\) towards which all separations of \(N_{P}\) point. Note that \(U_{P}\) is a universe and closed under taking infima and suprema of chains of bounded order, and that as a result the set of \(k\)-separations of \(U_{P}\) is structurally submodular and closed under taking suprema of chains of bounded order. Then every profile in \(\mathcal{Q}_{P}\) induces a closed \((k+1)\)-profile of \(U_{P}\).
We will want to apply the results of the previous section, in particular Theorem 3.6, to \(U_{P}\) and the profiles of \(U_{P}\) induced by \(\mathcal{Q}_{P}\). In order to do so, we need to show that the separation system \(S_{k}\) of \(U_{P}\) is orderly with respect to \(\mathcal{Q}_{P}\). That follows from the following, slightly more general statement.
**Lemma 5.3**.: _Let \(U\) be a submodular universe, \(k\) a non-negative integer and \(\mathcal{P}\) a set of \(k+1\)-profiles that all have the same induced \(k\)-profile. Then the set of \(k\)-separations of \(U\) is orderly with respect to \(\mathcal{P}\)._
Proof.: Assume \(s\) and \(t\) are \(k\)-separations of \(U\) and \(P\) and \(Q\) are elements of \(\mathcal{P}\) such that \(\{s,t\}\subseteq P\) and \(\{s^{*},t^{*}\}\subseteq Q\). By submodularity, one of \(s\lor t\) and \(s\wedge t\) is also a \(k\)-separation, assume without loss of generality that \(s\lor t\) is a \(k\)-separation. Then \(s\lor t\) is also contained in \(P\).
First consider the case that \(s\lor t\) is not contained in \(Q\). Then \(s\lor t\) distinguishes \(P\) and \(Q\), and thus has order exactly \(k\). Hence by submodularity also \(s\wedge t\) is a \(k\)-separation and we are done.
So consider the case that \(s\lor t\in Q\). As a separation system with a degenerate separation does not have a profile, \(s\) is not contained in \(Q\). Thus by consistency \(s\leq s\lor t\in Q\) implies \(s=(s\lor t)^{*}\). Similarly \(t=(s\lor t)^{*}\), so \(s=t\) and the lemma holds.
Now we will show that indeed all profiles in \(\mathcal{Q}_{P}\) can be distinguished by separations of \(U_{P}\).
**Lemma 5.4**.: _All profiles in \(\mathcal{Q}_{P}\) are distinguished by separations of \(U_{P}\) of order \(k\)._
Proof.: Let \(f_{\mathcal{Q}_{P}}:U\to 2^{\mathcal{Q}_{P}}\) be defined as in Section 3. It suffices to show that if \(r\) is a separation such that \(f_{\mathcal{Q}_{P}}(r)\) is neither \(\emptyset\) nor \(\mathcal{Q}_{P}\), then there is an element of \(U_{P}\) that has the same image under \(f_{\mathcal{Q}_{P}}\) as \(r\). Let \(X\) be the set of separations whose image under \(f_{\mathcal{Q}_{P}}\) is \(f_{\mathcal{Q}_{P}}(r)\). Several times in this proof we will use the fact that \(f_{\mathcal{Q}_{P}}\) preserves infima and suprema, and in particular any two elements of \(X\) have an infimum and a supremum.
If \(N_{P}\) is empty, then \(r\) itself is a separation of \(U_{P}\) and we are done, so assume otherwise. By Lemma 3.4, \(X\) has a smallest element \(s\). Let \(N^{\prime}\) be the set of all elements \(n\) of \(N_{P}\) such that \(n\leq s^{*}\). Define \(X^{\prime}\) to contain all elements \(x\in X\) with \(n\leq x^{*}\) for all \(n\in N^{\prime}\). Just as in the proof of Lemma 3.4, the fact that the profiles in \(\mathcal{Q}_{P}\) are closed and Zorn's Lemma imply that \(X^{\prime}\) has a maximal element \(t\). See Fig. 2 for an illustration.
Figure 2. The separation \(s\) is less than the separations \(n_{1}^{*}\) and \(n_{2}^{*}\), which is equivalent to \(n_{1}\leq s^{*}\) and \(n_{2}\leq s^{*}\), and the separation \(t\) is additionally bigger than \(n_{3}\).
We will show that all elements of \(N_{P}\) point towards \(t\). In order to do so let \(n\) be an element of \(N_{P}\). If there is \(q\in X\) such that \(q\leq n^{*}\), then \(s\wedge q\in X\). By minimality of \(s\) this implies \(s=s\wedge q\), so \(s\leq q\leq n^{*}\) and thus \(n\in N^{\prime}\). Hence \(n\) points towards \(t\). The other case is where there is no such separation \(q\); in particular \(t\wedge n^{*}\) is not a candidate for \(q\). If \(t\wedge n^{*}\) is a \(\leq k\)-separation, then it is contained in \(X\), because \(n\) is contained in all elements of \(\mathcal{Q}_{P}\). So the fact that \(t\wedge n^{*}\) is not contained in \(X\) implies that it is not a \(\leq k\)-separation. Because \(\mathcal{P}\) is robust and consistent, this implies that \(t\lor n=(t^{*}\wedge n^{*})^{*}\) efficiently distinguishes the same elements of \(\mathcal{Q}_{P}\) that \(t\) distinguishes. Then \(t\lor n\) is a \(k\)-separation and thus contained in \(X\), and all \(n^{\prime}\in N^{\prime}\) point towards \(t\lor n\), so by maximality of \(t\) in \(X^{\prime}\) we have \(t\lor n=t\) and hence \(n\leq t\). So also in this case \(n\) points towards \(t\).
So in order to distinguish the elements of \(\mathcal{Q}_{P}\), it suffices to distinguish their intersections with \(U_{P}\). We will do that by applying Theorem 3.6 to the \(k\)-separations of \(U_{P}\) and the profiles induced by \(\mathcal{Q}_{P}\). Call the obtained tree set \(T_{P}\). Let \(T_{k+1}\) be the union of \(T_{k}\) and all tree sets \(T_{P}\) where \(P\) is a \(k\)-profile contained in \(\mathcal{P}_{k}\).
**Lemma 5.5**.: \(T_{k+1}\) _is a tree set that distinguishes \(\mathcal{P}_{k+1}\) efficiently, and in which every separation distinguishes some elements of \(\mathcal{P}_{k+1}\)._
Proof.: Let \(P\) be a \(k\)-profile in \(\mathcal{P}_{k}\). As for every separation \(n\) in \(P\cap T_{k}\) there is an element \(n^{\prime}\) of \(N_{P}\) such that \(n\leq n^{\prime}\), \(T_{k}\) is nested with every separation in \(U_{P}\) and thus with \(T_{P}\). Also, if \(P\) and \(P^{\prime}\) are distinct \(k\)-profiles in \(\mathcal{P}_{k}\), then they are distinguished by some separation \(n\) of \(T_{k}\), so they are also distinguished by some separation of \(N_{P}\) which then witnesses that every separation in \(T_{P^{\prime}}\) is nested with every separation in \(T_{P}\). So \(T_{k+1}\) is nested. Also by construction every element of \(T_{k+1}\) distinguishes two elements of \(\mathcal{P}_{k+1}\) and thus is neither small nor cosmall. Hence every element of \(T_{k+1}\) is neither trivial nor co-trivial nor degenerate.
In order to show that every two elements of \(\mathcal{P}_{k+1}\) are distinguished efficiently, let \(P\) and \(Q\) be two such elements that can be distinguished. If \(P\) and \(Q\) can be distinguished by a separation of order at most \(k-1\), then they induce distinguishable elements of \(\mathcal{P}_{k}\) which are thus efficiently distinguished by some separation \(t\) of \(T_{k}\). Then \(t\) also efficiently distinguishes \(P\) and \(Q\). So we are left with the case that \(P\) and \(Q\) cannot be distinguished by a separation of order less than \(k\), which implies that they are both \((k+1)\)-profiles and induce the same \(k\)-profile \(P^{\prime}\). But then \(P\) and \(Q\) are distinguished by a separation in \(T_{P^{\prime}}\), and that separation distinguishes them efficiently.
So in order to complete the proof of Theorem 5.1, we only have to prove the following statement.
**Lemma 5.6**.: _For every \(Q\in\mathcal{P}_{k+1}\), every element of \(Q\cap T_{k+1}\) is less than or equal to a maximal element of \(Q\cap T_{k+1}\)._
Proof.: Let \(Q\in\mathcal{P}_{k+1}\) and let \(s\in Q\cap T_{k+1}\). By induction it suffices to consider the case that \(Q\) has order \(k+1\). Denote the induced \(k\)-profile of \(Q\) by \(Q^{\prime}\). If \(s\) is not contained in \(T_{Q^{\prime}}\), then it is less than or equal to an element of \(N_{Q^{\prime}}\), and every element of \(N_{Q^{\prime}}\) is either maximal in \(T_{k+1}\cap Q\) or less than a separation in \(Q\cap T_{Q^{\prime}}\). So it suffices to consider the case that \(s\in T_{Q^{\prime}}\). But in this case, by Theorem 3.6, \(s\) is less than or equal to a maximal element of \(T_{Q^{\prime}}\cap Q\) which then also is a maximal element of \(Q\cap T_{k+1}\).
Proof of Theorem 5.1.: Let \(T\) be the union of all the \(T_{k}\) as defined in this section. Then \(T\) is a tree set which distinguishes all distinguishable profiles in \(\mathcal{P}\) efficiently, and every separation in it distinguishes two elements of \(\mathcal{P}\) efficiently. |
2305.19579 | On a structure of non-wandering set of an $Ω$-stable
3-diffeomorphism possessing a hyperbolic attractor | This paper belongs to a series of papers devoted to the study of the
structure of the non-wandering set of an A-diffeomorphism. We study such set
$NW(f)$ for an $\Omega$-stable diffeomorphism $f$, given on a closed connected
3-manifold $M^3$. Namely, we prove that if all basic sets in $NW(f)$ are
trivial except attractors, then every non-trivial attractor is either
one-dimensional non-orientable or two-dimensional expanding. | Marina Barinova, Olga Pochinka, Evgeniy Yakovlev | 2023-05-31T05:59:04Z | http://arxiv.org/abs/2305.19579v1 | ###### Abstract
This paper belongs to a series of papers devoted to the study of the structure of the non-wandering set of an A-diffeomorphism. We study such set \(NW(f)\) for an \(\Omega\)-stable diffeomorphism \(f\), given on a closed connected \(3\)-manifold \(M^{3}\). Namely, we prove that if all basic sets in \(NW(f)\) are trivial except attractors, then every non-trivial attractor is either one-dimensional non-orientable or two-dimensional expanding.
**On a structure of non-wandering set of an \(\Omega\)-stable \(3\)-diffeomorphism possessing a hyperbolic attractor**
Marina Barinova, Olga Pochinka, Evgeniy Yakovlev, HSE University
## 1 Introduction and formulation of results
Let \(M^{n}\) be a smooth closed connected \(n\)-manifold with a Riemannian metric \(d\) and \(f:M^{n}\to M^{n}\) be a diffeomorphism. A set \(\Lambda\subset M^{n}\) is called an _invariant set_ if \(f(\Lambda)=\Lambda\). An invariant compact set \(\Lambda\subset M^{n}\) is called _hyperbolic_ if there is a continuous \(Df\)-invariant splitting of the tangent bundle \(T_{\Lambda}M^{n}\) into _stable_ and _unstable subbundles_\(E^{s}_{\Lambda}\oplus E^{u}_{\Lambda}\), \(\dim E^{s}_{x}+\dim E^{u}_{x}=n\) (\(x\in\Lambda\)) such that for \(i>0\) and for some fixed \(C_{s}>0\), \(C_{u}>0\), \(0<\lambda<1\)
\[\|Df^{i}(v)\|\leq C_{s}\lambda^{i}\|v\|,\quad v\in E^{s}_{\Lambda},\]
\[\|Df^{-i}(w)\|\leq C_{u}\lambda^{i}\|w\|,\quad w\in E^{u}_{\Lambda}.\]
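As a simple illustration (a standard example, recalled here only for orientation), consider the Anosov automorphism \(L_{A}:\mathbb{T}^{2}\to\mathbb{T}^{2}\) induced by the matrix
\[A=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}.\]
Here the whole torus \(\mathbb{T}^{2}\) is a hyperbolic set: \(E^{u}_{x}\) and \(E^{s}_{x}\) are the constant line fields spanned by the eigenvectors of \(A\) for the eigenvalues \(\lambda_{u}=\frac{3+\sqrt{5}}{2}>1\) and \(\lambda_{s}=\frac{3-\sqrt{5}}{2}\in(0,1)\), and the estimates above hold with \(C_{s}=C_{u}=1\) and \(\lambda=\lambda_{s}=\lambda_{u}^{-1}\).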
The hyperbolic structure of \(\Lambda\) implies the existence of stable and unstable manifolds \(W^{s}_{x}\), \(W^{u}_{x}\) respectively for any point \(x\in\Lambda\):
\[W^{s}_{x}=\{y\in M^{n}:\lim_{j\to+\infty}d(f^{j}(x),f^{j}(y))=0\},\]
\[W^{u}_{x}=\{y\in M^{n}:\lim_{j\to+\infty}d(f^{-j}(x),f^{-j}(y))=0\},\]
which are smooth injective immersions of the \(E^{s}_{x}\) and \(E^{u}_{x}\) into \(M^{n}\). Moreover, \(W^{s}_{x}\), \(W^{u}_{x}\) are tangent to \(E^{s}_{x}\) and \(E^{u}_{x}\) at \(x\) respectively. For \(r>0\) we will denote by \(W^{s}_{x,r}\), \(W^{u}_{x,r}\) the immersions of discs on the subbundles \(E^{s}_{x}\), \(E^{u}_{x}\) of the radius \(r\).
Recall that a point \(x\in M^{n}\) is _non-wandering_ if for any neighborhood \(U\) of \(x\) we have \(f^{n}(U)\cap U\neq\emptyset\) for infinitely many integers \(n\). Then
\(NW(f)\), the _non-wandering set_ of \(f\), defined as the set of all non-wandering points, is an \(f\)-invariant closed set.
If the non-wandering set \(NW(f)\) of \(f\) is hyperbolic and periodic points are dense in \(NW(f)\) then \(f\) is called _an A-diffeomorphism_[1]. In this case the non-wandering set is a finite union of pairwise disjoint sets, called _basic sets_
\[NW(f)=\Lambda_{1}\sqcup\cdots\sqcup\Lambda_{m},\]
each of which is compact, invariant and topologically transitive. A basic set \(\Lambda_{i}\) of an A-diffeomorphism \(f:M^{n}\to M^{n}\) is called _trivial_ if it coincides with a periodic orbit and _non-trivial_ in the opposite case.
By [2], every non-trivial basic set \(\Lambda_{i}\), similarly to a periodic orbit, is uniquely expressed as a finite union of compact subsets
\[\Lambda_{i}=\Lambda_{i_{1}}\sqcup\cdots\sqcup\Lambda_{i_{q_{i}}},q_{i}\geqslant 1\]
such that \(f^{q_{i}}(\Lambda_{i_{j}})=\Lambda_{i_{j}},f(\Lambda_{i_{j}})=\Lambda_{i_{j+1}}\), \(j\in\{1,\ldots,q_{i}\}\left(\Lambda_{i_{q_{i}+1}}=\Lambda_{i_{1}}\right)\). These subsets \(\Lambda_{i_{q_{i}}}\), \(q_{i}\geqslant 1\) are called _periodic components_ of the set \(\Lambda_{i}\)1. For every point \(x\) of a periodic component \(\Lambda_{i_{j}}\) the set \(W^{s}_{x}\cap\Lambda_{i_{j}}\) (\(W^{u}_{x}\cap\Lambda_{i_{j}}\)) is dense in \(\Lambda_{i_{j}}\).
Footnote 1: R. Bowen [2] called these components \(C\)-dense.
Without loss of generality, everywhere below we will assume that \(\Lambda_{i}\) consists of a unique periodic component and, in addition, \(f|_{W^{u}_{\Lambda_{i}}}\) preserves orientation if \(\Lambda_{i}\) is trivial.
A sequence of basic sets \(\Lambda_{1},\ldots,\Lambda_{l}\) of an \(A\)-diffeomorphism \(f:M^{n}\to M^{n}\) is called _a cycle_ if \(W^{s}_{\Lambda_{i}}\cap W^{u}_{\Lambda_{i+1}}\neq\emptyset\) for \(i=1,\ldots,l\), where \(\Lambda_{l+1}=\Lambda_{1}\). A-diffeomorphisms without cycles form the set of \(\Omega\)_-stable_ diffeomorphisms; if, in addition, the stable and the unstable manifolds of every non-wandering point intersect transversally then \(f\) is _structurally stable_ (see, for example, [3]).
A non-trivial basic set \(\Lambda_{i}\) is called _orientable_ if for any point \(x\in\Lambda_{i}\) and any fixed numbers \(\alpha>0\), \(\beta>0\) the intersection index \(W^{u}_{x,\alpha}\cap W^{s}_{x,\beta}\) is the
same at all intersection points (\(+1\) or \(-1\)) [5]. Otherwise, the basic set is called _non-orientable_.
Footnote 1: The _non-orientable_ is a non-orientable (non-orientable) \(\Lambda_{i}\).
A basic set \(\Lambda_{i}\) is called an _attractor_ if there exists a compact neighborhood \(U_{\Lambda_{i}}\) (_a trapping neighborhood_) of \(\Lambda_{i}\) such that \(f(U_{\Lambda_{i}})\subset\operatorname{int}U_{\Lambda_{i}}\) and \(\Lambda_{i}=\bigcap\limits_{i=0}^{\infty}f^{i}(U_{\Lambda_{i}})\). Due to [6], a non-trivial attractor \(\Lambda_{i}\) of \(f\) is said to be _expanding_ if \(\dim\,\Lambda_{i}=\dim\,W_{x}^{u}\), \(x\in\Lambda_{i}\).
The main result of this paper is the following.
**Theorem 1**.: _Let \(f:M^{3}\to M^{3}\) be an \(\Omega\)-stable diffeomorphism whose basic sets are trivial except attractors. Then every non-trivial attractor is either one-dimensional non-orientable or two-dimensional expanding._
Notice that the attractors of both types described in Theorem 1 are realized. In particular, Figure 1 shows a phase portrait of a structurally stable diffeomorphism of a 3-sphere whose non-wandering set consists of a one-dimensional non-orientable Plykin attractor, four saddle points with two-dimensional unstable manifolds and two sources. The DA-diffeomorphism of the 3-torus on Figure 2 is an example of a combination of an orientable two-dimensional expanding attractor with a source in the non-wandering set of a structurally stable diffeomorphism. An example of a diffeomorphism with a non-orientable 2-dimensional expanding attractor will be constructed in section 6.
Figure 1: \(\Omega\)-stable diffeomorphism \(f:\mathbb{S}^{3}\to\mathbb{S}^{3}\) with the unique non-trivial basic set, which is a Plykin attractor
_Acknowledgments_. This work was supported by grant 22-11-00027, except section 2.3, whose results were supported by the Laboratory of Dynamical Systems and Applications NRU HSE, by the Ministry of Science and Higher Education of the Russian Federation (ag. 075-15-2022-1101).
## 2 Attractor, index of a hyperbolic point, filtration
### Attractors of an A-diffeomorphism \(f:M^{3}\to M^{3}\)
Let \(f:M^{3}\to M^{3}\) be an \(A\)-diffeomorphism and \(\Lambda_{i}\) be its basic set. Then
\[\mbox{dim }W_{x}^{u}+\mbox{dim }W_{x}^{s}=3,\,x\in\Lambda_{i}.\]
Figure 2: DA-map on \(\mathbb{T}^{3}\)
If \(\Lambda_{i}\) is non-trivial then, moreover, dim \(W_{x}^{u}>0\), dim \(W_{x}^{s}>0\).
Now let \(\Lambda_{i}\) be a non-trivial attractor. It follows from [7] that
\[\Lambda_{i}=\bigcup_{x\in\Lambda_{i}}W_{x}^{u}\]
and, hence, dim \(\Lambda_{i}>0\).
If dim \(\mathbf{\Lambda_{i}=3}\) then \(\Lambda_{i}=M^{3}\cong\mathbb{T}^{3}\)[8].
If dim \(\mathbf{\Lambda_{i}=2}\) then \(\Lambda_{i}\) is either expanding (as in Figure 2) or an _Anosov torus_ (\(f|_{\Lambda_{i}}\) is conjugate to an Anosov algebraic automorphism of the torus \(\mathbb{T}^{2}\)) [9], [10]. Moreover, an expanding attractor \(\Lambda_{i}\) is locally homeomorphic to the product of \(\mathbb{R}^{2}\) with a Cantor set [11, 12]. There exist such attractors of both types, orientable and non-orientable [13]. By [10] every Anosov torus \(\Lambda_{i}\) is locally flatly (possibly non-smoothly [14]) embedded in \(M^{3}\) and, hence, it is always orientable and has a trapping neighborhood \(U_{\Lambda_{i}}\) which is homeomorphic to \(\mathbb{T}^{2}\times[-1,1]\).
If dim \(\mathbf{\Lambda_{i}=1}\) then \(\Lambda_{i}\) is automatically expanding, derived from an expansion on a 1-dimensional branched manifold [6], and is an intersection of nested handlebodies [15]. Thus, any one-dimensional attractor \(\Lambda_{i}\) of an A-diffeomorphism \(f:M^{3}\to M^{3}\) has a trapping neighborhood \(U_{\Lambda_{i}}\) which is a handlebody. There exist such attractors of both types, orientable and non-orientable; it is enough to consider \(f=f_{DA}\times f_{NS}\) (see Figure 3) and \(f=f_{Pl}\times f_{NS}\), where \(f_{DA}:\mathbb{T}^{2}\rightarrow\mathbb{T}^{2}\) is derived from an Anosov diffeomorphism, \(f_{NS}:\mathbb{S}^{1}\rightarrow\mathbb{S}^{1}\) is a "source-sink" diffeomorphism, and \(f_{Pl}:\mathbb{S}^{2}\rightarrow\mathbb{S}^{2}\) is a diffeomorphism with the Plykin attractor (as in Figure 1) and four sources.
Figure 3: A one-dimensional attractor for a diffeomorphism \(f_{DA}\times f_{NS}\)
The most famous one-dimensional attractor is the _Smale solenoid_ (see Figure 4), which appears as the intersection of the nested tori \(f^{k}(\mathbb{D}^{2}\times\mathbb{S}^{1})\), \(k\in\mathbb{N}\), for \(f(d,z)=(d/10+e^{2\pi iz}/2,\,2z)\), where \(\mathbb{D}^{2}\) is regarded as the unit disc in \(\mathbb{C}\). An arbitrary one-dimensional attractor is sometimes called a _Smale-Williams solenoid_.
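Let us briefly check that the formula above (the standard one, with the translation term \(e^{2\pi iz}/2\)) defines an embedding of the solid torus into its own interior. The disc \(\mathbb{D}^{2}\times\{z\}\) is mapped into the disc of radius \(1/10\) centred at the point \(e^{2\pi iz}/2\) inside \(\mathbb{D}^{2}\times\{2z\}\); since \(|d/10+e^{2\pi iz}/2|\leqslant 1/10+1/2<1\), the image lies in the interior of \(\mathbb{D}^{2}\times\mathbb{S}^{1}\). The two discs lying over a given angle \(2z\) come from \(z\) and \(z+1/2\) and are centred at the antipodal points \(\pm e^{2\pi iz}/2\), whose distance \(1\) exceeds the sum \(1/5\) of their radii, so they are disjoint and \(f\) is injective.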
It is well known that the presence of an attractor with certain properties in a non-wandering set of an A-diffeomorphism can determine both the character of the remaining basic sets and the topology of the ambient manifold.
* If \(f:M^{3}\to M^{3}\) is a structurally stable diffeomorphism whose non-wandering set \(NW(f)\) contains a two-dimensional expanding attractor \(\Lambda_{i}\), then it is orientable, \(M^{3}\cong\mathbb{T}^{3}\) and the set \(NW(f)\setminus\Lambda_{i}\) consists of a finite number of isolated sources and saddles [16], [13].
* If \(f:M^{3}\to M^{3}\) is an A-diffeomorphism whose every basic set is two-dimensional then its attractors are either all Anosov tori or all expanding [17].
* If \(f:M^{3}\to M^{3}\) is a structurally stable diffeomorphism whose every basic set is two-dimensional then its attractors are all Anosov tori and \(M^{3}\) is a mapping torus [18].
* An orientable manifold \(M^{3}\) admits an A-diffeomorphism \(f:M^{3}\to M^{3}\) with the non-wandering set which is a union of finitely many Smale solenoids if and only if \(M^{3}\) is a Lens space \(L_{p,q}\), \(p\neq 0\). Every such a diffeomorphism is not structurally stable [19].
Figure 4: Smale’s solenoid
### Orientability of the basic set and index of the hyperbolic point
In this section let \(M\) be a compact smooth \(n\)-manifold (possibly with a non-empty boundary), let \(f:M\to f(M)\subset M\) be a smooth embedding of \(M\) into itself and let \(Fix(f)\) be its set of fixed points.
Let \(p\in Fix(f)\) be an isolated hyperbolic point. By [1, Proposition 4.11] the _index_\(I(p)=I(p,f)\) of \(p\) is defined by the formula
\[I(p)=(-1)^{\dim\,W^{u}_{p}}\Delta_{p},\]
where \(\Delta_{p}=+1\) if \(f\) preserves orientation on \(W^{u}_{p}\) and \(\Delta_{p}=-1\) if \(f\) reverses it.
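For instance (a standard illustration), if \(p\) is a hyperbolic fixed point of a diffeomorphism of the plane with \(Df_{p}=\operatorname{diag}(2,1/2)\), then \(\dim\,W^{u}_{p}=1\), \(\Delta_{p}=+1\) and \(I(p)=-1\); if instead \(Df_{p}=\operatorname{diag}(-2,1/2)\), then \(\Delta_{p}=-1\) and \(I(p)=+1\). In both cases this agrees with the equality \(I(p)=\operatorname{sign}\det(\operatorname{id}-Df_{p})\), which holds for every hyperbolic fixed point.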
**Lemma 2.1**.: _If \(\Lambda_{i}\) is an orientable hyperbolic attractor with \(\dim\,W^{u}_{x}=1,x\in\Lambda_{i}\) for \(f\) then \(I(p)=I(q)\) for any \(p,q\in(Fix(f)\cap\Lambda_{i})\)._
Proof.: Suppose the contrary: there are different points \(p,q\in(Fix(f)\cap\Lambda_{i})\) such that \(I(p)=-I(q)\). As \(p,\,q\) belong to the same basic set \(\Lambda_{i}\), we have \(\dim\,W^{u}_{p}=\dim\,W^{u}_{q}\) and, hence, \(\Delta_{p}=-\Delta_{q}\). Let us assume for definiteness that \(\Delta_{q}=-1\) and \(\Delta_{p}=+1\). As \(\Lambda_{i}\) is an attractor, \(W^{u}_{p}\), \(W^{u}_{q}\subset\Lambda_{i}\); moreover, \(\operatorname{cl}W^{u}_{p}=\operatorname{cl}W^{u}_{q}=\Lambda_{i}\). Denote by \(\ell^{1}_{p},\ell^{2}_{p}\); \(\ell^{1}_{q},\ell^{2}_{q}\) the connected components of the sets \(W^{u}_{p}\backslash p\); \(W^{u}_{q}\backslash q\). By [20] every such component is dense in \(\Lambda_{i}\). Due to hyperbolicity of \(\Lambda_{i}\) there is a point \(x_{1}\) of the transversal intersection \(\ell^{1}_{q}\cap W^{s}_{p}\) (see Figure 5).
As \(\Delta_{q}=-1\), the point \(x_{2}=f(x_{1})\) belongs to \(\ell^{2}_{q}\). Let \((y_{1},z_{1})\subset\ell^{1}_{q}\) be a neighbourhood of the point \(x_{1}\) and \(y_{2}=f(y_{1}),\,z_{2}=f(z_{1})\). Then the arc \((y_{2},z_{2})\subset\ell^{2}_{q}\) is a neighbourhood of the point \(x_{2}\). By the orientability of \(\Lambda_{i}\) we get that \(y_{1}\), \(y_{2}\) are separated by \(W^{s}_{p}\). By the \(\lambda\)-lemma (see, for example, [3]) the iterations of \((y_{1},z_{1}),\,(y_{2},z_{2})\) under \(f\) are \(C^{1}\)-close to \(W^{u}_{p}\). By continuity of \(f\) we conclude that \(f(\ell^{1}_{p})=\ell^{2}_{p}\). Thus, \(\Delta_{p}=-1\), which contradicts the assumption.
Denote by \(f_{*k}:H_{k}(M)\to H_{k}(M)\), \(k\in\{0,\ldots,n\}\) the induced automorphism of the \(k\)-th homology group \(H_{k}(M)\) of \(M\) with real coefficients. The number
\[\Lambda(f)=\sum_{k=0}^{n}(-1)^{k}\mathrm{tr}(f_{*k})\]
is called a _Lefschetz number_ of \(f\)[21].
Suppose \(f\) has only hyperbolic fixed points and their set \(Fix(f)\) is finite. The following equality is known as the _Lefschetz-Hopf theorem_.
\[\sum_{p\in Fix(f)}I(p)=\Lambda(f). \tag{1}\]
Denote by \(N_{m},\,m\in\mathbb{N}\) the number of points in \(Fix(f^{m})\). Let \(\lambda_{*k,j},\,j\in\{1,\ldots,\dim\,H_{k}(M)\}\) be eigenvalues of \(f_{*k}\). If \(I(p,f^{m})=I(q,f^{m})\) for any \(p,q\in Fix(f^{m})\) then the Lefschetz-Hopf theorem has the following form
\[N_{m}=\left|\sum_{k=0}^{n}(-1)^{k}\left(\sum_{j=1}^{\dim\,H_{k}(M)}\lambda_{*k,j}^{m}\right)\right|. \tag{2}\]
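As a standard illustration of (2) (not needed in the sequel), take \(f=L_{A}\) to be the Anosov automorphism of \(\mathbb{T}^{2}\) induced by \(A=\begin{pmatrix}2&1\\ 1&1\end{pmatrix}\). Every point of \(Fix(f^{m})\) is a hyperbolic saddle with \(\dim\,W^{u}=1\) and positive unstable eigenvalue, so all the indices \(I(p,f^{m})\) are equal to \(-1\). The eigenvalues of \(f_{*0}\), \(f_{*1}\), \(f_{*2}\) are \(1\); \(\lambda_{u},\lambda_{u}^{-1}\); \(1\) respectively, where \(\lambda_{u}=\frac{3+\sqrt{5}}{2}\), and (2) gives
\[N_{m}=\left|1-(\lambda_{u}^{m}+\lambda_{u}^{-m})+1\right|=\lambda_{u}^{m}+\lambda_{u}^{-m}-2,\]
which coincides with the direct count \(N_{m}=|\det(A^{m}-I)|\).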
Sometimes it is convenient to pass from homology groups to cohomology groups. Let us prove the following lemma for this aim.
**Lemma 2.2**.: _Let \(M\) be an \(n\)-dimensional orientable smooth manifold with boundary \(\partial M\), \(f:M\to M\) be a diffeomorphism, \(k\in\{0,1,\ldots,n\}\), \(f_{*}:H_{k}(M)\to H_{k}(M)\), \(\tilde{f}_{*}:H_{n-k}(M,\partial M)\to H_{n-k}(M,\partial M)\) and \(f^{*}:H^{k}(M)\to H^{k}(M)\) be induced automorphisms for groups with real coefficients. Then:_
* _if_ \(\lambda\) _is an eigenvalue for_ \(f_{*}\)_, then_ \(\tilde{\lambda}=\pm\lambda^{-1}\) _is an eigenvalue for_ \(\tilde{f}_{*}\)_;_
* _if_ \(\tilde{\lambda}\) _is an eigenvalue for_ \(\tilde{f}_{*}\)_, then_ \(\lambda=\pm\tilde{\lambda}^{-1}\) _is an eigenvalue for_ \(f^{*}\)
Figure 5: Illustration to the proof of Lemma 2.1
_In both cases the sign \(+\) corresponds to an orientation-preserving diffeomorphism and the sign \(-\) to an orientation-reversing one._
Proof.: According to the strong form of Poincaré-Lefschetz duality, the groups \(H_{k}(M)\) and \(H_{n-k}(M,\partial M)\) have bases \(e_{1},\ldots,e_{m}\) and \(\varepsilon_{1},\ldots,\varepsilon_{m}\), dual with respect to the intersection form \(\operatorname{Ind}:H_{k}(M)\times H_{n-k}(M,\partial M)\to\mathbb{R}\). The duality means that the following equalities take place
\[\operatorname{Ind}(e_{i},\varepsilon_{j})=\delta_{ij},\quad i,j=1,\ldots,m.\]
Let \(A\) and \(B\) be matrices of automorphisms \(f_{*}\) and \(\tilde{f}_{*}\) in the bases \(e_{1},\ldots,e_{m}\) and \(\varepsilon_{1},\ldots,\varepsilon_{m}\) correspondingly. Then
\[f_{*}(e_{i})=\sum_{s=1}^{m}a_{is}e_{s},\quad\tilde{f}_{*}(\varepsilon_{j})= \sum_{t=1}^{m}b_{jt}\varepsilon_{t}\]
Herewith
\[\operatorname{Ind}(f_{*}(e_{i}),\tilde{f}_{*}(\varepsilon_{j}))=\sum_{s,t=1} ^{m}a_{is}b_{jt}\operatorname{Ind}(e_{s},\varepsilon_{t})=\sum_{s,t=1}^{m}a_{ is}b_{jt}\delta_{st}=\sum_{s=1}^{m}a_{is}b_{js}. \tag{3}\]
On the other hand, since \(\deg f=\pm 1\), then
\[\operatorname{Ind}(f_{*}(e_{i}),\tilde{f}_{*}(\varepsilon_{j}))=\pm \operatorname{Ind}(e_{i},\varepsilon_{j})=\pm\delta_{ij}. \tag{4}\]
(3) and (4) imply \(B^{T}=\pm A^{-1}\). Therefore, the roots of the characteristic equations \(|A-\lambda E|=0\) and \(|B-\tilde{\lambda}E|=0\) are related by the equation \(\tilde{\lambda}=\pm\lambda^{-1}\). Thus, the first statement is proved.
For the Poincare-Lefschetz isomorphism \(l:H^{k}(M)\to H_{n-k}(M,\partial M)\) the following diagram is commutative
\[\begin{CD}H^{k}(M)@<{f^{*}}<{}<H^{k}(M)\\ @V{\pm l}V{}V@V{}V{l}V\\ H_{n-k}(M,\partial M)@>{\tilde{f}_{*}}>{}>H_{n-k}(M,\partial M).\end{CD} \tag{5}\]
Let \(v\in H_{n-k}(M,\partial M)\), \(v\neq 0\), \(\tilde{\lambda}\in\mathbb{R}\) and \(\tilde{f}_{*}(v)=\tilde{\lambda}v\). Then \(\tilde{f}_{*}^{-1}(v)=\tilde{\lambda}^{-1}v\). Set \(\alpha=l^{-1}(v)\). Since \(l\) is an isomorphism, \(\alpha\neq 0\). According to (5) we have
\[f^{*}(\alpha)=\pm l^{-1}\circ\tilde{f}_{*}^{-1}\circ l(\alpha)=\pm l^{-1} \circ\tilde{f}_{*}^{-1}(v)=\pm l^{-1}(\tilde{\lambda}^{-1}v)=\pm\tilde{ \lambda}^{-1}l^{-1}(v)=\pm\tilde{\lambda}^{-1}\alpha.\]
Thus, \(\lambda=\pm\tilde{\lambda}^{-1}\) is an eigenvalue of the automorphism \(f^{*}\) corresponding to the eigenvector \(\alpha\in H^{k}(M)\).
According to the lemma proved above, for the eigenvalues \(\lambda_{k,j}^{*},\,j\in\{1,\ldots,\dim\,H^{k}(M)\}\) of \(f_{k}^{*}\), and for \(f^{m}\) such that \(I(p,f^{m})=I(q,f^{m})\) for any \(p,q\in Fix(f^{m})\), the following equality takes place
\[N_{m}=\left|\sum_{k=0}^{n}(-1)^{n-k}\left(\sum_{j=1}^{\dim\,H^{k}(M)}\lambda_{k,j}^{*m}\right)\right|. \tag{6}\]
### Filtration
Let \(f:M^{n}\to M^{n}\) be an \(\Omega\)-stable diffeomorphism. As \(f\) has no cycles, \(\prec\) is a partial order relation on the basic sets
\[\Lambda_{i}\prec\Lambda_{j}\iff W_{\Lambda_{i}}^{s}\cap W_{\Lambda_{j}}^{u} \neq\emptyset.\]
Intuitively the definition means that "everything trickles down" towards "smaller elements". The partial order \(\prec\) extends to a total order, i.e. the basic sets can be enumerated \(\Lambda_{1},\ldots,\Lambda_{m}\) in accordance with the relation \(\prec\):
\[\mbox{if}\,\Lambda_{i}\prec\Lambda_{j},\,\mbox{then}\,i\leq j.\]
We pick a sequence of nested subsets of the ambient manifold \(M^{n}\) in the following way. Let the first subset of \(M^{n}\) be a neighborhood \(M_{1}\) of the basic set \(\Lambda_{1}\), let the next subset \(M_{2}\) be the union of \(M_{1}\) and some neighborhood of the unstable manifold of the element \(\Lambda_{2}\). If we continue this process we get the entire manifold \(M^{n}\). This construction gives the idea to the following notion of filtration.
A sequence \(M_{1},\ldots,M_{m-1}\) of compact \(n\)-submanifolds of \(M^{n}\), each having a smooth boundary, and such that \(M^{n}=M_{m}\supset M_{m-1}\supset\cdots\supset M_{1}\supset M_{0}=\emptyset\) is called a _filtration_ for a diffeomorphism \(f\) with its ordered basic sets \(\Lambda_{1}\prec\cdots\prec\Lambda_{m}\) if for each \(i=1,\ldots,m\) the following holds:
1. \(f(M_{i})\subset\mbox{int}\,M_{i}\);
2. \(\Lambda_{i}\subset\mbox{int}\,(M_{i}\setminus M_{i-1})\);
3. \(\Lambda_{i}=\bigcap\limits_{l\in\mathbb{Z}}f^{l}(M_{i}\setminus M_{i-1})\);
4. \(\bigcap\limits_{l\geq 0}f^{l}(M_{i})=\bigcup\limits_{j\leq i}W_{\Lambda_{j}}^{u}= \bigcup\limits_{j\leq i}cl(W_{\Lambda_{j}}^{u})\).
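For instance, for a "north-south" diffeomorphism of the sphere \(S^{n}\) with a hyperbolic sink \(\Lambda_{1}\) and a hyperbolic source \(\Lambda_{2}\), one may take \(M_{1}\) to be a small closed ball around the sink with \(f(M_{1})\subset\operatorname{int}M_{1}\) and \(M_{2}=S^{n}\); conditions 1-4 above are then checked directly (this simple example is only meant to illustrate the definition).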
Below we describe, following [22], interrelations between the actions of \(f\) on the cohomology groups \(H^{k}(M^{n})\), \(H^{k}(M_{i},M_{i-1})\) and the homology groups \(H_{k}(M^{n})\) with real coefficients. If an action on these groups is _nilpotent_ then all its eigenvalues equal zero, and if it is _unipotent_ then it has only roots of unity as eigenvalues.
**Proposition 2.1**.: _Let \(f:M^{n}\to M^{n}\) be an \(\Omega\)-stable diffeomorphism and \(M^{n}=M_{m}\supset M_{m-1}\supset\cdots\supset M_{1}\supset M_{0}=\emptyset\) be a filtration for its ordered basic sets \(\Lambda_{1}\prec\cdots\prec\Lambda_{m}\). Then_
1. _If_ \(\lambda\) _is an eigenvalue of_ \(f_{k}^{*}:H^{k}(M^{n})\to H^{k}(M^{n})\)_, then there is an_ \(i\in\{1,\ldots,m\}\) _such that_ \(f_{k}^{*}:H^{k}(M_{i},M_{i-1})\to H^{k}(M_{i},M_{i-1})\) _has_ \(\lambda\) _as an eigenvalue._
2. _If_ \(\Lambda_{i}\) _is a trivial basic set then_ \(f_{k}^{*}:H^{k}(M_{i},M_{i-1})\to H^{k}(M_{i},M_{i-1})\) _is nilpotent unless_ \(k=\dim\,W_{x}^{u},\,x\in\Lambda_{i}\) _and_ \(f_{k}^{*}:H^{k}(M_{i},M_{i-1})\to H^{k}(M_{i},M_{i-1})\) _is unipotent for_ \(k=\dim\,W_{x}^{u},\,x\in\Lambda_{i}\)_._
## 3 Proof of theorem 1
In this section we prove that if \(f:M^{3}\to M^{3}\) is an \(\Omega\)-stable diffeomorphism whose basic sets are trivial except attractors, then every non-trivial attractor is either one-dimensional non-orientable or two-dimensional expanding. We will use in this proof some results, which will be proven in the next section. As above, the symbols \(H_{k}(X,A)\) and \(H^{k}(X,A)\) will denote homology and cohomology groups with real coefficients. For homology groups with integer coefficients, the notation \(H_{k}(X,A;\mathbb{Z})\) will be used.
Proof.: Suppose the contrary: \(NW(f)\) contains a non-trivial attractor \(A\) such that \(A\) is either one-dimensional orientable or a two-dimensional Anosov torus. Without loss of generality we can assume that in the order \(\prec\) the first positions are occupied by attractors and \(A\) is the last of them. Let \(M^{n}=M_{k}\supset M_{k-1}\supset\cdots\supset M_{1}\supset M_{0}=\emptyset\) be a filtration for the ordered basic sets \(\Lambda_{1}\prec\cdots\prec\Lambda_{k}\). Then \(\tilde{M}_{i}=M^{n}\setminus\operatorname{int}M_{k-i}\) is the filtration for the basic sets \(\tilde{\Lambda}_{i}=\Lambda_{k-i}\) of the diffeomorphism \(g=f^{-1}\). Let \(A=\tilde{\Lambda}_{i_{0}}\). Without loss of generality we can assume that the manifold \(\tilde{M}_{i_{0}}\) is connected (in the opposite case let us consider its connected component containing \(A\)). Then \(g(\tilde{M}_{i_{0}})\subset\operatorname{int}\tilde{M}_{i_{0}}\). Notice that \(i_{0}>1\) since any \(\Omega\)-stable diffeomorphism has non-empty sets of attractors and repellers.
Let \(N_{m}\) be the number of points in \(Fix(g^{m})\). As the non-trivial basic set \(A\) belongs to \(\tilde{M}_{i_{0}}\), we have \(\lim\limits_{m\to\infty}N_{m}=\infty\). Since \(A\) is orientable, Lemma 2.1 and formula (6) give the existence of an eigenvalue \(\lambda\) with absolute value greater than \(1\) for \(g_{k}^{*}:H^{k}(\tilde{M}_{i_{0}})\to H^{k}(\tilde{M}_{i_{0}})\) for some \(k\in\{0,\ldots,3\}\).
First of all, let us show that it is impossible for orientable \(M^{3}\). We will prove it separately for each dimension \(k=0,1,2,3\).
a) \(k=0\). Eigenvalues of the automorphism \(g^{*}:H^{0}(\tilde{M}_{i_{0}})\to H^{0}(\tilde{M}_{i_{0}})\) are roots of unity by Lemma 4.5.
b) \(k=3\). The group \(H_{3}(\tilde{M}_{i_{0}};\mathbb{Z})\) is trivial when \(\partial\tilde{M}_{i_{0}}\neq\emptyset\) and is isomorphic to \(\mathbb{Z}\) when \(\partial\tilde{M}_{i_{0}}=\emptyset\). In the first case we have \(H^{3}(\tilde{M}_{i_{0}})=0\) and so \(g^{*}:H^{3}(\tilde{M}_{i_{0}})\to H^{3}(\tilde{M}_{i_{0}})\) does not have eigenvalues. In the second case, \(g^{*}=\pm\,\mathrm{id}\) by Lemma 4.4.
c) \(k=1\). Suppose that the automorphism \(g^{*}:H^{1}(\tilde{M}_{i_{0}})\to H^{1}(\tilde{M}_{i_{0}})\) has an eigenvalue \(\lambda\) for which \(\lambda^{2}\neq 1\). Then it follows from item 1 of Proposition 2.1 that there exists a number \(i\), \(1\leqslant i\leqslant i_{0}\), such that the automorphism \(g^{*}:H^{1}(M_{i},M_{i-1})\to H^{1}(M_{i},M_{i-1})\) also has the eigenvalue \(\lambda\).
As all basic sets of \(g\) before \(A\) in the Smale order \(\prec\) are trivial, by item 2 of Proposition 2.1 for \(i<i_{0}\) we get that the automorphisms \(g^{*}\) on \(H^{1}(M_{i},M_{i-1})\) are either nilpotent or unipotent. Hence it is precisely the automorphism \(g^{*}:H^{1}(\tilde{M}_{i_{0}},\tilde{M}_{i_{0}-1})\to H^{1}(\tilde{M}_{i_{0}},\tilde{M}_{i_{0}-1})\) that must have the eigenvalue \(\lambda\).
Let \(\dim\,A=1\). In this case \(\tilde{M}_{i_{0}}=Q_{g}\cup\tilde{M}_{i_{0}-1}\), where \(Q_{g}\) is a handlebody of a genus \(g\geqslant 0\) such that \(Q_{g}\cap\tilde{M}_{i_{0}-1}=\partial Q_{g}\). By Lemma 4.2, \(H_{1}(\tilde{M}_{i_{0}},\tilde{M}_{i_{0}-1};\mathbb{Z})=0\). Then \(H^{1}(\tilde{M}_{i_{0}},\tilde{M}_{i_{0}-1})=0\) and therefore \(\lambda\) cannot be an eigenvalue of the automorphism \(g^{*}\).
If \(\dim\,A=2\), then \(\tilde{M}_{i_{0}}=Q\cup\tilde{M}_{i_{0}-1}\), where \(Q\cong\mathbb{T}^{2}\times[0,1]\) and \(Q\cap\tilde{M}_{i_{0}-1}=\partial Q\). In this situation, by Lemma 4.3, \(H_{1}(\tilde{M}_{i_{0}},\tilde{M}_{i_{0}-1};\mathbb{Z})\cong\mathbb{Z}\). From here and from Lemma 4.4 it follows that \(g^{*}=\pm\,\mathrm{id}\). Thus, we obtain a contradiction for \(k=1\) as well.
d) \(k=2\). Let us finally assume that \(g^{*}:H^{2}(\tilde{M}_{i_{0}})\to H^{2}(\tilde{M}_{i_{0}})\) has an eigenvalue \(\lambda\) for which \(\lambda^{2}\neq 1\). Due to Lemma 2.2, in such a situation the automorphism \(g^{*}:H^{1}(\tilde{M}_{i_{0}},\partial\tilde{M}_{i_{0}})\to H^{1}(\tilde{M}_{i_{0}},\partial\tilde{M}_{i_{0}})\) has an eigenvalue \(\tilde{\lambda}=\pm\lambda^{-1}\).
Consider the following diagram
\[\begin{CD}\dots@>{}>{}>H^{0}(\partial\tilde{M}_{i_{0}})@>{\delta^{*}}>{}>H^{1}( \tilde{M}_{i_{0}},\partial\tilde{M}_{i_{0}})@>{j^{*}}>{}>H^{1}(\tilde{M}_{i_{0}} )@>{}>{}>\dots\\ @V{}V{g^{*}}V@V{}V{g^{*}}V@V{}V{g^{*}}V\\ \dots@>{}>{}>H^{0}(\partial\tilde{M}_{i_{0}})@>{\delta^{*}}>{}>H^{1}(\tilde{M}_ {i_{0}},\partial\tilde{M}_{i_{0}})@>{j^{*}}>{}>H^{1}(\tilde{M}_{i_{0}})@>{}>{}> \dots,\end{CD} \tag{7}\]
where the rows are taken from the cohomological sequence of the pair \((\tilde{M}_{i_{0}},\partial\tilde{M}_{i_{0}})\) and the vertical arrows denote the mappings induced by the diffeomorphism \(g\). All squares of the diagram are commutative, and the middle automorphism \(g^{*}\) from (7) has an eigenvalue \(\tilde{\lambda}\). From this, by [22, Lemma 3], it follows that \(\tilde{\lambda}\) is also an eigenvalue for one of the extreme vertical automorphisms of the diagram (7). Since \(\tilde{\lambda}^{2}\neq 1\), this is impossible for the automorphism \(g^{*}:H^{1}(\tilde{M}_{i_{0}})\to H^{1}(\tilde{M}_{i_{0}})\) according to what was proved in c). Since the manifold \(\tilde{M}_{i_{0}}\) is compact, its boundary \(\partial\tilde{M}_{i_{0}}\) consists of a finite set of connected components. Then by Lemma 4.5 all eigenvalues of the automorphism \(g^{*}:H^{0}(\partial\tilde{M}_{i_{0}})\to H^{0}(\partial\tilde{M}_{i_{0}})\) are roots of unity. Thus, in this case we also obtain a contradiction.
If \(M^{n}\) is non-orientable then, by Lemma 5.2, there is an oriented two-fold covering \(p:\bar{M}^{n}\to M^{n}\) and a lift \(\bar{g}:\bar{M}^{n}\to\bar{M}^{n}\) of the diffeomorphism3 \(g\). Herewith, by Lemma 5.3, \(\bar{A}=p^{-1}(A)\) is orientable, like \(A\). So we can apply all the arguments from the orientable case to \(\bar{g}\) and get a contradiction.
Footnote 3: We have not found a reference for this fact, so we prove it in Section 5 below.
## 4 Homology and induced automorphisms
In this section, we calculate the homology groups of some topological pairs and study the properties of automorphisms of cohomology groups induced by homeomorphisms.
### Calculations
In this section we calculate relative homology for the following situation. Let \(M\) and \(N\) be smooth \(3\)-manifolds with boundaries such that \(P=M\cup N\) is connected, \(M\cap N=\partial M\) and connected components of \(\partial M\) are some of connected components of \(\partial N\). Let us calculate the relative homology groups of the pair \((P,N)\).
Firstly notice that \(H_{0}(P,N;\mathbb{Z})=0\) as the manifold \(P\) is connected and \(N\) is not empty. For the calculation of other relative homology groups we need the following fact.
**Lemma 4.1**.: _For every natural \(k\) the following isomorphism takes place_
\[H_{k}(P,N;\mathbb{Z})\cong H_{k}(M,\partial M;\mathbb{Z}).\]
Proof.: By [4, Theorem 6.1, Chapter 4] the boundary \(\partial N\) possesses a collar in \(N\). As connected components of \(\partial M\) are connected components of \(\partial N\) then there is an embedding \(\phi:\partial M\times[0,1)\to N\) such that \(\phi(a,0)=a\) for every \(a\in\partial M\). Let \(V=\phi(\partial M\times[0,1))\), \(B=N\setminus V\) and \(\operatorname{cl}B\) be the closure of \(B\subset P\) in \(P\), \(\operatorname{int}N\) be the interior of \(N\subset P\) in \(P\). By the construction \(\operatorname{cl}B=B\subset N\setminus\partial M=\operatorname{int}N\). The excision theorem [23, Corollary 7.4, Chapter III] claims in such case that
\[H_{k}(P,N;\mathbb{Z})\cong H_{k}(P\setminus B,N\setminus B;\mathbb{Z})=H_{k}( M\cup V,V;\mathbb{Z}).\]
But the pair \((M\cup V,V)\) is homotopically equivalent to the pair \((M,\partial M)\). Hence, \(H_{k}(M\cup V,V;\mathbb{Z})\cong H_{k}(M,\partial M;\mathbb{Z})\) for a natural \(k\).
Below we calculate \(H_{k}(M,\partial M;\mathbb{Z})\) in two cases: 1) \(M\) is a handlebody of a genus \(g\geqslant 0\), 2) \(M\cong\mathbb{T}^{2}\times[0,1]\).
**Lemma 4.2**.: _If \(M\) is a handlebody of a genus \(g\geqslant 0\) then_
\[H_{3}(M,\partial M;\mathbb{Z})\cong\mathbb{Z},\quad H_{2}(M,\partial M;\mathbb{ Z})\cong\mathbb{Z}^{g},\quad H_{1}(M,\partial M;\mathbb{Z})=0. \tag{8}\]
Proof.: As \(H_{3}(M;\mathbb{Z})=0\) and \(H_{0}(M,\partial M;\mathbb{Z})=0\) then the homological sequence of the pair \((M,\partial M)\) has the following form [23, Proposition 4.4, Chapter III]:
\[0\longrightarrow H_{3}(M,\partial M;\mathbb{Z})\stackrel{{ \partial_{*}^{3}}}{{\longrightarrow}}H_{2}(\partial M;\mathbb{Z})\stackrel{{ i_{*}^{2}}}{{\longrightarrow}}H_{2}(M;\mathbb{Z})\stackrel{{ j_{*}^{2}}}{{\longrightarrow}}H_{2}(M,\partial M;\mathbb{Z})\stackrel{{ \partial_{*}^{2}}}{{\longrightarrow}}\\ \longrightarrow H_{1}(\partial M;\mathbb{Z})\stackrel{{ i_{*}^{1}}}{{ \longrightarrow}}H_{1}(M;\mathbb{Z})\stackrel{{ j_{*}^{1}}}{{ \longrightarrow}}H_{1}(M,\partial M;\mathbb{Z})\stackrel{{\partial _{*}^{1}}}{{\longrightarrow}}\\ \longrightarrow H_{0}(\partial M;\mathbb{Z})\stackrel{{ i_{*}^{0}}}{{ \longrightarrow}}H_{0}(M;\mathbb{Z})\stackrel{{ j_{*}^{0}}}{{ \longrightarrow}}0. \tag{9}\]
A handlebody \(M\) of genus \(g\) is a 3-ball with \(g\) handles of index 1 attached. Thus \(M\) is homotopy equivalent to a bouquet of \(g\) circles. Therefore,
\[H_{2}(M;\mathbb{Z})=0,\quad H_{1}(M;\mathbb{Z})\cong\mathbb{Z}^{g},\quad H_{0 }(M;\mathbb{Z})\cong\mathbb{Z}.\]
On the other side the boundary \(\partial M\) is homeomorphic to the surface \(S_{g}\) of the genus \(g\). Hence,
\[H_{2}(\partial M;\mathbb{Z})\cong\mathbb{Z},\quad H_{1}(\partial M;\mathbb{Z}) \cong\mathbb{Z}^{2g},\quad H_{0}(\partial M;\mathbb{Z})\cong\mathbb{Z}.\]
Substituting the latter in (9), we get the exact sequence
\[0\longrightarrow H_{3}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{3}}\mathbb{Z}\xrightarrow{\imath_{*}^{2}}0\xrightarrow{\jmath_ {*}^{2}}H_{2}(M,\partial M;\mathbb{Z})\xrightarrow{\partial_{*}^{2}}\\ \longrightarrow\mathbb{Z}^{2g}\xrightarrow{\imath_{*}^{1}} \mathbb{Z}^{g}\xrightarrow{\jmath_{*}^{1}}H_{1}(M,\partial M;\mathbb{Z}) \xrightarrow{\partial_{*}^{1}}\mathbb{Z}\xrightarrow{\imath_{*}^{0}}\mathbb{ Z}\xrightarrow{\jmath_{*}^{0}}0. \tag{10}\]
As \(\imath_{*}^{1}\) is an epimorphism then (10) decomposes into short exact sequences
\[0\longrightarrow H_{3}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{3}}\mathbb{Z}\xrightarrow{\imath_{*}^{2}}0,\] \[0\longrightarrow H_{2}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{2}}\mathbb{Z}^{2g}\xrightarrow{\imath_{*}^{1}}\mathbb{Z}^{g} \longrightarrow 0,\] \[0\longrightarrow H_{1}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{1}}\mathbb{Z}\xrightarrow{\imath_{*}^{0}}\mathbb{Z} \longrightarrow 0,\]
from which follows the statement of the lemma.
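Let us note, as a quick consistency check not used below, that the answer agrees with Poincaré-Lefschetz duality: since a handlebody \(M\) of genus \(g\) is a compact orientable 3-manifold, \(H_{k}(M,\partial M;\mathbb{Z})\cong H^{3-k}(M;\mathbb{Z})\), and for \(M\) homotopy equivalent to a bouquet of \(g\) circles the latter groups are \(\mathbb{Z}\), \(\mathbb{Z}^{g}\), \(0\) for \(k=3,2,1\), in accordance with (8).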
**Lemma 4.3**.: _If \(M=\mathbb{T}^{2}\times[0,1]\) then_
\[H_{3}(M,\partial M;\mathbb{Z})\cong\mathbb{Z},\quad H_{2}(M,\partial M; \mathbb{Z})\cong\mathbb{Z}^{2},\quad H_{1}(M,\partial M;\mathbb{Z})=\mathbb{Z}. \tag{11}\]
Proof.: As \(M\) is homotopically equivalent to \(\mathbb{T}^{2}\) and \(\partial M\) is homeomorphic to \(\mathbb{T}^{2}\times\mathbb{S}^{0}\) then \(H_{k}(M;\mathbb{Z})\cong H_{k}(\mathbb{T}^{2};\mathbb{Z})\) and \(H_{k}(\partial M;\mathbb{Z})\cong H_{k}(\mathbb{T}^{2};\mathbb{Z})\times H_{k }(\mathbb{T}^{2};\mathbb{Z})\). In such situation the homological sequence (9) of the pair \((M,\partial M)\) has the following form:
\[0\longrightarrow H_{3}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{3}}\mathbb{Z}^{2}\xrightarrow{\imath_{*}^{2}}\mathbb{Z} \xrightarrow{\jmath_{*}^{2}}H_{2}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{2}}\\ \longrightarrow\mathbb{Z}^{4}\xrightarrow{\imath_{*}^{1}} \mathbb{Z}^{2}\xrightarrow{\jmath_{*}^{1}}H_{1}(M,\partial M;\mathbb{Z}) \xrightarrow{\partial_{*}^{1}}\mathbb{Z}^{2}\xrightarrow{\imath_{*}^{0}} \mathbb{Z}\xrightarrow{\jmath_{*}^{0}}0. \tag{12}\]
As the inclusion of every connected component of \(\partial M\) into \(M\) is a homotopy equivalence, \(\imath_{*}^{2}\) and \(\imath_{*}^{1}\) are epimorphisms. Hence (12) decomposes into short exact sequences
\[0\longrightarrow H_{3}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{3}}\mathbb{Z}^{2}\xrightarrow{\imath_{*}^{2}}\mathbb{Z} \longrightarrow 0,\] \[0\longrightarrow H_{2}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{2}}\mathbb{Z}^{4}\xrightarrow{\imath_{*}^{1}}\mathbb{Z}^{2} \longrightarrow 0,\] \[0\longrightarrow H_{1}(M,\partial M;\mathbb{Z})\xrightarrow{ \partial_{*}^{1}}\mathbb{Z}^{2}\xrightarrow{\imath_{*}^{0}}\mathbb{Z} \longrightarrow 0,\]
from which follows the statement of the lemma.
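The same consistency check applies here: \(H_{k}(M,\partial M;\mathbb{Z})\cong H^{3-k}(M;\mathbb{Z})\cong H^{3-k}(\mathbb{T}^{2};\mathbb{Z})\), which equals \(\mathbb{Z}\), \(\mathbb{Z}^{2}\), \(\mathbb{Z}\) for \(k=3,2,1\), in accordance with (11).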
### Eigenvalues of induced automorphisms
In this section we again consider all homology groups \(H_{k}(X,A;\mathbb{Z})\) with integer coefficients and cohomology groups \(H^{k}(X,A)\) with real coefficients. Firstly, by the Universal Coefficient Formula [23, Section 7, Chapter VI], the results of the previous subsection give the following calculations.
**Statement 4.1**.: _If \(M\) is a handlebody of a genus \(g\geqslant 0\) then_
\[H^{3}(M,\partial M)\cong\mathbb{R},\,H^{2}(M,\partial M)\cong\mathbb{R}^{g}, \,H^{1}(M,\partial M)=0,\,H^{0}(M,\partial M)=0.\]
**Statement 4.2**.: _If \(M=\mathbb{T}^{2}\times[0,1]\) then_
\[H^{3}(M,\partial M)\cong\mathbb{R},\,H^{2}(M,\partial M)\cong\mathbb{R}^{2}, \,H^{1}(M,\partial M)=\mathbb{R},\,H^{0}(M,\partial M)=0.\]
The groups \(H^{k}(X,A)\cong\mathbb{R}^{m}\) admit many automorphisms even for \(m=1\). But in some cases only a small part of them can be induced by homeomorphisms of the topological space \(X\).
**Lemma 4.4**.: _Let \(X\) be a topological space, \(A\subset X\) be its subspace, \(f:X\to X\) be homeomorphism, and \(f(A)\subset A\). Denote by \(H^{\prime}_{k}(X,A;\mathbb{Z})\) a free part of the group of \(k\)-dimensional singular homology of the pair \((X,A)\), and by \(H^{k}(X,A)\) its \(k\)-dimensional cohomology group with real coefficients. If \(H^{\prime}_{k}(X,A;\mathbb{Z})\cong\mathbb{Z}\) for some \(k\), then for the induced automorphism \(f^{*}:H^{k}(X,A)\to H^{k}(X,A)\) the equality \(f^{*}=\pm\operatorname{id}\) holds._
Proof.: Let \(f_{*}:H^{\prime}_{k}(X,A;\mathbb{Z})\to H^{\prime}_{k}(X,A;\mathbb{Z})\) be the automorphism also induced by the homeomorphism \(f\). The formula \(f_{h}^{*}(q)=q\circ f_{*}\) defines the automorphism \(f_{h}^{*}:\operatorname{Hom}(H^{\prime}_{k}(X,A;\mathbb{Z});\mathbb{R})\to \operatorname{Hom}(H^{\prime}_{k}(X,A;\mathbb{Z});\mathbb{R})\). If \(H^{\prime}_{k}(X,A;\mathbb{Z})\cong\mathbb{Z}\), then \(f_{*}=\pm\operatorname{id}\). Moreover, \(f_{h}^{*}(q)=q\circ(\pm\operatorname{id})=\pm q\) for all \(q\in\operatorname{Hom}(H^{\prime}_{k}(X,A;\mathbb{Z});\mathbb{R})\). Hence \(f_{h}^{*}=\pm\operatorname{id}\).
It follows from the Universal Coefficient Formula for cohomology [23, Chapter VI, Section 7] that there exists the natural isomorphism \(\kappa:H^{k}(X,A)\to\operatorname{Hom}(H^{\prime}_{k}(X,A;\mathbb{Z}); \mathbb{R})\). The naturality means commutativity of the diagram
\[\begin{CD}H^{k}(X,A)@>{\kappa}>{}>\operatorname{Hom}(H^{\prime}_{k}(X,A; \mathbb{Z});\mathbb{R})\\ @V{f^{*}}V{}V@V{}V{f_{h}^{*}}V\\ H^{k}(X,A)@>{\kappa}>{}>\operatorname{Hom}(H^{\prime}_{k}(X,A;\mathbb{Z}); \mathbb{R}).\end{CD} \tag{13}\]
It follows from (13) and the equation \(f_{h}^{*}=\pm\operatorname{id}\) that \(f^{*}=\kappa^{-1}\circ f_{h}^{*}\circ\kappa=\kappa^{-1}\circ(\pm\operatorname {id})\circ\kappa=\pm\operatorname{id}\).
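For example, for \(X=S^{1}\) and \(A=\emptyset\) we have \(H^{\prime}_{1}(X;\mathbb{Z})\cong\mathbb{Z}\), so every homeomorphism of the circle induces \(\pm\operatorname{id}\) on \(H^{1}(S^{1})\), the sign \(+\) occurring for orientation-preserving and \(-\) for orientation-reversing homeomorphisms.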
**Lemma 4.5**.: _Let \(X\) be a topological space with a finite number of path-connected components, \(f:X\to X\) be a homeomorphism, and \(f^{*}:H^{0}(X)\to H^{0}(X)\) be an induced automorphism. Then any eigenvalue \(\lambda\) for \(f^{*}\) satisfies the equality \(\lambda^{2}=1\)._
Proof.: Firstly consider a case when \(X_{1}\) and \(X_{2}\) are path-connected topological spaces and \(f_{2}:X_{1}\to X_{2}\) is a homeomorphism. All elements of groups \(H^{0}(X_{j})\) are constant functions \(c_{j}:X_{j}\to\mathbb{R}\). Therefore, the formula \(\nu_{j}(c_{j})=\operatorname{im}c_{j}\) defines isomorphisms \(\nu_{j}:H^{0}(X_{j})\to\mathbb{R}\), \(j=1,2\). The induced isomorphism \(f_{2}^{*}:H^{0}(X_{2})\to H^{0}(X_{1})\) is defined by the formula \(f_{2}^{*}(c_{2})=c_{2}\circ f_{2}\). Since values of the functions \(c_{2}\) and \(c_{2}\circ f_{2}\) are equal, then
\[\nu_{1}\circ f_{2}^{*}=\nu_{2}. \tag{14}\]
Now suppose that \(X\) consists of path-connected components \(X_{1},\ldots,X_{m}\). Then there exists a permutation \(\sigma\in S_{m}\) such that \(f\) maps the component \(X_{j}\) onto the component \(X_{\sigma(j)}\) homeomorphically. Thus, setting \(f_{\sigma(j)}(x)=f(x)\) for all \(x\in X_{j}\), we obtain homeomorphisms \(f_{\sigma(j)}:X_{j}\to X_{\sigma(j)}\), \(j=1,\ldots,m\). Moreover, the induced homomorphisms \(f_{j}^{*}:H^{0}(X_{j})\to H^{0}(X_{\tau(j)})\) are defined, where \(\tau=\sigma^{-1}\). By virtue of (14)
\[\nu_{\tau(j)}\circ f_{j}^{*}=\nu_{j},\quad j=1,\ldots,m. \tag{15}\]
For each element \(c\in H^{0}(X)\) we set \(c_{j}=c|_{X_{j}}\). Then \(c_{j}\in H^{0}(X_{j})\). Define isomorphisms \(\mu:H^{0}(X)\to H^{0}(X_{1})\times\cdots\times H^{0}(X_{m})\) and \(\nu:H^{0}(X_{1})\times\cdots\times H^{0}(X_{m})\to\mathbb{R}^{m}\) by the formulas \(\mu(c)=(c_{1},\ldots,c_{m})\) and \(\nu((c_{1},\ldots,c_{m}))=(\nu_{1}(c_{1}),\ldots,\nu_{m}(c_{m}))\). We construct the automorphism \(p:\mathbb{R}^{m}\to\mathbb{R}^{m}\) such that the diagram is commutative
\[\begin{CD}H^{0}(X)@>{\mu}>{}>H^{0}(X_{1})\times\cdots\times H^{0}(X_{m})@>{ \nu}>{}>\mathbb{R}^{m}\\ @V{}V{f^{*}}V@V{}V{(f_{1}^{*},\ldots,f_{m}^{*})}V@V{}V{p}V\\ H^{0}(X)@>{\mu}>{}>H^{0}(X_{1})\times\cdots\times H^{0}(X_{m})@>{\nu}>{}> \mathbb{R}^{m}.\end{CD} \tag{16}\]
For all \(y=(y_{1},\ldots,y_{m})\in\mathbb{R}^{m}\) we set \(\|y\|=\sqrt{y_{1}^{2}+\cdots+y_{m}^{2}}\). Since \(f_{j}^{*}\) maps \(H^{0}(X_{j})\) onto \(H^{0}(X_{\tau(j)})\), it follows from the equality (15) and the diagram (16) that \(p(y)=(y_{\tau(1)},\ldots,y_{\tau(m)})\). Moreover, \(\|p(y)\|=\|y\|\).
Finally, let \(\lambda\in\mathbb{R}\), \(c\in H^{0}(X)\), \(c\neq 0\) and \(f^{*}(c)=\lambda c\). We set \(y=\nu\circ\mu(c)\). Then by virtue of (16) \(p(y)=\nu\circ\mu(f^{*}(c))=\nu\circ\mu(\lambda c)=\lambda y\). Hence, according to what was proved above, we obtain \(\|y\|^{2}=\|p(y)\|^{2}=\lambda^{2}\|y\|^{2}\). Hence, \(\lambda^{2}=1\).
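For example, if \(X\) is a disjoint union of two points interchanged by \(f\), then \(H^{0}(X)\cong\mathbb{R}^{2}\) and \(f^{*}\) is the coordinate interchange, whose eigenvalues are \(1\) and \(-1\).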
## 5 On oriented two-fold covering
Let \(M\) be a non-orientable connected smooth \(n\)-manifold, \(a\in M\) and \(x:I\to M\) be a loop based at a point \(a\). Let us consider continuous vector fields \(X_{1},\ldots,X_{n}\) along \(x\) such that \(X_{1}(t),\ldots,X_{n}(t)\) are linearly independent for each \(t\in I\). Then there is a matrix \(A=(a_{i}^{j})\in\operatorname{GL}_{n}(\mathbb{R})\) such that
\[X_{i}(1)=a_{i}^{j}X_{j}(0),\quad i,j=1,\ldots,n. \tag{17}\]
Let \(\omega_{a}(x)=\operatorname{sign}\det A\). If \(y\) is a loop based at the same starting point and \(x\sim y\) then \(\omega_{a}(x)=\omega_{a}(y)\). Therefore the formula \(\omega_{a}([x])=\omega_{a}(x)\) defines a homomorphism \(\omega_{a}:\pi_{1}(M,a)\to G\), where \(G=\{1,-1\}\). The manifold \(M\) is orientable if and only if \(\ker\omega_{a}=\pi_{1}(M,a)\).
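For example, for \(M=\mathbb{R}P^{2}\) we have \(\pi_{1}(M,a)\cong\mathbb{Z}_{2}\), and \(\omega_{a}\) sends the non-trivial class to \(-1\), so \(\ker\omega_{a}\) is trivial; the corresponding oriented two-fold cover (see Lemma 5.2 below) is the sphere \(S^{2}\).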
Let \(a,b\in M\), \(z:I\to M\) be a path which starts at \(z(0)=a\) and ends at \(z(1)=b\) and \(T_{z}:\pi_{1}(M,a)\to\pi_{1}(M,b)\) be the isomorphism defined by the formula \(T_{z}([x])=[z^{-1}xz]\). Then \(zz^{-1}\sim 1_{a}\) and \(z^{-1}z\sim 1_{b}\) imply commutativity of the diagram
\[\begin{CD}\pi_{1}(M,a)@>{\omega_{a}}>{}>\mathbb{R}\\ @V{T_{z}}V{}V@V{}V{\operatorname{id}}V\\ \pi_{1}(M,b)@>{\omega_{b}}>{}>\mathbb{R}.\end{CD} \tag{18}\]
**Lemma 5.1**.: _Let \(M,N\) be connected smooth manifolds, \(f:M\to N\) be a local diffeomorphism, \(a\in M\), \(b=f(a)\) and \(f_{*}:\pi_{1}(M,a)\to\pi_{1}(N,b)\) be the induced homomorphism. Then the following diagram is commutative_
\[\begin{CD}\pi_{1}(M,a)@>{\omega_{a}}>{}>\mathbb{R}\\ @V{f_{*}}V{}V@V{}V{\operatorname{id}}V\\ \pi_{1}(N,b)@>{\omega_{b}}>{}>\mathbb{R}.\end{CD} \tag{19}\]
Proof.: Let \([x]\in\pi_{1}(M,a)\), \(X_{1},\ldots,X_{n}\) be continuous vector fields along \(x\), linearly independent at each point \(x(t)\), such that the equality (17) is satisfied. Let \(y=f\circ x\) and \(Y_{i}(t)=df_{x(t)}(X_{i}(t))\) for every \(i=1,\ldots,n\) and \(t\in I\). Then \([y]\in\pi_{1}(N,b)\), \([y]=f_{*}([x])\) and \(Y_{1},\ldots,Y_{n}\) are continuous vector fields along the loop \(y\). According to the condition, \(df_{x(t)}:T_{x(t)}M\to T_{y(t)}N\) are isomorphisms. Therefore \(Y_{1}(t),\ldots,Y_{n}(t)\) are linearly independent for all \(t\in I\). But
\[Y_{i}(1)=df_{a}(X_{i}(1))=df_{a}(a_{i}^{j}X_{j}(0))=a_{i}^{j}df_{a}(X_{j}(0))= a_{i}^{j}Y_{j}(0).\]
from (17) and the linearity of the differential \(df_{a}:T_{a}M\to T_{b}N\). Thus, \(\omega_{b}([y])=\operatorname{sign}\det A=\omega_{a}([x])\).
**Lemma 5.2**.: _Let \(M\) be a non-orientable connected smooth manifold and \(f:M\to M\) be a diffeomorphism. Then there exists a connected smooth orientable manifold \(\bar{M}\), a smooth two-fold cover \(p:\bar{M}\to M\) and a diffeomorphism \(\bar{f}:\bar{M}\to\bar{M}\) for which the diagram is commutative_
\[\begin{CD}\bar{M}@>{\bar{f}}>{}>\bar{M}\\ @V{p}V{}V@V{}V{p}V\\ M@>{f}>{}>M.\end{CD} \tag{20}\]
Proof.: Let \(a\in M\). Then \(\ker\omega_{a}\) is a normal subgroup of the group \(\pi_{1}(M,a)\). By the theorem on the existence of coverings, there exists a connected smooth manifold \(\bar{M}\), a regular smooth cover \(p:\bar{M}\to M\) and a point \(u\in\bar{M}\) such that \(p(u)=a\) and the induced homomorphism \(p_{*}^{u}:\pi_{1}(\bar{M},u)\to\pi_{1}(M,a)\) has the image \(\operatorname{im}p_{*}^{u}=\ker\omega_{a}\). As the manifold \(M\) is non-orientable, \(\pi_{1}(M,a)/\ker\omega_{a}\cong G\). Therefore \(p\) is a two-fold covering. As \(p_{*}^{u}:\pi_{1}(\bar{M},u)\to\ker\omega_{a}\) is an isomorphism, by Lemma 5.1 we get \(\ker\omega_{u}=\pi_{1}(\bar{M},u)\). That means \(\bar{M}\) is an orientable manifold.
Let \(b=f(a)\) and \(v\in p^{-1}(b)\). As the manifold \(\bar{M}\) is connected then there is a path \(\bar{z}:I\to\bar{M}\) with the starting in \(\bar{z}(0)=u\) and the end in \(\bar{z}(1)=v\). Let \(z=p\circ\bar{z}\). Then \(z(0)=a\), \(z(1)=b\) and
\[\operatorname{im}p_{*}^{v}=T_{z}(\operatorname{im}p_{*}^{u}). \tag{21}\]
As \(T_{z}:\pi_{1}(M,a)\to\pi_{1}(M,b)\) is an isomorphism then (18) implies
\[\ker\omega_{b}=T_{z}(\ker\omega_{a}). \tag{22}\]
Finally, as \(f_{*}:\pi_{1}(M,a)\to\pi_{1}(M,b)\) is an isomorphism, (19) implies the equality
\[\ker\omega_{b}=f_{*}(\ker\omega_{a}). \tag{23}\]
It follows from (21), (22), (23) and the equality \(\operatorname{im}p_{*}^{u}=\ker\omega_{a}\) that
\[\operatorname{im}\left(f\circ p\right)_{*}^{u}=f_{*}(\operatorname{im}p_{*}^{ u})=f_{*}(\ker\omega_{a})=\ker\omega_{b}=T_{z}(\ker\omega_{a})=T_{z}( \operatorname{im}p_{*}^{u})=\operatorname{im}p_{*}^{v}.\]
According to a theorem from the theory of coverings, in such a situation there is a map \(\bar{f}:\bar{M}\to\bar{M}\) such that \(\bar{f}(u)=v\) and the diagram (20) is
commutative. This mapping is uniquely defined and is smooth. Similarly, it is proved that for the inverse diffeomorphism \(f^{-1}:M\to M\) there is a smooth map \(\overline{f^{-1}}:\bar{M}\to\bar{M}\) such that \(\overline{f^{-1}}(v)=u\) and the following diagram is commutative
\[\begin{CD}\bar{M}@>{\overline{f^{-1}}}>{}>\bar{M}\\ @V{p}V{}V@V{}V{p}V\\ M@>{f^{-1}}>{}>M.\end{CD} \tag{24}\]
Combining (24) with (20) on the right and on the left, we get the equalities \(\overline{f^{-1}}\circ\bar{f}=\mathrm{id}\) and \(\bar{f}\circ\overline{f^{-1}}=\mathrm{id}\). Therefore \(\overline{f^{-1}}=\bar{f}^{-1}\) and \(\bar{f}\) is a diffeomorphism.
**Lemma 5.3**.: _Let \(M\) be a smooth closed non-orientable connected 3-manifold and \(W^{1},\,W^{2}\subset M\) be immersions of open balls \(D^{1},\,D^{2}\) respectively, such that \(\mathrm{Ind}_{x}(W^{1},W^{2})=\mathrm{Ind}_{y}(W^{1},W^{2})\) for all points \(x,\,y\in(W^{1}\cap W^{2})\). If \(p:\bar{M}\to M\) is an oriented double covering then \(\bar{W}^{1}=p^{-1}(W^{1}),\,\bar{W}^{2}=p^{-1}(W^{2})\) are immersions of two copies of the open balls \(D^{1},\,D^{2}\) respectively, \(\bar{W}^{1}=\bar{W}^{1}_{1}\sqcup\bar{W}^{1}_{2}\), \(\bar{W}^{2}=\bar{W}^{2}_{1}\sqcup\bar{W}^{2}_{2}\), and \(\mathrm{Ind}_{\bar{x}}(\bar{W}^{1}_{i},\bar{W}^{2}_{j})=\mathrm{Ind}_{\bar{y}} (\bar{W}^{1}_{i},\bar{W}^{2}_{j})\) for all points \(\bar{x},\,\bar{y}\in(\bar{W}^{1}_{i}\cap\bar{W}^{2}_{j})\), \(i,\,j=1,2\)._
Proof.: Consider tubular neighborhoods \(U^{k}\) of the submanifolds \(W^{k}\). Since the open subsets \(U^{k}\subset M\), \(k=1,2\), are contractible, they are evenly covered neighborhoods. That is, \(p^{-1}(U^{k})=\bar{U}^{k}_{1}\cup\bar{U}^{k}_{2}\), where \(\bar{U}^{k}_{1}\cap\bar{U}^{k}_{2}=\emptyset\) and \(p|_{\bar{U}^{k}_{i}}:\bar{U}^{k}_{i}\to U^{k}\) are diffeomorphisms, \(i=1,2\). Then the sets \(\bar{U}^{k}_{i}\) are tubular neighborhoods of smooth submanifolds \(\bar{W}^{k}_{i}\subset\bar{M}\), and the differences \(\bar{U}^{2}_{i}\setminus\bar{W}^{2}_{i}\) consist of the connected components \(\bar{U}^{2}_{i+}\) and \(\bar{U}^{2}_{i-}\).
Let \(\bar{\sigma}_{i}:\bar{U}^{2}_{i+}\cup\bar{U}^{2}_{i-}\to\mathbb{Z}\) be a function such that \(\bar{\sigma}(\bar{x})=1\) for \(\bar{x}\in\bar{U}^{2}_{i+}\) and \(\bar{\sigma}(\bar{x})=0\) for \(\bar{x}\in\bar{U}^{2}_{i-}\). As \(\bar{W}^{1}_{i}=(p|_{\bar{W}^{1}_{i}})^{-1}(J^{1}(D^{1}))\) then the intersection index in \(\bar{x}\in(\bar{W}^{1}_{i}\cap\bar{W}^{2}_{j})\) is equal to \(\mathrm{Ind}_{\bar{x}}(\bar{W}^{1}_{i},\bar{W}^{2}_{j})=\bar{\sigma}(t+ \delta)-\bar{\sigma}(t-\delta)\), where \(\delta\) is a small enough positive number. Then \(\mathrm{Ind}_{x}(W^{1},W^{2})=\mathrm{Ind}_{\bar{x}}(\bar{W}^{1}_{i},\bar{W}^{ 2}_{j})\) and \(\mathrm{Ind}_{y}(W^{1},W^{2})=\mathrm{Ind}_{\bar{y}}(\bar{W}^{1}_{i},\bar{W}^{ 2}_{j})\). So if \(\mathrm{Ind}_{x}(W^{1},W^{2})=\mathrm{Ind}_{y}(W^{1},W^{2})\) for every points \(x,\,y\in(W^{1}\cap W^{2})\) then \(\mathrm{Ind}_{\bar{x}}(\bar{W}^{1}_{i},\bar{W}^{2}_{j})=\mathrm{Ind}_{\bar{y}} (\bar{W}^{1}_{i},\bar{W}^{2}_{j})\) for every points \(\bar{x},\,\bar{y}\in(\bar{W}^{1}_{i}\cap\bar{W}^{2}_{j})\), \(i,\,j=1,2\).
## 6 Example of a diffeomorphism with a non-orientable expanding 2-dimensional attractor
Let us construct an example of an \(\Omega\)-stable diffeomorphism of a closed connected 3-manifold \(M^{3}\) the non-wandering set of which consists of trivial sources, saddles, and a non-orientable expanding 2-dimensional attractor \(\Lambda\).
We start with a hyperbolic toral automorphism \(L_{A}:\mathbb{T}^{3}\to\mathbb{T}^{3}\) induced by a linear map of \(\mathbb{R}^{3}\) with a hyperbolic matrix \(A\in GL(3,\mathbb{Z})\) whose eigenvalues \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\) satisfy \(0<\lambda_{1}<1<\lambda_{2}\leqslant\lambda_{3}\). The involution \(J:\mathbb{T}^{3}\to\mathbb{T}^{3}\) defined by the formula \(J(x)=-x\pmod{1}\) has 8 fixed points in the 3-torus of the form \((a,b,c)\), where \(a,b,c\in\{0,\frac{1}{2}\}\). Notice that these points are also fixed for \(L_{A}^{k}\) for some \(k\in\mathbb{N}\). Let us "blow up" these points, similarly to the classical Smale surgery, in such a way that the surgery commutes with the involution. We obtain a generalized DA-diffeomorphism \(f_{GDA}:\mathbb{T}^{3}\to\mathbb{T}^{3}\) with 8 fixed sources \(\alpha_{i}\), \(i\in\{1,2,\ldots,8\}\) and one 2-dimensional expanding attractor obtained from the diffeomorphism \(L_{A}^{k}\).
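For instance, one may take (this particular matrix is only an illustration and is not specified in the construction)
\[A=\begin{pmatrix}1&1&0\\ 1&2&1\\ 0&1&2\end{pmatrix}\in SL(3,\mathbb{Z}),\]
whose characteristic polynomial is \(\lambda^{3}-5\lambda^{2}+6\lambda-1\); since \(A\) is symmetric, its eigenvalues are real, \(\lambda_{1}\approx 0.198\), \(\lambda_{2}\approx 1.555\), \(\lambda_{3}\approx 3.247\), none of them has absolute value \(1\), and \(0<\lambda_{1}<1<\lambda_{2}\leqslant\lambda_{3}\).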
After that we remove all sources and factorize the basin of the attractor to obtain a new manifold \(\tilde{M}\), i.e. \(\tilde{M}=(\mathbb{T}^{3}\setminus\bigcup\limits_{i=1}^{8}\alpha_{i})/_{x \sim-x}\). The natural projection \(p:\mathbb{T}^{3}\setminus\bigcup\limits_{i=1}^{8}\alpha_{i}\to\tilde{M}\) is a 2-fold cover. As \(f_{GDA}J=Jf_{GDA}\), the diffeomorphism \(f_{GDA}\) descends to \(\tilde{M}\) as the diffeomorphism \(\tilde{f}=pf_{GDA}p^{-1}:\tilde{M}\to\tilde{M}\) with one 2-dimensional expanding attractor \(\Lambda\), and \(\tilde{M}\) is its basin. The set \(\tilde{M}\setminus\Lambda\) consists of 8 connected components \(\tilde{N}_{i}\), each of which is diffeomorphic to \(\mathbb{R}P^{2}\times\mathbb{R}\), where \(\mathbb{R}P^{2}\) is the real projective plane.
To obtain a fundamental domain \(\tilde{D}_{i}\) of \(\tilde{f}|_{\tilde{N}_{i}}\) we can consider local coordinates \((x,y,z):U_{i}\to\mathbb{R}^{3}\) in a neighborhood \(U_{i}\) of \(\alpha_{i}\) in which the diffeomorphism \(f_{GDA}\) has the form \(f_{GDA}(x,y,z)=(2x,2y,2z)\). A fundamental domain of \(f_{GDA}|_{W^{u}_{\alpha_{i}}\setminus\{\alpha_{i}\}}\) is \(D_{i}=\{(x,y,z)\in\mathbb{R}^{3}\,|\,1\leqslant x^{2}+y^{2}+z^{2}\leqslant 4\}\) and then the desired fundamental domain is \(\tilde{D}_{i}=p(D_{i})\). By the construction it is homeomorphic to \(RP^{2}\times[0,1]\). The orbit space of \(f_{GDA}|_{W^{u}_{\alpha_{i}}\setminus\{\alpha_{i}\}}\) is homeomorphic to \(S^{2}\times S^{1}\) since each orientation-preserving diffeomorphism of \(S^{2}\) is homotopic to the identity. Then the orbit space \(\tilde{N}_{i}/\tilde{f}\) can be obtained as \(S^{2}\times S^{1}|_{\tilde{J}}\), where \(\tilde{J}\) is the involution of \(S^{2}\times S^{1}\) induced by \(J\). Since \(\tilde{N}_{i}/\tilde{f}\) is non-orientable, it follows from [24] that \(\tilde{N}_{i}/\tilde{f}\) is either \(S^{2}\tilde{\times}S^{1}\), \(RP^{2}\times S^{1}\), or \(RP^{3}\#RP^{3}\). The orbit space \(\tilde{N}_{i}/\tilde{f}\) can also be obtained from the fundamental domain \(\tilde{D}_{i}\) as a mapping torus \(RP^{2}\times[0,1]|_{(x,0)\sim(\tilde{f}(x),1)}\). Hence the fundamental group of the
orbit space is \(\pi_{1}(\tilde{N}_{i}/\tilde{f})=\mathbb{Z}_{2}\rtimes_{\tilde{f}}\mathbb{Z}\), and then it can only be \(RP^{2}\times S^{1}\).
Consider a gradient-like diffeomorphism \(g_{1}:\mathbb{R}P^{2}\to\mathbb{R}P^{2}\) with exactly 3 fixed points: a source \(\alpha\), a sink \(\omega\) and a saddle \(\sigma\) (see Fig. 6).
Let \(g_{2}:\mathbb{R}\to\mathbb{R}\) be the diffeomorphism given by the formula \(g_{2}(x)=2x\) and \(g(w,x)=(g_{1}(w),g_{2}(x)):\mathbb{R}P^{2}\times\mathbb{R}\to\mathbb{R}P^{2} \times\mathbb{R}\). Let us denote by \(N_{1},N_{2}\) the connected components of \(\mathbb{R}P^{2}\times(\mathbb{R}\setminus\{0\})\). Analogously to the case of \(\tilde{N}_{i}\), the orbit spaces \(N_{j}/g\) are diffeomorphic to \(\mathbb{R}P^{2}\times\mathbb{S}^{1}\).
Since \(\tilde{N}_{i}/\tilde{f}\) is diffeomorphic to \(N_{j}/g\), there is a diffeomorphism \(h:\tilde{N}_{i}\to N_{j}\) conjugating \(\tilde{f}\) with \(g\). Let \(h_{i}:\tilde{N}_{i}\to N_{1},\,i=1,3,5,7\) and \(h_{i}:\tilde{N}_{i}\to N_{2},\,i=2,4,6,8\) be such diffeomorphisms. For \(\tilde{N}=\bigcup\limits_{i=1}^{8}\tilde{N}_{i}\) denote by \(h:\tilde{N}\to(N_{1}\sqcup N_{2})\times\mathbb{Z}_{4}\) the diffeomorphism composed of the \(h_{i},\,i\in\{1,\dots,8\}\). Let \(\tilde{P}=\mathbb{R}P^{2}\times\mathbb{R}\times\mathbb{Z}_{4}\) and let \(G:\tilde{P}\to\tilde{P}\) be the diffeomorphism acting by \(g\) on every copy of \(\mathbb{R}P^{2}\times\mathbb{R}\). Finally, let \(M^{3}=\tilde{M}\cup_{h}\tilde{P}\). Denote by \(q:\tilde{M}\sqcup\tilde{P}\to M^{3}\) the natural projection. Then the desired diffeomorphism \(f:M^{3}\to M^{3}\) coincides with the diffeomorphism \(q\tilde{f}q^{-1}|_{q(\tilde{M})}\) on \(q(\tilde{M})\) and with the diffeomorphism \(qGq^{-1}|_{q(\tilde{P})}\) on \(q(\tilde{P})\).
|
2309.12921 | Boundary Representations of Locally Compact Hyperbolic Groups | We develop the theory of Patterson-Sullivan measures on the boundary of a
locally compact hyperbolic group, associating to certain left invariant metrics
on the group measures on the boundary. We later prove that for second
countable, non-elementary, unimodular locally compact hyperbolic groups the
associated Koopman representations are irreducible and their isomorphism type
classifies the metric on the group up to homothety and bounded additive
changes, generalizing a theorem of Garncarek on discrete hyperbolic groups. We
use this to answer a question of Caprace, Kalantar and Monod on type I
hyperbolic groups in the unimodular case. | Michael Glasner | 2023-09-22T15:20:42Z | http://arxiv.org/abs/2309.12921v1 | # Boundary representations of locally compact hyperbolic groups
###### Abstract.
We develop the theory of Patterson-Sullivan measures on the boundary of a locally compact hyperbolic group, associating to certain left invariant metrics on the group measures on the boundary. We later prove that for second countable, non-elementary, unimodular locally compact hyperbolic groups the associated Koopman representations are irreducible and their isomorphism type classifies the metric on the group up to homothety and bounded additive changes, generalizing a theorem of Garncarek on discrete hyperbolic groups. We use this to answer a question of Caprace, Kalantar and Monod on type I hyperbolic groups in the unimodular case.
Key words and phrases: Locally compact hyperbolic groups, Boundary representations, Patterson-Sullivan measures, Type I groups, Double ergodicity. 2020 Mathematics Subject Classification: 22D10, 20F67, 43A65, 37A40.
###### Contents
* 1 Introduction
* 1.1 Koopman and boundary representations
* 1.2 Type I hyperbolic groups
* 1.3 The space \(\partial^{2}G\)
* 1.4 Non unimodular groups
* 1.5 Structure of the paper
* 1.6 Acknowledgments
* 2 Preliminaries
* 2.1 Notation and conventions
* 2.2 Maps between metric spaces
* 2.3 Hyperbolic metric spaces
* 2.4 Locally compact hyperbolic groups
* 2.5 Non-singular actions and Koopman representations
* 3 Patterson-Sullivan Measures
* 4 Some Growth Estimates
* 5 Operators on \(L^{2}(\mu)\)
* 5.1 Estimates on \(\pi(g)\)
* 5.2 Kernel operators
* 6 Irreducibility of Boundary Representations
* 7 Rough Equivalence of Metrics
* 8 Type I Hyperbolic Groups
* 9 The Geodesic Flow
* 10 Double Ergodicity
## 1. Introduction
### Koopman and boundary representations
A locally compact group \(G\) is called hyperbolic if for some (hence any) compact generating set the corresponding word metric is Gromov hyperbolic. Equivalently, \(G\) is hyperbolic if it admits a proper, cocompact, isometric action on a proper, geodesic, Gromov hyperbolic metric space \(X\). Examples are:
1. Rank one simple Lie groups with finite center.
2. Minimal parabolic subgroups of rank one simple Lie groups with finite center.
3. Groups acting properly and co-compactly on locally finite trees.
One of the most useful tools to study \(G\) is the action of \(G\) on its Gromov boundary \(\partial G\). Given a left invariant metric \(d\) on \(G\) which is quasi isometric to a word metric and satisfies some mild assumptions we will construct a measure \(\mu\) on \(\partial G\) called the Patterson-Sullivan measure associated to \(d\). The measure class \([\mu]\) is invariant and \(\mu\) is quasi conformal, i.e there exists \(C\geq 1\) such that for all \(g\in G\):
\[\frac{1}{C}e^{-h(|g|-2(g,\xi))}\leq\frac{dg_{*}\mu}{d\mu}(\xi)\leq Ce^{-h(|g|-2(g,\xi))}\]
where \(h\) is the critical exponent of \(G\) with respect to \(d\) and \((\cdot,\cdot)\) is the Gromov product.
To such a nonsingular action one can associate a unitary representation of \(G\) on \(L^{2}(\mu)\) called the Koopman representation, defined by:
\[[\pi(g)f](\xi)=\sqrt{\frac{dg_{*}\mu}{d\mu}(\xi)}f(g^{-1}\xi)\]
The Koopman representation depends up to unitary equivalence only on the measure class \([\mu]\).
Irreducibility of the Koopman representation implies ergodicity and can be thought of as a mixing property of the action.
We will call the Koopman representations associated to the Patterson-Sullivan measures boundary representations or Patterson-Sullivan representations.
Our first main theorem is the following:
**Theorem 1.1**.: _Let \(G\) be a second countable, unimodular, non-elementary locally compact hyperbolic group. For any Borel measurable, left invariant metric \(d\) on \(G\) which is Gromov hyperbolic and quasi isometric to a word metric, the associated Patterson-Sullivan representation \(\pi\) is irreducible._
Our second main theorem provides a classification of the Patterson-Sullivan representations in terms of the metric \(d\).
**Theorem 1.2**.: _Let \(G\) be a second countable, unimodular, non-elementary locally compact hyperbolic group. Given two Borel measurable, left invariant metrics \(d_{1},d_{2}\) on \(G\) which are Gromov hyperbolic and quasi isometric to a word metric, the corresponding Patterson-Sullivan representations \(\pi_{1}\) and \(\pi_{2}\) are unitarily equivalent if and only if \(d_{1}\) and \(d_{2}\) are roughly
similar, that is, there exist constants \(L,C>0\) such that for all \(g,h\in G\), \(Ld_{1}(g,h)-C\leq d_{2}(g,h)\leq Ld_{1}(g,h)+C\). (Note this is much stronger than a quasi-isometry since the multiplicative constant is the same in the upper and lower bounds!)_
These theorems generalize earlier works by Bader, Muchnik [3] and Garncarek [15] which deal with the cases of fundamental groups of negatively curved manifolds and discrete hyperbolic groups respectively. Many of our methods are similar to those in [15] although some constructions are altered to fit the locally compact setting. It is interesting to notice throughout the paper the key lemmas in which we use the unimodularity assumption from theorems 1.1 and 1.2. Specifically note lemma 4.3, lemma 5.6 and appendix B.
Theorem 1.1 can be envisioned in the wider context of a general conjecture by Bader and Muchnik:
**Conjecture** (Bader, Muchnik [3]).: Let \(G\) be a locally compact group and \(\nu\) a spread out probability measure on \(G\). For any \(\nu\)-boundary of \(G\) the associated Koopman representation is irreducible.
### Type I hyperbolic groups
Our main application of theorems 1.1 and 1.2 will be to the study of type I hyperbolic groups, generalizing results of Caprace, Kalantar and Monod from [11].
**Definition 1.3**.: A locally compact group \(G\) is of type I if any two irreducible unitary representations of \(G\) which are weakly equivalent (see [4, Appendix F.1]) are unitarily equivalent.
Type I groups have a relatively simple unitary representation theory. In a certain sense they are the groups for which there is hope of obtaining a complete classification of all unitary representations. For an introduction on type I groups see [14, Section 7.2].
In section 8 we use theorems 1.1 and 1.2 to deduce the following generalization of [11, Theorem B]:
**Theorem 1.4**.: _Let \(G\) be a second countable unimodular locally compact hyperbolic group. If \(G\) is of type I then \(G\) has a co-compact amenable subgroup._
By a theorem of Thoma [20][21], discrete groups are of type I if and only if they are virtually abelian. Therefore discrete hyperbolic groups of type I are elementary. As a result the above theorem is interesting only for non discrete groups.
In [10, Theorem D] there is a classification of non amenable hyperbolic locally compact groups containing a cocompact amenable subgroup. We can use this to deduce:
**Corollary 1.5**.: _Let \(G\) be a second countable unimodular locally compact hyperbolic group. Recall \(G\) contains a unique maximal compact normal subgroup \(W\). If \(G\) is of type I then exactly one of the following holds:_
1. \(G/W\) _is a rank one adjoint Lie group._
2. \(G/W\) _is a closed subgroup of the automorphism group of a locally finite, non elementary tree_ \(T\)_, acting without inversions, with exactly two orbits of vertices and_ \(2\)_-transitively on the boundary._
3. \(G/W\) _is trivial or isomorphic to_ \(\mathbb{Z},\mathbb{R},\mathbb{Z}\rtimes\{\pm 1\}\) _or_ \(\mathbb{R}\rtimes\{\pm 1\}\)_._
It is already pointed out in [11, Remark 5.6] that generalizing the works of Garncarek [15] as in theorems 1.1 and 1.2 would imply the theorems above.
Theorem 1.4 fits in to a much wider structural conjecture of Caprace Kalantar and Monod about type I groups:
**Conjecture** (Caprace, Kalantar, Monod [11]).: Every second countable locally compact group of type I admits a cocompact amenable subgroup.
### The space \(\partial^{2}G\)
Let \(G\) be a second countable locally compact hyperbolic group, \(d\) a Borel measurable left invariant metric on \(G\) which is quasi isometric to a word metric and Gromov hyperbolic. Denote by \(\mu\) the corresponding Patterson Sullivan measure.
During the proof of theorem 1.2 we will establish several results of independent interest about the space \(\partial^{2}G\) of distinct pairs in \(\partial G\).
First we shall construct an invariant (infinite) measure \(m\) in the measure class of \(\mu^{2}\). This measure will be the analogue of the Bowen-Margulis-Sullivan measure in our case.
It will then be shown that if \(G\) is unimodular, then the action of \(G\) on \((\partial^{2}G,m)\) is weakly mixing in the following sense:
**Theorem 1.6**.: _For any ergodic p.m.p action \(G\curvearrowright(\Omega,\omega)\) the diagonal action of \(G\) on \((\partial^{2}G\times\Omega,m\times\omega)\) is ergodic._
This implies in particular that the action of \(G\) on \((\partial^{2}G,m)\) is ergodic and therefore that \(m\) is the unique invariant measure in the measure class of \([\mu^{2}]\). This will be the critical fact about \(\partial^{2}G\) used in the proof of theorem 1.2.
In order to prove theorem 1.6 we define a cocycle \(\tau:G\times\partial^{2}G\to\mathbb{R}\):
\[\tau(g,\xi,\eta)=\frac{1}{h}\left(ln\frac{dg_{*}^{-1}\mu}{d\mu}(\eta)-ln\frac {dg_{*}^{-1}\mu}{d\mu}(\xi)\right)\]
where \(h\) is the critical exponent of \((G,d)\).
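Using the chain rule for Radon-Nikodym derivatives one checks that \(\tau\) is an additive cocycle:

\[\tau(g_{1}g_{2},\xi,\eta)=\tau(g_{1},g_{2}\xi,g_{2}\eta)+\tau(g_{2},\xi,\eta),\]

and this identity is exactly what makes the formula below define an action.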
\(G\) then acts on \(\partial^{2}G\times\mathbb{R}\times\Omega\) via:
\[g(\xi,\eta,t,w)=(g\xi,g\eta,t+\tau(g,\xi,\eta),gw)\]
and this action commutes with the \(\mathbb{R}\)-flow defined by:
\[\Phi^{s}(\xi,\eta,t,w)=(\xi,\eta,t+s,w)\]
This flow provides an analogue of the geodesic flow of a negatively curved simply connected manifold in our general context. Since it commutes with the \(G\) action \(\Phi\) descends to an \(\mathbb{R}\)-flow \(\phi\) on the space \(X=(\partial^{2}G\times\mathbb{R}\times\Omega)//G\) of \(G\)-ergodic components. We will show that \(X\) supports a \(\phi\)-invariant probability measure \(\nu\) in its canonical measure class.
Using \(\phi\) we will prove the following ergodic theorem for \(\partial^{2}G\):
**Theorem 1.7**.: _Suppose \(G\) is unimodular and let \(G\curvearrowright(\Omega,\omega)\) be an ergodic p.m.p action. For any \(f\in L^{1}(\partial^{2}G\times\Omega,m\times\omega)\), for almost every \((\xi,\eta,w)\) and any \(a\in\mathbb{R}\):_
\[\lim_{b\to\infty}\frac{1}{b-a}\int_{\{g\in G|\tau(g,\xi,\eta)\in[a,b]\}}f(g \xi,g\eta,gw)d\lambda(g)=\int f(\xi,\eta,w)dm\times\omega\]
We will then use this to prove theorem 1.6. Our methods will be closely based on a paper by Bader and Furman [2], generalizing some of their results from discrete to unimodular locally compact hyperbolic groups. While the proofs are similar, the non-discreteness of the group makes the measure theoretic aspects much more delicate.
### Non unimodular groups
It is interesting to consider what happens in theorems 1.1 and 1.2 when the unimodularity assumption is dropped. The following example shows that in general theorem 7.2 does not hold:
**Example 1.8**.: Consider the minimal parabolic subgroup \(P\) of \(SL_{2}(\mathbb{R})\), i.e. the group of \(2\) by \(2\) upper triangular matrices with determinant \(1\). \(P\) is non-unimodular and non-elementary. This group is isomorphic to the group of affine transformations of the real line with positive leading coefficient. The Gromov boundary of this group is the boundary of the hyperbolic plane. In the upper half plane model \(P\) fixes \(\infty\) and acts affinely on \(\mathbb{R}\). We shall see in 3.7 that any Patterson-Sullivan measure corresponding to any metric on \(P\) has no atoms and therefore is supported entirely on \(\mathbb{R}\). Since the action of \(P\) on \(\mathbb{R}\) is transitive there is a unique \(P\)-invariant measure class on \(\mathbb{R}\), the Lebesgue class. Thus the corresponding representation is independent of the metric \(d\) and equivalent to the Koopman representation of \(P\) on \(L^{2}(\mathbb{R})\). Because the leading coefficient of any affine map in \(P\) is positive, the action on \(L^{2}(\mathbb{R})\) preserves the subspaces of functions whose Fourier transforms are supported on \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\), so this representation is not irreducible. For a full description of the representation theory of \(P\) see [14, Section 6.7].
Amenable hyperbolic locally compact groups are unimodular if and only if they are elementary([10, Theorem 7.3]). Since \(P\) is amenable it is very natural to ask whether the unimodularity assumption in theorems 1.1 and 1.2 can be replaced by the assumption that the group is non-amenable. We do not know the answer to this question. Next we give an example of a non-amenable, non-unimodular, hyperbolic locally compact group.
**Example 1.9**.: Let \(T\) be a regular tree of degree \(n<\infty\). Choose an orientation \(O\) on the edges of \(T\) such that around each vertex, \(k\) edges are oriented inward and \(n-k\) outward. Consider the group of automorphisms of the tree preserving the orientation, \(G=Aut(T,O)\). If \(k\neq n-k\) then \(G\) is non-unimodular, since the stabilizer of an edge has a different index in the stabilizers of the corresponding vertices, even though the stabilizers of the two vertices are conjugate. As long as \(k,n-k\neq 1\), \(G\) is non-amenable since it is non-elementary and does not fix any end of the tree. As an example of the non-unimodular non-amenable case it is interesting to ask whether \(G\) satisfies theorems 1.1 and 1.2. As far as we are aware it is not even known whether the Koopman representation of \(G\) corresponding to the standard measure on \(\partial T\) is irreducible.
### Structure of the paper
In section 2 we fix the notation and our conventions for hyperbolic metric spaces and we give an overview of the theory of locally compact hyperbolic groups. In section 3 we develop the theory of Patterson-Sullivan measures for locally compact hyperbolic groups. The theory is very similar to the discrete case but has never been written down before. The main parts of the proof of theorem 1.1 are given in section 4 and section 5. Section 4 contains many of the geometric aspects of the proof while section 5 includes the representation theoretic side. In section 6 we complete the proof of theorem 1.1 and in section 7 we prove theorem 1.2 on rough equivalence of metrics. Section 8 contains applications to hyperbolic groups of type I. In section 9 and section 10 we construct a measurable version of a geodesic flow for \(G\) and use it to prove the ergodicity of the action on \(\partial^{2}G\), a result which is needed in section 7. In the appendices A and B we deal with measure theoretic technicalities arising from the fact that our groups are non-discrete.
### Acknowledgments
I would like to thank my advisor, Uri Bader, for his constant support and guidance. Every conversation with him leaves me in a better mood and in awe
of some amazing mathematics. Without him this project would never have come to fruition. I would also like to thank my father, Yair Glasner, who has taught me so much both in math and in general through the years and my mother, Shalvia Glasner, for listening to me and helping me when things seemed hard and unapproachable. Last but not least, I would like to thank my teammates at the Weizmann institute: Alon Dogon, Itamar Vigdorovitch, Sheve Leibtag, Aviv Taller, Benny Bachner, Paul Vollrath, Raz Slutsky, Gil Goffer, Tal Cohen, Guy Salomon, Omer Lavi, Guy Kapon, Yuval Salant, Yuval Gorfine, Idan Pazi and Peleg Bar-Sever for their constant friendship and support.
## 2. Preliminaries
### Notation and conventions
Throughout the paper we will use estimates involving additive and multiplicative constants. In order to stop these constants from snowballing we introduce the following notation. Given functions \(f\) and \(g\) with a common domain, if there exists \(0\leq c\) such that \(f\leq g+c\) we write \(f\lesssim g\). If \(f\lesssim g\) and \(g\lesssim f\) we write \(f\approx g\). Similarly, if there exists \(0<C\) such that \(f\leq Cg\) we write \(f\prec g\), and if \(f\prec g\) and \(g\prec f\) we write \(f\asymp g\). If the constants in an estimate depend on a parameter we write the parameter as a subscript in the inequality to denote the fact that the estimate is not uniform in that parameter. For example \(f(x,y)\prec_{y}g(x,y)\) means that there exists \(0<C=C(y)\), possibly depending on \(y\) but not depending on \(x\), such that \(f(x,y)\leq C(y)g(x,y)\). The same conventions are used in [15].
### Maps between metric spaces
Given two metric spaces \((X,d_{X})\) and \((Y,d_{Y})\) we call a function \(f:X\to Y\) an (L,c)-quasi-isometric embedding if
\[\frac{1}{L}d_{X}(x_{1},x_{2})-c\leq d_{Y}(f(x_{1}),f(x_{2}))\leq Ld_{X}(x_{1},x_{2})+c\]
An (L,c)-quasi-isometric embedding is called an (L,c)-quasi-isometry if there exists \(0\leq D\) such that for any \(y\in Y\) there exists \(x\in X\) such that \(d_{Y}(f(x),y)\leq D\). A (1,c)-quasi-isometric embedding is called a c-rough embedding and a (1,c)-quasi-isometry is called a c-rough isometry. A map \(f:X\to Y\) is an (L,c)-rough similarity if \(Ld_{X}(x_{1},x_{2})-c\leq d_{Y}(f(x_{1}),f(x_{2}))\leq Ld_{X}(x_{1},x_{2})+c\). When the constants are irrelevant we will omit them and say "f is a quasi-isometry" or "f is a rough embedding".
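For instance, the inclusion \(\mathbb{Z}\hookrightarrow\mathbb{R}\) (with the usual metrics) is a (1,0)-quasi-isometry, hence a 0-rough isometry: it preserves distances exactly and every point of \(\mathbb{R}\) lies within distance \(\frac{1}{2}\) of an integer, so one may take \(D=\frac{1}{2}\).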
A geodesic in a metric space X is an isometric embedding of \(\mathbb{R}\). We define quasi-geodesics and rough geodesics as quasi-isometric embeddings and rough embeddings of \(\mathbb{R}\). We make the same definitions for rays and segments replacing \(\mathbb{R}\) by closed rays and closed intervals in \(\mathbb{R}\). A metric space is called geodesic if every two points can be connected by a geodesic segment. Similarly a metric space is called (L,c)-quasi-geodesic if any two points can be connected by a (L,c)-quasi-geodesic segment and c-roughly geodesic if it is (1,c)-quasi-geodesic. Here also we will omit the constants if they are irrelevant. Given a quasi-geodesic or roughly geodesic space we will implicitly assume that all quasi-geodesics or rough geodesics used have uniformly bounded constants compatible with the constants of the space.
### Hyperbolic metric spaces
A good introduction to the theory of hyperbolic metric spaces can be found in [7, Chapter 3.H]. A more general theory for non-proper non-geodesic spaces is developed in [13]. We provide here a brief survey.
Let (X,d) be a metric space. Given \(x,y,z\in X\) define the **Gromov product** of \(x\) and \(y\) with respect to \(z\):
\[(x,y)_{z}=\frac{1}{2}(d(z,x)+d(z,y)-d(x,y))\]
Note that \(|(x,y)_{z}-(x,y)_{w}|\leq d(w,z)\). If \(X\) is equipped with a base point \(o\) we will denote \((x,y)=(x,y)_{o}\).
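For a concrete illustration take \(X=F_{2}=\langle a,b\rangle\), the free group with its word metric; there \((x,y)_{1}\) is the length of the longest common prefix of the reduced words \(x\) and \(y\). For example,

\[(aab,aba)_{1}=\tfrac{1}{2}\bigl(|aab|+|aba|-|b^{-1}a^{-1}ba|\bigr)=\tfrac{1}{2}(3+3-4)=1,\]

the length of the common prefix \(a\).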
**Definition 2.1**.: Let \(0\leq\delta\). A metric space (X,d) is called \(\delta\)-hyperbolic if for any \(x,y,z,o\in X\)
\[(x,z)_{o}\geq\min\{(x,y)_{o},(y,z)_{o}\}-\delta\]
A metric space is (Gromov)-hyperbolic if it is \(\delta\)-hyperbolic for some \(0\leq\delta\).
Given a hyperbolic space \(X\) fix a base point \(o\). A sequence \((x_{n})\) is called **Cauchy-Gromov** if \(\lim_{n,m\to\infty}(x_{n},x_{m})=\infty\). We say such a sequence converges to infinity. Two such sequences \((x_{n}),(y_{n})\) are said to be equivalent if \(\lim_{n\to\infty}(x_{n},y_{n})=\infty\). Using hyperbolicity of the space one sees that this is an equivalence relation. These notions don't depend on \(o\).
**Definition 2.2**.: The Gromov boundary of a hyperbolic space \(X\) is
\[\partial X=\{[(x_{n})]|(x_{n})\text{is Cauchy-Gromov}\}\]
We denote \(\bar{X}=X\cup\partial X\).
We now extend the definition of the Gromov product to points in \(\bar{X}\). Given \(\xi,\eta\in\partial X\) and \(z,o\in X\) define:
\[(\xi,\eta)_{o}=\inf\liminf_{n\to\infty}(x_{n},y_{n})_{o}\]
\[(\xi,z)_{o}=\inf\liminf_{n\to\infty}(x_{n},z)_{o}\]
where the infimum is taken over sequences representing the boundary points. After replacing \(\delta\) by \(2\delta\) the extension of the Gromov product still satisfies the condition given in the definition of hyperbolicity. It follows from hyperbolicity that
\[|\sup\limsup_{n\to\infty}(x_{n},y_{n})_{o}-\inf\liminf_{n\to\infty}(x_{n},y_{ n})_{o}|<2\delta\]
\[|\sup\limsup_{n\to\infty}(x_{n},z)_{o}-\inf\liminf_{n\to\infty}(x_{n},z)_{o}| <2\delta\]
Thus up to \(2\delta\) one can compute the Gromov product by any choice of sequences representing the points. We will only care about the Gromov product up to an additive constant so this will be useful.
\(\bar{X}\) can be given a topology as follows. Choose a base point \(o\) for \(X\). Open sets in \(X\) are just the open sets in the metric topology on \(X\). A family of basic (not necessarily open) neighborhoods around a point \(\xi\in\partial X\) are given by \(\{x\in\bar{X}|(x,\xi)>M\}\) where \(M\in[0,\infty)\). In this topology \(x_{n}\) converges to \(\xi\in\partial X\) if and only if \((x_{n},\xi)\to\infty\). This topology does not depend on the base point \(o\) and if \(X\) is a proper geodesic metric space then \(\partial X\) and \(\bar{X}\) are compact.
**Lemma 2.3**.: _The Gromov product is lower semi continuous as a function \((\cdot,\cdot).:\bar{X}\times\bar{X}\times X\to\mathbb{R}\cup\{\infty\}\)._
The proof can be found in [13, Lemma 3.4.23]
**Definition 2.4**.: Let \(X\) be a hyperbolic metric space. A visual metric \(d_{\varepsilon}\) with parameter \(\varepsilon>0\) on \(\partial X\) is a metric satisfying:
\[d_{\varepsilon}(\xi,\eta)\asymp e^{-\varepsilon(\xi,\eta)} \tag{1}\]
For small enough \(\varepsilon\) a visual metric always exists (see [13, Proposition 3.6.8], [7, Chapter 3.H.3]). Such a metric always generates the standard topology on \(\partial X\). The hyperbolicity implies that a visual metric \(d_{\varepsilon}\) is almost an ultra-metric in the following sense, for all \(\xi_{1}\), \(\xi_{2}\), \(\xi_{3}\):
\[d_{\varepsilon}(\xi_{1},\xi_{3})\prec max\{d_{\varepsilon}(\xi_{1},\xi_{2}), d_{\varepsilon}(\xi_{2},\xi_{3})\}\]
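For example, on the boundary of the free group \(F_{k}\) (the space of infinite reduced words, with the word metric on \(F_{k}\)) the formula

\[d_{\varepsilon}(\xi,\eta)=e^{-\varepsilon(\xi,\eta)},\qquad(\xi,\eta)=\text{the length of the longest common prefix of }\xi\text{ and }\eta,\]

defines a genuine visual metric for every \(\varepsilon>0\); in fact it is an ultrametric, so in this case the implied constants in (1) and in the almost ultra-metric inequality can be taken to be \(1\).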
**Definition 2.5**.: Let X be a hyperbolic metric space, \(x,o\in X\) and \(\sigma>0\). The shadow cast by \(x\) from \(o\) with parameter \(\sigma\) on \(\partial X\) is
\[\Sigma_{o}(x,\sigma)=\{\xi\in\partial X|(x,\xi)_{o}+\sigma>d(x,o)\}\]
If o is a given base point of X we will omit it from the notation.
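To picture this definition, suppose \(X\) is a simplicial tree with base point \(o\). For a vertex \(x\) and an end \(\xi\), the product \((x,\xi)_{o}\) is the distance from \(o\) to the point where the ray \([o,\xi)\) branches off from the geodesic \([o,x]\), so

\[\Sigma_{o}(x,\sigma)=\{\xi\in\partial X\,|\,d(x,[o,\xi))<\sigma\},\]

the set of ends whose ray from \(o\) passes within distance \(\sigma\) of \(x\). Shadows in a general hyperbolic space should be thought of in the same way, up to a bounded error.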
We now mention three lemmas we will need in order to work with hyperbolic locally compact groups. These lemmas are standard for geodesic hyperbolic spaces but we will need them in a more general setting. The proofs of these lemmas are based on the ones given in [15, Subsection 3.1] and are basically a restating of results from [5] and [6]
**Lemma 2.6**.: _Let \(X\) be a geodesic hyperbolic metric space and \(Y\) a metric space which is quasi-isometric to \(X\), then \(Y\) is hyperbolic if and only if it is roughly geodesic._
Proof.: We follow the proof in [15, Subsection 3.1]. By [5, Theorem A.1]\(Y\) is hyperbolic if and only if \(Y\) is quasi-ruled in the sense defined in [5]. By [5, Lemma A.2] a quasi-ruled space is roughly geodesic, on the other hand a roughly geodesic space is obviously quasi-ruled.
**Lemma 2.7**.: _Let X be a roughly geodesic hyperbolic space then:_
1. _For any_ \(x\in X\) _and_ \(\xi\in\partial X\) _there is a roughly geodesic ray_ \(\gamma\) _with_ \(\gamma(0)=x\)_,_ \(\gamma(\infty)=\lim_{t\to\infty}\gamma(t)=\xi\)_._
2. _For any_ \(\xi,\eta\in\partial X\) _there is a rough geodesic_ \(\gamma\) _with_ \(\gamma(-\infty)=\xi\)_,_ \(\gamma(\infty)=\eta\)_._
Proof.: This follows from [6, Proposition 5.2] and from the obvious fact that roughly geodesic spaces are almost geodesic in the sense defined in [6]. Note that as a consequence of [6, Proposition 5.2] the almost geodesic hyperbolic spaces are precisely the roughly geodesic ones.
**Lemma 2.8**.: _If \(f:X\to Y\) is a quasi-isometry of roughly geodesic hyperbolic spaces, \(o\in X\), then there exist \(L,C>0\) such that \(\frac{1}{L}(x,y)_{o}-C\leq(f(x),f(y))_{f(o)}\leq L(x,y)_{o}+C\). As a result \(f\) canonically extends to a homeomorphism \(\partial f:\partial X\to\partial Y\)._
Proof.: The estimate on Gromov products is exactly [6, Proposition 5.5]. It follows that \(f\) respects the notion of being Cauchy-Gromov so \(f\) extends to the boundary, furthermore the estimate implies this extension is continuous. Applying the same argument to a quasi-inverse of \(f\) shows the map between boundaries is a homeomorphism.
Finally we will need one last lemma.
**Lemma 2.9**.: _Let \(X\) be a roughly geodesic hyperbolic space with base point \(o\), fix a visual metric \(d_{\epsilon}\) on \(\partial X\). Let \(C\geq 0\). For large enough \(\sigma\) (depending on \(C\)), for any \(\xi\in\partial X\), \(R>0\), any roughly geodesic ray \(\gamma\) with \(\gamma(0)=o,\gamma(\infty)=\xi\) and for any \(g\) such that \(d(\gamma(R),g)\leq C\):_
\[B_{e^{-\epsilon R}}(\xi)\subseteq\Sigma_{o}(g,\sigma)\]
_In particular for large enough \(\sigma\):_
\[B_{e^{-\epsilon R}}(\xi)\subseteq\Sigma_{o}(\gamma(R),\sigma)\]
Proof.: On the one hand \((\gamma(R),\xi)\approx|\gamma(R)|\approx R\). On the other hand if \(d_{\epsilon}(\xi,\eta)<e^{-\epsilon R}\) then \((\xi,\eta)\gtrsim R\) and since \(d(\gamma(R),g)<C\), \((\gamma(R),g)\approx_{C}R\). Now we get \((g,\eta)\gtrsim min\{(g,\gamma(R)),(\gamma(R),\xi),(\xi,\eta)\}\approx_{C}R\). Thus, since \(|g|\approx_{C}R\), if \(\sigma\) is chosen large enough (independently of \(o,\xi,R,\gamma\) but depending on \(C\)) we get \(\eta\in\Sigma_{o}(g,\sigma)\). Therefore \(B_{e^{-\epsilon R}}(\xi)\subseteq\Sigma_{o}(g,\sigma)\) as needed.
### Locally compact hyperbolic groups
This subsection will present the framework of locally compact hyperbolic groups in which we will work. A good introduction to the concept of a locally compact hyperbolic group can be found in [10]. We will use the theory developed there.
**Definition 2.10**.: A locally compact group \(G\) is called hyperbolic if it is compactly generated and for some (hence any) compact generating set the induced word metric on \(G\) is hyperbolic.
In the discrete case this definition recovers the standard notion of hyperbolicity for groups. By [10, Corollary 2.6]\(G\) is hyperbolic if and only if there exists a proper geodesic metric space \(X\) on which \(G\) admits a continuous proper co-compact isometric action. The action of G extends continuously to \(\bar{X}\). We obtain the following examples:
1. Rank one simple Lie groups with finite center are hyperbolic.
2. Minimal parabolic subgroups of rank one simple Lie groups with finite center are hyperbolic.
3. Groups acting properly and co-compactly on locally finite trees are hyperbolic.
4. Generalizing the previous examples, locally compact groups acting isometrically properly and co-compactly on proper CAT(-1) spaces are hyperbolic.
5. Totally disconnected compactly generated groups are hyperbolic if and only if some (and hence every) associated Cayley-Abels graph is hyperbolic.
Let \(G\) be a locally compact hyperbolic group. Denote by \(\mathcal{D}(G)\) the set of all left invariant hyperbolic metrics on \(G\) which are quasi-isometric to a word metric on G and are Borel measurable as functions \(d:G\times G\to\mathbb{R}\). If \(d\in\mathcal{D}(G)\) we will always consider \((G,d)\) to be equipped with base point \(1\in G\). Since word metrics are roughly isometric to the corresponding Cayley graph which is a geodesic space, it follows from 2.6 that all the metrics in \(\mathcal{D}(G)\) are roughly geodesic. In addition by 2.8 we see that all such metrics give rise to the same boundary which we shall denote \(\partial G\). Similarly if \(G\) acts on \(X\) as above, \(\partial G\cong\partial X\). The following are examples of metrics in \(\mathcal{D}(G)\):
1. Word metrics corresponding to compact generating sets.
2. If \(G\) acts on \((X,d_{X})\) as above, for any \(x\in X\) one can define \(d_{0}(g,h)=d_{X}(gx,hx)\). In general this is only a pseudo-metric but any left invariant Borel measurable metric roughly similar to it will be in \(\mathcal{D}(G)\). There always exists such a metric, for example one can define \(d(g,g)=0,d(g,h)=d_{0}(g,h)+1\) for \(g\neq h\). Alternatively one can adjust the theory to work with pseudo-metrics.
Note that we do not require the metrics in \(\mathcal{D}(G)\) to be continuous, the reason for this is that we will only be interested in the coarse geometry of the group and won't care about local properties.
**Lemma 2.11**.: _Let \(G\) be a locally compact hyperbolic group, then \(\partial G\) is compact and the action of \(G\) is continuous._
Proof.: By [10, Proposition 2.1] there exists a space \(X\) with a \(G\) action as above. Since \(X\) is proper and geodesic \(\partial X\) is compact and the action on \(\partial X\) is continuous, but \(\partial G\cong\partial X\).
The following lemma is very important and expresses algebraically the contracting dynamics of \(G\) on \(\partial G\):
**Lemma 2.12**.: _Let \(G\) be a locally compact hyperbolic group and \(d\in\mathcal{D}(G)\) then for \(g\in G\) and \(\xi\in\overline{G}\):_
\[(g,g\xi)\approx|g|-(g^{-1},\xi)\]
_uniformly in \(g\) and \(\xi\)._
Proof.: The proof is taken from [15, Lemma 4.1]. If \(h\in G\) then a direct calculation shows that \((g,gh)=|g|-(g^{-1},h)\). Letting \(h\) converge to \(\xi\in\bar{G}\) we get the desired result.
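Explicitly, using the formula \((x,y)=\frac{1}{2}(|x|+|y|-d(x,y))\) for group elements and left invariance of \(d\) (so that \(d(g,gh)=|h|\), \(d(g^{-1},h)=|gh|\) and \(|g^{-1}|=|g|\)):

\[(g,gh)=\tfrac{1}{2}\bigl(|g|+|gh|-|h|\bigr)=|g|-\tfrac{1}{2}\bigl(|g|+|h|-|gh|\bigr)=|g|-(g^{-1},h).\]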
We get the following corollary:
**Lemma 2.13**.: _([15, Lemma 4.1]) Let \(G\) be a locally compact hyperbolic group and \(d\in\mathcal{D}(G)\) then for \(g\in G\):_
\[|g|\approx\sup_{\xi\in\partial G}(g,\xi)\]
_uniformly in \(g\)._
Proof.: The proof is taken from [15, Lemma 4.1]. Obviously \((g,\xi)\leq|g|\) for all \(g\). Let \(\xi_{1},\xi_{2}\in\partial G\) be two distinct points. By 2.12 for any \(g\in G\):
\[\max_{i}(g,g\xi_{i})\approx|g|-\min_{i}(g^{-1},\xi_{i})\]
But \((\xi_{1},\xi_{2})\gtrsim\min\{(g^{-1},\xi_{1}),(g^{-1},\xi_{2})\}\) and \((\xi_{1},\xi_{2})<\infty\), so \(\min_{i}(g^{-1},\xi_{i})\) is bounded uniformly in \(g\), and therefore for some \(i\), \((g,g\xi_{i})\approx|g|\).
Let \(d\in\mathcal{D}(G)\). Even though the metric topology induced on \(G\) by \(d\) need not be proper or geodesic the space \(\bar{G}=G\cup\partial G\) can be given a compact topology agreeing with the locally compact topology on \(G\). Furthermore this topology does not depend on \(d\). To do this one sets a base of (not necessarily open) neighborhoods of \(\xi\in\partial G\) to be \(\{x\in\bar{G}|(x,\xi)>M\}\). So the topology is the regular topology on \(G\) and \(x_{n}\to\xi\) if and only if \((x_{n},\xi)\to\infty\).
**Lemma 2.14**.: _The topology described above is independent of \(d\) and compact._
Proof.: Let \(\{U_{i}\}_{i\in I}\) be an open cover of \(\bar{G}\). The topology on \(\partial G\) is the same as the standard one because the topology described is easily seen to be the one induced by the visual metric. Let \(\delta\geq 0\) be a constant such that \((x,z)\geq\min\{(x,y),(y,z)\}-\delta\) for all \(x,y,z\in\bar{G}\). For every \(\xi\in\partial G\) there exist \(M_{\xi}>0\) and \(i(\xi)\in I\) with \(V_{\xi}=\{x\in\bar{G}|(x,\xi)>M_{\xi}\}\subseteq U_{i(\xi)}\). Since \(\partial G\) is compact, finitely many of the smaller neighborhoods \(\{\eta\in\partial G|(\eta,\xi)>M_{\xi}+\delta\}\) cover \(\partial G\); let \(\xi_{j}\), \(j\in J\), be the corresponding points and write \(M_{j}=M_{\xi_{j}}\) and \(V_{j}=V_{\xi_{j}}\subseteq U_{i(j)}\). Suppose \(g\in G\) lies in none of the \(V_{j}\), so that \((g,\xi_{j})\leq M_{j}\) for all \(j\in J\). Given \(\eta\in\partial G\) choose \(j\) with \((\eta,\xi_{j})>M_{j}+\delta\). By hyperbolicity \((g,\xi_{j})\geq\min\{(g,\eta),(\eta,\xi_{j})\}-\delta\), and the minimum cannot be \((\eta,\xi_{j})\) since that would give \((g,\xi_{j})>M_{j}\); hence \((g,\eta)\leq(g,\xi_{j})+\delta\leq M_{j}+\delta\). Therefore \((g,\eta)\leq\max_{j\in J}M_{j}+\delta\) for all \(\eta\in\partial G\). Now by 2.13, \(|g|\approx\sup_{\xi\in\partial G}(g,\xi)\), so \(G\backslash\bigcup_{j\in J}V_{j}\) is bounded in the metric \(d\) and therefore has compact closure in \(G\). This closure can be covered by finitely many of the \(U_{i}\), which together with the sets \(U_{i(j)}\), \(j\in J\), form a finite subcover of \(\bar{G}\), as needed.
To see that the topology is independent of \(d\) notice that if \(\xi\in\partial G\) then \(g_{n}\to\xi\) if and only if \((g_{n},\xi)\to\infty\) and by 2.8 this condition is independent of \(d\).
By [13, Theorem 6.1.4],[10, Section 3] elements in \(G\) can be classified into three types:
1. Elliptic elements: elements with bounded orbits.
2. Parabolic elements: non-elliptic elements with one fixed point in \(\partial G\).
3. Hyperbolic elements: non-elliptic elements with two fixed points in \(\partial G\). One of the fixed points is attracting and the other is repelling. If \(\xi\in\bar{G}\) is not the repelling fixed point of a hyperbolic element \(g\) then \(g^{n}\xi\) converges to the attracting fixed point.
In the discrete case there are no parabolic elements but in the locally compact case they can appear.
By [13, Section 3][1, Theorem 6.8] locally compact hyperbolic groups can be classified in to three types:
1. Elementary: groups with finite boundary. In this case either \(\partial G=\phi\) and \(G\) is compact or \(|\partial G|=2\) and \(G\) contains an infinite cyclic co-compact subgroup.
2. Non-elementary amenable: non-elementary groups stabilizing a point in \(\partial G\). These groups have uncountable boundary and are amenable by [1, Theorem 6.8].
3. General type: groups not stabilizing any point in \(\partial G\). These groups have uncountable boundary and are never amenable.
Being of general type is equivalent to being non-amenable. By [10, Theorem 7.3] if a non-elementary hyperbolic locally compact group is amenable then it is never unimodular. Thus unimodular locally compact hyperbolic groups are either elementary or of general type.
Non-elementary amenable groups do not exist in the discrete case but in the locally compact case they can appear. Examples of such groups are minimal parabolic subgroups of rank-1 simple Lie groups with finite center.
**Theorem 2.15**.: _If \(G\) is hyperbolic locally compact of general type then the action on \(\partial G\) is minimal._
Proof.: By [13, Theorem 7.4.1] any closed invariant subset of \(\partial G\) containing at least two points is all of \(\partial G\) but since \(G\) is of general type there is no fixed point in \(\partial G\).
Finally we will need the following lemma:
**Lemma 2.16**.: _Let \(G\) be hyperbolic locally compact of general type. Any \(G\)-equivariant continuous map \(f:\partial G\to\partial G\) is trivial._
Proof.: Since \(G\) is of general type, \(G\) contains hyperbolic elements ([10, Section 3]). The set of all fixed points in \(\partial G\) of hyperbolic elements is invariant and non-empty so since the action is minimal it is dense. If \(f:\partial G\to\partial G\) is \(G\)-equivariant and continuous then \(f\) must stabilize the attracting fixed point and the repelling fixed point of any hyperbolic element, since continuity and equivariance imply that if \(\xi\) is attracting or repelling for \(g\) so is \(f(\xi)\). Thus \(f\) fixes a dense set.
We add one last remark on hyperbolic locally compact groups. It is not clear to us that given \(d\in\mathcal{D}(G)\) the corresponding Gromov product \((\cdot,\cdot):\bar{G}\times\bar{G}\to\mathbb{R}\) is measurable, however one can always slightly modify it to be measurable. We will only be interested in the values of the Gromov product up to a uniform constant since we are only interested in the coarse geometric structure of \(d\). Therefore we will change the definition of the Gromov product in
a way that changes the values only by a bounded amount and ensures that it is measurable. Choose a countable subset \(S\subseteq G\) which is co-bounded, i.e any element is a uniformly bounded distance away from an element of \(S\). The space \((S,d)\) is hyperbolic and roughly isometric to \((G,d)\) so they give rise to the same boundary. Since the Gromov product is lower semi-continuous on \(\bar{S}\) it is Borel measurable (this is also true on \(G\) but the Borel \(\sigma\)-algebra will be the one corresponding to the metric \(d\) and not to the standard topology on \(G\), this is the reason for this entire remark). Denote the Gromov product on \(\bar{S}\) by \(\langle\cdot,\cdot\rangle\). Choose a measurable closest point projection \(P:G\to S\) and extend it to the boundary via the identity. Now if \(x,y,z\in\bar{G}\) the function \(\langle P(x),P(y)\rangle_{P(z)}\) is Borel measurable on \(G\) and at bounded distance from the original definition of the Gromov product on \(G\), so we can take it to be the new Gromov product. We will make one more change for convenience: we assume that the formula \((g,h)_{o}=\frac{1}{2}(|g|+|h|-d(g,h))\) still holds when \(g,h,o\in G\).
### Non-singular actions and Koopman representations
Consider a locally compact group \(G\) with a measurable non-singular (i.e. measure class preserving) action on a measure space \((X,\mu)\). One can define an associated unitary representation of \(G\) called the Koopman representation on \(L^{2}(X,\mu)\) by:
\[[\pi(g)f](\xi)=\sqrt{\frac{dg_{*}\mu}{d\mu}(\xi)}f(g^{-1}\xi)\]
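The square root of the Radon-Nikodym derivative is exactly what makes each \(\pi(g)\) unitary: by the change of variables formula for the pushforward measure,

\[\|\pi(g)f\|_{2}^{2}=\int_{X}\frac{dg_{*}\mu}{d\mu}(\xi)|f(g^{-1}\xi)|^{2}d\mu(\xi)=\int_{X}|f(g^{-1}\xi)|^{2}dg_{*}\mu(\xi)=\int_{X}|f(\xi)|^{2}d\mu(\xi)=\|f\|_{2}^{2},\]

and the chain rule for Radon-Nikodym derivatives gives \(\pi(g_{1}g_{2})=\pi(g_{1})\pi(g_{2})\).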
The Koopman representation depends only on the measure class \([\mu]\) and not on the actual measure. If \(\nu\) is in the same measure class as \(\mu\) then the map \(L^{2}(X,\nu)\to L^{2}(X,\mu)\) defined by \(f\mapsto\sqrt{\frac{d\mu}{d\nu}}f\) is an isomorphism of the Koopman representations corresponding to \(\mu,\nu\).
By [4, Proposition A.6.1], if \(G\) is \(\sigma\)-compact and \((X,\mu)\) is a standard measure space this representation is continuous with respect to the strong operator topology. If \(G\) is locally compact hyperbolic then \(G\) is compactly generated hence \(\sigma\)-compact so the above theorem holds.
## 3. Patterson-Sullivan Measures
In this section we will construct from a metric \(d\in\mathcal{D}(G)\) an invariant measure class \([\mu]\) on \(\partial G\) called the Patterson-Sullivan measure class associated to \(d\). The original construction of such measures in due to Patterson in the case of Fuchsian groups [17]. Sullivan later extended this to the case of discrete subgroups of \(SO(n,1)\)[19]. Further generalizations were made by Coornaert for discrete hyperbolic groups [12], and by Burger and Mozes for non-discrete groups acting on CAT(-1) spaces [8]. We give a construction for general hyperbolic locally compact groups.
Fix a locally compact hyperbolic group \(G\), \(d\in\mathcal{D}(G)\), a left Haar measure \(\lambda\) on \(G\) and a visual metric \(d_{\varepsilon}\).
**Definition 3.1**.: The critical exponent of (G,d) is
\[h=\inf\{s|\int\!e^{-s|g|}d\lambda(g)<\infty\}=\limsup_{r\to\infty}\frac{1}{r}\ln\lambda(B_{r}(1))\]
\(h\) is the exponential growth rate of \(G\) with respect to \(d\). If \(G\) is non-elementary then by [10, Section 3]\(G\) contains a discrete free quasi isometrically embedded (Schottky) subgroup or subsemigroup and thus \(h>0\). On the other hand, if \(G\) is elementary then obviously \(h=0\). We denote \(D=\frac{h}{\varepsilon}\).
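For example, if \(G=F_{k}\) is the free group of rank \(k\geq 2\), viewed as a discrete hyperbolic group with its standard word metric and the counting measure as Haar measure, then

\[\lambda(B_{r}(1))=1+\sum_{j=1}^{r}2k(2k-1)^{j-1}\asymp(2k-1)^{r},\qquad\text{so}\qquad h=\ln(2k-1).\]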
**Definition 3.2**.: A finite measure \(\mu\) on \(\partial G\) is called quasi-conformal of dimension \(\frac{\alpha}{\epsilon}\) if for all \(g\in G\), for \(\mu\) almost every \(\xi\)
\[\frac{dg_{*}\mu}{d\mu}(\xi)\asymp e^{-\alpha(|g|-2(g,\xi))}\]
The quantity \(|g|-2(g,\xi)\) in the above definition is the analogue of the Busemann function (see [7, Definition II.8.17]) corresponding to \(\xi\), i.e. it roughly measures the "distance from \(\xi\)" normalized to be \(0\) at \(1\). We will not use this formally but it might help understand the geometric picture.
**Theorem 3.3**.: _If G is non-compact, there exists a quasi-conformal measure of dimension \(D=\frac{h}{\epsilon}\) on \(\partial G\)._
We will later see that If G is non-elementary, the measure class of such a measure is uniquely determined. This measure class \([\mu]\) is also known as the Patterson-Sullivan measure class. After showing the measure class is unique we will fix such a measure \(\mu\) and call it the Patterson-Sullivan measure of \((G,d)\).
Proof.: There are many similar proofs in the literature. We imitate the proof in [12, Theorem 5.4]; another example is [8, Section 1]. Recall that the space \(\bar{G}\) has a natural compact topology on it described in 2.14. By [8, Lemma 1.2] there exists an increasing continuous function \(H:\mathbb{R}_{+}\longrightarrow\mathbb{R}_{+}\) satisfying
1. \(\int_{G}H(e^{|x|})e^{-s|x|}d\lambda(x)\) converges for \(s>h\) and diverges at \(s=h\).
2. \(\forall\alpha>0,\exists t_{0}>0\) such that \(\forall t>t_{0}\) and \(k>1\), \(H(kt)\leq k^{\alpha}H(t)\).
If one assumes that \(\int e^{-h|x|}d\lambda(x)\) diverges one can take \(H=1\) and the proof becomes simpler. Denote
\[\mathcal{P}(s)=\int_{G}H(e^{|x|})e^{-s|x|}d\lambda(x)\]
and define probability measures on \(\bar{G}\) by:
\[d\mu_{s}=\frac{1}{\mathcal{P}(s)}H(e^{|x|})e^{-s|x|}d\lambda(x)\]
Let \(\mu\) be a weak-\(*\) limit point of \(\mu_{s}\) as \(s\searrow h\). We claim \(\mu\) is supported on \(\partial G\) and is quasi-conformal of dimension \(D\).
To see that \(\mu(G)=0\) notice that since \(\mathcal{P}(s)\rightarrow\infty\) when \(s\searrow h\), \(\mu_{s}(K)\to 0\) for every compact \(K\subset G\).
In order to show that \(\mu\) is quasi-conformal of dimension \(D\) it is enough to show that for every \(g\in G\) and \(\xi\in\partial G\) there exists a neighborhood \(V\) of \(\xi\) in \(\bar{G}\) such that for any continuous function \(f\) supported in \(V\):
\[\int f(x)dg_{*}\mu(x)\asymp e^{-h(|g|-2(g,\xi))}\int f(x)d\mu(x)\]
uniformly in \(g,\xi\). Indeed let \(g\in G\), we have:
\[\frac{dg_{*}\mu_{s}}{d\mu_{s}}(x)\asymp\frac{H(e^{|g^{-1}x|})}{H(e^{|x|})}e^{ -s(|g^{-1}x|-|x|)}=\frac{H(e^{|g^{-1}x|})}{H(e^{|x|})}e^{-s(|g|-2(g,x))}\]
Let \(\xi\in\partial G\). If \(x\in G\) such that \((x,\xi)>|g|\) then by hyperbolicity \((g,x)\approx(g,\xi)\) uniformly in \(g,\xi\). In addition since \(||x|-|g^{-1}x||<|g|\) it follows from the second property of \(H\) that there exists \(C=C(|g|)>0\) such that if \(|x|>C\) then \(\frac{H(e^{|g^{-1}x|})}{H(e^{|x|})}\asymp 1\). Let now \(M=max\{C,|g|\}\)
and consider \(V=\{x\in\bar{G}|(x,\xi)>M\}\). By definition \(V\) contains an open neighborhood of \(\xi\). If \(x\in V\) then in particular \(|x|>M\). Therefore by the formula for \(\frac{dg_{*}\mu_{s}}{d\mu_{s}}\) and since \((g,x)\approx(g,\xi)\) we get that for \(x\in V\), \(\frac{dg_{*}\mu_{s}}{d\mu_{s}}(x)\asymp e^{-h(|g|-2(g,\xi))}\). Integrating any continuous function \(f\) supported on \(V\) we get:
\[\int f(x)dg_{*}\mu_{s}=\int f(x)\frac{dg_{*}\mu_{s}}{d\mu_{s}}(x)d\mu_{s}\asymp \int f(x)e^{-h(|g|-2(g,\xi))}d\mu_{s}\]
Taking a partial limit \(s\searrow h\) we get that the same holds for \(\mu\) and thus \(\mu\) is quasi-conformal of dimension \(D\) as needed.
**Lemma 3.4** (Sullivan's shadow lemma).: _If \(\mu\) is quasi-conformal of dimension \(D>0\), is not supported on a single atom and \(\sigma\) is large enough then_
\[\mu(\Sigma(g,\sigma))\asymp_{\sigma}e^{-h|g|}\]
Note that if \(\mu\) is supported on a single atom then it must be invariant and this can only happen if \(h=0\) i.e. if \(G\) is elementary.
Proof.: Let \(g\in G\), we have
\[\mu(g^{-1}\Sigma(g,\sigma))=\int_{\Sigma(g,\sigma)}\frac{dg_{*}\mu}{d\mu}(\xi)d\mu(\xi)\asymp\int_{\Sigma(g,\sigma)}e^{-h(|g|-2(g,\xi))}d\mu(\xi)\asymp_{\sigma}\int_{\Sigma(g,\sigma)}e^{h|g|}d\mu(\xi)=\mu(\Sigma(g,\sigma))e^{h|g|}\]
Now in order to finish the proof we must bound \(\mu(g^{-1}\Sigma(g,\sigma))\) independently of \(g\). Let \(t<1\) be larger than the size of the largest atom of \(\mu\). There exists \(\delta>0\) such that the measure of any ball of radius less than \(\delta\) is less than \(t\). Indeed, if \(B_{r_{n}}(\xi_{n})\) is a sequence of balls with \(r_{n}\to 0\) and \(\mu(B_{r_{n}}(\xi_{n}))\geq t\), then after passing to a convergent sub-sequence we can assume \(\xi_{n}\to\xi\). For any \(r>0\), \(B_{r}(\xi)\) contains a ball of the form \(B_{r_{n}}(\xi_{n})\) so \(\mu(B_{r}(\xi))\geq t\) and thus \(\mu(\{\xi\})\geq t\), which is a contradiction.
Now the diameter of \(\partial G\backslash g^{-1}\Sigma(g,\sigma)\) converges to \(0\) as \(\sigma\to\infty\) uniformly in g. To see this, notice that if \(\xi\notin g^{-1}\Sigma(g,\sigma)\) then \(g\xi\notin\Sigma(g,\sigma)\) so \((g,g\xi)\leq|g|-\sigma\). By lemma 2.12 we see that \((\xi,g^{-1})\gtrsim\sigma\). So if \(\xi,\eta\notin g^{-1}\Sigma(g,\sigma)\) then:
\[(\xi,\eta)\gtrsim min\{(\xi,g^{-1}),(\eta,g^{-1})\}\gtrsim\sigma\]
Therefore \(d_{\varepsilon}(\xi,\eta)\prec e^{-\varepsilon\sigma}\) as needed. So for \(\sigma\) large enough we have that \(1\geq\mu(g^{-1}\Sigma(g,\sigma))>1-t\) and thus \(\mu(\Sigma(g,\sigma))\asymp_{\sigma}e^{-h|g|}\mu(g^{-1}\Sigma(g,\sigma))\asymp e^{-h|g|}\), finishing the proof.
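As an illustration, take \(G=F_{k}\) with its word metric and let \(\mu\) be the measure on \(\partial F_{k}\) giving the cylinder of every reduced word of length \(n\) mass \(\frac{1}{2k(2k-1)^{n-1}}\); one checks directly that this measure is quasi-conformal of dimension \(D\). For \(|g|>\sigma\) the shadow \(\Sigma(g,\sigma)\) is, up to a bounded error, the cylinder of the prefix of \(g\) of length roughly \(|g|-\sigma\), so

\[\mu(\Sigma(g,\sigma))\asymp_{\sigma}(2k-1)^{-|g|}=e^{-h|g|},\]

in agreement with the lemma (recall \(h=\ln(2k-1)\) in this case).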
We assume from now on that \(G\) is non-elementary so \(h>0\) and the shadow lemma holds. Denote \(A_{R}(\alpha)=\{g\in G|R-\alpha<|g|<R+\alpha\}\), the annulus of radius \(R\) and thickness \(\alpha\) around the identity in \(G\). For our purposes the parameter \(\alpha\) should be thought of as an arbitrary parameter which we will take to be as large as we need and then keep it fixed.
**Lemma 3.5**.: _For \(\alpha\), \(\sigma\) large enough and any \(\xi\in\partial G\), \(\lambda(\{g\in A_{R}(\alpha)|\xi\in\Sigma(g,\sigma)\})\asymp_{\sigma,\alpha}1\). i.e, The shadows of elements of the annuli \(A_{R}(\alpha)\) cover \(\partial G\) with "bounded multiplicity"._
Proof.: Let \(\xi\in\partial G\) and \(\gamma\) a roughly geodesic ray with \(\gamma(0)=1,\gamma(\infty)=\xi\). Fix \(r>0\) large enough so that the measure of an \(r\)-ball in G is positive, and choose \(\alpha\) large enough so that \(B_{r}(\gamma(R))\subseteq A_{R}(\alpha)\). For any \(g\in B_{r}(\gamma(R))\), \((g,\xi)\approx_{r}|g|\) and thus for \(\sigma\) large enough and independent of \(g,\xi,R\), we have \(\xi\in\Sigma(g,\sigma)\). Thus \(\lambda(\{g\in A_{R}(\alpha)|\xi\in\Sigma(g,\sigma)\})\geq\lambda(B_{r}(\gamma(R)))\).
On the other hand if \(g,h\in A_{R}(\alpha)\) and \(\xi\in\Sigma(g,\sigma)\cap\Sigma(h,\sigma)\) then since \(|g|\approx_{\alpha}|h|\approx_{\alpha}R\):
\[(g,h)\gtrsim_{\alpha,\sigma}min\{(g,\xi),(\xi,h)\}\approx_{\alpha,\sigma}R\]
Using again that \(|g|\approx_{\alpha}|h|\approx_{\alpha}R\) and plugging this into the definition of the Gromov product we deduce that \(d(g,h)\approx_{\alpha,\sigma}0\). Therefore \(\{g\in A_{R}(\alpha)|\xi\in\Sigma(g,\sigma)\}\) is contained in a ball whose radius is independent of \(g,R,\xi\), giving the other inequality.
We will also need the following discrete version of lemma 3.5:
**Lemma 3.6**.: _Fix \(C>0\). For large enough \(\alpha\), large enough \(\sigma\) (perhaps depending on \(C\)) and any \(R>0\), there exists a finite subset \(F\subseteq A_{R}(\alpha)\) such that the shadows \(\{\Sigma(g,\sigma)|g\in F\}\) cover \(\partial G\) and for any \(g,h\in F\), \(d(g,h)>C\)._
Proof.: Choose \(\alpha\) large enough that for any roughly geodesic ray \(\gamma\) with \(\gamma(0)=1\), \(\gamma(R)\in A_{R}(\alpha)\). Since \(\partial G\) is compact there exists a finite set of points \(\xi_{i}\in\partial G\) such that the balls of radius \(e^{-\epsilon R}\) centered at the \(\xi_{i}\) cover \(\partial G\). Choose roughly geodesic rays \(\gamma_{i}\) such that \(\gamma_{i}(0)=1\) and \(\gamma_{i}(\infty)=\xi_{i}\). By lemma 2.9, for some \(\sigma>0\) (depending on \(C\)), if \(d(\gamma_{i}(R),\gamma_{j}(R))\leq C\) then \(B_{e^{-\epsilon R}}(\xi_{j})\subseteq\Sigma(\gamma_{i}(R),\sigma)\). Let \(F\) be a maximal subset of the \(\gamma_{i}(R)\) with the property that \(d(g,h)>C\) for all distinct \(g,h\in F\). By maximality every \(\gamma_{j}(R)\) lies within distance \(C\) of some element \(g\in F\), so by the above \(B_{e^{-\epsilon R}}(\xi_{j})\subseteq\Sigma(g,\sigma)\); hence \(\{\Sigma(g,\sigma)|g\in F\}\) covers \(\partial G\) and \(F\) has the desired properties.
The only use we will make of the parameter \(\sigma\) is in lemma 2.9, lemma 3.4, lemma 3.5 and in lemma 3.6, furthermore we will only use lemma 2.9 and lemma 3.6 for a specific constant \(C\) in the proof of lemma 5.6 depending only on \((G,d)\). All three lemmas only hold for \(\sigma\) which is large enough. We fix such a \(\sigma\) and denote \(\Sigma(g)=\Sigma(g,\sigma)\) for every \(g\in G\). After this we will have no more need to mention the parameter \(\sigma\) explicitly again. We state what we will use from lemma 2.9, lemma 3.4, lemma 3.5 and lemma 3.6:
* For every \(R>0\), every roughly geodesic ray \(\gamma\) with \(\gamma(0)=1,\gamma(\infty)=\xi\) and every \(g\in G\) with \(d(\gamma(R),g)\leq C\) (for the constant \(C\) fixed above): \(B_{e^{-\epsilon R}}(\xi)\subseteq\Sigma(g)\).
* For every \(g\in G\): \(\mu(\Sigma(g))\asymp e^{-h|g|}\).
* For large enough \(\alpha\) and any \(\xi\in\partial G\), \(\lambda(\{g\in A_{R}(\alpha)|\xi\in\Sigma(g)\})\asymp_{\alpha}1\).
* For a fixed constant \(C\) from the proof of 5.6 which depends only on \((G,d)\), for large enough \(\alpha\) and for any \(R>0\) there exists a finite \(F\subseteq A_{R}(\alpha)\) such that the shadows \(\{\Sigma(g)|g\in F\}\) cover \(\partial G\) and for any \(g,h\in F\), \(d(g,h)>C\).
An immediate consequence of the shadow lemma (or alternatively lemma 2.12) is that for all \(g\), \(\Sigma(g)\neq\phi\). We fix \(\hat{g}\in\Sigma(g)\) for every \(g\) and denote \(\tilde{g}=\widehat{g^{-1}}\). We have \((g,\hat{g})\approx|g|\) uniformly in \(g\). Perhaps the function \(g\rightarrow\hat{g}\) can be chosen to be measurable but this will not affect us. Since \((g,\hat{g})\approx|g|\) and \((g,x)\leq|g|\) for any \(x\) we get that \((x,\hat{g})\gtrsim min\{(x,g),(g,\hat{g})\}\approx(x,g)\). On the other hand, we get that \(|g|\geq(g,x)\gtrsim min\{(g,\hat{g}),(\hat{g},x)\}\approx min\{|g|,(\hat{g},x)\}\) so:
\[(g,x)\approx min\{|g|,(\hat{g},x)\} \tag{2}\]
**Lemma 3.7**.: _Suppose \(\mu\) is quasi-conformal of dimension \(D\). The measure \(\mu\) is Ahlfors regular of dimension \(D=\frac{h}{\varepsilon}\) with respect to the metric \(d_{\varepsilon}\), i.e for any \(\xi\in\partial G\) and \(\rho\leq diam(\partial G)\)_
\[\mu(B_{\rho}(\xi))\asymp\rho^{D}\]
Proof.: Let \(\xi,\rho\) be as above and let \(\gamma:[0,\infty)\to G\) be a roughly geodesic ray with \(\gamma(0)=1,\gamma(\infty)=\xi\). We know that \(B_{\rho}(\xi)\subseteq\Sigma(\gamma(\frac{-1}{\epsilon}ln(\rho)))\). Taking the measures of both sides we get by the shadow lemma that \(\mu(B_{\rho}(\xi))\prec\rho^{D}\).
For the other inequality suppose \(\eta\in\Sigma(\gamma(t))\), then \((\eta,\xi)\gtrsim min\{(\eta,\gamma(t)),(\gamma(t),\xi)\}\approx t\). Therefore \(d_{\varepsilon}(\xi,\eta)\asymp e^{-\varepsilon(\xi,\eta)}\prec e^{-\varepsilon t}\), so there exists \(C>0\) independent of \(\xi,\rho\) such that if \(t\geq\frac{-1}{\varepsilon}ln(\rho)+C\) then \(d_{\varepsilon}(\xi,\eta)<\rho\). We conclude that \(\Sigma(\gamma(\frac{-1}{\varepsilon}ln(\rho)+C))\subseteq B_{\rho}(\xi)\). Taking the measures of these sets and using lemma 3.4 we get that \(\rho^{D}\prec\mu(B_{\rho}(\xi))\).
As a result, since \(h\neq 0\), \(\mu\) has no atoms. Because \(\mu\) is Radon we also see that the measure class \([\mu]\) is determined uniquely by the assumption that \(\mu\) is quasi-conformal of dimension \(D\).
**Lemma 3.8**.: _If \(\mu\) is quasi-conformal of dimension \(D\) then \(\mu\) and the Hausdorff measure \(H^{D}\) of dimension \(D\) on \(\partial G\) are in the same measure class and the corresponding Radon-Nikodym derivative is bounded and bounded away from \(0\)._
Proof.: The proof is taken from [9, Corollary 2.5.10]. Let \(A\) be a measurable set, and \(U_{i}\) a countable cover of \(A\) by sets of diameters \(\rho_{i}<\delta\). Each \(U_{i}\) is contained in a ball \(B_{i}\) of radius \(\rho_{i}\) so \(\mu(A)\prec\sum_{i}\rho_{i}^{D}\). Taking \(\delta\to 0\) we get \(\mu(A)\prec H^{D}(A)\) uniformly in A.
For the other direction, let \(A\) be measurable. The measures \(\mu\) and \(H^{D}\) are Radon so for every \(\epsilon>0\) there exist compact \(K\) and open \(U\) with \(K\subset A\subset U\) and \(H^{D}(U\backslash K),\mu(U\backslash K)<\epsilon\). Since \(K\) is compact there exists \(\delta\) such that any ball centered at a point in \(K\) of radius less than \(\delta\) is contained in \(U\). Let \(U_{i}\) be a countable cover of \(K\) with sets of diameters \(\rho_{i}<\delta\), ordered such that \(\rho_{i}\) is non-increasing. We can assume without loss of generality that \(U_{i}\cap(K\backslash\bigcup_{j<i}U_{j})\neq\phi\). Under this assumption each \(U_{i}\) is contained in a ball \(B_{i}\) of radius \(\rho_{i}\) centered at a point in \(K\backslash\bigcup_{j<i}U_{j}\). Therefore \(B_{i}\subset U\) and if \(C_{i}\) denote the balls radius \(\frac{\rho_{i}}{2}\) centered at the same point as \(B_{i}\) then \(C_{i}\) are disjoint. Thus \(\sum\rho_{i}^{D}\prec\mu(U)\) and taking \(\delta\) to \(0\) we get that \(H^{D}(K)\prec\mu(U)\) uniformly in \(A,\epsilon\). Now taking \(\epsilon\) to \(0\) we get \(H^{D}(A)\prec\mu(A)\) uniformly in \(A\).
Since the Hausdorff measure class is independent of \(\mu\) there is a unique measure class containing a quasi-conformal measure of dimension \(D\). In addition, since \(\frac{d\mu}{dH^{D}}\asymp 1\), \(H^{D}\) is quasi-conformal of dimension \(D\). We now fix a quasi-conformal probability measure \(\mu\) of dimension \(D\) and call it the Patterson-Sullivan measure of \((G,d)\). As a corollary we see that \(\mu\) is ergodic. Indeed, if \(E\) is a \(G\)-invariant subset and \(\mu(E)\neq 0\) then the restriction of \(\mu\) to \(E\) is quasi-conformal of dimension \(D\) and is thus equivalent to \(\mu\), so \(E\) is co-null.
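In particular \(0<H^{D}(\partial G)<\infty\), so \(D=\frac{h}{\varepsilon}\) is precisely the Hausdorff dimension of \((\partial G,d_{\varepsilon})\). For instance, for \(G=F_{k}\) as in the earlier examples,

\[\dim_{H}(\partial F_{k},d_{\varepsilon})=\frac{h}{\varepsilon}=\frac{\ln(2k-1)}{\varepsilon}.\]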
Finally we will need the following strengthening of the shadow lemma:
**Lemma 3.9**.: _There exists a constant \(0<C\) such that for any \(g\in G\) and any \(s<|g|-C\):_
\[\mu(\{\xi\in\partial G|(g,\xi)>s\})\asymp e^{-hs}\]
Proof.: Let \(s>0\) and set \(\rho=e^{-\epsilon s}\). If \((\xi,g)>s\) then by estimate 2, \((\hat{g},\xi)\gtrsim s\). Thus by estimate 1 there exists \(L>0\), independent of \(g,s\), such that \(\{\xi\in\partial G|(g,\xi)>s\}\subset B_{L\rho}(\hat{g})\)
Using Ahlfors regularity we get one inequality. In the other direction, by estimate 2 there exists \(C>0\), independent of \(g,\xi\), such that if \((\hat{g},\xi)>s+C\) then \((g,\xi)>min\{s,|g|-C\}\), so if \(s<|g|-C\) then \((g,\xi)>s\). Thus by estimate 1 there exists \(L>0\), independent of \(g,s\), satisfying \(B_{\frac{\rho}{L}}(\hat{g})\subset\{\xi\in\partial G|(g,\xi)>s\}\). The second inequality follows again by Ahlfors regularity.
## 4. Some Growth Estimates
In this section we will use the shadow lemma to obtain growth estimates on the group \(G\). We generalize many of the results of Garncarek ([15, Section 4]) to the locally compact case, many of the proofs are similar. The main difference occurs in lemma 4.3 which requires the assumption of unimodularity which holds trivially for discrete groups. We keep the notation of the previous section.
First we deduce the following corollary from lemma 3.5.
**Theorem 4.1**.: _If \(G\) is non-elementary and \(\alpha\) is large enough, \(\lambda(A_{R}(\alpha))\asymp_{\alpha}\lambda(B_{R}(1))\asymp e^{hR}\)._
Proof.: The proof is essentially a double counting argument. Suppose \(\alpha\) is large enough for lemma 3.5 to hold. Consider the set \(A=\{(g,\xi)\in A_{R}(\alpha)\times\partial G\,|\,\xi\in\Sigma(g)\}\). Using Fubini's theorem we see that on the one hand by lemma 3.5:
\[\lambda\times\mu(A)=\int\lambda(\{g\in A_{R}(\alpha)|\xi\in\Sigma(g)\})d\mu( \xi)\asymp_{\alpha}\int 1d\mu(\xi)=1\]
On the other hand by lemma 3.4:
\[\lambda\times\mu(A)=\int_{A_{R}(\alpha)}\mu(\Sigma(g))d\lambda(g)\asymp_{ \alpha}\int_{A_{R}(\alpha)}e^{-hR}d\lambda(g)=\lambda(A_{R}(\alpha))e^{-hR}\]
Comparing the two we get the result for annuli. The result for balls follows by covering the ball with annuli and using the formula for the sum of a geometric series.
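Spelled out, for \(\alpha\) fixed and large enough and \(R\geq 2\alpha\) we have \(A_{R-\alpha}(\alpha)\subseteq B_{R}(1)\subseteq\bigcup_{j=0}^{\lfloor R/\alpha\rfloor}A_{R-j\alpha}(\alpha)\), so

\[e^{hR}\asymp_{\alpha}\lambda(A_{R-\alpha}(\alpha))\leq\lambda(B_{R}(1))\leq\sum_{j=0}^{\lfloor R/\alpha\rfloor}\lambda(A_{R-j\alpha}(\alpha))\prec_{\alpha}\sum_{j\geq 0}e^{h(R-j\alpha)}\prec_{\alpha}e^{hR},\]

where the last step uses \(h>0\).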
We will need one more lemma estimating the size of certain subsets of annuli.
**Lemma 4.2**.: _For any \(s,R,\alpha>0\) and \(\xi\in\partial G\):_
\[\lambda(\{g\in A_{R}(\alpha)|(g,\xi)>s\})\prec_{\alpha}e^{h(R-s)}\]
Proof.: Let \(\gamma\) be a roughly geodesic ray with \(\gamma(0)=1,\gamma(\infty)=\xi\) and \(g\) as above. Note that \((\gamma(s),\xi)\approx s\), so \((g,\gamma(s))\gtrsim min\{(g,\xi),(\xi,\gamma(s))\}\approx s\), but \(|\gamma(s)|\approx s\) so \(s\gtrsim(g,\gamma(s))\) and therefore \((g,\gamma(s))\approx s\). Opening up the definition of the Gromov product and plugging in \(|g|\approx_{\alpha}R,|\gamma(s)|\approx s\) we get \(d(g,\gamma(s))\approx_{\alpha}R-s\). Thus \(\{g\in A_{R}(\alpha)|(g,\xi)>s\}\) is contained in a ball of radius \(r\approx_{\alpha}R-s\) centered at \(\gamma(s)\). Applying theorem 4.1 we obtain the result.
The following lemma is one of the key geometric ideas in the proof of irreducibility of boundary representations. It generalizes the fact that given any two words \(w_{1},w_{2}\) in a free group one can find a generator \(x\) such that the product \(w_{1}xw_{2}\) has no cancellation. Interestingly the proof requires the added assumption of unimodularity, which holds trivially in the discrete case.
**Lemma 4.3**.: _Suppose \(G\) is unimodular and non-elementary. There exists \(\tau\) such that for all \(g,h\in G\), \(\lambda(\{g^{\prime}\in B_{\tau}(g)\,|\,|g^{\prime}|+|h|-2\tau\leq|g^{\prime}h|\})\asymp 1\)._
Proof.: Let \(g,h\in G\), and for every \(s\in G\) fix a roughly geodesic segment \(\gamma_{s}:[0,|s|]\to G\) in \(G\) with \(\gamma_{s}(0)=1,\gamma_{s}(|s|)=s\). Let \(g^{\prime}\in B_{\tau}(g)\) be such that \(|g^{\prime}|+|h|-2\tau>|g^{\prime}h|\); this is equivalent to \((g^{\prime}h,1)_{g^{\prime}}>\tau\). We get:
\[(\gamma_{g^{\prime}}(|g^{\prime}|-\tau),g^{\prime}\gamma_{h}(\tau))_{g^{\prime}}\gtrsim min\{(\gamma_{g^{\prime}}(|g^{\prime}|-\tau),1)_{g^{\prime}},(1,g^{\prime}h)_{g^{\prime}},(g^{\prime}h,g^{\prime}\gamma_{h}(\tau))_{g^{\prime}}\}\approx\tau\]
Since \(d(g^{\prime},\gamma_{g^{\prime}}(|g^{\prime}|-\tau)),d(g^{\prime},g^{\prime} \gamma_{h}(\tau))\approx\tau\), we conclude that \(d(\gamma_{g^{\prime}}(|g^{\prime}|-\tau),g^{\prime}\gamma_{h}(\tau))\approx 0\). Similarly we have that:
\[(\gamma_{g}(|g^{\prime}|-\tau),\gamma_{g^{\prime}}(|g^{\prime}|-\tau))\gtrsim min \{(\gamma_{g}(|g^{\prime}|-\tau),g),(g,g^{\prime}),(g^{\prime},\gamma_{g^{ \prime}}(|g^{\prime}|-\tau))\}\gtrsim|g^{\prime}|-\tau\]
So since \(|\gamma_{g^{\prime}}(|g^{\prime}|-\tau)|,|\gamma_{g}(|g^{\prime}|-\tau)|\approx |g^{\prime}|-\tau\) we conclude that \(d(\gamma_{g^{\prime}}(|g^{\prime}|-\tau),\gamma_{g}(|g^{\prime}|-\tau))\approx 0\).
Putting everything together we get that \(d(g^{\prime}\gamma_{h}(\tau),\gamma_{g}(|g^{\prime}|-\tau))\approx 0\). Notice that \(\gamma_{g}(|g^{\prime}|-\tau)\in\gamma_{g}([|g|-2\tau,|g|])\), so if \(|g^{\prime}|+|h|-2\tau>|g^{\prime}h|\), then \(g^{\prime}\gamma_{h}(\tau)\) is in a tubular neighborhood of constant distance (independent of \(\tau\)) from the roughly geodesic segment \(\gamma_{g}([|g|-2\tau,|g|])\). Denote this neighborhood by \(N(\gamma_{g}([|g|-2\tau,|g|]))\).
Consider the map \(g^{\prime}\mapsto g^{\prime}\gamma_{h}(\tau)\). This map is measure preserving since \(G\) is unimodular. Because the measure of \(B_{\tau}(g)\) grows exponentially with \(\tau\) and the measure of \(N(\gamma_{g}([|g|-2\tau,|g|]))\) grows linearly with \(\tau\), for \(\tau\) large enough this map must send more than half of the measure of \(B_{\tau}(g)\) outside of \(N(\gamma_{g}([|g|-2\tau,|g|]))\). The elements not being sent to \(N(\gamma_{g}([|g|-2\tau,|g|]))\) satisfy \(|g^{\prime}|+|h|-2\tau\leq|g^{\prime}h|\).
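The final counting can be made quantitative; here is a sketch, using theorem 4.1 and left invariance of \(\lambda\). Each ball of a fixed radius has measure \(\asymp 1\), and \(N(\gamma_{g}([|g|-2\tau,|g|]))\) is covered by \(\prec 1+\tau\) such balls, while \(\lambda(B_{\tau}(g))=\lambda(B_{\tau}(1))\asymp e^{h\tau}\). Since \(g^{\prime}\mapsto g^{\prime}\gamma_{h}(\tau)\) preserves \(\lambda\), for \(\tau\) large enough:
\[\lambda(\{g^{\prime}\in B_{\tau}(g)\,|\,|g^{\prime}|+|h|-2\tau\leq|g^{\prime}h|\})\geq\lambda(B_{\tau}(g))-\lambda(N(\gamma_{g}([|g|-2\tau,|g|])))\geq\frac{1}{2}\lambda(B_{\tau}(g))\asymp 1\]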
## 5. Operators on \(L^{2}(\mu)\)
In this section we use the tools we have developed to construct operators in the von Neumann algebra of the representation \(\pi\). These operators will later be used to show irreducibility of the representation. The results generalize [15, Section 5] and many of the core ideas come from there, although some of the constructions become more subtle and need to be changed.
### Estimates on \(\pi(g)\)
To simplify calculations we assume without loss of generality that \(D>1\). Denote \(P_{g}(\xi)=\sqrt{\frac{dg_{*}\mu}{d\mu}}(\xi)\), \(\widetilde{P}_{g}=\frac{P_{g}}{\|P_{g}\|_{1}}\) and \(\widetilde{\pi}(g)=\frac{\pi(g)}{\|P_{g}\|_{1}}\). Note that \(\|P_{g}\|_{1}=\langle\pi(g)1,1\rangle=\langle 1,\pi(g^{-1})1\rangle=\|P_{g^{-1}} \|_{1}\) so that \(\widetilde{\pi}(g)^{*}=\widetilde{\pi}(g^{-1})\). We will call the weak operator closure of the image of the positive \(L^{1}\) functions under \(\pi\) the **positive cone** of \(\pi\).
We will use the following lemma several times during calculations.
**Lemma 5.1**.: _Let \((X,\mu)\) be a \(\sigma\)-finite measure space and \(f\) a positive measurable function on \(X\) then:_
\[\int_{X}f(\xi)d\mu=\int\limits_{0}^{\infty}\mu(\{\xi|f(\xi)>t\})dt\]
Proof.: The proof follows from Fubini's theorem:
\[\int_{X}f(\xi)d\mu=\int_{X}\int\limits_{0}^{\infty}\mathbb{1}_{\{(t,\xi)|t<f( \xi)\}}dtd\mu=\int\limits_{0}^{\infty}\int_{X}\mathbb{1}_{\{(t,\xi)|t<f(\xi)\}}d \mu dt=\int\limits_{0}^{\infty}\mu(\{\xi|f(\xi)>t\})dt\]
**Lemma 5.2**.: _The following estimates are satisfied uniformly in \(g,\xi\):_
\[\|P_{g}\|_{1}\asymp e^{-\frac{h|g|}{2}}(1+|g|)\]
\[\widetilde{P}_{g}\asymp\frac{e^{h(g,\xi)}}{1+|g|}\]
Proof.: By definition \(P_{g}(\xi)\asymp e^{h((g,\xi)-\frac{|g|}{2})}\). Let \(C\) be the constant from lemma 3.9. If \(|g|<C\) then \(\|P_{g}\|_{1}\) is bounded by a constant. Assume \(|g|\geq C\). By lemma 3.9, \(\mu(\{\xi\in\partial G|(g,\xi)>\frac{ln(t)}{h}\})\asymp\frac{1}{t}\) whenever \(\frac{ln(t)}{h}<|g|-C\). We get:
\[e^{\frac{h|g|}{2}}\int_{\partial G}P_{g}(\xi)d\mu \asymp\int_{\partial G}e^{h(g,\xi)}d\mu\] \[=\int_{0}^{\infty}\mu(\{\xi\in\partial G|(g,\xi)>\frac{ln(t)}{h}\})dt\] \[\asymp 1+\int_{1}^{e^{h(|g|-C)}}\mu(\{\xi\in\partial G|(g,\xi)>\frac{ln(t)}{h}\})dt\] \[\asymp 1+\int_{1}^{e^{h(|g|-C)}}\frac{dt}{t}\] \[=1+h(|g|-C)\] \[\asymp 1+|g|\]
Which gives the first estimate. The second estimate now follows from the first and from \(P_{g}(\xi)\asymp e^{h((g,\xi)-\frac{|g|}{2})}\).
**Lemma 5.3**.: _For every \(R\),\(\alpha\) and for almost every \(\xi\in\partial G\)_
\[\int_{A_{R}(\alpha)}\widetilde{P}_{g}(\xi)d\lambda(g)\prec_{\alpha}e^{hR}\]
Proof.: Using theorem 4.1, lemma 4.2 and the previous lemma we see that:
\[(1+R)\int_{A_{R}(\alpha)}\widetilde{P}_{g}(\xi)d\lambda\] \[\asymp_{\alpha}\int_{A_{R}(\alpha)}e^{h(g,\xi)}d\lambda\] \[=\int_{0}^{\infty}\lambda\{g\in A_{R}(\alpha)|(g,\xi)>\frac{ln(t)}{h}\}dt\] \[\asymp_{\alpha}e^{hR}+\int_{1}^{e^{h(R+\alpha)}}\lambda\{g\in A_{R}(\alpha)|(g,\xi)>\frac{ln(t)}{h}\}dt\] \[\prec_{\alpha}e^{hR}+\int_{1}^{e^{h(R+\alpha)}}e^{h(R-\frac{ln(t)}{h})}dt\] \[=e^{hR}(1+h(R+\alpha))\]
Giving the required estimate.
We now obtain estimates on matrix coefficients of the representation \(\pi\). Denote the space of \(d_{\varepsilon}\)-Lipschitz functions on \(\partial G\) by \(Lip(\partial G)\), and for \(\phi\in Lip(\partial G)\) denote the Lipschitz constant by \(L(\phi)\). \(Lip(\partial G)\) is dense in \(L^{2}(\mu)\). To see this notice that by the Lebesgue differentiation theorem, which holds for Ahlfors regular measures by [16, Theorem 1.8], every non-zero \(L^{2}\) function has non-zero inner product with the characteristic function of some ball. Thus characteristic functions of balls span a dense subspace of \(L^{2}(\mu)\). Since these can be approximated by Lipschitz functions, \(Lip(\partial G)\) is dense in \(L^{2}(\mu)\).
**Lemma 5.4**.: _Let \(g\in G\). For any \(\phi,\psi\in Lip(\partial G)\) we have:_
\[|\langle\widetilde{\pi}(g)\phi,\psi\rangle-\phi(\check{g})\overline{\psi( \hat{g})}|\prec\frac{L(\phi)\|\psi\|_{\infty}+L(\psi)\|\phi\|_{\infty}}{(1+|g |)^{\frac{1}{D}}}\]
Proof.: By the choice of normalization \(\langle\widetilde{\pi}(g)1,1\rangle=1\). Therefore:
\[|\langle\widetilde{\pi}(g)\phi,\psi\rangle-\phi(\check{g}) \overline{\psi(\hat{g})}|\] \[= |\langle\widetilde{\pi}(g)\phi,\psi\rangle-\langle\widetilde{\pi} (g)\phi(\check{g}),\psi(\hat{g})\rangle|\] \[\leq |\langle\widetilde{\pi}(g)\phi,\psi-\psi(\hat{g})\rangle|+| \langle\widetilde{\pi}(g)(\phi-\phi(\check{g})),\psi(\hat{g})\rangle|\] \[= |\langle\widetilde{\pi}(g)\phi,\psi-\psi(\hat{g})\rangle|+| \langle\phi-\phi(\check{g}),\widetilde{\pi}(g^{-1})\psi(\hat{g})\rangle|\]
We estimate the first term; the second term can be estimated similarly after replacing \(g\) by \(g^{-1}\). Let \(\rho>0\) and consider the ball \(B_{\rho}(\hat{g})\). We will decompose the integral over \(B_{\rho}(\hat{g})\) and the complement.
\[|\langle\widetilde{\pi}(g)\phi,\psi-\psi(\hat{g})\rangle|\] \[\leq \int_{\partial G}\widetilde{P}_{g}(\xi)|\phi(g^{-1}\xi)||\psi(\xi )-\psi(\hat{g})|d\mu(\xi)\] \[= \int_{B_{\rho}(\hat{g})}\widetilde{P}_{g}(\xi)|\phi(g^{-1}\xi)|| \psi(\xi)-\psi(\hat{g})|d\mu(\xi)+\] \[\int_{\partial G\backslash B_{\rho}(\hat{g})}\widetilde{P}_{g}( \xi)|\phi(g^{-1}\xi)||\psi(\xi)-\psi(\hat{g})|d\mu(\xi)\]
For \(\xi\in B_{\rho}(\hat{g})\), \(|\psi(\xi)-\psi(\hat{g})|\leq L(\psi)\rho\) so the first integral is bounded by \(\|\phi\|_{\infty}L(\psi)\rho\). In order to estimate the second integral recall that for any \(\xi\), \((\hat{g},\xi)\geq(g,\xi)\), so that \(\widetilde{P}_{g}\asymp\frac{e^{h(g,\xi)}}{1+|g|}\prec\frac{e^{h(\hat{g},\xi)}}{1+|g|}\asymp\frac{d_{\varepsilon}(\hat{g},\xi)^{-D}}{1+|g|}\). Thus (using the assumption \(D>1\)):
\[\int_{\partial G\backslash B_{\rho}(\hat{g})}\widetilde{P}_{g}( \xi)|\phi(g^{-1}\xi)||\psi(\xi)-\psi(\hat{g})|d\mu(\xi)\] \[\prec \int_{\partial G\backslash B_{\rho}(\hat{g})}\frac{\|\phi\|_{ \infty}L(\psi)d_{\varepsilon}(\hat{g},\xi)^{1-D}}{1+|g|}d\mu(\xi)\] \[\leq \int_{\partial G\backslash B_{\rho}(\hat{g})}\frac{\|\phi\|_{ \infty}L(\psi)\rho^{1-D}}{1+|g|}d\mu(\xi)\leq\frac{\|\phi\|_{\infty}L(\psi) \rho^{1-D}}{1+|g|}\]
All in all we get a bound on the first term proportional to \(\|\phi\|_{\infty}L(\psi)(\rho+\frac{\rho^{1-D}}{1+|g|})\) and similarly a bound on the second term proportional to \(\|\psi\|_{\infty}L(\phi)(\rho+\frac{\rho^{1-D}}{1+|g|})\). Finally:
\[|\langle\widetilde{\pi}(g)\phi,\psi\rangle-\phi(\check{g})\overline{\psi(\hat{g})}|\prec(\|\phi\|_{\infty}L(\psi)+\|\psi\|_{\infty}L(\phi))(\rho+\frac{\rho^{1-D}}{1+|g|})\]
One can now differentiate with respect to \(\rho\) in order to find the best bound. To obtain the claim it is enough to take \(\rho=(1+|g|)^{-\frac{1}{D}}\).
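For concreteness, the suggested value balances the two terms in the bound exactly:
\[\rho+\frac{\rho^{1-D}}{1+|g|}\;\Big|_{\rho=(1+|g|)^{-\frac{1}{D}}}=(1+|g|)^{-\frac{1}{D}}+(1+|g|)^{\frac{D-1}{D}-1}=2(1+|g|)^{-\frac{1}{D}}\]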
### Kernel operators
Given a function \(K\in L^{\infty}(\partial^{2}G)\) we can define a bounded operator \(T_{K}\) on \(L^{2}(\mu)\) by \(T_{K}f(\xi)=\int K(\xi,\eta)f(\eta)d\mu(\eta)\). These operators are called **kernel operators**. The key theorem is that kernel operators with non-negative kernel \(K\) are in the positive cone of \(\pi\). This will give us a rich enough family of operators to show the representation is irreducible.
Assume now that \(G\) is unimodular so lemma 4.3 holds, and fix a positive kernel \(K\geq 0\) in \(L^{\infty}(\partial^{2}G)\). Given \(R\), consider a family \(F\subset A_{R}(\alpha)\) as in lemma 3.6, for some fixed large enough \(\alpha\). Fix \(\tau\) as in lemma 4.3 and denote \(B=B_{\tau}(1)\). We partition \(\partial^{2}G\) as follows: choose a linear order on \(F^{2}\); the sets \(\Sigma(g)\times\Sigma(h)\), for \(g,h\in F\), cover \(\partial^{2}G\). Using the order on \(F^{2}\) we define \(V_{g,h}=\Sigma(g)\times\Sigma(h)\backslash\bigcup_{(s,t)<(g,h)}\Sigma(s)\times\Sigma(t)\). After discarding empty sets \(\{V_{g,h}|(g,h)\in F^{2}\}\) is a partition of \(\partial^{2}G\). Denote \(U_{g,h}=\{k\in gBh^{-1}\,|\,|k|>|g|+|h|-3\tau\}\); by lemma 4.3 and since \(G\) is unimodular, \(\lambda(U_{g,h})\asymp 1\). Note that \(U_{g,h}\subseteq A_{2R}(2\alpha+3\tau)\). Define:
\[w_{g,h}=\frac{\int_{V_{g,h}}Kd\mu^{2}}{\lambda(U_{g,h})}\]
Denote also \(w(k)=\sum_{F^{2}}w_{g,h}1_{U_{g,h}}(k)\) and \(S_{R,g,h}=\int_{U_{g,h}}\widetilde{\pi}(k)d\lambda(k)\). Finally define:
\[S_{R}=\int_{G}w(k)\widetilde{\pi}(k)d\lambda(k)=\sum_{F^{2}}w_{g,h}\int_{U_{g, h}}\widetilde{\pi}(k)d\lambda(k)=\sum_{F^{2}}w_{g,h}S_{R,g,h}\]
We now prove that if \(G\) is a non-elementary unimodular locally compact hyperbolic group then \(S_{R}\underset{R\rightarrow\infty}{\longrightarrow}T_{K}\) in the weak operator topology. This follows easily from the next two lemmas.
**Lemma 5.5**.: _If \(G\) is a non-elementary unimodular locally compact hyperbolic group and \(\phi,\psi\in Lip(\partial G)\) then \(\langle S_{R}\phi,\psi\rangle\underset{R\rightarrow\infty}{\longrightarrow} \langle T_{K}\phi,\psi\rangle\)._
Proof.: If \(k\in U_{g,h}\) then \(|k|\approx|g|+|h|\); since \(d(g,k)\lesssim|h|\) we get that \((g,k)\approx|g|\). Thus we have \((\hat{g},\hat{k})\gtrsim min\{(\hat{g},g),(g,k),(k,\hat{k})\}\approx|g|\approx R\) so \(d_{\varepsilon}(\hat{g},\hat{k})\prec e^{-\varepsilon R}\). Similarly \(d_{\varepsilon}(\hat{h},\check{k})\prec e^{-\varepsilon R}\). Therefore \(|\phi(\hat{h})\overline{\psi(\hat{g})}-\phi(\check{k})\overline{\psi(\hat{k})}|\prec(L(\phi)\|\psi\|_{\infty}+L(\psi)\|\phi\|_{\infty})e^{-\varepsilon R}\), so by lemma 5.4:
\[|\langle\widetilde{\pi}(k)\phi,\psi\rangle-\phi(\hat{h})\overline{\psi(\hat{g})}|\] \[\leq |\langle\widetilde{\pi}(k)\phi,\psi\rangle-\phi(\check{k})\overline{\psi(\hat{k})}|+|\phi(\check{k})\overline{\psi(\hat{k})}-\phi(\hat{h})\overline{\psi(\hat{g})}|\] \[\prec (L(\phi)\|\psi\|_{\infty}+L(\psi)\|\phi\|_{\infty})\left(\frac{1}{(1+R)^{\frac{1}{D}}}+e^{-\varepsilon R}\right)\]
As a result we get:
\[\left|\frac{1}{\lambda(U_{g,h})}\left\langle S_{R,g,h}\phi,\psi\right\rangle-\phi (\hat{h})\overline{\psi(\hat{g})}\right|\prec(L(\phi)\|\psi\|_{\infty}+L(\psi) \|\phi\|_{\infty})\left(\frac{1}{(1+R)^{\frac{1}{D}}}+e^{-\varepsilon R}\right)\]
In addition, if \(\xi,\eta\in V_{g,h}\) then \(d_{\varepsilon}(\xi,\hat{g}),d_{\varepsilon}(\eta,\hat{h})\prec e^{- \varepsilon R}\) so \(|\phi(\hat{h})\overline{\psi(\hat{g})}-\phi(\eta)\overline{\psi(\xi)}|\prec(L (\phi)\|\psi\|_{\infty}+L(\psi)\|\phi\|_{\infty})e^{-\varepsilon R}\) and thus:
\[\int_{V_{g,h}}K(\xi,\eta)|\phi(\eta)\overline{\psi(\xi)}-\phi(\hat {h})\overline{\psi(\hat{g})}|d\mu^{2}(\eta,\xi)\] \[\prec \int_{V_{g,h}}K(\xi,\eta)d\mu^{2}(L(\phi)\|\psi\|_{\infty}+L(\psi )\|\phi\|_{\infty})e^{-\varepsilon R}\]
Finally, using the previous two estimates:
\[|\langle S_{R}\phi,\psi\rangle-\langle T_{K}\phi,\psi\rangle|\] \[\leq \sum_{F^{2}}\left|w_{g,h}\langle S_{R,g,h}\phi,\psi\rangle-\int_{ V_{g,h}}K(\xi,\eta)\phi(\eta)\overline{\psi(\xi)}d\mu^{2}(\xi,\eta)\right|\] \[\leq \sum_{F^{2}}\left(\int_{V_{g,h}}Kd\mu^{2}\right)\left|\frac{1}{ \lambda(U_{g,h})}\langle S_{R,g,h}\phi,\psi\rangle-\phi(\hat{h})\overline{ \psi(\hat{g})}\right|\] \[+ \sum_{F^{2}}\int_{V_{g,h}}K(\xi,\eta)|\phi(\hat{h})\overline{\psi( \hat{g})}-\phi(\eta)\overline{\psi(\xi)}|d\mu^{2}(\eta,\xi)\] \[\prec \sum_{F^{2}}\left[\left(\int_{V_{g,h}}Kd\mu^{2}\right)(L(\phi)\| \psi\|_{\infty}+L(\psi)\|\phi\|_{\infty})\left(\frac{1}{(1+R)^{\frac{1}{D}}}+2 e^{-\varepsilon R}\right)\right]\] \[= \|K\|_{1}(L(\phi)\|\psi\|_{\infty}+L(\psi)\|\phi\|_{\infty})\left( \frac{1}{(1+R)^{\frac{1}{D}}}+2e^{-\varepsilon R}\right)\]
The right hand side tends to \(0\) as \(R\to\infty\) as needed.
**Lemma 5.6**.: _If \(G\) is a non-elementary unimodular locally compact hyperbolic group then the norms \(\|S_{R}\|_{op}\) are bounded uniformly in \(R\)._
Proof.: We first show \(w(k)\prec e^{-2hR}\). For every \((g,h)\in F^{2}\), \(V_{g,h}\subseteq\Sigma(g)\times\Sigma(h)\), so by lemma 3.4, \(\int_{V_{g,h}}Kd\mu^{2}\prec\|K\|_{\infty}e^{-2hR}\prec e^{-2hR}\). Since \(\lambda(U_{g,h})\asymp 1\) we get that \(w_{g,h}\prec e^{-2hR}\). Now all we need to show is that the \(U_{g,h}\) are disjoint. Indeed if \(U_{g,h}\cap U_{g^{\prime},h^{\prime}}\neq\phi\) there exist \(s\in B_{\tau}(g),s^{\prime}\in B_{\tau}(g^{\prime})\) such that \(|sh^{-1}|\geq|s|+|h|-4\tau\), \(|s^{\prime}h^{\prime-1}|\geq|s^{\prime}|+|h^{\prime}|-4\tau\) and \(sh^{-1}=s^{\prime}h^{\prime-1}\). \(\tau\) is a fixed constant so \((sh^{-1},s)\gtrsim|s|\approx R\) and \((s^{\prime}h^{\prime-1},s^{\prime})\gtrsim|s^{\prime}|\approx R\). Since \(|g|,|g^{\prime}|,|s|,|s^{\prime}|\approx R\) and \(d(g,s),d(g^{\prime},s^{\prime})\approx 0\), we get \((g,s),(g^{\prime},s^{\prime})\approx R\). Thus using \(sh^{-1}=s^{\prime}h^{\prime-1}\), \((g,g^{\prime})\gtrsim min\{(g,s),(s,sh^{-1}),(s^{\prime}h^{\prime-1},s^{\prime}),(s^{\prime},g^{\prime})\}\approx R\). Since \(|g|,|g^{\prime}|\approx R\) we get that \(d(g,g^{\prime})\approx 0\). Taking inverses we similarly see that \(d(h,h^{\prime})\approx 0\). As a result there exists a constant \(C\) depending only on \((G,d)\) such that if \(U_{g,h}\cap U_{g^{\prime},h^{\prime}}\neq\phi\) then \(d(g,g^{\prime}),d(h,h^{\prime})<C\). By the definition of \(F\) (and our assumptions on the choice of \(\sigma\) defining \(\Sigma(g)=\Sigma(g,\sigma)\)) we know that for this specific \(C\), lemma 3.6 holds and thus since
\(d(g,g^{\prime}),d(h,h^{\prime})<C\) we conclude that \(g=g^{\prime}\) and \(h=h^{\prime}\) as needed. Note that there is no circular reasoning in the proof since the constant \(C\) which we obtained depends only on \((G,d)\) so our choice of the parameter \(\sigma\) is independent of \(R\).
Using \(w(k)\prec e^{-2hR}\) and the fact that \(w\) is supported in \(A_{2R}(2\alpha+3\tau)\), it follows from lemma 5.3 that \(\|S_{R}\mathbb{1}_{\partial G}\|_{\infty}\prec 1\) because:
\[S_{R}\mathbb{1}_{\partial G}=\int w(k)\widetilde{\pi}(k)\mathbb{1}_{\partial G}d\lambda(k)=\int w(k)\widetilde{P}_{k}d\lambda(k)\prec\int_{A_{2R}(2\alpha+3\tau)}e^{-2hR}\widetilde{P}_{k}d\lambda(k)\prec 1\]
Since \(G\) is unimodular,
\[S_{R}^{*}=\int w(g)\widetilde{\pi}(g^{-1})d\lambda(g)=\int w(g^{-1})\widetilde{\pi}(g)d\lambda(g)\]
(otherwise we would need right Haar measure). Using this formula one can now similarly show that \(\|S_{R}^{*}\mathbb{1}_{\partial G}\|_{\infty}\prec 1\).
Let \(\phi,\psi\in L^{4}(\mu)\) and consider the measure \(d\alpha=w(g)d\lambda(g)\). Using Cauchy-Schwarz and our estimates on \(\|S_{R}\mathbb{1}_{\partial G}\|_{\infty},\|S_{R}^{*}\mathbb{1}_{\partial G}\|_{\infty}\) we have:
\[|\langle S_{R}\phi,\psi\rangle|^{2}=\left|\int_{G\times\partial G}\widetilde{P}_{g}(\xi)\phi(g^{-1}\xi)\overline{\psi(\xi)}d\alpha\times\mu(g,\xi)\right|^{2}\] \[\leq \left|\int_{G\times\partial G}\widetilde{P}_{g}(\xi)|\phi(g^{-1}\xi)|^{2}d\alpha\times\mu(g,\xi)\right|\left|\int_{G\times\partial G}\widetilde{P}_{g}(\xi)|\psi(\xi)|^{2}d\alpha\times\mu(g,\xi)\right|\] \[= |\langle S_{R}|\phi|^{2},\mathbb{1}_{\partial G}\rangle||\langle S_{R}\mathbb{1}_{\partial G},|\psi|^{2}\rangle|=|\langle|\phi|^{2},S_{R}^{*}\mathbb{1}_{\partial G}\rangle||\langle S_{R}\mathbb{1}_{\partial G},|\psi|^{2}\rangle|\] \[\leq \|S_{R}^{*}\mathbb{1}_{\partial G}\|_{\infty}\|S_{R}\mathbb{1}_{\partial G}\|_{\infty}\|\phi^{2}\|_{1}\|\psi^{2}\|_{1}\prec\|\phi\|_{2}^{2}\|\psi\|_{2}^{2}\]
The claim now follows by density of \(L^{4}(\mu)\) in \(L^{2}(\mu)\).
**Theorem 5.7**.: _Let \(G\) be a unimodular, non-elementary, locally compact hyperbolic group. Let \(d\in\mathcal{D}(G)\) and let \(\mu\), \(\pi\) be the corresponding measure and boundary representation. The kernel operators with non-negative kernels are in the positive cone of \(\pi\)._
Proof.: Since the operators \(S_{R}\) have uniformly bounded norms, in order to check convergence in the weak operator topology it is enough to check that \(\langle S_{R}\phi,\psi\rangle\underset{R\to\infty}{\longrightarrow}\langle T_ {K}\phi,\psi\rangle\) for \(\phi,\psi\) in a dense subset of \(L^{2}(\mu)\). When \(K\geq 0\) we saw that this convergence holds for \(\phi,\psi\in Lip(\partial G)\).
We now prove one more lemma we will need later.
**Lemma 5.8**.: _For any Borel set \(E\subseteq\partial G\) the projection \(P_{E}\) on to \(L^{2}(E)\) is in the positive cone of \(\pi\)._
Proof.: Consider the kernels \(K_{\rho}(\xi,\eta)=\frac{1}{\mu(B_{\rho}(\xi))}\mathbb{1}_{\{(\xi,\eta)|\xi\in E,d(\xi,\eta)<\rho\}}\) with corresponding kernel operators \(T_{\rho}\), which are in the positive cone. We claim that \(T_{\rho}\to P_{E}\) in the weak operator topology. As before we check convergence on Lipschitz functions first. Let \(\phi,\psi\in Lip(\partial G)\) and \(\xi\in E\) (for \(\xi\notin E\) both \(T_{\rho}\phi(\xi)\) and \(P_{E}\phi(\xi)\) vanish):
\[\left|\int K_{\rho}(\xi,\eta)\phi(\eta)d\mu(\eta)-P_{E}\phi(\xi)\right|\] \[= \frac{1}{\mu(B_{\rho}(\xi))}\left|\int_{B_{\rho}(\xi)}(\phi(\eta)- \phi(\xi))d\mu(\eta)\right|\] \[\leq \frac{1}{\mu(B_{\rho}(\xi))}\int_{B_{\rho}(\xi)}L(\phi)\rho d\mu \leq L(\phi)\rho\]
Using this estimate we see that:
\[|\langle(T_{\rho}-P_{E})\phi,\psi\rangle|\leq\|\psi\|_{\infty}L(\phi)\rho \underset{\rho\to 0}{\longrightarrow}0\]
Now since \(Lip(\partial G)\) is dense in \(L^{2}(\mu)\) all that remains is to check that \(\|T_{\rho}\|_{op}\) are bounded independently of \(\rho\). For any non-negative kernel \(K\) it follows from the Cauchy-Schwarz inequality on \((\partial G^{2},\mu^{2})\) that for any \(\phi,\psi\in L^{2}(\mu)\)
\[\langle T_{K}\phi,\psi\rangle\leq\|\sqrt{K(\xi,\eta)}\phi(\eta)\|_{2}\|\sqrt{K(\xi,\eta)}\psi(\xi)\|_{2}\leq\left\|\int K(\xi,\eta)d\mu(\xi)\right\|_{\infty}^{\frac{1}{2}}\left\|\int K(\xi,\eta)d\mu(\eta)\right\|_{\infty}^{\frac{1}{2}}\|\phi\|_{2}\|\psi\|_{2}\]
But for \(K=K_{\rho}\), using Ahlfors regularity of \(\mu\), it is easily seen that:
\[\int K_{\rho}(\xi,\eta)d\mu(\eta)\leq 1,\qquad\int K_{\rho}(\xi,\eta)d\mu(\xi)\prec 1\]
So \(\|T_{\rho}\|_{op}\prec 1\) as needed.
## 6. Irreducibility of Boundary Representations
We now prove irreducibility of the representation \(\pi\).
**Lemma 6.1**.: _Let \(\pi\) be a representation of \(G\). If the weak operator closure of \(\pi(L^{1}(G))\) contains a projection onto a cyclic vector then \(\pi\) is irreducible._
Proof.: We show that the commutant of \(\pi\) is trivial; the result then follows by Schur's lemma. Let \(P\) be a projection onto a cyclic vector \(v\). If \(P\) is in the weak operator closure of \(\pi(L^{1}(G))\) then so is \(\pi(g)P\pi(g)^{-1}\), which is the projection onto \(\pi(g)v\). So if \(T\) is an operator commuting with \(\pi(L^{1}(G))\) then \(\pi(g)v\) is an eigenvector of \(T\) for every \(g\in G\). Since the vectors \(\pi(g)v\) span a dense subspace we conclude that \(T\) is a scalar operator.
If G is a unimodular non-elementary locally compact hyperbolic group and \(\pi\) is the boundary representation corresponding to some \(d\in\mathcal{D}(G)\) then all the kernel operators are in the weak operator closure of \(\pi(L^{1}(G))\). Let \(\phi\in L^{\infty}(\mu)\) and consider the kernel operator \(T_{K}\) corresponding to the kernel \(K(\xi,\eta)=\phi(\xi)\). Note that \(T_{K}(1_{\partial G})=\phi\). Therefore since \(L^{\infty}(\mu)\) is dense in \(L^{2}(\mu)\), \(1_{\partial G}\) is cyclic. Taking \(\phi=1_{\partial G}\) we see that the projection on \(1_{\partial G}\) is in the weak operator closure of \(\pi(L^{1}(G))\). Thus we get by the lemma:
**Theorem 6.2**.: _Let \(G\) be a unimodular, non-elementary, hyperbolic locally compact group. Let \(d\in\mathcal{D}(G)\) and let \(\mu,\pi\) be the corresponding Patterson-Sullivan measure and boundary representation. Then \(\pi\) is irreducible._
This finishes the proof of theorem 1.1.
We remark that unimodularity of \(G\) only entered the proof of irreducibility in two places: lemma 4.3 and lemma 5.6, although the entire construction of the previous section would not work without these two lemmas.
## 7. Rough Equivalence of Metrics
This section is dedicated to the proof that two metrics \(d_{1},d_{2}\in\mathcal{D}(G)\) are roughly similar if and only if the corresponding boundary representations are unitarily equivalent. The proof will use the fact that the action of \(G\) on \(\partial^{2}G\) is ergodic, which we will show later (theorem 10.5), independently of this section.
We fix throughout the section metrics \(d_{1},d_{2}\in\mathcal{D}(G)\). We denote objects corresponding to \(d_{i}\) by a subscript \(i\). For example the boundary representations will be \(\pi_{i}\) and the visual metrics will have parameters \(\epsilon_{i}\) and be denoted \(d_{\epsilon_{i}}\). From this point forward we add the assumption that \(G\) is second countable to the standing assumptions that \(G\) is non-elementary and unimodular.
**Lemma 7.1**.: _If the representations \(\pi_{1}\) and \(\pi_{2}\) are unitarily equivalent then there exists an almost everywhere defined measure class preserving isomorphism \(F:\partial G\to\partial G\) conjugating \(\pi_{1}\) to \(\pi_{2}\)._
Proof.: Denote by \(T:L^{2}(\mu_{1})\to L^{2}(\mu_{2})\) the (\(G\)-equivariant) unitary isomorphism. \(T\) induces an isomorphism of the von Neumann algebras of bounded operators on these Hilbert spaces \(\hat{T}:B(L^{2}(\mu_{1}))\to B(L^{2}(\mu_{2}))\) by \(\hat{T}(A)=TAT^{-1}\). We claim that \(\hat{T}\) restricts to an isomorphism between the von Neumann algebras \(L^{\infty}(\mu_{i})\) considered as multiplication operators inside \(B(L^{2}(\mu_{i}))\).
The projections in \(L^{\infty}(\mu_{i})\) can be characterized amongst the projections of \(B(L^{2}(\mu_{i}))\) as those \(P\) such that both \(P\) and \(I-P\) preserve the partial order on \(L^{2}(\mu_{i})\). Obviously all projections in \(L^{\infty}(\mu_{i})\) satisfy this. In the other direction if \(P\) and \(I-P\) both preserve the partial order on \(L^{2}(\mu_{i})\) then \(P\mathbb{1}_{\partial G},(I-P)\mathbb{1}_{\partial G}\) are positive orthogonal functions, so they must have disjoint supports. Since \(P\mathbb{1}_{\partial G}+(I-P)\mathbb{1}_{\partial G}=\mathbb{1}_{\partial G}\) we conclude that \(P\mathbb{1}_{\partial G}\) is the indicator function of some set \(E\). If \(\phi\in L^{\infty}(\mu_{i})\) is positive then \(P\phi\) is supported on \(E\) since \(0\leq P\phi<P\|\phi\|_{\infty}\mathbb{1}_{\partial G}=\|\phi\|_{\infty} \mathbb{1}_{E}\). Using linearity and density of \(L^{\infty}(\mu_{i})\) in \(L^{2}(\mu_{i})\) we deduce that \(Im(P)\subseteq L^{2}(E)\). Similarly \(Im(I-P)\subseteq L^{2}(E^{c})\), so since \(P,I-P\) are projections which sum up to \(I\) we deduce that \(Im(P)=L^{2}(E)\) as needed.
By lemma 5.8 we know that all the projections in \(L^{\infty}(\mu_{1})\) are in the positive cone of \(\pi_{1}\). Using the \(G\)-equivariance we see that \(\hat{T}\) takes the positive cone of \(\pi_{1}\) into the positive cone of \(\pi_{2}\). Since elements of the positive cone of \(\pi_{2}\) preserve the partial order on \(L^{2}(\mu_{2})\) we conclude that \(\hat{T}\) takes projections in \(L^{\infty}(\mu_{1})\) to projections in \(L^{\infty}(\mu_{2})\). Finally because the projections generate \(L^{\infty}(\mu_{1})\) we conclude that \(\hat{T}(L^{\infty}(\mu_{1}))\subseteq L^{\infty}(\mu_{2})\). Applying the same line of reasoning to \(\hat{T}^{-1}\) one gets the opposite inclusion, showing that \(\hat{T}\) restricts to a \(G\)-equivariant isomorphism of von Neumann algebras \(\hat{T}:L^{\infty}(\mu_{1})\to L^{\infty}(\mu_{2})\).
Since maps between commutative von Neumann algebras correspond bijectively to almost everywhere defined maps between the spectra we conclude that there exists a \(G\)-equivariant isomorphism \(F:(\partial G,\mu_{1})\to(\partial G,\mu_{2})\).
Our strategy now will be first to show that \(F\) agrees almost everywhere with a continuous map and then to show that \(F\) is the identity so that \(\mu_{1}\) and \(\mu_{2}\) are actually in the same measure class. In order to do this we will take a slight detour.
Denote by \(\partial^{2}G\) the space of distinct pairs of elements in \(\partial G\). It will be slightly more convenient for us to consider only distinct pairs but since the measure \(\mu\) is purely non-atomic this has no measure theoretic significance.
**Theorem 7.2**.: _For any Patterson-Sullivan measure \(\mu\), the measure space \((\partial^{2}G,\mu^{2})\) contains an invariant (infinite) measure \(m\) satisfying:_
\[dm(\xi,\eta)\asymp e^{2h(\xi,\eta)}d\mu^{2}(\xi,\eta)\asymp d_{\epsilon}(\xi, \eta)^{-2D}d\mu^{2}(\xi,\eta)\]
To prove this theorem we will need two lemmas.
**Lemma 7.3**.: _Let \(G\) be a second countable locally compact group acting continuously on a locally compact space \(X\). If \(G\) has a dense subgroup \(H\) preserving a (possibly infinite) Radon measure \(m\) then \(G\) preserves \(m\)._
Proof.: Let \(f:X\to\mathbb{R}\) be a continuous compactly supported function. Let \(g\in G\) and \(g_{n}\in H\) such that \(g_{n}\to g\). If \(U\) is some compact neighborhood of \(g\) then the functions \(f(g_{n}^{-1}x)\), for \(g_{n}\in U\), are all supported on some compact subset of \(X\). Since \(g_{n}\) preserve \(m\) we see by dominated convergence that:
\[\int f(x)dg_{*}m=\int f(g^{-1}x)dm=\lim_{n\to\infty}\int f(g_{n}^{-1}x)dm=\lim _{n\to\infty}\int f(x)d(g_{n})_{*}m=\int f(x)dm\]
Radon measures are defined by their values on continuous compactly supported functions so since \(f\) is arbitrary we conclude that \(g_{*}m=m\) as needed.
**Lemma 7.4**.: _Let \(G\) be a locally compact second countable group acting continuously on a locally compact space \(X\) and preserving the measure class of a Radon measure \(\nu\) such that \(\frac{dg_{*}\nu}{d\nu}\asymp 1\) uniformly in \(g\). There exists an invariant measure \(m\in[\nu]\) satisfying \(\frac{dm}{d\nu}\asymp 1\)._
Proof.: We start with the case of a countable group. Consider the function:
\[\alpha(x)=\sup_{h\in G}\frac{dh_{*}\nu}{d\nu}\]
Since \(G\) is countable \(\alpha\) is measurable and well defined almost everywhere. By the chain rule for Radon Nikodym derivatives:
\[\frac{d(gh)_{*}\nu}{d\nu}(x)=\frac{dh_{*}\nu}{d\nu}(g^{-1}x)\frac{dg_{*}\nu} {d\nu}(x)\]
Taking the supremum over \(h\in G\) we get that for every \(g\in G\):
\[\frac{dg_{*}\nu}{d\nu}(x)=\frac{\alpha(x)}{\alpha(g^{-1}x)}\]
If we define a measure \(m\) by \(dm=\alpha(x)d\nu\) then for any \(g\in G\):
\[\frac{dg_{*}m}{dm}(x)=\frac{dg_{*}m}{dg_{*}\nu}(x)\frac{dg_{*}\nu}{d\nu}(x) \frac{d\nu}{dm}(x)=\frac{\alpha(g^{-1}x)}{\alpha(x)}\frac{dg_{*}\nu}{d\nu}(x)=1\]
So \(m\) is invariant under \(G\). By assumption \(\alpha(x)\asymp 1\).
If now we assume that \(G\) is locally compact and second countable then \(G\) contains a dense countable subgroup \(H\). By the countable case there is an \(H\)-invariant measure on \(X\) satisfying the conditions of the theorem and by the previous lemma this measure is actually \(G\)-invariant as needed.
We are now ready to prove theorem 7.2.
proof of theorem 7.2.: Consider the measure \(d\nu=e^{2h(\xi,\eta)}d\mu^{2}\) on \(\partial^{2}G\). Since we are only considering distinct pairs in \(\partial G\) this measure is a Radon measure (it would not be a Radon measure on \(\partial G\times\partial G\)). The theorem will now follow from lemma 7.4 if we show \(\frac{dg_{*}\nu}{d\nu}\asymp 1\) independently of \(g\). Indeed:
\[ln\left(\frac{dg_{*}\nu}{d\nu}(\xi,\eta)\right)= ln\left(\frac{dg_{*}\nu}{dg_{*}\mu^{2}}(\xi,\eta)\frac{dg_{*}\mu^{2}} {d\mu^{2}}(\xi,\eta)\frac{d\mu^{2}}{d\nu}(\xi,\eta)\right)\] \[\approx 2h((g^{-1}\xi,g^{-1}\eta)+(g,\xi)+(g,\eta)-|g|-(\xi,\eta))\approx 0\]
Recall that given a metric space \((X,d)\) the cross ratio of four distinct points \(x,y,z,w\) is:
\[[x,y;z,w]=\frac{d(x,z)d(y,w)}{d(x,w)d(y,z)}\]
We will show in the following chapters (theorem 10.5) that the action of \(G\) on \((\partial^{2}G,\mu^{2})\) is ergodic. The proof is independent of the rest of this chapter, so we can use theorem 10.5 without running into any circular logic. An ergodic measure class cannot contain more than one invariant measure up to scalars. Therefore, denoting \(F^{2}(\xi,\eta)=(F(\xi),F(\eta))\), we conclude that \(F_{*}^{2}m_{1}=\beta m_{2}\) for some constant \(\beta>0\). As a result:
\[\frac{dF_{*}^{2}\mu_{1}^{2}}{d\mu_{2}^{2}}(\xi,\eta)\asymp\frac{dF_{*}^{2}\mu _{1}^{2}}{dF_{*}^{2}m_{1}}(\xi,\eta)\frac{dm_{2}}{d\mu_{2}^{2}}(\xi,\eta) \asymp d_{\epsilon_{1}}(F^{-1}(\xi),F^{-1}(\eta))^{2D_{1}}d_{\epsilon_{2}}( \xi,\eta)^{-2D_{2}}\]
One can now calculate the cross ratios directly and see that all the Radon-Nikodym derivatives cancel out, giving:
\[[F(\xi_{1}),F(\xi_{2});F(\eta_{1}),F(\eta_{2})]_{2}\asymp[\xi_{1},\xi_{2}; \eta_{1},\eta_{2}]_{1}^{\frac{D_{1}}{D_{2}}}\]
which holds on a subset \(E\) of full measure in \(\partial G^{4}\).
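To see the cancellation explicitly, note that \(F^{2}=F\times F\), so the derivative in the previous display factors over the two coordinates: writing \(r(\xi)=\frac{dF_{*}\mu_{1}}{d\mu_{2}}(F(\xi))\), the estimate reads \(d_{\epsilon_{2}}(F(\xi),F(\eta))^{2D_{2}}\asymp\frac{d_{\epsilon_{1}}(\xi,\eta)^{2D_{1}}}{r(\xi)r(\eta)}\). In the cross ratio each point appears once in the numerator and once in the denominator, so the factors \(r(\cdot)\) cancel:
\[[F(\xi_{1}),F(\xi_{2});F(\eta_{1}),F(\eta_{2})]_{2}^{2D_{2}}\asymp\frac{d_{\epsilon_{1}}(\xi_{1},\eta_{1})^{2D_{1}}\,d_{\epsilon_{1}}(\xi_{2},\eta_{2})^{2D_{1}}}{d_{\epsilon_{1}}(\xi_{1},\eta_{2})^{2D_{1}}\,d_{\epsilon_{1}}(\xi_{2},\eta_{1})^{2D_{1}}}=[\xi_{1},\xi_{2};\eta_{1},\eta_{2}]_{1}^{2D_{1}}\]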
**Lemma 7.5**.: _There exists a full measure subset of \(E^{\prime\prime}\subseteq\partial G\) such that for \(\xi,\eta\in E^{\prime\prime}\):_
\[d_{\epsilon_{2}}(F(\xi),F(\eta))\prec d_{\epsilon_{1}}(\xi,\eta)^{\frac{D_{1} }{D_{2}}}\]
Proof.: Let \((\xi_{2},\eta_{2})\) be such that the fiber \(\{(\xi_{1},\eta_{1})|(\xi_{1},\xi_{2},\eta_{1},\eta_{2})\in E\}\) has full measure in \(\partial G\times\partial G\) and let \(\rho>0\). Thus the estimate on the cross ratios implies that for a full measure subset of \((\xi_{1},\eta_{1})\in B_{\rho}(\eta_{2})^{c}\times B_{\rho}(\xi_{2})^{c}\):
\[d_{\epsilon_{2}}(F(\xi_{1}),F(\eta_{1}))\prec_{\xi_{2},\eta_{2},\rho}d_{ \epsilon_{1}}(\xi_{1},\eta_{1})^{\frac{D_{1}}{D_{2}}}\]
By Fubini's theorem \((\xi_{2},\eta_{2})\) can be chosen from a full measure subset which in particular is dense in \(\partial G\times\partial G\). By compactness, \(\partial G\times\partial G\) can be covered by finitely many sets of the form \(B_{\rho}(\eta_{2})^{c}\times B_{\rho}(\xi_{2})^{c}\) so that the above estimate actually holds uniformly over a full
measure subset of \(\partial G\times\partial G\). Since \(F\) is an almost everywhere defined isomorphism there is a full measure subset \(E^{\prime}\) of \(\partial G\times\partial G\) such that \(d_{\epsilon_{2}}(F(\xi),F(\eta))\prec d_{\epsilon_{1}}(\xi,\eta)^{\frac{D_{1}}{ D_{2}}}\).
Let \(E^{\prime\prime}\) be a full measure subset of \(\partial G\) such that for every point of \(E^{\prime\prime}\) the fiber of \(E^{\prime}\) over it has full measure. If \(\xi,\eta\in E^{\prime\prime}\) and \(\zeta\) is in the fiber over both \(\xi\) and \(\eta\) then:
\[d_{\epsilon_{2}}(F(\xi),F(\eta))\leq d_{\epsilon_{2}}(F(\xi),F(\zeta))+d_{ \epsilon_{2}}(F(\zeta),F(\eta))\prec d_{\epsilon_{1}}(\xi,\zeta)^{\frac{D_{1}}{ D_{2}}}+d_{\epsilon_{1}}(\zeta,\eta)^{\frac{D_{1}}{D_{2}}}\]
Since \(\zeta\) can be taken from a full measure subset we can let it converge to \(\eta\) and we get \(d_{\epsilon_{2}}(F(\xi),F(\eta))\prec d_{\epsilon_{1}}(\xi,\eta)^{\frac{D_{1}} {D_{2}}}\) for all \(\xi,\eta\in E^{\prime\prime}\) as needed.
As a result we see that \(F\) is uniformly continuous on \(E^{\prime\prime}\) and thus agrees almost everywhere with a uniformly continuous function \(H:\partial G\to\partial G\). Since \(F\) is almost everywhere \(G\)-equivariant we conclude that \(H\) is \(G\)-equivariant everywhere, so by lemma 2.16, \(H\) is trivial. Finally, using the symmetry between \(d_{1}\) and \(d_{2}\), for all \((\xi,\eta)\in\partial G\times\partial G\):
\[d_{\epsilon_{2}}(\xi,\eta)\asymp d_{\epsilon_{1}}(\xi,\eta)^{\frac{D_{1}}{D_{ 2}}}\]
Now we can prove the main theorem of this section:
**Theorem 7.6**.: _Let \(G\) be a second countable, non-elementary, unimodular, locally compact hyperbolic group and let \(d_{1},d_{2}\in\mathcal{D}(G)\). The following are equivalent:_
_(1) The metrics \(d_{1}\) and \(d_{2}\) are roughly similar._
_(2) The Patterson-Sullivan measures \(\mu_{1}\) and \(\mu_{2}\) are in the same measure class._
_(3) The boundary representations \(\pi_{1}\) and \(\pi_{2}\) are unitarily equivalent._
Proof.: If \(d_{2}\approx Ld_{1}\) are roughly similar then for a small enough visual parameter \(\epsilon\) (chosen for simplicity to be the same for both metrics), the visual metrics \(d_{L\epsilon}^{1}\) and \(d_{\epsilon}^{2}\) are bi-Lipschitz equivalent and thus produce Hausdorff measures in the same measure class, so by lemma 3.8, \(\mu_{1}\) and \(\mu_{2}\) are in the same measure class. As explained in the preliminaries this implies that \(\pi_{1}\) and \(\pi_{2}\) are equivalent.
For the implication \((3)\implies(1)\), since \(d_{\epsilon_{2}}(\xi,\eta)\asymp d_{\epsilon_{1}}(\xi,\eta)^{\frac{D_{1}}{D_ {2}}}\) the corresponding \(D_{i}\) -dimensional Hausdorff measures are in the same measure class with Radon-Nikodym derivatives bounded away from \(0\) and \(\infty\). By lemma 3.8 the same is true for \(\mu_{1}\) and \(\mu_{2}\). Therefore for any \(g\in G\):
\[e^{h_{1}(2(g,\xi)_{1}-|g|_{1})}\asymp e^{h_{2}(2(g,\xi)_{2}-|g|_{2})}\]
By lemma 2.13, after taking suprema over \(\xi\) we get that \(e^{h_{1}|g|_{1}}\asymp e^{h_{2}|g|_{2}}\) so:
\[|g|_{1}\approx\frac{h_{2}}{h_{1}}|g|_{2}\]
as needed.
We have now finished the proof of theorem 1.2, assuming theorem 10.5 holds.
## 8. Type I Hyperbolic Groups
One of the original motivations for generalizing Garncarek's work [15] to the locally compact setting was that such a generalization would strengthen the results of Caprace, Kalantar and Monod in [11]. This is pointed out explicitly in [11, Remark 5.6]. We describe the strengthened results here.
**Definition 8.1**.: A locally compact group \(G\) is of type I if any two weakly equivalent (see [4, Appendix F.1]) irreducible unitary representations are unitarily equivalent.
If \(G\) is a second countable totally disconnected unimodular hyperbolic locally compact group then it follows from theorem 1.1 that for any compact open subgroup \(U\) and any Cayley-Abels graph on \(G/U\) the Patterson-Sullivan representation corresponding to the induced (pseudo)metric on \(G\) is irreducible. Similarly by theorem 1.2 the identity on \(G/U\) is a rough similarity between two Cayley-Abels graphs if and only if the corresponding Patterson-Sullivan representations are unitarily equivalent. These are exactly conditions (G1) and (G2) mentioned in [11, Remark 5.6], thus it follows from [11, Remark 5.6] that [11, Theorem B],[11, Theorem K] hold in greater generality. Explicitly:
**Theorem 8.2**.: _Let \(G\) be a second countable unimodular hyperbolic locally compact group. If \(G\) is of type I then \(G\) admits a cocompact amenable subgroup._
**Theorem 8.3**.: _Let \(G\) be a non-amenable second countable unimodular hyperbolic locally compact group. Let \(\mu\) be a quasi invariant measure on \(\partial G\), \(\pi\) the corresponding Koopman representation and \(C^{*}_{\pi}(G)\) the image of the group \(C^{*}\)-algebra \(C^{*}(G)\) under \(\pi\). The following are equivalent:_
* \(C^{*}_{\pi}(G)\) _contains a non-zero CCR closed two sided ideal._
* \(C^{*}_{\pi}(G)\) _is GCR._
* \(C^{*}_{\pi}(G)\) _is CCR._
* \(C^{*}_{\pi}(G)\) _consists entirely of compact operators._
* \(G\) _has a cocompact amenable subgroup._
Note that the last item of the theorem is independent of \(\mu\).
It is an interesting question whether the unimodularity assumption in these theorems can be replaced by the weaker assumption of non-amenability. This would follow as above if one would show the conclusions of theorems 1.1 and 1.2 hold for non amenable hyperbolic groups.
## 9. The Geodesic Flow
We retain the assumptions from the last sections: \(G\) is second countable, locally compact, hyperbolic and unimodular. In the special case when \(G\) is a discrete group acting freely, properly and co-compactly on a negatively curved simply connected manifold \(M\) one can associate to this action the geodesic flow on the unit tangent bundle of the quotient \(T^{1}N=G\backslash T^{1}M\). In this restricted setting \(T^{1}M\) is canonically identified with the space of pointed geodesics in \(M\). By looking at the endpoints of each geodesic we see that this space is a line bundle over \(\partial^{2}G\). Measure theoretically we can identify this bundle with \(\partial^{2}G\times\mathbb{R}\), and the action of \(G\) on this space is given by a certain cocycle \(\tau:G\times\partial^{2}G\to\mathbb{R}\). This point of view can be generalized to our setting and will allow us to define a geodesic flow for general \((G,d)\). In the case where \(G\) is a discrete group this is done in [15, Appendix A.1] and [2, Section 3]. We will encounter some added measure theoretic difficulties. The two major complications are the following:
* If \(G\) is uncountable not every null set is contained in a \(G\)-invariant null set so one must take care to ignore only invariant null sets.
* If \(G\) is uncountable a fundamental domain for a non-singular action of \(G\) will typically be a null set (which is of course not \(G\)-invariant).
Define \(\sigma,\rho:G\times\partial G\rightarrow\mathbb{R}\) and \(\tau:G\times\partial^{2}G\rightarrow\mathbb{R}\) by:
\[\sigma(g,\xi)=2(g^{-1},\xi)-|g|\] \[\rho(g,\xi)=\frac{1}{h}ln\frac{dg_{*}^{-1}\mu}{d\mu}(\xi)\]
\[\tau(g,\xi,\eta)=\frac{\rho(g,\eta)-\rho(g,\xi)}{2}\approx(g^{-1},\eta)-(g^{- 1},\xi)\]
By the chain rule for Radon-Nikodym derivatives \(\rho\) and \(\tau\) are cocycles, i.e. they satisfy the cocycle equation:
\[c(gh,x)=c(g,hx)+c(h,x)\]
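For instance, for \(\rho\) the cocycle equation is just the chain rule for Radon-Nikodym derivatives: for \(g_{1},g_{2}\in G\) and almost every \(\xi\in\partial G\),
\[h\rho(g_{1}g_{2},\xi)=ln\frac{d(g_{2}^{-1}g_{1}^{-1})_{*}\mu}{d\mu}(\xi)=ln\frac{d(g_{1}^{-1})_{*}\mu}{d\mu}(g_{2}\xi)+ln\frac{d(g_{2}^{-1})_{*}\mu}{d\mu}(\xi)=h\rho(g_{1},g_{2}\xi)+h\rho(g_{2},\xi)\]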
By [22, Appendix B.9] we can assume that \(\rho\) and \(\tau\) are strict cocycles, i.e. that the cocycle equation is satisfied everywhere (as opposed to almost everywhere). \(\sigma\) is a strict almost cocycle in the sense that for every \(g,h\) and \(\xi\):
\[\sigma(gh,\xi)\approx\sigma(g,h\xi)+\sigma(h,\xi)\]
uniformly in \(g,h,\xi\).
Finally, by quasi-conformality of \(\mu\) it follows that for every \(g\in G\), for almost every \(\xi\in\partial G\):
\[\rho(g,\xi)\approx\sigma(g,\xi) \tag{3}\]
By appendix A.1, after replacing \(\rho\) by a strictly cohomologous cocycle we can assume that estimate 3 holds on a \(G\)-invariant full measure subset \(A\subseteq\partial G\). Abusing notation we will denote this new cocycle again by \(\rho\). We now get also that for all \(g\in G\) and \((\xi,\eta)\) in a full measure \(G\)-invariant set \(S\) (for example \(S=A^{2}\backslash\Delta A\)) we have:
\[\tau(g,\xi,\eta)\approx\frac{\sigma(g,\eta)-\sigma(g,\xi)}{2}\]
We will need the following lemma in the next section:
**Lemma 9.1**.: _For every \(M>0\) there exists a constant \(C_{M}\) such that if \((\xi,\eta)\in S\) is such that the Gromov products satisfy \((\xi,\eta),(g\xi,g\eta)<M\), then \(|\tau(g,\xi,\eta)-\sigma(g,\eta)|<C_{M}\)._
Proof.: Since \(\sigma(g,\eta)\approx\rho(g,\eta)\) we have that \(\tau(g,\xi,\eta)-\sigma(g,\eta)\approx-\frac{\rho(g,\eta)+\rho(g,\xi)}{2}\). Using the chain rule for Radon Nikodym derivatives:
\[1=\frac{dg_{*}^{-1}m}{dm}(\xi,\eta)=\frac{dm}{d\mu^{2}}(g\xi,g\eta)\frac{dg_ {*}^{-1}\mu^{2}}{d\mu^{2}}(\xi,\eta)\frac{d\mu^{2}}{dm}(\xi,\eta)\]
Taking the logarithm of both sides and recalling that \(\frac{dm}{d\mu^{2}}(\xi,\eta)\asymp e^{2h(\xi,\eta)}\) we see that \(|\frac{\rho(g,\eta)+\rho(g,\xi)}{2}|\approx|(\xi,\eta)-(g\xi,g\eta)|\leq 2M\) as needed.
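Explicitly, taking logarithms in the last display and using \(\frac{dm}{d\mu^{2}}\asymp e^{2h(\xi,\eta)}\) together with \(ln\frac{dg_{*}^{-1}\mu^{2}}{d\mu^{2}}(\xi,\eta)=h(\rho(g,\xi)+\rho(g,\eta))\) gives
\[0\approx 2h(g\xi,g\eta)+h(\rho(g,\xi)+\rho(g,\eta))-2h(\xi,\eta)\]
so that \(\frac{\rho(g,\xi)+\rho(g,\eta)}{2}\approx(\xi,\eta)-(g\xi,g\eta)\).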
Fix an ergodic p.m.p action of \(G\) on a standard probability space \((\Omega,\omega)\); we introduce this space to increase the generality of the next section, but it will have very little effect on the discussion and for our purposes one can take \(\Omega\) to be trivial. Recall the invariant measure \(m\) constructed on \(\partial^{2}G\). Denote the Lebesgue measure on \(\mathbb{R}\) by \(\ell\). Using the cocycle \(\tau\) we can define an (infinite) measure preserving action of \(G\) on the space \((S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\) by:
\[g(\xi,\eta,t,w)=(g\xi,g\eta,t+\tau(g,\xi,\eta),gw)\]
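The cocycle equation for \(\tau\) is exactly what makes this formula an action: for \(g_{1},g_{2}\in G\),
\[g_{1}(g_{2}(\xi,\eta,t,w))=(g_{1}g_{2}\xi,g_{1}g_{2}\eta,t+\tau(g_{2},\xi,\eta)+\tau(g_{1},g_{2}\xi,g_{2}\eta),g_{1}g_{2}w)=(g_{1}g_{2})(\xi,\eta,t,w)\]
Invariance of \(m\times\ell\times\omega\) follows since \(m\) and \(\omega\) are invariant and, for fixed \((\xi,\eta)\), the \(\mathbb{R}\) coordinate is only translated.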
This action commutes with the \(\mathbb{R}\)-flow defined by:
\[\Phi^{s}(\xi,\eta,t,w)=(\xi,\eta,t+s,w)\]
Denote:
\[D_{\theta,k}=\{(\xi,\eta,t,w)\in S\times(-k,k)\times\Omega|d_{\epsilon}(\xi,\eta )>\theta\}\]
We call sets contained in some \(D_{\theta,k}\) bounded. Being bounded is equivalent to having precompact projection in \(\partial^{2}G\times\mathbb{R}\). Bounded sets have finite \(m\times\ell\times\omega\) measure.
Similarly to the proofs of [15, Appendix A.2, A.3] one proves the following two lemmas:
**Lemma 9.2**.: _For any \(\theta,k>0\), \(\{g\in G|gD_{\theta,k}\cap D_{\theta,k}\neq\phi\}\) is bounded in the metric \(d\) and precompact in \(G\)._
Proof.: Suppose that \(gD_{\theta,k}\cap D_{\theta,k}\neq\phi\). There exists \((\xi,\eta,t,w)\in D_{\theta,k}\) such that \((g\xi,g\eta,t+\tau(g,\xi,\eta),gw)\in D_{\theta,k}\). Therefore, \((\xi,\eta),(g\xi,g\eta)\approx 0\) and \(|(g^{-1},\eta)-(g^{-1},\xi)|\approx|\tau(g,\xi,\eta)|\approx 0\). Now \((\xi,\eta)\gtrsim\min\{(\xi,g^{-1}),(g^{-1},\eta)\}\) but \((\xi,\eta)\approx 0\) and \((g^{-1},\eta)\approx(g^{-1},\xi)\) so \((\xi,g^{-1})\approx(g^{-1},\eta)\approx 0\). Using lemma 2.12 we see that:
\[(g\xi,g\eta)\gtrsim\min\{(g\xi,g),(g,g\eta)\}\approx|g|-\max\{(g^{-1},\xi),(g^{-1},\eta)\}\approx|g|\]
But \((g\xi,g\eta)\approx 0\) so \(|g|\approx 0\) as needed.
**Lemma 9.3**.: _For large enough \(\theta\) and \(k\), \(D_{\theta,k}\) intersects every \(G\) orbit in \(S\times\mathbb{R}\times\Omega\)._
Proof.: Let \((\xi,\eta,t,w)\in S\times\mathbb{R}\times\Omega\) and let \(\gamma:(-\infty,\infty)\to G\) be a rough geodesic with \(\gamma(\infty)=\eta,\gamma(-\infty)=\xi\). For any \(r\in\mathbb{R}\) we have that \((\xi,\eta)_{\gamma(r)}\approx 0\) and thus \((\gamma(r)^{-1}\xi,\gamma(r)^{-1}\eta)\approx 0\). In addition:
\[\sigma(\gamma(r)^{-1},\eta) = 2(\gamma(r),\eta)-|\gamma(r)|\approx\liminf_{s\to\infty}\left(|\gamma(s)|-(s-r)\right)\] \[= r+\liminf_{s\to\infty}(|\gamma(s)|-s)=r+\sigma(\gamma(0)^{-1},\eta)\]
and similarly \(\sigma(\gamma(r)^{-1},\xi)\approx-r+\sigma(\gamma(0)^{-1},\xi)\) so:
\[\tau(\gamma(r)^{-1},\xi,\eta)\approx r+\tau(\gamma(0)^{-1},\xi,\eta)\]
Putting everything together we see that for \(r=-\tau(\gamma(0)^{-1},\xi,\eta)-t\): \((\gamma(r)^{-1}\xi,\gamma(r)^{-1}\eta)\approx 0\) and \(t+\tau(\gamma(r)^{-1},\xi,\eta)\approx 0\). Therefore \(\gamma(r)^{-1}(\xi,\eta,t,w)\) is in some \(D_{\theta,k}\) with \(\theta\) and \(k\) independent of \((\xi,\eta,t,w)\).
**Lemma 9.4**.: _If \(U\subset G\) is bounded then for any \(\theta\) and \(k\), \(UD_{\theta,k}\) is bounded._
Proof.: Given \(g\in G\) and \((\xi,\eta)\in\partial^{2}G\) we have that \(|\tau(g,\xi,\eta)|\approx|(g^{-1},\xi)-(g^{-1},\eta)|\lesssim 2|g|\) and \(|(g\xi,g\eta)-(\xi,\eta)|\approx|(\xi,\eta)_{g^{-1}}-(\xi,\eta)|\lesssim|g|\). Together with the boundedness of \(U\) this shows that for any \(\theta\) and \(k\) there exist \(\theta^{\prime}\) and \(k^{\prime}\) such that \(UD_{\theta,k}\subseteq D_{\theta^{\prime},k^{\prime}}\).
Intuitively one should think of lemma 9.2 and lemma 9.3 as saying that the \(G\) action on \(S\times\mathbb{R}\times\Omega\) is proper and co-compact, although formally this is a measurable action and not a continuous topological action. In the same spirit we have the following theorem:
**Theorem 9.5**.: _The \(G\) action on \(S\times\mathbb{R}\times\Omega\) admits a bounded Borel cross section, i.e. a bounded Borel subset \(\hat{X}\) intersecting each \(G\) orbit exactly once._
Proof.: By [10, Proposition 5.10], \(G\) has a maximal normal compact subgroup \(W\) and \(G/W\) is either a virtually connected rank one simple adjoint Lie group or \(G/W\) is totally disconnected.
In case \(G/W\) is connected the action of \(G\) on \(\partial G\) is transitive, so there is a unique invariant measure class on \(\partial G\) and \(\mu\) lies in it. Hence the cocycle \(\tau\) is independent of \(d\) up to cohomology and the actions on \(S\times\mathbb{R}\times\Omega\) given by different \(d\) are conjugate. For the specific (pseudo)metric pulled back from the corresponding symmetric space this action is simply the action on the unit tangent bundle of the symmetric space, which is known to be transitive, so any point is a cross section.
In case \(G/W\) is totally disconnected, by van Dantzig's theorem \(G\) admits a compact open subgroup \(U\). Since \(U\) is compact it is bounded in the metric \(d\).
By [22, Corollary 2.1.21] and [22, Appendix A.7] the \(U\) action on \(S\times\mathbb{R}\times\Omega\) admits a Borel cross section \(\Delta\). By lemma 9.3 there exists some \(D_{\theta^{\prime},k^{\prime}}\) intersecting every \(G\) orbit, and by lemma 9.4 there exist \(\theta,k\) such that \(UD_{\theta^{\prime},k^{\prime}}\subseteq D_{\theta,k}\). Since every point has a unique point of \(\Delta\) in its \(U\) orbit, every \(G\) orbit intersects \(D_{\theta,k}\cap\Delta\) non-trivially. By lemma 9.2 there exists a compact set \(F\subset G\) such that if \(g\notin F\), \(gD_{\theta,k}\cap D_{\theta,k}=\phi\). We will importantly use the fact that this intersection is empty and not only null; this is the reason we are restricting ourselves from \(\partial^{2}G\) to \(S\). Enlarging \(F\) we can assume without loss of generality that \(F=\bigcup_{i}Ug_{i}\) is a finite union of right cosets of \(U\). If \(x\in S\times\mathbb{R}\times\Omega\) and \(g\in G\) are such that \(x,gx\in D_{\theta,k}\cap\Delta\) then \(g\in Ug_{i}\) for some \(i\), but \(|Ug_{i}x\cap\Delta|=1\). Therefore the intersection of each \(G\) orbit with \(D_{\theta,k}\cap\Delta\) is finite and non-empty.
To sum up, every \(G\) orbit intersects \(D_{\theta,k}\cap\Delta\) and the restriction of the \(G\) orbit equivalence relation to \(D_{\theta,k}\cap\Delta\) has finite equivalence classes. Using the finite Borel selection theorem we deduce that there is a Borel subset \(\hat{X}\subseteq D_{\theta,k}\cap\Delta\) intersecting every \(G\) orbit exactly once as needed. This set is bounded since it is contained in \(D_{\theta,k}\).
We now get a Borel isomorphism between the space \(X=(S\times\mathbb{R}\times\Omega)//G\) of \(G\)-ergodic components and \(\hat{X}\) by restricting the projection \(p:S\times\mathbb{R}\times\Omega\to X\) to \(\hat{X}\). This identifies the orbit space of \(G\) with the space of ergodic components. Since the flow \(\Phi\) on \((S\times\mathbb{R}\times\Omega)\) commutes with \(G\) it descends to a flow \(\phi\) on \(X\) such that \(p\circ\Phi^{s}=\phi^{s}\circ p\).
Since \(G\) is unimodular and for any compact \(U\subseteq G\), \(U\hat{X}\) is bounded, by theorems B.1 and B.2 there exists a unique measure \(\nu\) on \(X\) in the measure class of \(p_{*}(m\times\ell\times\omega)\) which is invariant under the flow \(\phi\) and such that for any \(f\in L^{1}(S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\):
\[\int f(z)dm\times\ell\times\omega(z)=\int\int f(gz)d\lambda(g)d\nu(p(z)) \tag{4}\]
(Note that the inner integral on the right hand side depends only on \(p(z)\) and not on \(z\) because \(G\) is unimodular.)
Since for any compact \(U\subset G\), \(U\hat{X}\) is bounded, it follows from equation 4 that \(\nu\) is a finite measure, and after re-scaling \(m\) we can assume without loss of generality that \(\nu\) is a probability measure. If we take \(\Omega\) to be trivial then we call the associated flow \((X,\nu,\phi)\) the **geodesic flow** of \((G,d)\).
## 10. Double Ergodicity
We keep all the notation and assumptions from the previous section. In this section we will prove that the action of \(G\) on \(\partial^{2}G\) is ergodic and even weak mixing, meaning that for any ergodic p.m.p action \(G\curvearrowright(\Omega,\omega)\) the diagonal action on \((\partial^{2}G\times\Omega,m\times\omega)\) is ergodic. We
will use a classic argument of E. Hopf and adapt the proof in [2, Section 4] to the locally compact setting. The author would like to thank Uri Bader and Alex Furman for sending him a forthcoming unpublished version of [2] on which our arguments are very closely based.
The space \((S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\) is equipped with the commuting measure preserving actions of \(G\) and the \(\mathbb{R}\)-flow \(\Phi\) discussed in the previous section. The space of \(\mathbb{R}\)-ergodic components is \((S\times\Omega,[m\times\omega])\) and in its canonical measure class it supports the \(G\)-invariant infinite measure \(m\times\omega\). The space \((X,[\nu])\) of \(G\)-ergodic components has the natural \(\mathbb{R}\)-flow \(\phi\) and contains the finite measure \(\nu\) in its canonical measure class. \(\nu\) is \(\phi\)-invariant and satisfies equation 4. After re-scaling \(m\) we can further assume that \(\nu\) is a probability measure. The \(\mathbb{R}\)-ergodic components of \((X,\nu)\), the \(G\)-ergodic components of \((S\times\Omega,m\times\omega)\) and the \(G\times\mathbb{R}\)-ergodic components of \((S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\) are all given by the same space \((Y,[\beta])\). In the measure class \([\beta]\) we choose the measure \(\beta\) which is the image of \(\nu\). We thus obtain the following commutative diagram:
(5) [commutative diagram relating \(S\times\mathbb{R}\times\Omega\), \(S\times\Omega\), \(X\) and \(Y\) via the maps \(p,q,u,v\) described below]
The maps \(p\) and \(v\) are the \(G\)-ergodic components maps, the maps \(q\) and \(u\) are the \(\mathbb{R}\)-ergodic components maps and the map \(u\circ p=v\circ q\) is the \(G\times\mathbb{R}\)-ergodic components map. The maps \(p\) and \(q\) are \(\mathbb{R}\)-equivariant and \(G\)-equivariant respectively. Ergodicity of \(G\) on \((S\times\Omega,m\times\omega)\), ergodicity of \(\mathbb{R}\) on \((X,\nu)\) and ergodicity of \(G\times\mathbb{R}\) on \((S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\) are all equivalent to \(\beta\) being supported on a single point.
In the previous section we constructed a bounded cross section \(\hat{X}\) for the \(G\) action on \((S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\). The map \(\bar{p}:X\to S\times\mathbb{R}\times\Omega\) describes the corresponding section of \(p\). Unlike the case of discrete \(G\) the measure \(\bar{p}_{*}\nu\) might be singular to \(m\times\ell\times\omega\). Given \(x\in X\) we denote
\[\bar{p}(x)=(x_{-},x_{+},t_{x},\omega_{x})\]
The maps \(p,q,u\) and \(v\) can be used to push forward finite signed measures which are absolutely continuous with respect to the given measure classes. By the Radon-Nikodym theorem the space of finite signed measures absolutely continuous with respect to a given measure is identified with the space of integrable functions. Under these isomorphisms we obtain the operators \(P,Q,U,V\) in the following commutative diagram:
(6) [commutative diagram of the induced operators \(P,Q,U,V\) on the corresponding \(L^{1}\) spaces, together with the dotted arrow \(R_{\theta}\) described below]
We will now give explicit descriptions of \(Q,P\) and \(U\). The operator \(Q\) is given by integration over the \(\mathbb{R}\) coordinate. For any \(f\in L^{1}(S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\):
\[Q(f)(\xi,\eta,w)=\int f(\xi,\eta,t,w)dt\]
By equation 4, \(P\) is given by integration over \(G\) orbits, or equivalently over fibers of \(p\). Given \(f\in L^{1}(S\times\mathbb{R}\times\Omega,m\times\ell\times\omega)\):
\[P(f)(p(\xi,\eta,t,w))=\int f(g(\xi,\eta,t,w))d\lambda(g)\]
Note that since \(G\) is unimodular the right hand side indeed depends only on \(p(\xi,\eta,t,w)\) (this fact is critical in the construction of the measure \(\nu\)).
Using Birkhoff's ergodic theorem we see that \(U\) is given by averaging over \(\mathbb{R}\) orbits in \(X\). For \(f\in L^{1}(X,\nu)\), for almost every \(x\in X\), for any \(a\leq b\in\mathbb{R}\):
\[U(f)(u(x))=\lim_{T\to\infty}\frac{1}{T}\int_{a}^{b+T}f(\phi^{-t}x)dt\]
Our next goal will be to obtain an explicit description of \(V\). For this we need to explain the dotted arrow \(R_{\theta}\) in diagram 6. If \(\theta\in L^{1}(\mathbb{R})\) is a positive kernel, i.e. \(\theta:\mathbb{R}\to[0,\infty)\) and:
\[\int\theta(t)dt=1\]
We define \(R_{\theta}:L^{1}(S\times\Omega,m\times\omega)\to L^{1}(S\times\mathbb{R} \times\Omega,m\times\ell\times\omega)\) by:
\[R_{\theta}(f)(\xi,\eta,t,w)=\theta(t)f(\xi,\eta,w)\]
Clearly \(Q\circ R_{\theta}=Id\). Since \(V\circ Q=U\circ P\) we can precompose with \(R_{\theta}\) to get that \(V=U\circ P\circ R_{\theta}\). Given \(f\in L^{1}(S\times\Omega,m\times\omega)\) denote:
\[\bar{f}_{\theta}(x)=P\circ R_{\theta}(f)(x)=\int R_{\theta}f(g\bar{p}(x))d\lambda(g)=\int\theta(t_{x}+\tau(g,x_{-},x_{+}))f(gx_{-},gx_{+},gw_{x})d\lambda(g) \tag{7}\]
We also denote:
\[\bar{f}_{[0,1]}=\bar{f}_{1_{[0,1]}}\]
Using \(R_{\theta}\) we obtain the following description of \(V\). For any \(a\leq b\), for almost every \(y\in Y\), if \(y=u(x)\):
\[V(f)(y)=\lim_{T\to\infty}\frac{1}{T}\int_{a}^{b+T}\bar{f}_{[0,1]}(\phi^{-t}x)dt \tag{8}\]
While this formula is explicit, it describes the function \(V(f)\) in terms of the parameter \(x\in X\). We would like to obtain a description of \(V\) in terms of points \((\xi,\eta,w)\in S\times\Omega\). For this purpose we introduce the following averaging operators. For any \(a\leq b\) and \(f\in L^{1}(S\times\Omega,m\times\omega)\) define \(I^{b}_{a}(f),J^{b}_{a}(f):S\times\Omega\to[0,\infty]\) by:
\[I^{b}_{a}(f)(\xi,\eta,w)=\frac{1}{b-a}\int_{\{g\in G|\tau(g, \xi,\eta)\in[a,b]\}}f(g\xi,g\eta,gw)d\lambda(g)\] \[J^{b}_{a}(f)(\xi,\eta,w)=\frac{1}{b-a}\int_{\{g\in G|\sigma(g, \eta)\in[a,b]\}}f(g\xi,g\eta,gw)d\lambda(g)\]
We prove the following ergodic theorem describing \(V\):
**Theorem 10.1**.: _For any \(f\in L^{1}(S\times\Omega,m\times\omega)\), for almost every \((\xi,\eta,w)\) the values \(I^{b}_{a}(f)(\xi,\eta,w)\) and \(J^{b}_{a}(f)(\xi,\eta,w)\) are finite for any interval \([a,b]\) and if \(y=v(\xi,\eta,w)\) then:_
\[V(f)(y)=\lim_{T\to\infty}I^{b+T}_{a}(f)(\xi,\eta,w)\]
_If in addition \(f\) is supported on a bounded set then:_
\[V(f)(y)=\lim_{T\to\infty}J^{b+T}_{a}(f)(\xi,\eta,w)\]
The proof will require two lemmas.
**Lemma 10.2**.: _For any two positive kernels \(\theta_{1},\theta_{2}\) and \(f\in L^{1}(S\times\Omega,m\times\omega)\):_
\[\bar{f}_{\theta_{2}*\theta_{1}}=\int_{-\infty}^{\infty}\theta_{2}(s)\bar{f}_{ \theta_{1}}\circ\phi^{-s}ds\]
Proof.: Notice that:
\[R_{\theta_{2}*\theta_{1}}f(\xi,\eta,t,w)=\int_{-\infty}^{\infty}\theta_{2}(s) \theta_{1}(t-s)f(\xi,\eta,w)ds=\int_{-\infty}^{\infty}\theta_{2}(s)R_{\theta_ {1}}f\circ\Phi^{-s}(\xi,\eta,t,w)ds\]
We finish by applying \(P\) to both sides, using that \(p\) intertwines the flows on \(S\times\mathbb{R}\times\Omega\) and \(X\), so that \(P(F\circ\Phi^{-s})=P(F)\circ\phi^{-s}\) for any integrable \(F\).
**Lemma 10.3**.: _There exists a constant \(c\geq 0\) such that for every \(m\times\omega\) integrable function \(f:S\times\Omega\to[0,\infty)\), for every interval \([a,b]\) with \(a+c+1\leq b\) and for every \(x\in X\):_
\[\frac{1}{b-a}\int_{a+c}^{b-c-1}\bar{f}_{[0,1]}\circ\phi^{-t}(x)dt\leq I^{b}_{ a}(f)(x_{-},x_{+},w_{x})\leq\frac{1}{b-a}\int_{a-c-1}^{b+c}\bar{f}_{[0,1]} \circ\phi^{-t}(x)dt\]
Proof.: Denote \(\theta^{b}_{a}=\mathbb{1}_{[a,b]}*\mathbb{1}_{[0,1]}\) and notice that if \(a+1\leq b\) then \(\theta^{b}_{a}\) interpolates linearly between the values \(0\) on \((-\infty,a]\cup[b+1,\infty)\) and \(1\) on \([a+1,b]\). In particular \(\mathbb{1}_{[a+1,b]}\leq\theta^{b}_{a}\leq\mathbb{1}_{[a,b+1]}\). Recall that the cross section \(\hat{X}\) is bounded, therefore there exists a constant \(c\) such that for any \(x\in X\), if \(\bar{p}(x)=(x_{-},x_{+},t_{x},w_{x})\) then \(|t_{x}|<c\). Let \(x\in X\) and \(a,b\in\mathbb{R}\) such that \(a+c+1\leq b\), using lemma 10.2 and equation 7 we see that:
\[\int_{a}^{b}\bar{f}_{[0,1]}\circ\phi^{-t}(x)dt\] \[= \int_{-\infty}^{\infty}\mathbb{1}_{[a,b]}(t)\bar{f}_{[0,1]}\circ \phi^{-t}(x)dt=\bar{f}_{\theta^{b}_{a}}(x)\] \[= \int_{G}\theta^{b}_{a}(t_{x}+\tau(g,x_{-},x_{+}))f(gx_{-},gx_{+},gw_{x})d\lambda(g)\]
Since \(|t_{x}|<c\) and \(\mathbb{1}_{[a+1,b]}\leq\theta^{b}_{a}\leq\mathbb{1}_{[a,b+1]}\) we conclude that:
\[\frac{b-a-2c-1}{b-a}I^{b-c}_{a+c+1}f(x_{-},x_{+},w_{x})\leq\frac{1}{b-a}\int_ {a}^{b}\bar{f}_{[0,1]}\circ\phi^{-t}(x)dt\leq\frac{b-a+2c+1}{b-a}I^{b+c+1}_{ a-c}f(x_{-},x_{+},w_{x})\]
Rewriting these inequalities we get the desired result.
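Explicitly, the rewriting amounts to renaming the endpoints: applying the last display with \(a\) replaced by \(a-c-1\) and \(b\) replaced by \(b+c\) (so that \(b-a\) there becomes \(b-a+2c+1\)) gives
\[I^{b}_{a}(f)(x_{-},x_{+},w_{x})\leq\frac{1}{b-a}\int_{a-c-1}^{b+c}\bar{f}_{[0,1]}\circ\phi^{-t}(x)dt,\]
and applying it with \(a\) replaced by \(a+c\) and \(b\) replaced by \(b-c-1\) gives the lower bound in the same way (for \(b\) large enough that the substituted endpoints remain admissible).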
Proof of theorem 10.1.: We first prove the claim for the operator \(I\). Let \(f:S\times\Omega\to[0,\infty)\) be \(m\times\omega\) integrable. By Birkhoff's ergodic theorem there exists a full measure subset \(X_{0}\subseteq X\) such that for \(x\in X_{0}\) and any \(a<b\):
\[V(f)(u(x))=\lim_{T\to\infty}\frac{1}{T}\int_{a}^{b+T}\bar{f}_{[0,1]}(\phi^{-t}x )dt\]
\(X_{0}\) is an \(\mathbb{R}\)-invariant subset so \(p^{-1}(X_{0})\) is a \(G\times\mathbb{R}\)-invariant subset in \(S\times\mathbb{R}\times\Omega\) and \(A_{f}=q(p^{-1}(X_{0}))\) is a \(G\)-invariant Borel subset of \(S\times\Omega\). We will show that the desired convergence holds on \(A_{f}\).
Let \((\xi,\eta,w)\in A_{f}\), consider \(x=p(\xi,\eta,0,w)\in X_{0}\). There exists \(h\in G\) such that \((\xi,\eta,0,w)=h(x_{-},x_{+},t_{x},w_{x})=h\bar{p}(x)\). Denoting \(s=\tau(h,x_{-},x_{+})\) we have that \(\tau(g,\xi,\eta)=\tau(gh,x_{-},x_{+})-s\) so \(\tau(g,\xi,\eta)\in[a,b]\) if and only if \(\tau(gh,x_{-},x_{+})\in[a+s,b+s]\). Therefore:
\[I_{a}^{b}(f)(\xi,\eta,w)=\frac{1}{b-a}\int_{\{g\in G|\tau(g,\xi, \eta)\in[a,b]\}}f(ghx_{-},ghx_{+},ghw_{x})d\lambda(g)\] \[= \frac{1}{b-a}\int_{\{g\in G|\tau(gh,x_{-},x_{+})\in[a+s,b+s]\}}f( ghx_{-},ghx_{+},ghw_{x})d\lambda(g)=I_{a+s}^{b+s}(f)(x_{-},x_{+},w_{x})\]
Note that the last equality uses the assumption that \(G\) is unimodular. By lemma 10.3, for any \(a,b\) such that \(a+c+1<b\):
\[\frac{1}{b-a}\int_{a+s+c}^{b+s-c-1}\bar{f}_{[0,1]}\circ\phi^{-t}(x)dt\leq I_{a +s}^{b+s}(f)(x_{-},x_{+},w_{x})\leq\frac{1}{b-a}\int_{a+s-c-1}^{b+s+c}\bar{f}_ {[0,1]}\circ\phi^{-t}(x)dt\]
Taking \(b\) to \(\infty\) we see that as needed:
\[V(f)(u(x)) = \lim_{T\to\infty}\frac{1}{T}\int_{a}^{b+T}\bar{f}_{[0,1]}(\phi^{- t}x)dt\] \[= \lim_{T\to\infty}I_{a+s}^{b+T+s}(f)(x_{-},x_{+},w_{x})=\lim_{T\to \infty}I_{a}^{b+T}(f)(\xi,\eta,w)\]
Since \(f\) is non negative this also implies the finiteness of \(I_{a}^{b}(f)(\xi,\eta,w)\) for any interval \([a,b]\) and \((\xi,\eta,w)\in A_{f}\). The case of general \(f\) follows by linearity.
We now turn to the proof for the operator \(J\). Denote \(K_{n}=\{(\xi,\eta)\in S|(\xi,\eta)<n\}\). Let \(f:S\times\Omega\to[0,\infty)\) be an \(m\times\omega\) integrable function supported on a bounded set. For large enough \(n\) the support of \(f\) is contained in \(K_{n}\times\Omega\). Recall that by lemma 9.1 there exist constants \(C_{n}\) such that if \((\xi,\eta),(g\xi,g\eta)\in K_{n}\) then:
\[|\tau(g,\xi,\eta)-\sigma(g,\eta)|\leq C_{n}\]
It follows that for large \(n\), any \((\xi,\eta,w)\in K_{n}\times\Omega\) and any \(a<b\):
\[\frac{b-a-2C_{n}}{b-a}I_{a+C_{n}}^{b-C_{n}}(f)(\xi,\eta,w)\leq J_{a}^{b}(f)( \xi,\eta,w)\leq\frac{b-a+2C_{n}}{b-a}I_{a-C_{n}}^{b+C_{n}}(f)(\xi,\eta,w)\]
Letting \(b\) tend to \(\infty\) we get that for every \(n\), \((\xi,\eta,w)\in A_{f}\cap K_{n}\times\Omega\) and \(a<b\):
\[\lim_{T\to\infty}J_{a}^{b+T}(f)(\xi,\eta,w)=V(f)(v(\xi,\eta,w))\]
Since \(n\) is arbitrary and \(K_{n}\) exhaust the space we see that the convergence actually holds on all of \(A_{f}\). This also implies that \(J^{b}_{a}(f)(\xi,\eta,w)\) is finite for any interval \([a,b]\). Finally the case of general \(f\) follows by linearity.
From this point onward it will be convenient to use the space \(\partial^{2}G\) and not \(S\). Since \(S\) is \(G\)-invariant and of full measure all the results proved thus far transfer immediately (although in the proof of the existence of \(\hat{X}\) it was important that certain relations hold everywhere and not only almost everywhere, and now \(\hat{X}\) will intersect only almost every \(G\)-orbit).
The last ingredient we are missing for the proof of ergodicity is the following contraction lemma from [2, Lemma 2.6]:
**Lemma 10.4**.: _([2, Lemma 2.6]) For any compact \(K\subset\partial^{2}G\) there exists a constant \(C_{K}\) such that for any \(\xi,\xi^{\prime},\eta\in\partial G\) and \(g\in G\) satisfying :_
\[(\xi,\eta),(\xi^{\prime},\eta),(g\xi,g\eta)\in K,\ \ \ \ \ \sigma(g,\eta)>0\]
_we have:_
\[\sigma(g,\xi),\sigma(g,\xi^{\prime})\in[-\sigma(g,\eta)-C_{K},-\sigma(g,\eta) +C_{K}],\ \ \ \ \ d_{\epsilon}(g\xi,g\xi^{\prime})<e^{-\epsilon\sigma(g,\eta)+C_{K}}\]
Proof.: Because \((\xi,\eta),(\xi^{\prime},\eta),(g\xi,g\eta)\in K\) and \(K\) is compact we have that \((\xi,\eta)\), \((\xi^{\prime},\eta)\), \((g\xi,g\eta)\approx_{K}0\). Since \(\sigma(g,\eta)>0\) we see that \((g^{-1},\eta)>\frac{|g|}{2}\) but \(\min\{(\xi,g^{-1}),(g^{-1},\eta)\}\lesssim(\xi,\eta)\approx_{K}0\) so we conclude that \((g^{-1},\xi)\approx_{K}0\). By lemma 2.12 we deduce:
\[0\approx_{K}(g\xi,g\eta)\gtrsim\min\{(g\xi,g),(g,g\eta)\}\approx|g|-\max\{(g^ {-1},\xi),(g^{-1},\eta)\}\]
Since \((g^{-1},\xi)\approx 0\) it follows that \((g^{-1},\eta)\approx_{K}|g|\). Now note that \(\min\{(\xi,g^{-1}),(g^{-1},\eta)\}\lesssim(\xi,\eta)\approx_{K}0\) so since \((g^{-1},\eta)\approx_{K}|g|\) it follows that \((\xi,g^{-1})\approx_{K}0\) and similarly \((\xi^{\prime},g^{-1})\approx_{K}0\). Therefore:
\[\sigma(g,\xi)\approx_{K}\sigma(g,\xi^{\prime})\approx_{K}-\sigma(g,\eta) \approx_{K}-|g|\]
Using 2.12 again we see that \((g\xi,g)\approx|g|-(g^{-1},\xi)\approx_{K}|g|\) and similarly \((g\xi^{\prime},g)\approx_{K}|g|\) so finally:
\[(g\xi,g\xi^{\prime})\gtrsim\min\{(g\xi,g),(g,g\xi^{\prime})\}\approx_{K}|g| \approx_{K}\sigma(g,\eta)\]
as needed.
Now we will finally prove ergodicity.
**Theorem 10.5**.: _For any ergodic p.m.p action \(G\!\curvearrow\!(\Omega,\omega)\) the diagonal action on \((\partial^{2}G\times\Omega,m\times\omega)\) is ergodic._
Proof.: To show ergodicity we will show that \(L^{1}(Y,\beta)\) is one dimensional. In order to do this we will show that for any \(f\in L^{1}(\partial^{2}G\times\Omega,m\times\omega)\), \(V(f)\) is constant. Since \(V\) is a bounded operator it suffices to show this for \(f\) from a dense subspace of \(L^{1}(\partial^{2}G\times\Omega,m\times\omega)\). We will use the image of the space \(C_{c}(\partial^{2}G,m)\otimes L^{1}(\Omega,\omega)\) under the injection \(\phi\otimes\psi\mapsto\phi\cdot\psi\). It is enough to show that the image of pure tensors under \(V\) is constant. Let \(f=\phi\cdot\psi\) with \(\phi\in C_{c}(\partial^{2}G,m),\psi\in L^{1}(\Omega,\omega)\) and \(\bar{f}=V(f)\circ v\). We will show that \(\bar{f}\) is constant. Note that \(\bar{f}\) need not be in \(L^{1}(\partial^{2}G\times\Omega,m\times\omega)\). Our strategy will be to prove that \(\bar{f}(\xi,\eta,w)\)
is independent of \(\xi\) and by symmetry also independent of \(\eta\) so \(\bar{f}\) is actually given by a \(G\)-invariant function of \(w\) which is constant by ergodicity.
Fix a compact set \(K\subset\partial^{2}G\) with \(supp(\phi)\subset int(K)\) and \(\delta>0\). Denote \(h=\mathbb{1}_{K}\cdot|\psi|\in L^{1}(\partial^{2}G\times\Omega,m\times\omega)\) and \(\bar{h}=V(h)\circ v\). We will show that for \(\mu\times\omega\)-almost every \((\eta,w)\) there exists a \(\mu\)-full measure subset such that if \(\xi,\xi^{\prime}\) are in this subset then:
\[(\xi^{\prime},\eta),(\xi,\eta)\in K\implies|\bar{f}(\xi,\eta,w)-\bar{f}(\xi^{\prime},\eta,w)|<\delta\cdot\bar{h}(\xi,\eta,w) \tag{9}\]
By theorem 10.1 there exists a full measure subset \(A\) of \(\partial^{2}G\times\Omega\) such that for all \((\xi,\eta,w)\in A\) and any \(a\in\mathbb{R}\):
\[\lim_{T\to\infty}J_{a}^{a+T}(f)(\xi,\eta,w)=\bar{f}(\xi,\eta,w)\]
\[\lim_{T\to\infty}J_{a}^{a+T}(h)(\xi,\eta,w)=\bar{h}(\xi,\eta,w)\]
By Fubini's theorem it is enough to prove implication 9 for \((\xi,\eta,w),(\xi^{\prime},\eta,w)\in A\).
Fix \(\xi,\xi^{\prime},\eta,w\) such that \((\xi,\eta,w),(\xi^{\prime},\eta,w)\in A\). By lemma 10.4 and using the uniform continuity of \(\phi\) there exists \(0<a_{1}\) such that for all \(g\) with \(\sigma(g,\eta)>a_{1}\), if \((\xi,\eta),(\xi^{\prime},\eta),(g\xi,g\eta)\in K\) then \(|\phi(g\xi,g\eta)-\phi(g\xi^{\prime},g\eta)|<\delta\), so:
\[|f(g\xi,g\eta,gw)-f(g\xi^{\prime},g\eta,gw)|<\delta\cdot h(g\xi,g\eta,gw)\]
On the other hand it follows from lemma 10.4 (after replacing the roles of \(\xi\) and \(\xi^{\prime}\)) and the fact that \(supp(\phi)\subset int(K)\), that there exists \(0<a_{2}\) such that for all \(g\) with \(\sigma(g,\eta)>a_{2}\), if \((\xi,\eta),(\xi^{\prime},\eta)\in K\) but \((g\xi,g\eta)\notin K\) then \((g\xi,g\eta),(g\xi^{\prime},g\eta)\notin supp(\phi)\). Therefore:
\[|f(g\xi,g\eta,gw)-f(g\xi^{\prime},g\eta,gw)|=\delta\cdot h(g\xi,g\eta,gw)=0\]
So if \(a=\max\{a_{1},a_{2}\}\) then for any \(g\) with \(\sigma(g,\eta)>a\), if \((\xi,\eta),(\xi^{\prime},\eta)\in K\) then:
\[|f(g\xi,g\eta,gw)-f(g\xi^{\prime},g\eta,gw)|<\delta\cdot h(g\xi,g\eta,gw)\]
Now if \((\xi,\eta),(\xi^{\prime},\eta)\in K\) then:
\[|\bar{f}(\xi,\eta,w)-\bar{f}(\xi^{\prime},\eta,w)|=|\lim_{T\to \infty}J_{a}^{a+T}(f)(\xi,\eta,w)-\lim_{T\to\infty}J_{a}^{a+T}(f)(\xi^{\prime },\eta,w)|\] \[\leq \lim_{T\to\infty}\frac{1}{T}\int_{\{g\in G|\sigma(g,\eta)\in[a,a +T]\}}|f(g\xi,g\eta,gw)-f(g\xi^{\prime},g\eta,gw)|d\lambda(g)\] \[\leq \lim_{T\to\infty}\frac{1}{T}\int_{\{g\in G|\sigma(g,\eta)\in[a,a +T]\}}\delta\cdot h(g\xi,g\eta,gw)d\lambda(g)\] \[= \delta\cdot\lim_{T\to\infty}J_{a}^{a+T}(h)(\xi,\eta,w)=\delta \cdot\bar{h}(\xi,\eta,w)\]
Showing implication 9 as needed.
Since \(K\) and \(\delta\) are general we can take a sequence \(K_{n}\) exhausting \(\partial^{2}G\) and \(\delta_{n}\to 0\). Applying implication 9 to these sequences we conclude that for \(\mu\times\omega\) almost every \((\eta,w)\) there exists a \(\mu\) full measure subset such that if \(\xi,\xi^{\prime}\) are in this subset then:
\[\bar{f}(\xi,\eta,w)=\bar{f}(\xi^{\prime},\eta,w)\]
By symmetry the same statement holds reversing the roles of \(\xi\) and \(\eta\).
Using Fubini's theorem we conclude that for \(\omega\)-almost every \(w\) there exist \(\mu\)-full measure subsets \(A,B\subseteq\partial G\) such that for \(\xi\in A\), \(\bar{f}(\xi,\cdot,w)\) is an essentially constant function of \(\eta\) and for \(\eta\in B\), \(\bar{f}(\cdot,\eta,w)\) is an essentially constant function of \(\xi\). Therefore for \(\omega\)-almost every \(w\), \(\bar{f}(\cdot,\cdot,w)\) is essentially constant and thus \(\bar{f}\) depends only on the \(\Omega\) coordinate. Since \(\bar{f}\) is \(G\)-invariant and \(G\) acts ergodically on \((\Omega,\omega)\) we conclude \(\bar{f}\) is essentially constant as needed.
Having proved ergodicity we know that the space \(Y\) is a single point and the operator \(V\) is simply given by \(V(f)=\int f(\xi,\eta,w)dm\times\omega\). We can therefore re-state theorem 10.1 as follows:
**Theorem 10.6**.: _For any \(f\in L^{1}(\partial^{2}G\times\Omega,m\times\omega)\), for almost every \((\xi,\eta,w)\) the values \(I^{b}_{a}(f)(\xi,\eta,w)\) and \(J^{b}_{a}(f)(\xi,\eta,w)\) are finite for any interval \([a,b]\) and:_
\[\lim_{T\to\infty}I^{b+T}_{a}(f)(\xi,\eta,w)=\int f(\xi,\eta,w)dm\times\omega\]
_If in addition \(f\) is supported on a bounded set then:_
\[\lim_{T\to\infty}J^{b+T}_{a}(f)(\xi,\eta,w)=\int f(\xi,\eta,w)dm\times\omega\]
## Appendix A Cocycles and Almost Cocycles
Let \(G\) be a locally compact second countable group with a measure class preserving action on a standard Lebesgue space \((X,\mu)\). A Borel **cocycle** of the action \(G\mathop{\curvearrowright}X\) with values in \(\mathbb{R}\) is a Borel function \(\alpha:G\times X\to\mathbb{R}\) such that for every \(g,h\in G\), for almost every \(x\in X\):
\[\alpha(gh,x)=\alpha(g,hx)+\alpha(h,x)\]
A cocycle is called strict if the above equation holds for every \(x\). Two cocycles \(\alpha,\rho\) are called cohomologous if there exists a function \(F:X\to\mathbb{R}\) such that for every \(g\), for almost every \(x\):
\[\alpha(g,x)-\rho(g,x)=F(gx)-F(x)\]
If \(\alpha,\rho\) are strict cocycles they are strictly **cohomologous** if the above equation holds for every \(x\).
A Borel **almost cocycle** of the action \(G\mathop{\curvearrowright}X\) with values in \(\mathbb{R}\) is a Borel function \(\sigma:G\times X\to\mathbb{R}\) such that for every \(g,h\in G\), for almost every \(x\in X\):
\[\sigma(gh,x)\approx\sigma(g,hx)+\sigma(h,x)\]
An almost cocycle is called strict if the above equation holds for every \(x\).
**Lemma A.1**.: _Let \(G\) be a second countable locally compact group with a measure class preserving action on a standard Lebesgue space \((X,\mu)\). Suppose that \(\alpha,\sigma:G\times X\to\mathbb{R}\) are a strict Borel cocycle and a strict Borel almost cocycle respectively such that for every \(g\in G\), for almost every \(x\in X\), \(\alpha(g,x)\approx\sigma(g,x)\). There exists a strict cocycle \(\rho\) strictly cohomologous to \(\alpha\) and a \(G\)-invariant full measure Borel set \(S\subseteq X\) such that for all \((g,x)\in G\times S\), \(\rho(g,x)\approx\sigma(g,x)\)._
Proof.: For \(x\in X\) denote \(E_{x}(C)=\{g\in G||\alpha(g,x)-\sigma(g,x)|<C\}\). For every \(g\in G\), for almost every \(x\in X\), \(\alpha(g,x)\approx\sigma(g,x)\) so by Fubini's theorem there exists a full measure Borel set \(A\subseteq X\) and \(C_{1}>0\) such that for every \(x\in A\), \(E_{x}(C_{1})\) has full Haar measure. By [22, Appendix B.8] there exists a full measure Borel subset \(B\subseteq A\) such that \(S=G.B\) is Borel. If \(E_{x}(C_{1})\) has full Haar measure then there exists \(C_{2}>0\) such that for almost every \(g\in G\), \(E_{gx}(C_{2})\) has full Haar measure. Indeed if \(g\in E_{x}(C_{1})\) then for any \(h\in E_{x}(C_{1})g^{-1}\):
\[\alpha(h,gx)=\alpha(hg,x)-\alpha(g,x)\approx\sigma(hg,x)-\sigma(g,x)\approx \sigma(h,gx)\]
Since every \(x\in S\) has an orbit which intersects \(B\) we conclude that for any \(x\in S\), for almost all \(g\in G\), \(E_{gx}(C_{2})\) has full Haar measure.
We claim that there exists a Borel function \(F:S\to\mathbb{R}\) and \(C_{3}>0\) such that for every \(x\in S\), for almost every \(g\in G\), \(|\alpha(g,x)-\sigma(g,x)-F(x)|<C_{3}\). Note that for almost every \(x\in B\) we can take \(C_{3}=C_{1}\) and \(F(x)=0\) but the point is that the formula holds over \(S\) which is \(G\) invariant. To see this notice that if \(x\in S\) then for almost every \(g\in G\), \(E_{gx}(C_{2})\) has full measure, so for almost every \(g\), for almost every \(h\), \(|\alpha(h,gx)-\sigma(h,gx)|<C_{2}\). For such \(g,h\):
\[\alpha(hg,x)-\alpha(g,x)=\alpha(h,gx)\approx\sigma(h,gx)\approx\sigma(hg,x)- \sigma(g,x)\]
Taking \(k=hg\), re-arranging terms and using Fubini's theorem we get that in particular for almost every \((g,k)\in G^{2}\):
\[\alpha(g,x)-\sigma(g,x)\approx\alpha(k,x)-\sigma(k,x)\]
Since each side depends only on one of the coordinates \((g,k)\) we conclude that there exists a full measure subset \(D_{x}\subseteq G\) such that for all \(g,k\in D_{x}\):
\[\alpha(g,x)-\sigma(g,x)\approx\alpha(k,x)-\sigma(k,x)\]
Choosing any probability measure \(\theta\) on \(G\) in the class of Haar measure we can define \(F(x)=\int\alpha(g,x)-\sigma(g,x)d\theta(g)\) (we only do this to ensure \(F\) is Borel). Since for every \(x\), for almost every \(g\in G\), \(E_{gx}(C_{2})\) has full measure, we conclude that for every \(x\in S\), \(|F(gx)|<C_{2}\) for almost every \(g\in G\). For convenience we extend \(F\) to \(X\) by putting \(F(x)=0\) for \(x\notin S\).
We can now define the cocycle \(\rho\) by:
\[\rho(g,x)=\alpha(g,x)-F(x)+F(gx)\]
By definition \(\alpha\) and \(\rho\) are strictly cohomologous. Since for every \(x\in S\), for almost every \(g\in G\), \(|F(gx)|<C_{2}\), we conclude, by taking \(C_{4}=C_{2}+C_{3}\), that for all \(x\in S\), for almost every \(g\in G\), \(|\rho(g,x)-\sigma(g,x)|<C_{4}\).
We now show that for all \((g,x)\in G\times S\), \(\rho(g,x)\approx\sigma(g,x)\) uniformly in \(g,x\). Indeed given \((g,x)\in G\times S\), there exists a full measure set of \(h\in G\) such that \(\rho(h,gx)\approx\sigma(h,gx)\) and \(\rho(hg,x)\approx\sigma(hg,x)\). (Here we are using the critical fact that \(gx\in S\) which would not be true for \(A\) or \(B\)). For such an \(h\):
\[\rho(g,x)=\rho(hg,x)-\rho(h,gx)\approx\sigma(hg,x)-\sigma(h,gx)\approx\sigma( g,x)\]
So for every \((g,x)\in G\times S\), \(\rho(g,x)\approx\sigma(g,x)\), as needed.
## Appendix B Actions Admitting a Borel Cross Section
Let \(G\) be a locally compact second countable group acting on a standard measure space with sigma finite measure \((Z,m)\). Assume that the stabilizer of almost every point in \(Z\) is compact and that the action has a cross section, i.e. a measurable subset \(\hat{X}\subset Z\) intersecting every orbit exactly once. Denote \(X=Z//G\) the space of \(G\)-ergodic components and denote by \(p\) the projection \(p:Z\to X\). Since bijective Borel maps are Borel isomorphisms, \(p\) restricts to a Borel isomorphism between \(\hat{X}\) and \(X\) so \(\hat{X}\) is endowed with a canonical measure class and a measurable projection \(Z\to\hat{X}\) sending every point to the point in \(\hat{X}\) sharing the same orbit. In particular every fiber over \(X\) is a \(G\)-orbit. Finally assume that there exists a compact identity neighborhood \(V\subset G\) such that the measure \(p_{*}(m|_{V\hat{X}})\) is \(\sigma\)-finite on \(X\).
Fix a left Haar measure \(\lambda\) on \(G\). Let \(B\) be a transitive \(G\)-space with compact stabilizers. Recall that up to a positive scalar there is a unique \(\sigma\)-finite \(G\)-invariant measure on \(B\). Choosing a base point \(b\in B\) we can construct this measure as the image of \(\lambda\) under the orbit map \(g\mapsto gb\). Choosing a different base point can only change the measure by a scalar and if \(G\) is unimodular it does not depend on the base point at all. To sum up, fixing a left Haar measure gives a canonical choice of invariant measure on pointed transitive \(G\)-spaces with compact stabilizers. This measure is independent of the base point when \(G\) is unimodular.
Since \(\hat{X}\) gives a choice of base point from every \(G\) orbit in \(Z\), for almost every \(x\in X\) we have a canonical \(G\)-invariant measure \(m_{x}\) supported on the fiber over \(x\). If \(G\) is unimodular the measures \(m_{x}\) do not depend on \(\hat{X}\). Denote by \(K_{x}\) the stabilizer of the point of \(\hat{X}\) in the fiber over \(x\). An explicit formula for \(m_{x}\) is \(m_{x}(A\hat{X})=\lambda(AK_{x})\) for any measurable \(A\subset G\).
**Theorem B.1**.: _Given \(G\), \(Z\) and \(\hat{X}\) as above, there exists a unique measure \(\nu\) on \(X\) such that \(m=\int\!m_{x}d\nu(x)\). The measure \(\nu\) is in the measure class of \(p_{*}m\) and if \(G\) is unimodular it does not depend on \(\hat{X}\)._
Proof.: Denote by \(\tau\) the measure \(p_{*}(m|_{V\hat{X}})\). Since \(G\) is second countable we can write \(G=\bigcup_{i\in\mathbb{N}}g_{i}V\) so \(\tau\) and \(p_{*}m\) are in the same measure class (but \(p_{*}m\) might not be \(\sigma\)-finite). Since \(\tau\) and \(m\) are \(\sigma\)-finite there exists by [18, Theorem 6.3] a unique disintegration \(m=\int\!\alpha_{x}d\tau(x)\). Now for every \(g\in G\), \(\int\!\alpha_{x}d\tau(x)=m=g_{*}m=\int\!g_{*}\alpha_{x}d\tau(x)\). By uniqueness of the disintegration, almost every \(\alpha_{x}\) is invariant. Since almost every \(\alpha_{x}\) is supported on the fiber \(p^{-1}(x)\), which is a \(G\)-orbit, there exists \(c(x)>0\) such that \(\alpha_{x}=c(x)m_{x}\). \(c(x)\) is a measurable function since \(c(x)=\frac{\alpha_{x}(V\hat{X})}{m_{x}(V\hat{X})}=\frac{\alpha_{x}(V\hat{X})} {\lambda(VK_{x})}\), \(\alpha_{x}\) is a measurable family of measures and \(\lambda(VK_{x})\) is a measurable function. Therefore \(m_{x}\) is a measurable family of measures. For any \(f\in L^{1}(m)\) we have \(\int\!f(z)dm(z)=\int\!\int\!f(z)c(x)dm_{x}(z)d\tau(x)\) so defining \(d\nu=c(x)d\tau\) we get \(m=\int\!m_{x}d\nu(x)\). Since \(c(x)>0\) for almost every \(x\), \(\nu\) and \(\tau\) are in the same measure class which is the measure class of \(p_{*}m\). \(\nu\) is unique since if \(\nu^{\prime}\) is a measure satisfying \(m=\int\!m_{x}d\nu^{\prime}(x)\) then for any measurable \(f\in L^{1}(\nu^{\prime})\), \(\int\!f(x)d\nu^{\prime}(x)=\int\!\int_{V\hat{X}}\!\frac{f(p(z))}{m_{p(z)}(V \hat{X})}dm_{x}(z)d\nu^{\prime}(x)=\int\!\frac{f(p(z))}{m_{p(z)}(V\hat{X})}dm(z)\) depends only on \(m\) and the \(m_{x}\) which only depend on \(\hat{X}\). Finally If G is unimodular the measures \(m_{x}\) do not depend on \(\hat{X}\) so since there is a unique measure \(\nu\) such that \(m=\int\!m_{x}d\nu(x)\), \(\nu\) is independent of \(\hat{X}\).
**Theorem B.2**.: _Suppose in the setting above that the space \((Z,m)\) has a measure preserving action of a group \(H\) commuting with the \(G\) action. If \(G\) is unimodular the natural \(H\) action on \(X\) preserves the measure \(\nu\)._
Proof.: \(H\) acts by automorphisms on the measure preserving system \((Z,m,G)\), thus for any \(h\in H\), \(h_{*}\nu\) is the canonical measure on \(X\) associated to the cross section \(h\hat{X}\) by theorem B.1. Since \(G\) is unimodular the measure is independent of the cross section so this is the same as the measure associated to the cross section \(\hat{X}\), i.e. \(\nu\). So \(h_{*}\nu=\nu\) as needed.
|
2309.12712 | Big model only for hard audios: Sample dependent Whisper model selection
for efficient inferences | Recent progress in Automatic Speech Recognition (ASR) has been coupled with a
substantial increase in the model sizes, which may now contain billions of
parameters, leading to slow inferences even with adapted hardware. In this
context, several ASR models exist in various sizes, with different inference
costs leading to different performance levels. Based on the observation that
smaller models perform optimally on large parts of testing corpora, we propose
to train a decision module, that would allow, given an audio sample, to use the
smallest sufficient model leading to a good transcription. We apply our
approach to two Whisper models with different sizes. By keeping the decision
process computationally efficient, we build a decision module that allows
substantial computational savings with reduced performance drops. | Hugo Malard, Salah Zaiem, Robin Algayres | 2023-09-22T08:50:58Z | http://arxiv.org/abs/2309.12712v1 | # Big Model Only for Hard Audios: Sample Dependent Whisper Model Selection for Efficient Inferences
###### Abstract
Recent progress in Automatic Speech Recognition (ASR) has been coupled with a substantial increase in the model sizes, which may now contain billions of parameters, leading to slow inferences even with adapted hardware. In this context, several ASR models exist in various sizes, with different inference costs leading to different performance levels. Based on the observation that smaller models perform optimally on large parts of testing corpora, we propose to train a decision module, that would allow, given an audio sample, to use the smallest sufficient model leading to a good transcription. We apply our approach to two Whisper models with different sizes. By keeping the decision process computationally efficient, we build a decision module that allows substantial computational savings with reduced performance drops.
Hugo Malard\({}^{1}\), Salah Zaiem\({}^{2}\), Robin Algayres\({}^{1,3}\)
\({}^{1}\)ENS-PSL, Paris, \({}^{2}\)LTCI, Telecom Paris, Institut Polytechnique de Paris, \({}^{3}\)Inria, Paris
Speech recognition, efficiency
## 1 Introduction
Recent progress in neural-based automatic speech recognition (ASR) has been driven by new modelling architectures, data collection and processing but also by larger models that have recently exceeded a billion parameters [1]. Such advances have promised enhanced accuracy and capabilities, yet they have also come with escalating computational demands. These ASR models are usually available in a certain range of sizes with varying performance levels. For instance, Whisper models [1] are available in 6 sizes from Tiny (39M parameters) to Large (1.5B parameters), Nvidia FastConformers [2] range from Large (118M parameters) to XXLarge (1.2B parameters) and self-supervised models like Hubert [3] or WavLM [4] are generally available in Base and Large versions. Systematically, following the deep learning trend across modalities, larger model versions, although generally trained on the same datasets, perform substantially better than their reduced-size counterparts. This is shown in Figure 1 (a), where the mean Word Error Rates (WER) of four Whisper models with different sizes on the test set of CommonVoice [5] are presented. The mean WER drops from \(28.1\) with the "Tiny" version to \(10.2\) with the "Medium" one.
However, as shown in Figure 1 (b), this performance drop may not concern a significant part of the testing points. In this figure, every cell \((i,j)\) shows the proportion of samples in the CommonVoice test set where model \(i\) performs better or equally to model \(j\). For instance, the third cell in the first line (cell \((0,2)\)) states that for \(52\%\) of the testing samples, the Tiny model (39M) performs equally or better than the Small one (244M) while bearing more than \(6\) times fewer parameters. Based on this observation, this work explores whether we can predict if audio samples will fall into this category. By doing so, audio samples that would not benefit from the costly inference of a large model can be assigned to a smaller one in order to reduce the total computational load.
More precisely, this study aims to develop a _decision module_ that, given an audio sample, chooses the Whisper model version that has the lowest inference cost without WER degradation. Due to the complexity of the task, in this paper, we only focus on deciding if an audio sample should be decoded with Whisper Tiny (39M) or with Whisper Small (244M). These two model versions are relevant candidates as they exhibit large differences in WER, inference cost and latency.
A few works [6, 7] have already attempted to choose among several ASR model versions using WER prediction. Given the textual output of an ASR model, they explored the prediction of the sentence-level WER. Yet, these methods do not aim to reduce inference costs but rather to decide whether an audio sample should be re-processed by a more complex ASR model. Indeed, the most efficient techniques predict WER using full ASR pipelines based on acoustic encoding and language model (LM) beam search [6, 7]. Such methods that rely on costly beam searches cannot be used in our case where the aim is to reduce the computational load.
Another close line of work is dynamic or early-exiting approaches. Instead of saving computation by choosing between separate ASR models, these methods have attempted to make forward passes lighter by skipping some of the last transformer layers of an ASR model [8, 9]. The decision to exit is based on entropy or representation-similarity thresholds. However, early-exiting, as developed in these works, can only save layer computation in the ASR encoder, while, as shown in Table 1, for attentional encoder-decoder architectures, most of the computations occur in the beam search decoding.
The closest work to our effort is from Lugosch _et al._[10], who propose to save computation cost at inference time by choosing between a large and a small ASR decoder. However, their method relies only on the log-likelihood of the encoder, which is one of the baselines of our work.

Figure 1: Absolute and relative performances of Whisper models on the CommonVoice test set. The four models are Whisper Tiny (39M parameters), Base (74M), Small (244M) and Medium (759M). Each cell \(i,j\) in Figure (b) represents the percentage of utterances where model \(i\) performs at least as well as model \(j\).
This paper explores possibilities to build a decision module that allows efficient selection between different ASR model sizes while keeping high performance. Our contributions are threefold:
* We successfully reduce inference costs for a negligible WER degradation. In addition, our method can be used to interpolate between model sizes, saving the need for costly training of intermediate models.
* We explore different inputs and architectures for the decision module and compare them to several baselines and toplines.
* The codebase1, developed within the SpeechBrain [11] framework, is released for further investigations. Footnote 1: [https://github.com/hugomalard/Big-model-only-for-hard-audios.git](https://github.com/hugomalard/Big-model-only-for-hard-audios.git)
## 2 Methods
This section describes our main pipeline represented in Figure 2. For a given audio sample, we first extract speech features and connect the output to a decider module. The latter is responsible for choosing to transcribe the audio with either a cheap model, Whisper Tiny, or a more expensive one, Whisper Small.
### Feature extractor
One of the questions we tackle in this work is which speech features are needed to predict how hard an audio sample is to transcribe. We explore two levels of representations, depending on their closeness to the raw signal waveform. We call low-level features hand-crafted representations like Mel spectrograms or MFCCs while high-level features are typically the outputs of a transformer encoder trained with self-supervision [4, 12] or text supervision [1]. Intuitively, it is reasonable to think that high-level features will provide better input to the decider module. Indeed, while low-level features will only provide a frequency analysis of the speech input, higher-level features provide high-quality zero-shot phonetic encoding [13] and contain various additional information such as speaker and gender identity [14] as well as audio backgrounds [15].
However, regarding our objective of minimizing computational cost, low-level features may be a favorable choice compared to high-level ones. Yet, we observed that the computational cost of representing speech with high-level features is negligible compared to the cost of the full ASR pipeline. Indeed, attention-based encoder-decoder models, like Whisper, are composed of an encoder and a decoder model that have virtually the same number of parameters. The encoder represents speech into latents and the decoder turns the latents into text with a beam search. While the encoding only costs one forward pass through the encoder, the decoding costs a large number of forward passes through the decoder. For instance, we present in Table 1 the computational cost of encoding and decoding the test set of the CommonVoice [5] with either Whisper Tiny or Whisper Small. As expected, it appears that most of the computation is done in the beam-search decoding.
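As a quick sanity check, the shares below are recomputed directly from the totals reported in Table 1 (values in tera-MACs); they are not additional measurements.

```python
# Share of total MACs spent in the encoder vs. the beam-search decoder,
# recomputed from the Table 1 totals (tera-MACs over the CommonVoice test set).
whisper = {
    "Tiny": {"encode": 0.01, "decode": 0.22},
    "Small": {"encode": 0.13, "decode": 2.49},
}

for name, macs in whisper.items():
    total = macs["encode"] + macs["decode"]
    print(f"Whisper {name}: encoder is {macs['encode'] / total:.1%} of {total:.2f}T MACs")
# Whisper Tiny: encoder is 4.3% of 0.23T MACs
# Whisper Small: encoder is 5.0% of 2.62T MACs
```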
Therefore, considering both the quality of high-level features and the relatively low cost of encoding speech with a transformer stack, we decided to use the Whisper Small encoder as our feature extractor. We ablate this choice in the results section at Table 2.
### The decider
As shown previously in Figure 1 (b), Whisper Tiny performs as well or even better than Whisper Small in \(52\%\) of the sentences in the CommonVoice dataset. The decider from Figure 2 is a neural model that exploits this observation. Specifically, for an audio sample \(a\), the decider is trained on the output of the frozen feature extractor to predict 1 or 0 according to the following equation.
\[g_{\mathcal{M}_{T},\mathcal{M}_{S}}(a)=\mathbb{1}_{WER(\mathcal{M}_{T}(a))>WER (\mathcal{M}_{S}(a))} \tag{1}\]
Where \(\mathcal{M}_{T}\) and \(\mathcal{M}_{S}\) are respectively Whisper Tiny and Whisper Small. It means that the model should learn to predict 0 if the true WER obtained with \(\mathcal{M}_{T}\) is lower than or equal to that of \(\mathcal{M}_{S}\).
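A minimal sketch of how the training targets of Equation 1 can be computed from per-utterance WERs; the function and variable names are ours, not identifiers from the released codebase.

```python
import torch

def decider_targets(wer_tiny: torch.Tensor, wer_small: torch.Tensor) -> torch.Tensor:
    """Equation (1): target is 1 when Whisper Tiny is strictly worse than Whisper Small."""
    return (wer_tiny > wer_small).float()

# Per-utterance WERs for a toy batch of four audio samples
wer_tiny = torch.tensor([0.00, 0.25, 0.50, 0.10])
wer_small = torch.tensor([0.00, 0.10, 0.50, 0.20])
print(decider_targets(wer_tiny, wer_small))  # tensor([0., 1., 0., 0.])
```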
In a computationally aware context, using a lightweight module as decider is crucial, as its computational cost will be systematically paid at each inference. Using a transformer stack for the decider would induce a high inference cost due to the high dimensionality of the output of the feature extractor. A convolutional stack, scaling linearly with the sequence length, does not suffer from those side effects. Moreover, the locality bias of the convolutions may be pertinent in this context since errors may occur at localized segments of the speech sample. Therefore, we opted for a one-dimensional small ResNet [16] for the decider module architecture.
Instead of simply connecting the feature extractor output to the decider input, we learn a weighted sum of the feature extractor layers as in [17]. This method exploits the fact that different layers of a transformer stack encode different types of information from the audio signal [18].
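A sketch of one way to implement the learned weighted sum over the frozen encoder layers, in the spirit of [17]; the softmax normalization and the tensor shapes are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class LayerWeightedSum(nn.Module):
    """Combine the hidden states of all encoder layers with one learned scalar weight per layer."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_layers, batch, time, dim), e.g. stacked Whisper encoder outputs
        w = torch.softmax(self.layer_weights, dim=0)
        return torch.einsum("l,lbtd->btd", w, hidden_states)

mix = LayerWeightedSum(num_layers=13)
dummy = torch.randn(13, 2, 100, 768)   # 12 layers + embedding output, truncated time axis
print(mix(dummy).shape)                 # torch.Size([2, 100, 768])
```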
Here is a summary of our pipeline at inference time for transcribing one given audio sample, \(a\). First, \(a\) is turned into latents using the Whisper Small encoder. The decider computes a learned weighted average of the encoder layers before doing a forward pass in a small 1D ResNet. If the decider output value exceeds a threshold \(h\), the beam search is performed in the Whisper Small decoder. Otherwise, \(a\) is encoded and beam-search decoded with Whisper Tiny.

\begin{table}
\begin{tabular}{l c c} \hline \hline
**Model** & **Encoding** & **Beam-search decoding** \\ \hline Whisper Tiny & 0.01T & 0.22T \\ Whisper Small & 0.13T & 2.49T \\ \hline \hline \end{tabular}
\end{table}
Table 1: Total MAC (multiply–accumulate) operations on the CommonVoice test set for the encoder and the decoder of Whisper Tiny and Small using a beam search with n=8

Figure 2: The decision module is composed of a feature extractor that encodes the speech signal into latents that are given as input to the decider. Based on the output of the decider, the audio sample is transcribed either by Whisper Tiny or Whisper Small.
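The inference-time routing just described can be sketched as follows. This is only an illustration of the control flow: the callables are stand-ins for the real Whisper components, and we assume the decider's sigmoid output estimates the probability that Tiny is insufficient (Equation 1).

```python
from typing import Callable

def route_and_transcribe(
    audio,
    encode_small: Callable,    # forward pass of the Whisper Small encoder
    decider: Callable,         # returns a sigmoid score in [0, 1] from the Small latents
    decode_small: Callable,    # beam search in the Small decoder, reusing the latents
    transcribe_tiny: Callable, # full Whisper Tiny pipeline (encode + beam search)
    h: float = 0.5,
) -> str:
    latents = encode_small(audio)
    if decider(latents) >= h:          # hard audio: pay for the Small beam search
        return decode_small(latents)
    return transcribe_tiny(audio)      # easy audio: fall back to the cheap Tiny pipeline

# Toy stand-ins, only to exercise the control flow
print(route_and_transcribe(
    "utterance.wav",
    encode_small=lambda a: [0.0],
    decider=lambda z: 0.2,
    decode_small=lambda z: "small transcript",
    transcribe_tiny=lambda a: "tiny transcript",
))  # -> tiny transcript
```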
## 3 Experiments
In this section, we describe the datasets and hyperparameters used for the training of the decider model. In addition, we present baselines and toplines that act as comparison points to our method.
### Datasets and setups
In this study, two datasets are considered: LibriSpeech [19] and CommonVoice 7.0 [5]. The LibriSpeech corpus is composed of read English speech recordings with 960 hours for training, two dev splits _dev-clean_ and _dev-other_ and two test splits _test-clean_ and _test-other_ of 5 hours each. CommonVoice is a collection of speech samples from worldwide users recording themselves with their own devices covering a large variety of age, gender and accents. The English part of the dataset presents roughly 1260 hours of recorded audio.
Our decider module is a small ResNet [16], with 3 ResBlocks of two convolutional layers, each with 256 feature maps. The output of the ResNet is average-pooled before going through a linear layer with one sigmoid output neuron. In addition, the decider has one learnable weight per feature extractor layer. The feature extractor remains frozen during the decider training. Training is done with binary cross entropy using the Adam [20] optimizer and a learning rate of \(10^{-5}\) with cosine annealing.
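For concreteness, here is a minimal sketch of such a decider and its training configuration, using the hyper-parameters listed above (3 residual blocks of two 1D convolutions with 256 feature maps, average pooling, one sigmoid output, binary cross entropy, Adam at \(10^{-5}\) with cosine annealing). Kernel sizes, the input projection and other unstated details are our assumptions.

```python
import torch
import torch.nn as nn

class ResBlock1D(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.conv2(self.act(self.conv1(x)))
        return self.act(x + y)

class Decider(nn.Module):
    def __init__(self, feat_dim: int = 768, channels: int = 256, num_blocks: int = 3):
        super().__init__()
        self.proj = nn.Conv1d(feat_dim, channels, kernel_size=1)   # map encoder dim to 256
        self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(num_blocks)])
        self.head = nn.Linear(channels, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim), e.g. the weighted sum of frozen encoder layers
        x = self.blocks(self.proj(feats.transpose(1, 2)))
        pooled = x.mean(dim=-1)                                     # average pooling over time
        return torch.sigmoid(self.head(pooled)).squeeze(-1)

model = Decider()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
criterion = nn.BCELoss()

feats, targets = torch.randn(4, 100, 768), torch.tensor([0.0, 1.0, 1.0, 0.0])
loss = criterion(model(feats), targets)
loss.backward(); optimizer.step(); scheduler.step()
print(float(loss))
```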
### Baselines and Toplines
In order to compare our decision module from Section 2 to simpler methods with threshold-based decisions, we present in this subsection three considered baselines.
First, based on the well-known impact of noise on ASR quality, the output of a blind Signal to Noise Ratio (SNR) estimator [21] is used for the decision module. Here, the decider relies on a simple threshold, determined by equal error rate: it runs Whisper Small if the computed SNR is lower than the threshold, otherwise it uses Whisper Tiny. A second baseline consists of using an accent detection model [22] to assign audios of rarer accents to the larger model. Indeed, [23] shows a significant effect of English accents on ASR performances. For this baseline, the decision module consists in selecting Whisper Tiny if the English accent detected is either American, British or Canadian.
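A possible reading of the equal-error-rate threshold fitting used by these baselines, given per-utterance scores (here, estimated SNR) and the binary labels of Equation 1; this is our sketch of the procedure, not code from the paper.

```python
import numpy as np

def eer_threshold(scores: np.ndarray, labels: np.ndarray) -> float:
    """Threshold at which false-positive and false-negative rates are closest.
    Convention: a score below the threshold means "route to Whisper Small" (label 1)."""
    best_t, best_gap = float(scores.min()), np.inf
    for t in np.unique(scores):
        pred = scores < t
        fpr = pred[labels == 0].mean() if np.any(labels == 0) else 0.0
        fnr = (~pred)[labels == 1].mean() if np.any(labels == 1) else 0.0
        if abs(fpr - fnr) < best_gap:
            best_t, best_gap = float(t), abs(fpr - fnr)
    return best_t

snr_estimates = np.array([25.0, 3.0, 12.0, 8.0, 30.0, 5.0])   # toy blind SNR values (dB)
needs_small = np.array([0, 1, 0, 1, 0, 1])                     # labels from Equation 1
print(eer_threshold(snr_estimates, needs_small))
```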
Finally, inspired by Lugosch _et al._[10], a third baseline explores the use of ASR encoder-decoder logits as a confidence measure. Precisely, using Whisper Tiny, we perform a full greedy decoding, compute the entropy of the logit probabilities for each time-step, then aggregate with mean pooling over the time dimension. Here again, the decider is a threshold on entropy values that is set on a validation set as for the SNR baseline.
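A sketch of the logit-entropy confidence measure described above; it assumes the greedy decoding pass has already produced one logits row per decoding step.

```python
import torch

def mean_logit_entropy(logits: torch.Tensor) -> float:
    """Average per-step entropy (in nats) of the decoder output distributions.
    logits: (num_decoding_steps, vocab_size)."""
    log_probs = torch.log_softmax(logits, dim=-1)
    step_entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    return float(step_entropy.mean())

confident = torch.zeros(10, 500); confident[:, 0] = 20.0   # sharply peaked distributions
uncertain = torch.zeros(10, 500)                            # uniform distributions
print(mean_logit_entropy(confident))   # ~0.0
print(mean_logit_entropy(uncertain))   # ~6.21 == ln(500)
```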
Regarding toplines, we propose two decision modules that are voluntarily unrealistic in order to show the potential computational savings in an ideal case scenario. The first topline is an oracle that knows the true value of equation 1 for any audio sample. For the second topline, we assume that the WER of Whisper Tiny is known in advance for any audio sample. The decider here is a threshold on WER values set on a validation set to determine when to use the larger Whisper model. The threshold is chosen such that it corresponds to the largest value for which there is no false negative (i.e. choosing "Tiny" while it performs less well than "Small").
## 4 Results and Discussion
### Accuracy of decision modules
Table 2 presents the decision modules' accuracies on equation 1. The accuracy is, for a collection of audio samples, the percentage of times a decision module correctly assigns them to the appropriate Whisper model as defined in Equation 1.
The first three rows present the accuracy of the three baselines. The first two achieve accuracies close to random performance, which shows that neither SNR nor accent seems to capture the difficulty of the audio sample. For SNR, we hypothesize that this is due to the fact that the recordings come from relatively clean backgrounds, leading to very noisy and poorly informative estimation from the blind SNR estimator. On the contrary, logit-level entropy, although computationally costly, gives significantly better results, reaching \(64.5\%\) accuracy. It seems to indicate that the model's internal states contain useful information about the difficulty of decoding audio samples.
The second part of the table considers different feature extractors to our ResNet decider. As expected, higher-level features perform better than 80-dimensional Mel Spectrograms. The encoder of the Whisper Small model performs better than Wav2Vec2.0 in our setting, with \(68.2\%\) accuracy compared to \(63.9\%\) for Wav2Vec2.0 features. This suggests that model-related features are the best adapted to the decision task.
Finally, the oracle based on a threshold over the WER of Whisper Tiny produces strong results, showing that the WER of smaller Whisper version can be used to allocate audio samples to large models efficiently. However, blind WER prediction remains a noisy and computationally costly endeavour [7].
We ablate the architectural choice of the decider model in Table 2. First, removing the learned weighted sum of encoder layers slightly degrades the accuracy. Second, using a one-layer transformer network with a roughly equal number of parameters to the ResNet, apart from raising the cost of the decision, also degrades accuracy scores by a couple of percent. Finally, inspired by [15], we implemented a small TL-transformer architecture which is composed of one transformer layer on the encoder layers outputs and another one on their pooled (time-wise) representations. This approach scores even lower than the simple one-layer transformer architecture.
### WER/MACs trade-off
Table 4 shows the trade-off WER/MACs obtained with our pipeline compared to transcribing speech using Whisper models. MACs, which stands for multiply-accumulate operations, is our measure of computational cost. It is important to note that the MACs column shows the cost of the full pipeline for transcribing the CommonVoice test set, including the cost of the decision module when there is one. Table 4 starts with the WER/MACs of simply transcribing speech using the different Whisper models with beam search. Then, we include the logit entropy of Whisper Tiny, which is the best baseline from Table 2. This baseline increases the WER by an absolute \(0.7\) points while reducing the model computational cost by \(150G\). Next comes our main pipeline, which uses a decision module composed of the Whisper Small encoder and a ResNet. For the latter, we provide scores at two different thresholds, \(0.3\) and \(0.5\). By comparison with simply running Whisper Small, selecting a threshold of \(0.5\) on the sigmoid output of our decision module gives a \(16\%\) higher WER while resulting in a \(35\%\) decrease in MACs. Using a threshold of \(0.3\) increases the WER by an absolute \(0.37\) points while reducing the model computational cost by \(310G\) MACs (_i.e._\(12\%\) of the total
cost). Finally, the last 2 lines show the hypothetical improvements that can be obtained using a perfect decider, or a threshold based on a perfect estimation of the WER of \(\mathcal{M}_{T}\). Not only do they reduce the WER, but they also significantly reduce the computational load. These toplines confirm the potential of the approach and call for further research on model size assignment.
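The relative figures quoted in this section can be re-derived directly from the Table 4 entries; nothing below is a new measurement.

```python
# WER and total MACs (tera) copied from Table 4
small   = {"wer": 13.3, "macs": 2.62}   # full Whisper Small pipeline
ours_05 = {"wer": 15.4, "macs": 1.72}   # encoder M_S + ResNet @ 0.5
ours_03 = {"wer": 13.7, "macs": 2.31}   # encoder M_S + ResNet @ 0.3

print((ours_05["wer"] - small["wer"]) / small["wer"])     # ~0.16 -> "16% higher WER"
print((small["macs"] - ours_05["macs"]) / small["macs"])  # ~0.34 -> the ~35% MAC reduction
print(small["macs"] - ours_03["macs"])                    # ~0.31T -> "310G MACs"
print((small["macs"] - ours_03["macs"]) / small["macs"])  # ~0.12 -> "12% of the total cost"
```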
Figure 3 shows the performances (WER) and computational costs (MACs) of the intermediate models obtained using our main pipeline (i.e. Whisper Small encoder and ResNet). Almost all the points are under the plotted diagonal, which means that the resulting drop in performance (relative to the larger model) is systematically smaller than the gain in computational cost. Whisper Base is included in Figure 3 as it is an intermediate model between Whisper Tiny and Small. Its WER/MACs trade-off is only slightly better than that of our selection approach. This shows that the method presented yields nearly equivalent performance to that of an intermediate model trained entirely from scratch, saving very costly training.
### Discussion
The failure of the baselines, together with the encoder layers being the best-performing input, tends to show that the errors are highly dependent on the ASR model, rather than on complexities inherent to the audio signal that would make any ASR model fail to transcribe.
To investigate this, we compute correlation values between the WER of a Conformer Large [24] model and of a Wav2Vec2.0 Base model fine-tuned on LibriSpeech, with the WER of Whisper Tiny, on the LibriSpeech test sets (combined clean and other).
The Pearson correlation coefficient between the WER of the Conformer model and the WER of Whisper Tiny reaches only \(0.44\), while the Spearman correlation is only \(0.41\). Similarly, these two correlation quantities between the WER of Wav2Vec2.0 and Whisper Tiny reach respectively \(0.51\) and \(0.45\). The low correlation values and the weak monotonic relationship seem to indicate that models have different intrinsic failure cases. It confirms, as our results have first shown, that a successful model selection approach needs model-related inputs.
## 5 Conclusion
In this study, we explored a new, computationally efficient approach that selects, for an audio sample, the most efficient model among two models of different sizes. It can be applied to interpolate between two trained models of fixed sizes without additional training, reducing the relative computational cost more than it degrades performance.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Method** & **WER\(\downarrow\)** & **MACs\(\downarrow\)** \\ \hline \(\mathcal{M}_{T}\) & 28.1 & 0.23T \\ \(\mathcal{M}_{Base}\) & 20.7 & 0.60T \\ \(\mathcal{M}_{S}\) & 13.3 & 2.62T \\ \hline \(\mathcal{M}_{T}\) logit entropy & 14.0 & 2.47T \\ encoder \(\mathcal{M}_{S}\) + ResNet @\(0.5\) & 15.4 & 1.72T \\ encoder \(\mathcal{M}_{S}\) + ResNet @\(0.3\) & 13.7 & 2.31T \\ \hline _WER \(\mathcal{M}_{T}\) Oracle_ & _12.93_ & _1.954T_ \\ _Oracle_ & _12.27_ & _1.468T_ \\ \hline \hline \end{tabular}
\end{table}
Table 4: Average WER and MACs (lower the better) associated with each method on the CommonVoice test set. The table starts with the performances of \(\mathcal{M}_{T}\), \(\mathcal{M}_{Base}\) and \(\mathcal{M}_{S}\), which are the full ASR pipelines of Whisper Tiny, Base and Small respectively. \(\mathcal{M}_{T}\) logit entropy is our baseline. encoder \(\mathcal{M}_{S}\) + ResNet is our main contribution, for which we give performances at two different threshold values (0.3 and 0.5). The last 2 lines are our toplines.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Feature extractor** & **Decider** & **test-clean\(\uparrow\)** & **test-other\(\uparrow\)** & **CV test\(\uparrow\)** \\ \hline SNR [21] & thresh. & 50.7 & 47.2 & 47.0 \\ Accent [22] & thresh. & n/a & n/a & 52.0 \\ \(\mathcal{M}_{T}\) logit entropy & thresh. & 64.4 & 63.7 & 64.5 \\ \hline Mel f-bands & ResNet & 62.5 & 55.2 & 60.0 \\ Wav2Vec2.0 Base & ResNet & 52.3 & 57.7 & 63.9 \\ \(\mathcal{M}_{T}\) encoder & ResNet & 65.1 & 65.0 & 66.4 \\ \(\mathcal{M}_{S}\) encoder & ResNet & **68.0** & **66.6** & **68.2** \\ \hline _WER \(\mathcal{M}_{T}\)_ & _thresh._ & _84.8_ & _80.7_ & _80.3_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracies (higher the better) of the several decision modules on LibriSpeech and CommonVoice test sets. SNR, Accent and Logit Entropy are baseline models that only require fitting a threshold on a validation set. When the Decider is ResNet, a dedicated ResNet is trained on each of the different feature extractors. The last line is a topline model based on an oracle that provides the WER of Whisper Tiny. \(\mathcal{M}_{T}\) and \(\mathcal{M}_{S}\) are respectively Whisper Tiny and Small.
Figure 3: WER/MACs values on the CommonVoice test set for different pipelines. The blue dots are the values for multiple thresholds using our best decision module (Whisper Small encoder and ResNet). The crosses correspond to the Whisper models Tiny, Base and Small respectively. Finally, the black line is a linear interpolation between the MACs and WER of Whisper Tiny and Whisper Small
## 6 Acknowledgements
This work was funded in part by the European Research Council (ERC-2011-AdG-295810 BOOTPHON), the Agence Nationale pour la Recherche (ANR-17-EURE-0017 FrontCG, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute) and grants from CIFAR (Learning in Machines and Brains), Meta AI Research (Research Grant), Google (Faculty Research Award), Microsoft Research (Azure Credits and Grant), and Amazon Web Service (AWS Research Credits). Furthermore, this work was performed using HPC resources from GENCI-IDRIS (Grant 2021-[AD011011217])
|
2309.04170 | Comment on "Extending the Laws of Thermodynamics for Arbitrary
Autonomous Quantum Systems" | Recently, Elouard and Lombard Latune [PRX Quantum 4, 020309 (2023)] claimed
to extend the laws of thermodynamics to "arbitrary quantum systems" valid "at
any scale" using "consistent" definitions allowing them to "recover known
results" from the literature. I show that their definitions are in conflict
with textbook thermodynamics and over- or underestimate the real entropy
production by orders of magnitude. The cause of this problem is traced back to
problematic definitions of entropy and temperature, the latter, for instance,
violates the zeroth law. It is pointed out that another framework presented in
PRX Quantum 2, 030202 (2021) does not suffer from these problems, while Elouard
and Lombard Latune falsely claim that it only provides a positive entropy
production for a smaller class of initial states. A simple way to unify both
approaches is also presented. | Philipp Strasberg | 2023-09-08T07:30:45Z | http://arxiv.org/abs/2309.04170v1 | # Comment on "Extending the Laws of Thermodynamics for Arbitrary Autonomous Quantum Systems"
###### Abstract
Recently, Elouard and Lombard Latune [PRX Quantum **4**, 020309 (2023)] claimed to extend the laws of thermodynamics to "arbitrary quantum systems" valid "at any scale" using "consistent" definitions allowing them to "recover known results" from the literature. I show that their definitions are in conflict with textbook thermodynamics and over- or underestimate the real entropy production by orders of magnitude. The cause of this problem is traced back to problematic definitions of entropy and temperature, the latter, for instance, violates the zeroth law. It is pointed out that another framework presented in PRX Quantum **2**, 030202 (2021) does not suffer from these problems, while Elouard and Lombard Latune falsely claim that it only provides a positive entropy production for a smaller class of initial states. A simple way to unify both approaches is also presented.
A recent interesting attempt of Elouard and Lombard Latune (abbreviated ELL in the following) suggests microscopic definitions for thermodynamic quantities for two ("arbitrary" and of "any scale") interacting quantum systems \(A\) and \(B\) [1]. Their legitimate starting point is that the traditional dichotomy of heat and work reservoirs should be contained as limiting cases in a fully quantum description, and they succeed in deriving formal mathematical identities resembling known relations from textbook thermodynamics. In the following, I will explain that this resemblance is, at best, only formal.
For pedagogical purposes I start with the identity resembling Clausius' inequality for a system \(A\) in contact with a single heat bath,
\[\Delta S_{A}-\int_{0}^{t}dt^{\prime}\beta_{B}(t^{\prime})\dot{Q}_{B}(t^{\prime })\geq 0. \tag{1}\]
It is derived by ELL, see Eq. (18) in Ref. [1], for _any_ decorrelated initial state of the form \(\rho_{A}(0)\otimes\rho_{B}(0)\) by identifying \(S_{A}(t)\equiv S_{\rm{vN}}[\rho_{A}(t)]\) (with \(S_{\rm{vN}}\) the von Neumann entropy), by defining the inverse temperature \(\beta_{B}(t)\) of \(B\) by equating \(S_{\rm{vN}}[\rho_{B}(t)]=S_{\rm{vN}}[w_{B}(\beta_{B}(t))]\), where \(w_{B}(\beta_{B}(t))\) denotes the Gibbs (canonical) state of \(B\), and by setting \(\dot{Q}_{B}(t)\equiv-\dot{S}_{\rm{vN}}[\rho_{B}(t)]/\beta_{B}(t)\).
Elementary algebra shows that these definitions make Eq. (1) identical to
\[I_{AB}(t)\geq 0, \tag{2}\]
where \(I_{AB}(t)=S_{\rm{vN}}[\rho_{A}(t)]+S_{\rm{vN}}[\rho_{B}(t)]-S_{\rm{vN}}[\rho_ {AB}(t)]\) is the non-negative mutual information. This quantity is upper bounded by \(2\min\{\ln d_{A},\ln d_{B}\}\), where \(d_{A}\) (\(d_{B}\)) denotes the Hilbert space dimension of \(A\) (\(B\)). Thus, whenever either \(d_{A}\) or \(d_{B}\) is small, the ELL entropy production in Eq. (2) is bounded by a small number.
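For completeness, the elementary algebra is the following (assuming, as in the setting of Ref. [1], that the joint \(AB\) evolution is unitary, so that \(S_{\rm{vN}}[\rho_{AB}(t)]\) is conserved). The ELL definitions give
\[-\int_{0}^{t}dt^{\prime}\beta_{B}(t^{\prime})\dot{Q}_{B}(t^{\prime})=\int_{0}^{t}dt^{\prime}\dot{S}_{\rm{vN}}[\rho_{B}(t^{\prime})]=\Delta S_{B},\]
so the left-hand side of Eq. (1) equals \(\Delta S_{A}+\Delta S_{B}\). Since the initial state is decorrelated, \(I_{AB}(0)=0\) and \(S_{\rm{vN}}[\rho_{AB}(t)]=S_{\rm{vN}}[\rho_{AB}(0)]=S_{A}(0)+S_{B}(0)\), hence
\[\Delta S_{A}+\Delta S_{B}=S_{A}(t)+S_{B}(t)-S_{\rm{vN}}[\rho_{AB}(t)]=I_{AB}(t).\]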
Problematic results follow, which are best illustrated with extreme cases [2]. For instance, consider the case where \(B\) is very small (say a single qubit) and \(A\) describes a box of volume \(V\) with \(N\) gas particles initially confined to a smaller volume \(V^{\prime}<V\). Then, the expansion of the gas generates an entropy production proportional to \(N\), whereas the ELL entropy production can never exceed \(2\ln 2\) (in units of \(k_{B}\)). Similar problems appear if \(A\) is very small (say again a single qubit) and \(B\) is very large. For instance, let \(A\) be driven for a long time by a work reservoir (e.g., an external laser field or a part of \(B\) that autonomously implements a driving source). Then, entropy production should grow with time, but the ELL entropy production can again never exceed \(2\ln 2\). Further counterexamples can be constructed by assuming some intrinsic dissipation in \(B\), e.g., \(B\) could be composed of different regions initialized with different temperatures, something which is clearly conceivable for arbitrary initial states \(\rho_{B}(0)\), among other examples. Note that the importance of entropy production to scale extensively has been emphasized in quantum thermodynamics [3].
The reason for this conflict can be traced back to the ELL definitions of entropy and temperature. Since the inadequacy of von Neumann entropy has been discussed at length in the literature, including recent references [4; 5; 6], I here only focus on the ELL definition of temperature, which--to the best of my knowledge--was first introduced in Ref. [7]. Recall that the ELL temperature is determined by equating the von Neumann entropies of the actual state of \(B\) with a corresponding Gibbs ensemble. It follows, first, that this definition requires _full_ knowledge of the state of \(B\) (as also acknowledged by ELL). Second, ELL temperature is not always defined if the ground state of \(B\) is degenerate, in contrast to the claim that it "always admits a unique solution" [1]. Excluding this in the following, it follows that ELL temperature is _always zero_ if the state of \(B\) is _pure_--_independent_ of its energy, which is certainly problematic if one wants to reproduce the equation \(1/T=\partial_{E}S(E)\). It also conflicts with empirically measured temperatures and it is unable to predict the flow of energy. To be specific, consider two _equal_ (in size and constituents) bodies in thermal contact. Then, the flow of energy is determined by their energy difference, which is not predicted by the ELL temperature [8]. In addition, suppose that these bodies have initially the same energy such that there is no net heat flow and their empirically measured temper |
2309.10105 | Understanding Catastrophic Forgetting in Language Models via Implicit
Inference | We lack a systematic understanding of the effects of fine-tuning (via methods
such as instruction-tuning or reinforcement learning from human feedback),
particularly on tasks outside the narrow fine-tuning distribution. In a
simplified scenario, we demonstrate that improving performance on tasks within
the fine-tuning data distribution comes at the expense of capabilities on other
tasks. We hypothesize that language models implicitly infer the task of the
prompt and that fine-tuning skews this inference towards tasks in the
fine-tuning distribution. To test this, we propose Conjugate Prompting, which
artificially makes the task look farther from the fine-tuning distribution
while requiring the same capability, and we find that this recovers some of the
pretraining capabilities in our synthetic setup. Since real-world fine-tuning
distributions are predominantly English, we apply conjugate prompting to
recover pretrained capabilities in LLMs by simply translating the prompts to
different languages. This allows us to recover in-context learning abilities
lost via instruction tuning, natural reasoning capability lost during code
fine-tuning, and, more concerningly, harmful content generation suppressed by
safety fine-tuning in chatbots like ChatGPT. | Suhas Kotha, Jacob Mitchell Springer, Aditi Raghunathan | 2023-09-18T19:28:48Z | http://arxiv.org/abs/2309.10105v2 | # Understanding Catastrophic Forgetting in Language Models via Implicit Inference
###### Abstract
Fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback) is a crucial step in training language models to robustly carry out tasks of interest. However, we lack a systematic understanding of the effects of fine-tuning, particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on tasks within the fine-tuning data distribution comes at the expense of suppressing model capabilities on other tasks. This degradation is especially pronounced for tasks "closest" to the fine-tuning distribution. We hypothesize that language models implicitly infer the task that the prompt corresponds to, and that the fine-tuning process predominantly skews this task inference towards tasks in the fine-tuning distribution. To test this hypothesis, we propose _Conjugate Prompting_ to see if we can recover pretrained capabilities. Conjugate prompting artificially makes the task look farther from the fine-tuning distribution while requiring the same capability. We find that conjugate prompting systematically recovers some of the pre-training capabilities on our synthetic setup. We then apply conjugate prompting to real-world LLMs using the observation that fine-tuning distributions are typically heavily skewed towards English. We find that simply translating the prompts to different languages can cause the fine-tuned models to respond like their pre-trained counterparts instead. This allows us to recover the in-context learning abilities lost via instruction tuning, and more concerningly, to recover harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT.
Code available at [https://github.com/kothasuhas/understanding-forgetting](https://github.com/kothasuhas/understanding-forgetting)
## 1 Introduction
The development of large language models (LLMs) typically involves two stages--pretraining (next token prediction) on vast text corpora and fine-tuning on carefully curated datasets to adapt the pretrained model to the application of interest. The fine-tuning stage is critical in enabling language models to output helpful text, and there has been significant interest in various fine-tuning methods such as instruction-tuning (Ouyang et al., 2022; Wei et al., 2022; Mishra et al., 2022; Chung et al., 2022; Taori et al., 2023) and reinforcement learning from human feedback (Christiano et al., 2023; Stiennon et al., 2022; Bai et al., 2022; Ziegler et al., 2020).
One fundamental concern is that fine-tuning datasets are considerably smaller and less diverse than web-scale pretraining datasets (Raffel et al., 2020; Arivazhagan et al., 2019; Gao et al., 2021), and there is always a risk that the fine-tuned model "catastrophically forgets" (McCloskey and Cohen, 1989) how to solve problems that the pretrained model could solve. Such a gap has been reported as an "alignment tax" in works such as Ouyang et al. (2022) and Bai et al. (2022), but there is no clear understanding of what these trade-offs are and how to mitigate them. Given the importance of the fine-tuning process, it is imperative to build a systematic understanding of the effects.
In this work, we introduce a synthetic setup to understand the effects of fine-tuning. Building on the prior work of in-context learning linear functions (Garg et al., 2023) by pretraining transformers (Vaswani et al., 2017) on a large number of weight vectors, we show that the resulting transformers can be sub-optimal when evaluated on a few specific weight vectors of special interest. This mirrors real-world settings where the pretraining data is uncurated data from the web, while there are some "natural" tasks of special interest, like question answering. We find that fine-tuning on the
weights (tasks) of interest enables transformers to achieve optimal performance on these tasks but comes at the cost of performing worse on other tasks (Figure 4).
We further examine the performance degradation induced by fine-tuning and observe a striking structure in terms of which tasks are adversely affected. The most affected tasks are outside but still "close" to the fine-tuning distribution as measured by their likelihood under the fine-tuning distribution (Figure 5). In other words, the fine-tuned model performs more like the pretrained model on tasks that are far from the fine-tuning distribution. We hypothesize this is because transformers are implicitly "inferring" the task before solving the corresponding task. The fine-tuning process might not significantly change the capability to solve tasks (inside and outside the fine-tuning distribution), but rather skew the task inference heavily towards the fine-tuning task distribution, disproportionately affecting tasks close to but outside the fine-tuning distribution.
Assuming this framework, we can recover the suppressed pretraining capability by forcing a prompt to look very different from the fine-tuning distribution, helping the fine-tuned model infer that the task is outside the fine-tuning distribution. To do this, we propose _Conjugate Prompting_. For a prompt \(P\) outside the fine-tuning distribution, we prompt the language model with prompt \(P^{\prime}\) such that (i) \(P^{\prime}\) is less likely under the fine-tuning distribution and (ii) the solution to prompt \(P\) can be easily recovered from the solution to prompt \(P^{\prime}\). Since \(P^{\prime}\) is farther from the fine-tuning distribution than \(P\), the fine-tuned model will solve \(P^{\prime}\) with the pretrained capability, allowing us to extract a better solution for the original task \(P\). We test our conjugate prompting method in the linear regression setup and find that it alleviates some of the trade-offs induced by fine-tuning (Figure 6).
Drawing inspiration from the synthetic experiments, we validate whether fine-tuning affects real language models in the same manner. Since fine-tuning datasets are curated and primarily in English, we apply conjugate prompting with language translation to lower the likelihood of being drawn from the fine-tuning distribution while preserving the core task. We construct a problem where it is ambiguous whether the correct task is in-context learning or following an instruction and find that instruction-tuning suppresses in-context learning. Across 5 models and 4 non-English languages (with 2 additional transformations), conjugate prompting recovers the pretrained capability of in-context learning. We also consider the problem of harmful content generation where language models like ChatGPT are actively fine-tuned to suppress the capability of answering harmful instructions: here it is in an adversary's interest to recover suppressed capabilities of pretraining. We find that conjugate prompting can still circumvent the refusal behavior learned during fine-tuning and recover the pretrained capability of following the instruction.
## 2 Linear Regression Experiments
We explore a simple synthetic setup where we train transformers to in-context learn linear functions. Our setup tries to mirror the structure of large language model training by pretraining over a broad
Figure 1: **How does fine-tuning affect language models?** When pretrained over data that contains the orange task \(T_{1}\) and the blue task \(T_{2}\), a model may infer a prompt \(P\) is from task \(T_{1}\) and solve the task accordingly. When fine-tuned over task \(T_{2}\), the model may no longer perform task \(T_{1}\). We hypothesize that this might not mean the task \(T_{1}\) is forgotten, but rather just that the implicit task inference is shifted away from \(T_{1}\). Leveraging this viewpoint, we propose conjugate prompting to recover pretrained model behavior by countering the change in implicit task inference, shedding light onto the nature of catastrophic forgetting.
class of many tasks, from the distribution \(\mathcal{D}_{\text{cont}}\), and a special set of few tasks, from the distribution \(\mathcal{D}_{\text{disc}}\) (Section 2.4). When we fine-tune the model to improve performance over \(\mathcal{D}_{\text{disc}}\), the model seems to "forget" the capability to solve tasks from \(\mathcal{D}_{\text{cont}}\) (Section 2.5). However, we hypothesize that these capabilities are rather "suppressed" (Sections 2.6 and 2.7), and we find that we can recover them through our principled conjugate prompting strategy (Section 2.8).
### Setup: in-context learning of linear functions
We are interested in learning functions \(f\in\mathcal{F}\) that map inputs \(x\in\mathbb{R}^{d}\) to outputs \(y\in\mathbb{R}\). Inspired by previous works (Garg et al., 2023; Akyurek et al., 2022; Li et al., 2023), we focus on linear regression for noisy data, where every function is given by \(f_{w}\colon x\mapsto\langle w,x\rangle\) for a fixed \(w\in\mathbb{R}^{d}\). We are given a set of samples \(S\) of variable length \(k\) from \(0\) to maximum length \(N\) such that
\[S=\left\{(x_{1},y_{1}),\ldots,(x_{k},y_{k})\right\}, \tag{1}\]
with \(y_{i}=f_{w}(x_{i})+\epsilon_{i}\) and \(\epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\). From this, a model estimates the output \(y_{\text{query}}\) for a given input \(x_{\text{query}}\). We will refer to an instance from our function class \(f_{w}\) as a _task_, and when it is clear from context, we will refer to tasks by the associated weight vector \(w\). In this section, all inputs will be sampled from the normal distribution via \(x_{i}\sim\mathcal{N}(0,I_{d})\).
Training an auto-regressive model.We consider auto-regressive models \(T_{\theta}\) that take in a sequence of tokens, each in \(\mathbb{R}^{d}\), and produce a real-valued output. For samples \(S\) generated under \(w\) as in Equation 1, we feed the model \(T_{\theta}\) the sequence \([x_{1},y_{1},\ldots,x_{k},y_{k},x_{\text{query}}]\)1 (which we will refer to as the _prompt_) and take its output \(\hat{y}\) as a prediction of \(y_{\text{query}}\). When appropriate, we will refer to the \(x_{i}\)'s in the prompt as \(X\in\mathbb{R}^{k\times d}\) and the \(y_{i}\)'s as \(y\in\mathbb{R}^{k}\). We train and evaluate \(T_{\theta}\) with respect to a weight distribution \(\mathcal{D}\) via the quadratic loss
Footnote 1: Every \(1\)-dimensional token is right-padded with \(d-1\) zeroes
\[\mathcal{L}(\theta,\mathcal{D})=\sum_{k=0}^{N}\operatorname*{\mathbb{E}}_{ \begin{subarray}{c}x_{i}\sim\mathcal{N}(0,I_{d})\\ \epsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\end{subarray}}\left[\left(T_{ \theta}\left([x_{1},y_{1},\ldots,x_{k},y_{k},x_{\text{query}}]\right)-y_{ \text{query}}\right)^{2}\right]. \tag{2}\]
by sampling a fresh batch of \(x,w,\epsilon\) in each step. Under the quadratic loss, the optimal output is \(\operatorname*{\mathbb{E}}\left[f_{w}(x_{\text{query}})+\epsilon\mid X,y\right]=\left\langle\operatorname*{\mathbb{E}}\left[w\mid X,y\right],x_{\text{query}}\right\rangle\). For our model, we use a 22.4 million parameter GPT-2 style transformer. For a comprehensive explanation of our experimental setup, refer to Appendix B.5.
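To make the training setup concrete, the following is a minimal sketch (not the authors' released code, which is linked above) of how a single prompt could be generated; the function names, the flat token layout, and the value of \(\sigma\) are illustrative assumptions.

```python
import numpy as np

def make_prompt(w, k, d, sigma, rng):
    """Sample k noisy in-context examples for task w plus one query point."""
    X = rng.normal(size=(k, d))                 # x_i ~ N(0, I_d)
    y = X @ w + sigma * rng.normal(size=k)      # y_i = <w, x_i> + eps_i
    x_query = rng.normal(size=d)
    y_query = float(x_query @ w + sigma * rng.normal())
    # Interleave as [x_1, y_1, ..., x_k, y_k, x_query]; each scalar y_i is
    # right-padded with d - 1 zeros so that every token lives in R^d.
    tokens = []
    for x_i, y_i in zip(X, y):
        tokens.append(x_i)
        tokens.append(np.concatenate(([y_i], np.zeros(d - 1))))
    tokens.append(x_query)
    return np.stack(tokens), y_query, X, y

rng = np.random.default_rng(0)
d, k, sigma = 8, 10, 0.5                        # illustrative values
w = rng.normal(size=d)                          # one task vector
prompt_tokens, y_query, X, y = make_prompt(w, k, d, sigma, rng)
```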
### Gaussian prior over weights (\(\mathcal{D}_{\text{cont}}\))
Prior work on learning linear functions (Garg et al., 2023; Akyurek et al., 2022; Li et al., 2023) assumes weights are sampled from a Gaussian prior \(\mathcal{D}_{\text{cont}}=\mathcal{N}(0,\tau^{2}I_{d})\), which we will refer to as the "continuous distribution". In this case, the Bayes optimal predictor performs _ridge regression_:
\[w^{*}_{\text{cont}}(X,y)=\operatorname*{\mathbb{E}}\left[w\mid X,y\right]= \left(X^{\top}X+\frac{\sigma^{2}}{\tau^{2}}I_{d}\right)^{-1}X^{\top}y. \tag{3}\]
As noted in prior work, for most values of \(\tau,\sigma\), a converged transformer's predictions closely match the Bayes optimal predictor when evaluated on new weight vectors from the same Gaussian prior. We replicate this for \(\tau=1\) in Figure 2, left.
Figure 2: **Pretraining loss.** We compare a model trained on \(\mathcal{D}_{\text{cont}}\) against the optimal algorithm of ridge regression (left) and a model trained on \(\mathcal{D}_{\text{disc}}\) of 64 tasks against the optimal algorithm of discrete regression (right). Transformers match Bayes-optimal.
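As a reference point for these comparisons, here is a small sketch of the ridge regression predictor of Equation 3; it is a plain numpy implementation of the Bayes-optimal baseline, not the transformer, and the default \(\sigma,\tau\) values are placeholders.

```python
import numpy as np

def ridge_estimate(X, y, sigma, tau):
    """Posterior-mean weight vector under the Gaussian prior (Equation 3)."""
    d = X.shape[1]
    lam = sigma ** 2 / tau ** 2
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_predict(X, y, x_query, sigma=0.5, tau=1.0):
    return float(x_query @ ridge_estimate(X, y, sigma, tau))
```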
### Discrete prior over fixed weights (\(\mathcal{D}_{\text{disc}}\))
The Gaussian prior spreads probability mass over a large region of weight vectors, but in real world distributions, there isn't such a "uniform" prior over the task space. Rather, there are a few common tasks (e.g. summarization or sentiment analysis) which frequently appear in the task distribution, and pretrained LLMs rely on these priors for producing outputs (Min et al., 2022; Wei et al., 2023; Pan et al., 2023).
We take this scenario to the extreme and consider training over a "fixed" set of weights with the distribution \(\mathcal{D}_{\text{disc}}\) sampling \(w\) uniformly from \(\{w_{1},\dots,w_{n}\}\). We refer to this as the "discrete distribution". For our experiments, we set \(n=64\) and fix each \(w_{i}\) as an independent sample of \(\mathcal{N}(0,I_{d})\). With this new prior over the weights, ridge regression is no longer the optimal solution. The Bayes optimal estimator for \(\mathcal{D}_{\text{disc}}\) is:
\[w_{\text{disc}}^{*}(X,y)=\frac{\sum_{w\in\mathcal{W}}w\varphi\left((y-Xw)/ \sigma\right)}{\sum_{w\in\mathcal{W}}\varphi\left((y-Xw)/\sigma\right)}, \tag{4}\]
where \(\varphi\left(\cdot\right)\) is the density of the standard multivariate normal distribution (derivation in Appendix A.1). We refer to this estimator as _discrete regression_. After training for sufficiently many steps, we find that the Transformer achieves the same loss as the Bayes-optimal estimator \(w_{\text{disc}}^{*}\), clearly outperforming ridge regression on the fixed set of weights (Figure 2, right).
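A matching sketch of the discrete regression estimator of Equation 4 is below; since the Gaussian normalizing constants cancel in the ratio, only the squared residuals are needed, and the log-likelihood shift is just for numerical stability.

```python
import numpy as np

def discrete_estimate(X, y, W, sigma):
    """Posterior-mean weight vector under the discrete prior (Equation 4).

    W has shape (n_tasks, d), one candidate weight vector per row.
    """
    resid = y[None, :] - W @ X.T                  # (n_tasks, k) residuals
    log_lik = -0.5 * np.sum(resid ** 2, axis=1) / sigma ** 2
    post = np.exp(log_lik - log_lik.max())        # unnormalized posterior over tasks
    post /= post.sum()
    return post @ W                               # sum_w p(w | X, y) * w

def discrete_predict(X, y, x_query, W, sigma=0.5):
    return float(x_query @ discrete_estimate(X, y, W, sigma))
```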
### Pretraining over the mixture (\(\mathcal{D}_{\text{mix}}\))
We know that web-scale pretraining data is heavy-tailed, consisting of some important tasks seen often (similar to \(\mathcal{D}_{\text{disc}}\)), as well as a large number of diverse tasks each seen rarely (similar to \(\mathcal{D}_{\text{cont}}\)). To best model this structure, we consider the mixture of the continuous and discrete distributions, which we refer to as the "mixture distribution"
\[\mathcal{D}_{\text{mix}}=\alpha\mathcal{D}_{\text{disc}}+(1-\alpha)\mathcal{D} _{\text{cont}} \tag{5}\]
for a scalar \(\alpha\). The Bayes optimal estimator for this mixture distribution takes the form
\[w_{\text{mix}}^{*}(X,y)=g(X,y)w_{\text{disc}}^{*}(X,y)+(1-g(X,y))w_{\text{cont }}^{*}(X,y), \tag{6}\]
where \(g(X,y)\) is the posterior probability that the prompt \(X,y\) was sampled from \(\mathcal{D}_{\text{disc}}\) (complete expression and derivation in Appendix A.1). Intuitively, the Bayes optimal predictor utilizes ridge regression to get \(w_{\text{cont}}^{*}\) and discrete regression to get \(w_{\text{disc}}^{*}\) which it appropriately weights by the posterior. We refer to this solution as _mixture regression_.
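The sketch below combines the two estimators into mixture regression (Equation 6), reusing the helpers from the earlier sketches. The exact expression for the posterior weight \(g(X,y)\) is given in the paper's Appendix A.1; what follows is the standard Bayes computation under the stated priors, reconstructed here rather than copied from the paper, and it assumes at least one in-context example.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_predict(X, y, x_query, W, alpha, sigma=0.5, tau=1.0):
    k = X.shape[0]
    # Marginal likelihood of y under the discrete prior: a uniform mixture of
    # Gaussians centered at X w, one per candidate task w.
    p_disc = np.mean([
        multivariate_normal.pdf(y, mean=X @ w, cov=sigma ** 2 * np.eye(k))
        for w in W
    ])
    # Marginal likelihood under the Gaussian prior: y ~ N(0, tau^2 X X^T + sigma^2 I).
    p_cont = multivariate_normal.pdf(
        y, mean=np.zeros(k), cov=tau ** 2 * X @ X.T + sigma ** 2 * np.eye(k)
    )
    g = alpha * p_disc / (alpha * p_disc + (1 - alpha) * p_cont)
    return (g * discrete_predict(X, y, x_query, W, sigma)
            + (1 - g) * ridge_predict(X, y, x_query, sigma, tau))
```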
Mixture regression demonstrates a trade-off.For a given \(\alpha\), we measure model performance by evaluating its loss on the continuous and discrete distributions. For mixture regression, there is a natural trade-off between performance on these distributions based on the prior determined by \(\alpha\) as pictured by the black curve in Figure 3. For \(\alpha\) close to \(0\), mixture regression heavily weights ridge regression, while for \(\alpha\) close to \(1\), it weights discrete regression. For intermediate \(\alpha\), mixture regression can utilize the posterior to infer the distribution and get low loss on both \(\mathcal{D}_{\text{cont}}\) and \(\mathcal{D}_{\text{disc}}\).
Pretrained models approach mixture regression.We can similarly measure the loss of transformers evaluated on the continuous and discrete distributions. We find that as we train models for
Figure 3: **Trade-off over training. We measure the loss over \(\mathcal{D}_{\text{cont}}\) and \(\mathcal{D}_{\text{disc}}\) for different models over different values of \(\alpha\). The black curve, mixture regression, faces a natural trade-off over different values of \(\alpha\). We also pretrain models for \(\alpha\in\{0.2,0.5,0.8\}\) and measure their losses at \(1000,2000,3000,4000\) and \(5000\) steps. The solid red lines are trajectories over time for a fixed \(\alpha\) and the dotted red lines are the trade-off for a fixed time step. Over the course of training, models approach mixture regression.**
longer on the mixture distribution, they approach the Bayes-optimal solution of mixture regression for the respective \(\alpha\). However, this convergence is very slow, especially for smaller values like \(\alpha=0.2\). Moreover, even a converged model is bounded in how well it can perform on the discrete distribution due to the trade-off presented by mixture regression.
### The effect of fine-tuning pretrained models
In practice, there is often a distributional mismatch between the tasks the model learned to solve during pretraining and the tasks of interest to an end user. For example, a model trained for next token prediction over internet data doesn't naturally respond to human-written instructions or avoid outputting toxic content. Additionally, pre-training on all tasks is neither a data-efficient nor a compute-efficient method to improve performance on a specific target application.
The most common solution to these problems is to fine-tune the pretrained model over the tasks of interest. We replicate this in our controlled setup by targeting performance on the fixed set of discrete tasks in \(\mathcal{D}_{\text{disc}}\) which requires the model to perform discrete regression (Equation 4). Fine-tuning is necessary since pretraining is both inefficient and limited by the distributional mismatch.
Fine-tuning helps for \(\mathcal{D}_{\text{disc}}\) and hurts \(\mathcal{D}_{\text{cont}}\).Fine-tuning the pretrained models from Section 2.4 over \(\mathcal{D}_{\text{disc}}\) rapidly improves performance on \(\mathcal{D}_{\text{disc}}\), demonstrating the utility of fine-tuning. However, this also leads to large performance drops on \(\mathcal{D}_{\text{cont}}\). For example, as seen in Figure 4, when we fine-tune a transformer pretrained over \(\mathcal{D}_{\text{mix}}\) for \(\alpha=0.5\), the loss decreases on \(\mathcal{D}_{\text{disc}}\) while it increases on \(\mathcal{D}_{\text{cont}}\). This is an instance of "catastrophic forgetting" (McCloskey & Cohen, 1989), where fine-tuning a model to improve at one task causes it to worsen at other tasks.
This performance drop could imply that the model can not perform ridge regression anymore. Thanks to our synthetic setup, we investigate how fine-tuning is affecting model predictions and utilize this understanding to recover lost performance on the continuous distribution.
### Understanding the effects of fine-tuning
To develop a deeper understanding of how fine-tuning enhances performance on \(\mathcal{D}_{\text{disc}}\) while damaging performance on \(\mathcal{D}_{\text{cont}}\), we analyze how the prompt influences the change in loss. We find that the change in loss incurred by fine-tuning is not uniform over all prompts and depends on the likelihood that the prompt was sampled from the fine-tuning distribution \(\mathcal{D}_{\text{disc}}\). In Figure 5, we see how the change in loss induced by fine-tuning varies with the likelihood of being drawn from the fine-tuning distribution. For prompts that are likely to be drawn from the fine-tuning distribution, the loss increases as we lower the likelihood. This lines up with the standard intuition that models will have stronger performance for inputs that are in-distribution and worse performance for inputs that are out-of-distribution. However, this trend does not continue forever and in fact reverses for the continuous prompts. As the likelihood continues to decrease, the model improves performance, running counter to standard intuition about out-of-distribution inputs. With this understanding of how fine-tuning affects model predictions unevenly, we can better probe what function the fine-tuned model has learned.
Figure 4: **Fine-tuning improves discrete loss and hurts continuous loss.** We pretrain a transformer over the mixture of \(\alpha=0.2\) with \(64\) discrete tasks for \(5000\) steps. We fine-tune this model for \(400\) steps on only \(\mathcal{D}_{\text{disc}}\) as highlighted in yellow. The discrete loss rapidly decreases, while the continuous loss rapidly increases.
### Hypothesis: Fine-tuning is suppressing solutions
We consider factoring a model into "capabilities" and "task inference" via
\[w_{\theta}(X,y)=\underbrace{g_{\theta}(X,y)}_{\text{task inference}}\underbrace{w_{\text{disc}}(X,y)}_{\text{discrete capability}}+\underbrace{(1-g_{\theta}(X,y))}_{\text{task inference}}\underbrace{w_{\text{cont}}(X,y)}_{\text{ridge capability}}, \tag{7}\]
where \(g_{\theta}(X,y)\) is some weighting function on the discrete solution. A capability refers to whether the transformer can internally perform an algorithm of interest (i.e. discrete regression or ridge regression) and task inference refers to whether the model can correctly disambiguate which algorithm to use. If \(w_{\theta}(X,y)\) were equal to the Bayes optimal \(w^{*}_{\text{mix}}\) and the model perfectly implemented discrete regression and ridge regression, then \(g_{\theta}(X,y)\) would be the true posterior probability that the prompt is drawn from the discrete distribution. Due to our limited mechanistic understanding of transformers, we cannot test whether this is how language models compute solutions. However, we can utilize this framework as an assumption to develop insight into what function is learned by the model.
Assuming this framework, catastrophic forgetting can be seen as task inference up-weighting fine-tuning tasks and potentially degrading pretraining capabilities. However, from Figure 4, we see that the loss on \(\mathcal{D}_{\text{cont}}\) jumps abruptly as we fine-tune, suggesting that the model is more likely to have learned to down-weight the ridge regression solution rather than completely "unlearn" any internal implementation of ridge regression within a few steps. We hypothesize that during fine-tuning, the drop in performance on the continuous distribution is largely driven by altered task inference, i.e. for a prompt \(X,y\) from \(\mathcal{D}_{\text{cont}}\), \(g_{\theta}(X,y)\) is larger due to the fine-tuning updates. We also hypothesize that the ridge regression and discrete regression capabilities are somewhat preserved.
### Conjugate prompting for linear regression
If the hypothesis were true, we could recover ridge regression by setting \(g_{\theta}(X,y)\) to \(0\). Since we do not know what function the transformer is precisely implementing, this is infeasible, so we try to change the prompt instead. Specifically, for \(X,y\) generated under task \(w\), we consider the scaled prompt \(X,\gamma y\) for a scale factor \(\gamma\). The scaled prompt \(X,\gamma y\) is a valid linear regression problem generated under task \(\gamma w\) with noise \(\gamma\epsilon\). Since a sufficiently large \(\gamma\) will decrease the true posterior \(g(X,y)\) for all \(\alpha\), we expect that \(g_{\theta}(X,\gamma y)\) would be lower than \(g_{\theta}(X,y)\), weighting the model output towards ridge regression. Under this scaling, the loss-optimal prediction for the scaled prompt \(X,\gamma y\) would correspond to \(\langle\gamma w,x_{\text{query}}\rangle\), which is simply the loss-optimal prediction for the prompt \(X,y\) scaled by \(\gamma\).
Therefore, to make the model perform ridge regression instead of discrete regression, we compose our insights into the following prompting strategy. Instead of directly feeding our prompt into the model, we scale the labels \(\gamma\), feed the model the scaled prompt, and scale down the model output. This should recover ridge regression if the model can perform ridge regression for the scaled prompt and if our hypothesis is true. This strategy is an instance of _conjugate prompting_, which we generalize in Section 3.
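In code, the strategy is a thin wrapper around the fine-tuned model; in the sketch below, `model_predict` is a placeholder for a forward pass of the trained transformer on the interleaved prompt, and \(\gamma=2\) is just an example value.

```python
def conjugate_predict(model_predict, X, y, x_query, gamma=2.0):
    """Scale the labels by gamma, query the model, then undo the scaling."""
    y_hat_scaled = model_predict(X, gamma * y, x_query)  # model sees task gamma * w
    return y_hat_scaled / gamma
```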
Conjugate prompting recovers ridge regression.We evaluate this strategy in our ridge regression setup in Figure 6. We find that in line with our hypothesis, conjugate prompting can help improve performance for fine-tuned models. Specifically, we observe that the strategy helps at low
Figure 5: **Change in loss vs density under \(\mathcal{D}_{\text{disc}}\).** We sample 2048 prompts of \(10\) exemplars from the continuous distribution (orange) and discrete distribution (blue). For each prompt, we evaluate the log likelihood of being drawn under \(\mathcal{D}_{\text{disc}}\). We also evaluate how much the loss of the \(\alpha=0.5\) model changed before and after fine-tuning (scaled by the norm of the task vector). We use a binned scatterplot to show the mean and standard deviation over 10 bins of the data. We find the largest increase for the samples of the discrete distribution that are closest to the continuous distribution. We demonstrate this effect for more exemplar counts and pretrained models in Appendix B.1.
sample counts where it is truly ambiguous whether the prompt is drawn from the continuous or discrete distribution. At higher sample counts, the benefits of conjugate prompting are not observed since there is little ambiguity to resolve, and scaling the prompt simply tests a potentially harder task. Since we can get closer to ridge regression through conjugate prompting, we claim the ridge regression solution has not been "forgotten" but "suppressed" since it can be partially recovered through manipulating task inference.
## 3 Conjugate Prompting to Recover Pretraining Capabilities
In Section 2.8, we have observed that applying our model \(T\) to linear regression prompts with lower likelihood of being drawn under the fine-tuning distribution yields an output with lower continuous distribution loss. We are interested in generalizing this strategy to recover capabilities learnt and utilized during pretraining but not during fine-tuning. We can take advantage of this observation to design a novel prompting strategy that requires a transform \(s\) from the original prompt \(P\) to a new prompt \(P^{\prime}\) satisfying two important properties:
1. **(Lower likelihood)**\(P^{\prime}\) should have lower likelihood under the fine-tuning distribution relative to \(P\). This manipulates task inference in favor of the desired solution learnt at pretraining, ensuring that when evaluating \(P^{\prime}\), the model will achieve lower loss.
2. **(Invertibility)** There should exist an inverse to the prompting strategy \(s^{-1}\) to convert the answer \(T(P^{\prime})\) to an answer to \(P\). This ensures that solving \(P^{\prime}\) effectively also solves \(P\).
When we "conjugate" the model by \(s\), e.g. apply \(s^{-1}\circ T\circ s\), we will transform the input into a space where \(T\) performs the solution of interest, and then undo the original transformation, yielding a solution to the original problem that reflects the suppressed pretrained capability. Under this framework, the conjugate prompting strategy in Section 2.8 is succintly described as \(s:(X,y)\rightarrow(X,\gamma y)\). When the model and capabilities of interest naturally contain such a transformation, we can design a conjugate prompting strategy which recovers pretrained capabilities.
## 4 Experiments on large language models
In this section, we investigate whether our understanding of fine-tuning as shifting task inference holds in large-scale language models trained on real-world data. We study two common settings for fine-tuning language models: (i) to improve their helpfulness in instruction following (Section 4.1) and (ii) to reduce their harmfulness by preventing the generation of dangerous content (Section 4.2). In each case, we show that fine-tuning systematically seems to perform "worse" than pretraining on some tasks, but is this change catastrophic? We find that conjugate prompting of the fine-tuned model recovers some of the pretrained behavior in both cases, just like in the stylized setting of Section 2.8.
Figure 6: **Conjugate prompting for fine-tuned models. We take transformers pretrained over \(\mathcal{D}_{\text{mix}}\) for \(\alpha\in\{0.2,0.5,0.8\}\) for \(5000\) steps and fine-tuned over \(\mathcal{D}_{\text{disc}}\) for \(400\) steps. We evaluate their loss on the continuous distribution where they under-perform on ridge regression. Conjugate prompting with label scale factor \(\gamma\in\{1.5,2.0\}\) recovers the pretrained solution of ridge regression, especially on lower sample counts where there is more ambiguity. We demonstrate this effect for more \(\alpha\) and \(\gamma\) in Appendix B.2.**
### Effect of instruction tuning on in-context learning
Instruction tuning is a common fine-tuning procedure to enable pretrained LLMs to follow natural language instructions. While instruction tuning improves instruction following ability, we find that it can come at the cost of other capabilities such as in-context learning. This is particularly amplified when the two tasks are in conflict with each other. For example, suppose the prompt contains exemplars corresponding to a latent task, but the final query \(x_{\text{query}}\) takes the form of an instruction (such as _What is 2_ + _2?_). How well do models perform in-context learning in this setting?
To test this, we generate prompts using the template in Figure 7, where there are different solutions conditioned on whether the task is in-context learning (ICL) or instruction following. See Appendix C.1 for full details on how these prompts were generated and Appendix C.2 for concrete examples. We find that fine-tuned models are always less likely to perform in-context learning compared to their pre-trained counterparts: Alpaca and Vicuna-7b perform ICL on \(56.75\%\) and \(40.00\%\) fewer inputs than LLaMa-7b, and OPT-IML-1.3b performs ICL on \(21.00\%\) fewer inputs than OPT-1.3b. We can contextualize this drop in ICL with fine-tuning under the implicit inference framework of Section 2.7. Let \(\text{L}(\texttt{prompt})\) denote the probability distribution over possible completions by an LLM given \(\texttt{prompt}\). Let \(\text{L}_{\text{IF}}\) denote this distribution conditioned on a model that always performs the task of instruction following, and let \(\text{L}_{\text{ICL}}\) denote the counterpart for ICL. As per our hypothesis, we can write our model \(\text{L}\) as
\[\text{L}(\texttt{prompt})=g_{\theta}(\texttt{prompt})\text{L}_{\text{IF}}( \texttt{prompt})+(1-g_{\theta}(\texttt{prompt}))\text{L}_{\text{ICL}}(\texttt{ prompt}),\]
where the model internally estimates \(g_{\theta}\) which is the posterior likelihood of the model interpreting the latent task to be instruction following. Our hypothesis predicts that one reason instruction-tuned models are worse at ICL is because instruction-tuning increases \(g_{\theta}\) for most prompts, suppressing the in-context learning capability \(\text{L}_{\text{ICL}}\). Note that there might also be a change in the internal representations of \(\text{L}_{\text{ICL}}\) and \(\text{L}_{\text{IF}}\), but we only focus on what can be recovered by simply manipulating the task inference.
If this were true, conjugate prompting (see Section 3) could reverse the effect of \(g\) and cause the fine-tuned model to perform ICL more often.
Conjugate prompting to perform ICL over instruction following.We observe that the instruction tuning data for Alpaca, Vicuna, and OPT-IML are primarily in English. As a result, translating prompts to different languages satisfies the "lower likelihood" property of conjugate prompting (Section 3). Language translation also satisfies the "invertibility" property because we can simply reverse-translate the answer to English for the tasks we consider 2. Furthermore, the pretrained models we consider are capable of performing ICL in different languages. Other than language translation, we find that the additional transformations of Leetspeak and Pig Latin (also discussed in Wei et al. (2023)) satisfy these properties.
Footnote 2: Language translation might violate the task invertibility if we consider tasks that require specific contextual knowledge that might vary across languages
We test whether language translation biases the fine-tuned model more towards the pretrained behavior of ICL. To do so, we compute the drop in ICL frequency between the fine-tuned and pretrained counterparts across several languages, with translation implemented via Google Translate (Wu et al.,
Figure 7: **Language model experiments.** Left: For the in-context learning vs instruction following problem, each prompt can be solved differently by inferring the prompt as an ICL task or an IF task. Bottom: For the harmful generation problem, each harmful instruction can be solved differently by inferring the prompt as an ANSWER task (faithfully answering the problem) or REFUSE task (refusing or answering a different question).
2016). We present comprehensive results across 5 models, 4 non-English languages, and 2 additional transformations in Table 1. We see that translation _always_ results in a smaller drop in ICL frequency compared to English prompts (except by \(0.25\%\) for one model pair on one language). For example, with Alpaca, Leetspeak results in a drop of only \(1.0\%\) and French shows a drop of \(29.00\%\), while in contrast English results in a much larger drop of \(56.75\%\). This confirms that conjugate prompting can successfully revert task inference, and this change can often be significant in practice. We provide a more detailed decomposition of these accuracies in Appendix C.3.
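The shape of this experiment can be sketched as follows. The ambiguous prompt below is only illustrative (the paper's exact template is given in Appendix C.1), and `generate` and `translate` are placeholders for an LLM call and a translation service rather than specific APIs.

```python
# Illustrative ambiguous prompt: the exemplars define a latent ICL task
# ("output the number of words in the input"), while the final query reads as
# an instruction. The ICL answer to the query is "5"; the instruction-following
# answer is "4".
exemplars = [
    ("Name a primary color.", "4"),
    ("How tall is Mount Everest?", "5"),
    ("List three even numbers.", "4"),
]
query = "What is 2 + 2?"
prompt = "\n".join(f"Input: {q}\nOutput: {a}" for q, a in exemplars)
prompt += f"\nInput: {query}\nOutput:"

def conjugate_prompt_via_translation(generate, translate, prompt, lang="fr"):
    """s = translate into `lang`, T = the fine-tuned model, s_inv = translate back."""
    completion = generate(translate(prompt, source="en", target=lang))
    return translate(completion, source=lang, target="en")
```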
### Effects of Reinforcement Learning from Human Feedback On Harmful Content Generation
Since models are pretrained to model any harmful text found on the internet, they are not immediately safe for deployment. As such, models are typically fine-tuned to produce text which reflects human preferences through Reinforcement Learning from Human Feedback (RLHF).
Does this fit within our framework? Consider a prompt like "How do I build a bomb?". If we refer to \(\text{L}_{\text{ANSWER}}\) as the capability that attempts to answer the question while \(\text{L}_{\text{REFUSE}}\) is the solution that
Figure 8: **Example of conjugate prompting. Left: An instruction-tuned model tends to follow the instruction instead of in-context learning. We translate the prompt to another language such as French, take the model output, and translate it to English to recover in-context learning. Right: Safety fine-tuning encourages refusing harmful instructions. We translate the prompt to another language such as Malayalam, take the model output, and translate it to English to recover harmful content generation.**
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Pretrained & Fine-tuned & Language & Pretrained & Fine-tuned & Change in \\ Model & Model & & ICL accuracy & ICL accuracy & ICL accuracy \\ \hline \multirow{5}{*}{LLaMa} & \multirow{5}{*}{Alpaca} & English & 92.00 \% & 35.25 \% & 56.75 \% \\ & & French & 98.50 \% & 69.50 \% & 29.00 \% \\ & & Spanish & 100.00 \% & 52.25 \% & 47.75 \% \\ & & Dutch & 97.75 \% & 46.55 \% & 51.00 \% \\ & & Hungarian & 96.00 \% & 50.25 \% & 45.75 \% \\ \hline \hline \end{tabular}
\end{table}
refuses to answer the question, we can idealize the model's completion as
\[\text{L}(\text{prompt})=g_{\theta}(\text{prompt})\text{L}_{\text{REFUSE}}( \text{prompt})+(1-g_{\theta}(\text{prompt}))\text{L}_{\text{Answer}}(\text{prompt})\]
The pretrained model might see examples of both capabilities, but RLHF will uniformly incentivize refusing to answer the question. Therefore, fine-tuning for safety can be framed as "catastrophically forgetting" how to answer the question.
Conjugate prompting to follow harmful instructions instead of refusing them.In the context of our prior results, when we view language models as implicitly inferring the task, fine-tuning may be suppressing \(\text{L}_{\text{Answer}}\) rather than forgetting it. Moreover, we expect that fine-tuning is primarily in English since preference data is much more expensive to label and less diverse than unsupervised pretraining data (Hao, 2023). To test this claim, we use conjugate prompting to recover language model behavior before safety fine-tuning. Specifically, we test GPT-3.5 before and after fine-tuning for conversational dialogue 3. For our prompts, we sample \(100\) instructions from AdvBench (Zou et al., 2023). We say that the model output reflects the ANSWER task if it attempts to answer the question, and otherwise reflects the REFUSE task if it is a refusal or an answer to a different question 4. We provide concrete details for our setup and labelling guidelines in Appendix D.1 as well as examples of harmful instructions, model completions, and labels in Appendix D.2.
Footnote 3: This corresponds to text-davinci-003 and gpt-3.5-turbo, respectively.
Footnote 4: We do not assess the correctness of the answer to the question since we are testing the success of the refusal mechanism. This is in line with research in this field such as Zou et al. (2023) and Wei et al. (2023).
As can be seen in Table 2, fine-tuning takes the answer frequency from \(92\%\) to \(3\%\) in English. Similar to Section 4.1, we test whether we can apply conjugate prompting via language translation to recover the pretrained capability of answering harmful instructions. In line with our hypothesis, we find that the drop in performing the ANSWER task is _always_ lower in non-English languages. For example, fine-tuning took the English ANSWER frequency from \(92\%\) to \(3\%\) while it took the Malayalam ANSWER frequency from \(71\%\) to \(65\%\). Therefore, we claim that \(\text{L}_{\text{ANSWER}}\) is not forgotten, and conjugate prompting can recover this capability. We note that the brittleness of safety-training, as well as transformations similar to our conjugate prompting functions, has been concurrently documented by Wei et al. (2023) in their comprehensive and impressive analysis of jailbreaking attacks.
## 5 Related Work
Understanding in-context learning.There has been a recent line of work on understanding how _pretrained_ transformers perform in-context learning of simple functions. Garg et al. (2023); Li et al. (2023) study which classes can be in-context learnt, Chan et al. (2022); Kirsch et al. (2022) study the conditions where in-context learning emerges, and Akyurek et al. (2022); von Oswald et al. (2022); Dai et al. (2023) focus on the exact in-context learning algorithm implemented in transformers. Inspired by these works, we focus on understanding in-context learning in the context of fine-tuning.
Another line of work focuses on how transformers implicitly determine which task to perform, with Xie et al. (2021) hypothesizing that next-token prediction task of pretraining can involve implicit bayesian inference; Min et al. (2022); Wei et al. (2023); Tamkin et al. (2022) construct experimental setups to probe how the prompts affect what task the model is inferring. Our work studies the same idea of task inference but builds on this work to first characterize the effect of fine-tuning and then intervene via conjugate prompting to switch between fine-tuned and pretrained behavior.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Language & GPT-3.5 & ChatGPT & Change in \\ & Answer Freq & Answer Freq & Frequency \\ \hline English & 92 \% & 3 \% & 89 \% \\ Japanese & 56 \% & 9 \% & 47 \% \\ Hungarian & 87 \% & 12 \% & 76 \% \\ Swahili & 63 \% & 16 \% & 47 \% \\ Malayalam & 71 \% & 65 \% & 6 \% \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Measuring toxic generation vs refusal.** We report the frequency that the model faithfully attempts to follow the harmful instruction vs the frequency it refuses or answers an unrelated question. We compare ChatGPT (gpt-3.5-turbo) against GPT-3.5 that hasn't undergone safety fine-tuning for deployment (text-davinci-003). Each accuracy is taken over 100 harmful instructions from AdvBench (Zou et al., 2023). Every non-English language has a lower pretrained answer frequency and a lower frequency change than English.
**Fine-tuning pretrained language models.** There is a large body of work on fine-tuning language models in a manner that preserves performance (Raffel et al., 2020; Arivazhagan et al., 2019; Gao et al., 2021), generalizes slightly out-of-distribution (Wei et al., 2022; Sanh et al., 2022; Min et al., 2022a), and aligns with human usage/values (Christiano et al., 2023; Stiennon et al., 2022; Bai et al., 2022; Ziegler et al., 2020; Ouyang et al., 2022; Mishra et al., 2022; Chung et al., 2022).
**Catastrophic forgetting and continual learning.** The general phenomenon of catastrophic forgetting where training on new tasks degrades performance on old tasks has been widely reported and studied (McCloskey & Cohen, 1989; Goodfellow et al., 2015; Kemker et al., 2017). There have been many attempts to address this problem, in what is customarily referred to as continual learning, via regularization or data replay (Kirkpatrick et al., 2017; Parisi et al., 2019; Peng & Risteski, 2022). In this work, we focus on the setting of fine-tuning in LLMs where we find more structure in the exact nature of catastrophic forgetting. This allows us to devise a prompting strategy to recover pretraining capabilities with no change to the actual fine-tuning procedure.
**Prompting in different languages.** Prior works have studied the multilingual capabilities of language models (Shi et al., 2022; Ahuja et al., 2023; Lin et al., 2022). The general finding is that transformers will best complete tasks in English with varying levels of performance drops in other languages. In this work, we focus on the disparity between pretraining and fine-tuning process across different languages (Hao, 2023).
**Adversarial Attacks.** Prior work and tweets have studied how to "jailbreak" LLMs to elicit undesirable content (Shin et al., 2020; Guo et al., 2021; Carlini et al., 2023; Zou et al., 2023). Specific instantiations of our conjugate prompting framework are closely related to some prior works. Ippolito et al. (2022) utilize style transfer to circumvent hard-coded defenses for memorization-based attacks. In recent concurrent work, Wei et al. (2023) report a similar finding of the possibility of jailbreaking via translation. We hope our work provides a unified perspective on such findings, along with a possible explanation via our hypothesis of shifting implicit inference.
**Multi-task learning and meta-learning.** The objective of learning to solve multiple tasks simultaneously falls under the broad umbrella of meta-learning (Finn et al., 2017; Kirsch & Schmidhuber, 2022; Andrychowicz et al., 2016) and multi-task learning (Evgeniou & Pontil, 2004; Radford et al., 2019). Within this literature, the work on (Yin et al., 2020) provides an algorithm that allows for control over whether meta-learners utilize solutions to known tasks or generalize to new tasks. This is similar in objective to conjugate prompting but we focus only on manipulating the input rather than modifying the fine-tuning procedure.
## 6 Discussion and future work
We find that the apparent catastrophic effects of fine-tuning may be explained as shifting implicit task inference. We demonstrate that for fine-tuned models, transforming prompts to look dissimilar to the fine-tuning data via conjugate prompting can recover pretrained capabilities.
Contrary to heuristic prompt engineering, we provide a principled mechanism by which our strategy can recover pretraining performance. We acknowledge that using the pretrained model is a more direct way to obtain pretrained behavior when desired, but in the increasingly common blackbox API setting, we do not have access to all stages of model training (such as LLaMa-2 (Touvron et al., 2023) and Claude). As such, conjugate prompting is also a warning that simply restricting access to only safety-finetuned models is not secure--we can still extract the pretrained behavior.
More than the immediate utility of conjugate prompting, we hope that our analysis of fine-tuning brings us closer to principled adaptation of pretrained models to applications of interest. We believe that our inference hypothesis opens up a number of interesting questions in terms of whether transformers explicitly execute task inference. A more mechanistic understanding of our phenomenon is still missing--are there sub-networks that perform task inference and capabilities, and could we manipulate these networks directly? Finally, it would also be interesting to develop better fine-tuning procedures, perhaps inspired by the continual learning and meta-learning literature. Better fine-tuning methods accompanied by a principled understanding would open up new avenues to guide task inference and leverage transformer capabilities, providing robust and steerable methods for deployment.
Broader Impact.The primary motivation of this work is to offer a principled understanding of the process of fine-tuning, in order to make fine-tuning more efficient and reliable. We acknowledge that
our current analysis reveals gaps in fine-tuning which could potentially be exploited to bypass safety measures introduced when fine-tuning. However, this work does not directly create new attacks or expose new vulnerabilities. Instead, we offer an explanation unifying the success of various existing manual attempts at bypassing safety fine-tuning. We hope our work contributes to the open discussion of limitations of current methods in containing the potential dangers posed by LLMs, and opens up new avenues for further research into the safety and reliability of these systems.
## 7 Limitations
We acknowledge that the controlled nature of our hypothesis test comes at the cost of evaluating accuracy on more common NLP benchmarks. The specific instantiation of conjugate prompting through language translation may not be perfect since it relies on third-party services (e.g. Google Translate), is of lower quality in low-resource languages, and may not preserve the task for problems that rely on the language (e.g. famous quotes). Moreover, conjugate prompting requires domain-specific understanding of the tasks in the fine-tuning data and the task of interest.
## 8 Acknowledgements
We gratefully acknowledge the support of Apple and the AI2050 program at Schmidt Futures. JMS was supported by the NSF Graduate Research Fellowship. We thank Huan Zhang for providing compute for the linear regression experiments and Sang Michael Xie, Andrej Risteski, Daniel Fried, and Jacob Steinhardt for helpful feedback in earlier stages of this work.
2309.06767 | Well-posedness for an hyperbolic-hyperbolic-elliptic system describing
cold plasmas | In this short note, we provide the well-posedness for an
hyperbolic-hyperbolic-elliptic system of PDEs describing the motion of
collision free-plasma in magnetic fields. The proof combines a pointwise
estimate together with a bootstrap type of argument for the elliptic part of
the system. | Diego Alonso-Orán, Rafael Granero-Belinchón | 2023-09-13T07:38:28Z | http://arxiv.org/abs/2309.06767v1 | # Well-posedness for an hyperbolic-hyperbolic-elliptic system describing cold plasmas
###### Abstract.
In this short note, we prove the well-posedness of an hyperbolic-hyperbolic-elliptic system of PDEs describing the motion of collision-free plasma in magnetic fields. The proof combines a pointwise estimate with a bootstrap-type argument for the elliptic part of the system.
Key words and phrases:Cold plasma, well-posedness, hyperbolic-hyperbolic-elliptic system 2020 Mathematics Subject Classification: 35R35, 35Q35, 35S10, 76B03
## 1. Introduction and main result
The motion of a cold plasma in a magnetic field consisting of singly-charged particles can be described by the following system of PDEs [3, 9]
\[n_{t}+(un)_{x} =0, \tag{1a}\] \[u_{t}+uu_{x}+\frac{BB_{x}}{n} =0,\] (1b) \[B-n-\left(\frac{B_{x}}{n}\right)_{x} =0, \tag{1c}\]
where \(n,u\) and \(B\) are the ionic density, the ionic velocity and the magnetic field, respectively. Moreover, it has also been used as a simplified model to describe the motion of a collision-free two-fluid plasma where the electron inertia, charge separation and displacement current are neglected and the Poisson equation (1c) is initially satisfied [3, 10]. In (1) the spatial domain \(\Omega\) is either \(\Omega=\mathbb{R}\) or \(\Omega=\mathbb{S}^{1}\) (_i.e._\(x\in\mathbb{R}\) or \(x\in[-\pi,\pi]\) with periodic boundary conditions) and the time variable satisfies \(t\in[0,T]\) for certain \(0<T\leq\infty\).
\[n(x,0)=n_{0}(x),\ u(x,0)=u_{0}(x), \tag{2}\]
which are assumed to be smooth enough for the purposes of the work.
System (1) was introduced by Gardner & Morikawa [9], who also formally showed that the solutions of (1) converge to solutions of the Korteweg-de Vries equation (see also the paper by Su & Gardner [12]). Berezin & Karpman extended this formal limit to the case where the wave propagates at angles of a certain size with respect to the magnetic field [3]. Later on, Kakutani, Ono, Taniuti & Wei [10] removed the hypothesis on the angle. This formal KdV limit was recently justified by Pu & Li [11]. Very recently in [2], by means of a multi-scale expansion (cf. [1, 4]), the authors derived three asymptotic models of (1) and studied several analytical properties of the models: the existence of conserved quantities, the Hamiltonian structure, the well-posedness and the formation of singularities in finite time. More precisely, for
the uni-directional model which resembles the well-known Fornberg-Whitham equation (cf. [8]), the authors showed that wave-breaking occurs, that is, the formation of an infinite slope in the solution. In [14], a new sufficient condition on the initial data which leads to wave breaking is given, extending the previous work [2]. To the best of the authors' knowledge, although system (1) was introduced more than 50 years ago, the well-posedness of the system has not been studied elsewhere. The goal of this work is to fill this gap, and the main theorem reads as follows
**Theorem 1**.: _Let \(n_{0}(x)>0\), \(n_{0}(x)-1\in H^{2}\) and \(u_{0}(x)\in H^{3}\). Then, there exist a time \(T>0\) and a unique solution of (1) such that_
\[(n-1,u)\in C([0,T],H^{2}\times H^{3}).\]
**Notation.** For \(1\leq p\leq\infty\), let \(L^{p}=L^{p}(\mathbb{R})\) be the usual normed space of \(L^{p}\)-functions on \(\mathbb{R}\) with \(||\cdot||_{p}\) as the associated norm. For \(s\in\mathbb{R}\), the inhomogeneous Sobolev space \(H^{s}=H^{s}(\mathbb{R})\) is defined as
\[H^{s}(\mathbb{R})\triangleq\left\{f\in L^{2}(\mathbb{R}):\|f\|_{H^{s}( \mathbb{R})}^{2}=\int_{\mathbb{R}}(1+\xi^{2})^{s}|\widehat{f}(\xi)|^{2}<+ \infty\right\},\]
with norm
\[\|f\|_{H^{s}}=\|f\|_{L^{2}}+\|f\|_{\dot{H}^{s}}\,.\]
Moreover, throughout the paper \(C=C(\cdot)\) will denote a positive constant that may depend on fixed parameters and \(x\lesssim y\) (\(x\gtrsim y\)) means that \(x\leq Cy\) (\(x\geq Cy\)) holds for some \(C\).
## 2. Proof of Theorem 1
The proof follows the classical a priori estimates approach which combines the derivation of useful a priori energy estimates and the use of a suitable approximation procedure via mollifiers (see for instance [2]). First, we write system (1) in the new variables \(n=1+\eta,B=1+b\). Then system (1) becomes
\[\eta_{t}+(u\eta)_{x}+u_{x} =0, \tag{3a}\] \[u_{t}+uu_{x}+\frac{(1+b)b_{x}}{1+\eta} =0,\] (3b) \[b-\eta-\left(\frac{b_{x}}{1+\eta}\right)_{x} =0. \tag{3c}\]
We are going to find the appropriate energy estimates for the following energy
\[\mathcal{E}(t)=\|\eta(t)\|_{H^{2}}^{2}+\|u(t)\|_{H^{3}}^{2}+\max_{x\in \mathbb{R}}\frac{1}{1+\eta(x,t)}. \tag{4}\]
In order to estimate the last term in the energy \(\mathcal{E}(t)\), we need to derive a pointwise estimate. To that purpose, following [6] and defining
\[m(t)=\min_{x\in\mathbb{R}}\eta(x,t)=\eta(\underline{x}_{t},t),\text{ for }t>0,\]
it is easy to check that \(m(t)\) is a Lipschitz function and one has the following bound
\[|m(t)-m(s)|\leq\max_{y,z}|\partial_{t}\eta(y,z)||t-s|.\]
From Rademacher's theorem it holds that \(m(t)\) is differentiable in \(t\) almost everywhere and furthermore
\[m^{\prime}(t)=\partial_{t}\eta(\underline{x}_{t},t)\text{ a.e.} \tag{5}\]
Then, using (1a) and noticing that \(n_{x}(\underline{x}_{t},t)=0\), we readily see that
\[m^{\prime}(t)=-u_{x}(\underline{x}_{t},t)m(t)-u_{x}(\underline{x}_{t},t)=-u_{ x}(\underline{x}_{t},t)(1+m(t)) \tag{6}\]
Moreover, since by assumption \(m(0)>-1\) we also have that
\[m(t)>-1,\quad\text{ for }0<t\ll 1. \tag{7}\]
We remark that this is not a monotonicity statement relying on a sign condition for \(u_{x}(\underline{x}_{t},t)\), but just a small-in-time argument. Hence, following the argument in [7] and using (6) we find that
\[\frac{d}{dt}\left(\max_{x\in\mathbb{R}}\frac{1}{1+\eta(x,t)}\right)=-\frac{ \partial_{t}\eta(\underline{x}_{t},t)}{(1+m(t))^{2}}=\frac{u_{x}(\underline{x }_{t},t)}{1+m(t)}\leq C(\mathcal{E}(t))^{2}. \tag{8}\]
The lower order \(L^{2}\) norm of \(\eta\) is bounded by
\[\frac{1}{2}\frac{d}{dt}\left\|\eta\right\|_{L^{2}}^{2}\lesssim\left\|\eta \right\|_{L^{2}}^{2}\left\|u_{x}\right\|_{L^{\infty}}+\left\|\eta\right\|_{L^{ 2}}\left\|u_{x}\right\|_{L^{2}} \tag{9}\]
Similarly, we find that
\[\frac{1}{2}\frac{d}{dt}\left\|u\right\|_{L^{2}}^{2}\lesssim(1+\left\|b\right\| _{L^{\infty}})\left\|\frac{b_{x}}{1+\eta}\right\|_{L^{2}}\left\|u\right\|_{L^{ 2}} \tag{10}\]
Testing equation (3a) and (3b) with \(\partial_{x}^{4}\eta\) and \(\partial_{x}^{6}u\) respectively, and integrating by parts we have that
\[\frac{1}{2}\frac{d}{dt}\left\|\partial_{x}^{2}\eta\right\|_{L^{2 }}^{2} \lesssim\left\|\eta\right\|_{H^{2}}^{2}\left\|u\right\|_{H^{3}}+ \left\|\eta\right\|_{H^{2}}\left\|u\right\|_{H^{3}}, \tag{11}\] \[\frac{1}{2}\frac{d}{dt}\left\|\partial_{x}^{3}u\right\|_{L^{2}}^{2} \lesssim\left\|u\right\|_{H^{3}}^{3}+(1+\left\|b\right\|_{L^{\infty}}) \left\|\frac{b_{x}}{1+\eta}\right\|_{H^{3}}\left\|u\right\|_{H^{3}}. \tag{12}\]
Therefore, combining (9)-(12) and using the Sobolev embedding and Young's inequality, we find that
\[\frac{1}{2}\frac{d}{dt}\left(\left\|\eta\right\|_{H^{2}}^{2}+\left\|u\right\|_ {H^{3}}^{2}\right)\lesssim\left\|\eta\right\|_{H^{2}}^{3}+\left\|u\right\|_{H^ {3}}^{3}+(1+\left\|b\right\|_{H^{1}})^{2}\left\|\frac{b_{x}}{1+\eta}\right\|_ {H^{3}}^{2}+\left\|u\right\|_{H^{3}}^{2}. \tag{13}\]
Moreover, using (3c) we find that
\[\left\|\frac{b_{x}}{1+\eta}\right\|_{H^{3}}^{2}=\int_{\mathbb{R}}\left|\left( \frac{b_{x}}{1+\eta}\right)_{xxx}\right|^{2}dx=\int_{\mathbb{R}}\left(\frac{b _{x}}{1+\eta}\right)_{xxx}\left(b-\eta\right)_{xx}\ dx\leq\left\|\frac{b_{x}}{1 +\eta}\right\|_{\dot{H}^{3}}\left(\left\|\eta\right\|_{H^{2}}+\left\|b\right\|_ {H^{2}}\right).\]
Therefore, we find that
\[\left\|\frac{b_{x}}{1+\eta}\right\|_{H^{3}}\leq\left\|\eta\right\|_{H^{2}}+ \left\|b\right\|_{H^{2}}.\]
Plugging the previous estimate in (13) we infer that
\[\frac{1}{2}\frac{d}{dt}\left(\left\|\eta\right\|_{H^{2}}^{2}+\left\| u\right\|_{H^{3}}^{2}\right) \lesssim\left\|\eta\right\|_{H^{2}}^{3}+\left\|u\right\|_{H^{3}}^{ 3}+(1+\left\|b\right\|_{H^{1}})^{2}\left(\left\|\eta\right\|_{H^{2}}+\left\| b\right\|_{H^{2}})+\left\|u\right\|_{H^{3}}^{2}\] \[\lesssim 1+\left\|\eta\right\|_{H^{2}}^{3}+\left\|u\right\|_{H^{3}}^{ 3}+\left\|b\right\|_{H^{2}}^{3}. \tag{14}\]
To close the energy estimate, we need to compute \(\left\|b\right\|_{H^{2}}^{3}\). To that purpose, we first find using the elliptic equation (3c) and integrating by parts that
\[\left\|b\right\|_{L^{2}}^{2}=\int_{\mathbb{R}}\eta b\ dx+\int_{\mathbb{R}}\left( \frac{b_{x}}{1+\eta}\right)_{x}b\ dx=\int_{\mathbb{R}}\eta b\ dx-\int_{\mathbb{R }}\frac{b_{x}^{2}}{1+\eta}\ dx\]
Therefore, using the pointwise estimate (7) we find that the last term
\[-\int_{\mathbb{R}}\frac{b_{x}^{2}}{1+\eta}\ dx\leq 0,\]
and hence Young's inequality yields
\[\left\|b\right\|_{L^{2}}^{2}\leq\left\|\eta\right\|_{L^{2}}^{2} \tag{15}\]
To compute the higher-order norm, let us first write
\[\left\|b_{x}\right\|_{L^{2}}^{2}=\int_{\mathbb{R}}\frac{1+\eta}{1+\eta}(b_{x} )^{2}\ dx=-\int_{\mathbb{R}}\frac{b_{x}}{1+\eta}(1+\eta)_{x}b\ dx-\int_{ \mathbb{R}}\left(\frac{b_{x}}{1+\eta}\right)_{x}(1+\eta)\ b\ dx=I_{1}+I_{2}. \tag{16}\]
Using Hölder's and Young's inequalities, we readily see that
\[\left|I_{1}\right|\leq\left\|b_{x}\right\|_{L^{2}}\left\|\eta_{x}\right\|_{L^ {\infty}}\left\|b\right\|_{L^{2}}\left\|\frac{1}{1+\eta}\right\|_{L^{\infty}} \leq\frac{1}{2}\left\|b_{x}\right\|_{L^{2}}^{2}+C\left\|\eta\right\|_{H^{2}}^ {2}\left\|b\right\|_{L^{2}}^{2}\left\|\frac{1}{1+\eta}\right\|_{L^{\infty}}^{2}. \tag{17}\]
On the other hand, using once again the elliptic equation (3c) we find that
\[I_{2}=\int_{\mathbb{R}}(\eta-b)(1+\eta)b\ dx=\int_{\mathbb{R}}\left(\eta b+ \eta^{2}b-b^{2}(1+\eta)\right)\ dx\leq\left\|b\right\|_{L^{2}}\left\|\eta \right\|_{L^{2}}+\left\|b\right\|_{L^{2}}\left\|\eta\right\|_{L^{2}}\left\| \eta\right\|_{L^{\infty}}. \tag{18}\]
Therefore, collecting (16) -(18) we infer that
\[\left\|b_{x}\right\|_{L^{2}}^{2}\leq\frac{1}{2}\left\|b_{x}\right\|_{L^{2}}^{ 2}+C\left\|\eta\right\|_{H^{2}}^{2}\left\|b\right\|_{L^{2}}^{2}\left\|\frac{1} {1+\eta}\right\|_{L^{\infty}}^{2}+\left\|b\right\|_{L^{2}}\left\|\eta\right\|_ {L^{2}}+\left\|b\right\|_{L^{2}}\left\|\eta\right\|_{L^{\infty}} \tag{19}\]
and hence using (15) we conclude that
\[\left\|b_{x}\right\|_{L^{2}}^{2}\lesssim\left\|\eta\right\|_{H^{2}}^{2}\left\| \eta\right\|_{L^{2}}^{2}\left\|\frac{1}{1+\eta}\right\|_{L^{\infty}}^{2}+ \left\|\eta\right\|_{L^{2}}^{2}+\left\|\eta\right\|_{L^{2}}^{2}\left\|\eta \right\|_{L^{\infty}}. \tag{20}\]
We iterate the previous idea to provide an estimate for \(\left\|b_{xx}\right\|_{L^{2}}\). To that purpose, we write
\[\left\|b_{xx}\right\|_{L^{2}}^{2}=-\int_{\mathbb{R}}\frac{1+\eta}{1+\eta}b_{ xxx}b_{x}\ dx=\int_{\mathbb{R}}(1+\eta)b_{xx}\left(\frac{b_{x}}{1+\eta}\right)_{x} dx+\int_{\mathbb{R}}(1+\eta)_{x}b_{xx}\frac{b_{x}}{1+\eta}\ dx=J_{1}+J_{2}.\]
Using the elliptic equation (3c), we have that
\[J_{1}=\int_{\mathbb{R}}(1+\eta)b_{xx}(b-\eta)\ dx \leq\left\|b_{xx}\right\|_{L^{2}}\left\|(1+\eta)(b-\eta)\right\|_{L^{2}}\] \[\leq\frac{\epsilon}{2}\left\|b_{xx}\right\|_{L^{2}}^{2}+C_{\epsilon}\left(1+\left\|\eta\right\|_{H^{2}}^{4}+\left\|b\right\|_{L^{2}}^{4}\right) \tag{21}\]
where in the second inequality we have used the Sobolev embedding and Young's ineqality. Similarly,
\[J_{2}\leq\left\|b_{xx}\right\|_{L^{2}}\left\|(1+\eta)_{x}\frac{b_{x}}{1+\eta} \right\|_{L^{2}}\leq\frac{1}{2\epsilon}\left\|b_{xx}\right\|_{L^{2}}^{2}+C_{ \epsilon}\left(1+\left\|\eta\right\|_{H^{2}}^{8}+\left\|\frac{1}{1+\eta} \right\|_{L^{\infty}}^{8}+\left\|b_{x}\right\|_{L^{2}}^{8}\right). \tag{22}\]
Therefore taking \(\epsilon\ll 1\) (for instance \(\epsilon=1/4\)), we find that
\[\frac{1}{2}\left\|b_{xx}\right\|_{L^{2}}^{2}\leq C\left(1+\|\eta\|_{H^{2}}^{8}+\left\|\frac{1}{1+\eta}\right\|_{L^{\infty}}^{8}+\left\|b_{x}\right\|_{L^{2}}^{8}\right). \tag{23}\]
Hence, combining estimate (23) with the previous estimates for \(\left\|b_{x}\right\|_{L^{2}}\) given in (20) and for \(\left\|b\right\|_{L^{2}}\) given in (15), we conclude that
\[\left\|b\right\|_{H^{2}}^{3}\leq C\left(1+\mathcal{E}(t)\right)^{p}, \tag{24}\]
for some \(C>0\) and \(p>2\) large enough. The precise power \(p\) can be computed, though it is not essential for obtaining a local-in-time solution. Hence, plugging the previous estimate into (14) and taking into account (8), we conclude that
\[\frac{d}{dt}\mathcal{E}(t)\leq C\left(1+\mathcal{E}(t)\right)^{p} \tag{25}\]
for some \(C>0\) and \(p>2\) large enough which ensures a local time of existence \(T^{\star}>0\) such that
\[\mathcal{E}(t)\leq 4\mathcal{E}(0),\quad\text{ for }0\leq t\leq T^{\star}.\]
In order to construct the solution, we first define the approximate problems using mollifiers, which reads
\[\eta_{t}^{\epsilon}+\mathcal{J}_{\epsilon}(\mathcal{J}_{\epsilon}u^{\epsilon}\,\mathcal{J}_{\epsilon}\eta^{\epsilon})_{x}+\mathcal{J}_{\epsilon}\mathcal{J}_{\epsilon}u_{x}^{\epsilon} =0, \tag{26a}\] \[u_{t}^{\epsilon}+\mathcal{J}_{\epsilon}\left(\mathcal{J}_{\epsilon}u^{\epsilon}\,\mathcal{J}_{\epsilon}u_{x}^{\epsilon}\right)+\frac{(1+b^{\epsilon})b_{x}^{\epsilon}}{1+\eta^{\epsilon}} =0,\] (26b) \[b^{\epsilon}-\eta^{\epsilon}-\left(\frac{b_{x}^{\epsilon}}{1+\eta^{\epsilon}}\right)_{x} =0. \tag{26c}\]
Repeating the previous estimates, we find a time of existence \(T^{\star}>0\) for the sequence of regularized problems. Using compactness arguments and passing to the limit, we conclude the proof of existence. The time continuity of the solution is obtained by classical arguments. On the one hand, the differential equation (25) gives the strong right continuity at \(t=0\). Using the change of variables \(\hat{t}=-t\), we get the strong left continuity at \(t=0\); combined, these give the continuity in time of the solution.
## Acknowledgments
D.A-O is supported by the Spanish MINECO through Juan de la Cierva fellowship FJC2020-046032-I. R.G-B is supported by the project "Mathematical Analysis of Fluids and Applications" Grant PID2019-109348GA-I00 funded by MCIN/AEI/10.13039/501100011033 and acronym "MAFyA". This publication is part of the project PID2019-109348GA-I00 funded by MCIN/AEI/10.13039/501100011033. This publication is also supported by a 2021 Leonardo Grant for Researchers and Cultural Creators, BBVA Foundation. The BBVA Foundation accepts no responsibility for the opinions, statements, and contents included in the project and/or the results thereof, which are entirely the responsibility of the authors. D.A-O and R. G-B are also supported by the project "Analisis Matematico Aplicado y Ecuaciones Diferenciales" Grant PID2022-141187NB-I00 funded by MCIN/AEI and acronym "AMAED". |
2302.14777 | VQA with Cascade of Self- and Co-Attention Blocks | The use of complex attention modules has improved the performance of the
Visual Question Answering (VQA) task. This work aims to learn an improved
multi-modal representation through dense interaction of visual and textual
modalities. The proposed model has an attention block containing both
self-attention and co-attention on image and text. The self-attention modules
provide the contextual information of objects (for an image) and words (for a
question) that are crucial for inferring an answer. On the other hand,
co-attention aids the interaction of image and text. Further, fine-grained
information is obtained from two modalities by using a Cascade of Self- and
Co-Attention blocks (CSCA). This proposal is benchmarked on the widely used
VQA2.0 and TDIUC datasets. The efficacy of key components of the model and
cascading of attention modules are demonstrated by experiments involving
ablation analysis. | Aakansha Mishra, Ashish Anand, Prithwijit Guha | 2023-02-28T17:20:40Z | http://arxiv.org/abs/2302.14777v1 | # VQA with Cascade of
###### Abstract
The use of complex attention modules has improved the performance of the Visual Question Answering (VQA) task. This work aims to learn an improved multi-modal representation through dense interaction of visual and textual modalities. The proposed model has an attention block containing both self-attention and co-attention on image and text. The self-attention modules provide the contextual information of objects (for an image) and words (for a question) that are crucial for inferring an answer. On the other hand, co-attention aids the interaction of image and text. Further, fine-grained information is obtained from two modalities by using a Cascade of Self- and Co-Attention blocks (CSCA). This proposal is benchmarked on the widely used VQA2.0 and TDIUC datasets. The efficacy of key components of the model and cascading of attention modules are demonstrated by experiments involving ablation analysis.
_Keywords: Visual Question Answering, Attention Networks, Self-Attention, Co-attention, Multi-modal Fusion, Classification Networks_
## 1 Introduction
Initial attention-based approaches [35][15][36][1] focused on identifying salient image regions based on the text of a given question. In other words, the focus was on giving attention to images (visual attention) only. Subsequent methods, often referred to as _co-attention_-based methods [21][4], combined textual attention along with image attention. Textual attention focuses on relevant words in the context of the given image. Co-attention-based methods improved the performance of VQA systems. A few studies [36][22][9][10] have shown that considering attention in a cascaded or stack-based manner helps in obtaining enriched representation with fine-grained information.
Recent attention-based models have taken inspiration from transformer-based models [33] to include self-attention (SA) as well. SA captures internal correlations within a single modality. For the text modality, SA encodes correlations among words to obtain an informative representation of the given sentence. Similarly, for the image modality, SA encodes correlations among the salient regions of the image. Figure 1 shows an illustrative example. The given question is _"What color is the women's shirt?"_. The salient regions of the image include the woman. The representation of this region is more informative if it retains contextual information, such as the dress she is wearing and her hair color, as well as its correlations with other salient objects; here, the woman's shirt is likely to be one of the regions most strongly correlated with other salient objects. SA helps in encoding such information.
Based on the respective advantages of SA, co-attention (CA), and cascaded attention mechanisms, this work proposes combining them in a systematic manner. Towards this objective, the proposed model builds a self- and co-attention based attention block (SCA) that combines SA and CA in a specific way. For each of the text and image modalities, a dedicated SA module obtains a feature representation for that modality. The co-attention module then uses the self-attended representation of one modality and attends over the self-attended representation of the other modality to obtain a cross-modality contextual representation for the second modality. Thus, there are two SA modules (one each for the text and image modalities) and two co-attention modules within a single SCA block (Figure 2). Within one SCA block, each modality guides itself to capture internal correlations and guides the other to learn robust representations of both the visual and textual domains.
The proposed model exploits the niche attributes of the different attention mechanisms and combines them in a dense attention module (SCA block). A cascade of multiple SCA blocks (CSCA) is used to extract fine-grained information. Figure 2 gives an overview of the \(t^{\text{th}}\) SCA block, which takes the question and image representations of the \((t-1)^{\text{th}}\) block as input and provides improved representations of the question and image.
To analyse and evaluate the model performance, extensive experiments are performed on two widely used VQA datasets: _VQA2.0_[12] and _TDIUC_[14]. Ablation analysis experiments are also performed to understand the impact of the important components of the proposed model. Primary contributions of this work are:
Figure 1: An example to illustrate the self-attention relevance for visual content.
Figure 2: Overview of proposed model. An attention block, referred to as SCA, comprises of _self-attention (SA)_ and _co-attention (CA)_ modules. Multiple such attention blocks are cascaded, where output of some \((t-1)^{th}\) block is presented as input to the \((t)^{th}\) block.
* A dense attention based VQA model comprising of cascaded attention blocks.
* The core of each attention block consists of self-attention and co-attention so that the two modalities guide each other to obtain an enriched representation.
* Extensive benchmarking and ablation analysis on two widely used VQA datasets: _TDIUC_ and _VQA2.0_.
## 2 Related Work
VQA, being a multimodal task, requires a unified representation of the text and image modalities. Initial VQA models [2, 12, 31, 13] adopted simple fusion based approaches. These models first obtained feature representations of the individual modalities using corresponding pre-trained networks and then combined them to obtain a joint representation using a fusion schema. Simple fusion schemes include concatenation or element-wise summation or multiplication. Fukui et al. [8] proposed bi-linear pooling to capture the interaction of components of the two modalities in a better way. Seeing the advantage of the bilinear pooling based fusion methods, further variants of bilinear pooling with lower complexity or faster convergence were proposed. MFB [39], MLB [17], MFH [40] were proposed to obtain a representation providing better interaction of the two modalities.
Introduction of attention mechanism in [3] equipped neural models with a systematic procedure to assign relative weights of importance to sequential inputs. Shi et al. in [29] have introduced image attention guided by question to focus on salient image regions relevant to the given question. This helped in obtaining improved feature representations. This led to the development of several attention based approaches for VQA [15][16][36][34][22][35][1]. Studies in [36][34][22] have shown that applying attention multiple times helps in obtaining enriched representation embedded with fine-grained information.
Authors in [21][39] have proposed that attention on textual features in the context of visual features, along with visual attention, plays a key role in VQA models.
Figure 3: Functional block diagram of the proposed approach. Initial feature extraction followed through a cascade of self-attention and co-attention mechanisms. Final features are fused through element-wise multiplication and fed to a fully connected network for answer classification.
Such a two-way attention mechanism is referred to as _dual attention_, _co-attention_, or _cross-modality attention_ in the literature; we use these terms interchangeably. Kim et al. [16] have proposed bilinear interaction based attention for the two modalities. Do et al. [7] have proposed an approach exploiting knowledge distillation with teacher and student models. Mishra et al. [22] have proposed a co-attention based multistage model for VQA. In another work, the authors of [23] have proposed question categorization and dual attention for VQA. RAMEN [30] is a unified model that uses high-level reasoning and can deal with VQA datasets based on both real-world and synthetic images.
Another class of attention mechanisms uses intra-modal attention (self-attention) along with cross-modal attention (co-attention) to learn better feature representations. Gao et al. [9] have proposed DFAF, which combines self-attention and co-attention. Multi-modal Latent Interaction (MLIN) [10] used multi-modal reasoning through summarization, interaction, and aggregation. Yu et al. [38] have proposed an encoder-decoder based dense attention mechanism. These models are denser than the previous approaches and hence are referred to as dense attention based models. Authors in [20][32] have proposed transformer based attention models for multimodality tasks. These models are pretrained for multiple tasks on huge datasets and can be further exploited for downstream tasks.
The proposed model falls in the category of dense attention based methods. It uses a cascade of attention blocks to obtain a multi-modal feature representation. Here, each attention block comprises intra-modality and cross-modality interactions. The proposed method is described next.
## 3 Proposed Method
The proposed framework treats VQA as an answer classification task following existing works like [1][9][2][12][10]. The input image \(I\) (\(I\in\mathcal{I}\)) and the associated natural language question \(q\) (\(q\in\mathcal{Q}\)) are first subjected to feature extraction (Subsection 3.1). Pretrained deep networks are used to extract features from a few salient image regions. The network embeddings are used to represent the input image. Similarly, a pretrained network is used to obtain the word embeddings of the associated input question. These word embeddings collectively represent the input question. The feature embeddings of both image and text modalities are subjected to a self-attention mechanism (Subsection 3.2) for capturing the relationships among different regions of \(I\) and words of \(q\). The self-attended representations of these two modalities are further processed by co-attention modules (Subsection 3.3). This single stage of **S**elf- and **C**o-**A**ttention forms a single SCA block (Figure 2). Multiple SCA blocks are cascaded to obtain further fine-grained representations of both modalities. The embeddings obtained from the final SCA block are fused (Subsection 3.4) and fed to the answer classification network (Subsection 3.5) to predict the answer \(\hat{a}\) (\(\hat{a}\in\mathcal{A}\)).
### Feature Extraction
A pretrained deep network based object detection model (Faster R-CNN, [27]) is used to identify the top-\(n_{v}\) salient regions from the input image \(I\). The pretrained ResNet-101 [13] network is used to compute the visual feature of each region as an embedding \(\mathbf{r}\in\mathbb{R}^{d_{v}}\). Thus, the input image \(I\) is represented as \(\mathbf{r}\mathbf{I}\in\mathbb{R}^{d_{v}\times n_{v}}\) by using \(n_{v}\) number of \(d_{v}\) dimensional ResNet-101 embeddings.
\[\mathbf{r}\mathbf{I}=[\mathbf{r}_{1},\dots\mathbf{r}_{n_{v}}];\mathbf{r}\in \mathbb{R}^{d_{v}} \tag{1}\]
The input natural language question \(q\) is first padded and trimmed to a length of \(n_{w}\) words. The word features are further extracted as pretrained GloVe embeddings \(\mathbf{eq}\in\mathbb{R}^{d_{w}}\)[26]. Thus, the question \(q\) is represented as \(\mathbf{E_{q}}\in\mathbb{R}^{d_{w}\times n_{w}}\) by using \(n_{w}\) number of \(d_{w}\) dimensional embeddings.
\[\mathbf{E_{q}}=[\mathbf{eq}_{1},\dots\mathbf{eq}_{n_{w}}];\mathbf{eq}\in \mathbb{R}^{d_{w}} \tag{2}\]
All feature embeddings in \(\mathbf{r}\mathbf{I}\) and \(\mathbf{E_{q}}\) are projected to a common \(d\) dimensional space to obtain the respective initial feature embedding matrices as \(\mathbf{r}\mathbf{I}(0)\) and \(\mathbf{E_{q}}(0)\).
\[\mathbf{r}\mathbf{I}(\mathbf{0}) = W_{c}^{I}\mathbf{r}\mathbf{I} \tag{3}\] \[\mathbf{Eq}(\mathbf{0}) = W_{c}^{Q}\mathbf{E_{q}} \tag{4}\]
Here, \(W_{c}^{I}\in\mathbb{R}^{d\times d_{v}}\) and \(W_{c}^{Q}\in\mathbb{R}^{d\times d_{w}}\) are the transformation matrices. These representations are provided as input to the self- and co-attention modules.
### Self-Attention
The self-attention (SA) mechanism is one of the key components of the proposed model. It is incorporated for both textual (question as collection of words) and visual (image as top-\(n_{v}\) salient regions) modalities. At the \(t^{\text{th}}\) (\(t=1,\dots T\)) block, the input to SA are \(\mathbf{r}\mathbf{I}(t-1)\) and \(\mathbf{E_{q}}(t-1)\). Following [33], the SA uses _keys_ and _queries_, both of dimension \(d_{KQ}\) and values of dimension \(d_{VS}\) respectively. The _Multi-Head Attention_[33] is incorporated to capture the attention from different aspects. For this, \(n_{h}\) parallel heads are added, where each head is considered to learn the relationships from different view (for image) and context (for question).
Let \(\mathbf{E_{M}}=\{\mathbf{em}_{1}\dots\mathbf{em}_{l}\}\) be a matrix of feature embeddings, where \(\mathbf{em}\in\mathbb{R}^{d_{m}}\) and \(\mathbf{E_{M}}\in\mathbb{R}^{d_{m}\times l}\). For visual features, \(\mathbf{E_{M}}=\mathbf{r}\mathbf{I}(t-1)\), \(l=n_{v}\) and \(d_{m}=d\). Similarly, for question features, \(\mathbf{E_{M}}=\mathbf{E_{q}}(t-1)\), \(l=n_{w}\) and \(d_{m}=d\).
The query (\(Q_{S}^{(i)}\)), key (\(K_{S}^{(i)}\)) and value (\(V_{S}^{(i)}\)) matrices for the \(i^{\text{th}}\) head can be respectively
expressed as follows
\[Q_{S}^{(i)} = \left(W_{i}^{QS}\right)^{\mathsf{T}}\mathbf{E_{M}} \tag{5}\] \[K_{S}^{(i)} = \left(W_{i}^{KS}\right)^{\mathsf{T}}\mathbf{E_{M}}\] (6) \[V_{S}^{(i)} = \left(W_{i}^{VS}\right)^{\mathsf{T}}\mathbf{E_{M}} \tag{7}\]
where \(W_{i}^{QS}\in\mathbb{R}^{d_{m}\times d_{KQ}}\), \(W_{i}^{KS}\in\mathbb{R}^{d_{m}\times d_{KQ}}\) and \(W_{i}^{VS}\in\mathbb{R}^{d_{m}\times d_{VS}}\) are transformation matrices. Using \(\{Q_{S}^{(i)},K_{S}^{(i)},V_{S}^{(i)}\}\), the inner product of the queries with all the keys is computed and divided by \(\sqrt{d_{KQ}}\) for more stable gradients [33]. The _SoftMax_ function is applied on the inner product to obtain the attention weights for question words and image salient regions. A scaled inner product based attention is computed for all the heads in the following manner.

\[\mathbf{H_{i}}=\left(V_{S}^{(i)}\right)\mathrm{SoftMax}\left(\frac{{Q_{S}^{(i)}}^{\top}K_{S}^{(i)}}{\sqrt{d_{KQ}}}\right) \tag{8}\]
\[\mathbf{MH}(\mathbf{E_{M}})=W_{mh}\mathbf{H} \tag{9}\]
Here, \(\mathbf{H}\in\mathbb{R}^{(n_{h}\times d_{VS})\times l}\) denotes the concatenation of the head outputs \(\mathbf{H_{1}},\dots,\mathbf{H_{n_{h}}}\) along the feature dimension, and \(W_{mh}\in\mathbb{R}^{d_{m}\times(n_{h}\times d_{VS})}\) is the transformation matrix. The output \(\mathrm{MH}(\mathbf{E_{M}})\) of the multi-head attention module is passed through fully connected feed-forward layers with ReLU activation and dropout to prevent overfitting. Further, residual connections [13] followed by layer normalization are applied on top of the fully connected layers for faster and more accurate training. The layer normalization is applied over the embedding dimension only.
Figure 4: Multihead Attention Mechanism
Finally, the self-attended embeddings of the input feature \(\mathbf{E_{M}}\) are obtained as \(\mathbf{SE_{M}}=\{\mathbf{sem}_{1}\ldots\mathbf{sem}_{l}\}\) where \(\mathbf{sem}\in\mathbb{R}^{d_{m}}\) and \(\mathbf{SE_{M}}\in\mathbb{R}^{d_{m}\times l}\). Multihead attention mechanism is shown in Figure 4.
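To make the above concrete, the following is a minimal PyTorch-style sketch of the SA module defined by Eqs. (5)-(9). It is an illustration of the computation, not the authors' implementation; the class and variable names are ours, the default sizes follow the implementation details given later (\(d=512\), \(n_{h}=8\), \(d_{KQ}=d_{VS}=64\)), and tensors are laid out as (batch, length, \(d\)), i.e., the transpose of the \(d\times l\) matrices used in the text.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Sketch of one SA module: multi-head scaled dot-product attention
    (Eqs. 5-8), multi-head merge (Eq. 9), then a feed-forward layer with
    residual connections and layer normalization."""

    def __init__(self, d=512, n_heads=8, d_kq=64, p_drop=0.1):
        super().__init__()
        self.n_heads, self.d_kq = n_heads, d_kq
        self.W_q = nn.Linear(d, n_heads * d_kq)     # queries, Eq. (5)
        self.W_k = nn.Linear(d, n_heads * d_kq)     # keys,    Eq. (6)
        self.W_v = nn.Linear(d, n_heads * d_kq)     # values,  Eq. (7)
        self.W_mh = nn.Linear(n_heads * d_kq, d)    # merge heads, Eq. (9)
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                 nn.Dropout(p_drop), nn.Linear(d, d))
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x):                           # x: (B, l, d)
        B, l, _ = x.shape
        split = lambda t: t.view(B, l, self.n_heads, self.d_kq).transpose(1, 2)
        q, k, v = split(self.W_q(x)), split(self.W_k(x)), split(self.W_v(x))
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d_kq ** 0.5, dim=-1)
        h = (att @ v).transpose(1, 2).reshape(B, l, -1)   # concatenate heads
        x = self.norm1(x + self.W_mh(h))            # residual + layer norm
        return self.norm2(x + self.ffn(x))
```

In the proposed model, one such module is applied to the projected image-region embeddings and another to the question-word embeddings at every SCA block.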
### Co-Attention
For cross-modal interactions, the co-attention module intakes the representations of two modalities and generates attention in context of each other. To facilitate this, the self-attended embeddings \(\widetilde{\mathbf{E_{q}}}(t-1)\) and \(\widetilde{\mathbf{r}}\mathbf{I}(t-1)\) are taken as input. For generating image attention in context of question words, keys and values are generated from self-attended intermediate question representation while the query is obtained from the image itself (following Equation 8). Thus, the query (\(Q_{C}^{(i)}\)), key (\(K_{C}^{(i)}\)) and value (\(V_{C}^{(i)}\)) are respectively computed as follows.
\[Q_{C}^{(i)} = \left(W_{i}^{QC}\right)^{\intercal}\widetilde{\mathbf{E_{q}}}(t-1) \tag{10}\] \[K_{C}^{(i)} = \left(W_{i}^{KC}\right)^{\intercal}\widetilde{\mathbf{r}} \widetilde{\mathbf{I}}(t-1)\] (11) \[V_{C}^{(i)} = \left(W_{i}^{VC}\right)^{\intercal}\widetilde{\mathbf{E_{q}}}(t-1) \tag{12}\]
Here, \(W_{i}^{QC}\in\mathbb{R}^{d_{m}\times d_{KQ}}\), \(W_{i}^{KC}\in\mathbb{R}^{d_{m}\times d_{KQ}}\) and \(W_{i}^{VC}\in\mathbb{R}^{d_{m}\times d_{KV}}\) are transformation matrices. Similarly, for cross-modal question attention, the query is obtained from the self-attended question embeddings, while the keys and values are obtained from the self-attended image embeddings. These queries, keys and values are similarly processed following Equations 8
Figure 5: _Self-attention_ and _Co-attention_ mechanism overview. Here, \(M\) denotes the input modality.
and 9 to obtain the multi-head attention. This is fed to fully connected layers with ReLU, dropout, skip connections and layer normalization. The output of this network provides the final output of the co-attention module. Figure 5 demonstrates the overview of the _self-attention_ and _co-attention_ mechanism followed.
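As with the SA module, a minimal PyTorch-style sketch of one direction of the co-attention computation is given below; it is an illustration under our own naming and tensor-layout conventions, not the authors' code. In this sketch, queries come from the modality being updated (the first argument) and keys and values from the other modality; the paper's precise query/key/value assignment for each direction is the one written in Eqs. (10)-(12).

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Sketch of one co-attention direction: representation `x` is updated
    by attending over the other modality `y` (queries from x, keys/values
    from y in this sketch), followed by the same FFN, residual and
    layer-norm structure as in the SA module."""

    def __init__(self, d=512, n_heads=8, d_kq=64, p_drop=0.1):
        super().__init__()
        self.n_heads, self.d_kq = n_heads, d_kq
        self.W_q = nn.Linear(d, n_heads * d_kq)
        self.W_k = nn.Linear(d, n_heads * d_kq)
        self.W_v = nn.Linear(d, n_heads * d_kq)
        self.W_mh = nn.Linear(n_heads * d_kq, d)
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                 nn.Dropout(p_drop), nn.Linear(d, d))
        self.norm1, self.norm2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, x, y):                 # x: (B, lx, d), y: (B, ly, d)
        B = x.size(0)
        split = lambda t, l: t.view(B, l, self.n_heads, self.d_kq).transpose(1, 2)
        q = split(self.W_q(x), x.size(1))
        k = split(self.W_k(y), y.size(1))
        v = split(self.W_v(y), y.size(1))
        att = torch.softmax(q @ k.transpose(-2, -1) / self.d_kq ** 0.5, dim=-1)
        h = (att @ v).transpose(1, 2).reshape(B, x.size(1), -1)
        x = self.norm1(x + self.W_mh(h))
        return self.norm2(x + self.ffn(x))
```

Two such modules, one per direction, form the CA part of a single SCA block.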
### Cascading & Fusion
A single SCA block, comprising _self-attention_ (intra-modality interaction) and _co-attention_ (inter-modality interaction), generates an enriched representation (\(\mathbf{rI}(t),\mathbf{E_{q}}(t)\)) of its input visual and textual features.

Existing works [36][22] suggest the stacking of multiple such blocks to obtain further fine-grained representations. This is accomplished by cascading SCA blocks for \(T\) steps. Let \(\mathbf{rI}(T)\in\mathbb{R}^{d\times n_{v}}\) and \(\mathbf{E_{q}}(T)\in\mathbb{R}^{d\times n_{w}}\) be the respective visual and question representations obtained from the final (\(T^{\text{th}}\)) SCA block.
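Building on the two sketches above, the cascading itself can be written as a simple loop. This is an illustration only: the exact wiring inside a block follows Figure 2, and all names here are ours.

```python
def sca_block(img, qst, modules):
    """One SCA block: self-attention within each modality, then
    co-attention across modalities (cf. Figure 2)."""
    sa_img, sa_qst, ca_img, ca_qst = modules
    img_s, qst_s = sa_img(img), sa_qst(qst)   # intra-modality interaction
    img_out = ca_img(img_s, qst_s)            # image updated w.r.t. question
    qst_out = ca_qst(qst_s, img_s)            # question updated w.r.t. image
    return img_out, qst_out

def cascade(img, qst, blocks):
    """Chain T SCA blocks; the paper uses T = 4."""
    for modules in blocks:
        img, qst = sca_block(img, qst, modules)
    return img, qst
```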
The feature representations are obtained by averaging the attended embeddings of the two modalities. So, the final visual embedding, say \(\mathbf{I}_{f}\), is obtained as follows.

\[\mathbf{I}_{f}=\frac{1}{n_{v}}\sum_{j=1}^{n_{v}}\mathbf{rI}(T)[:,j] \tag{13}\]
Similarly, the question encoding, say \(\mathbf{Q}_{f}\) is evaluated in the following manner.
\[\mathbf{Q}_{f}=\frac{1}{n_{w}}\sum_{j=1}^{n_{w}}\mathbf{E_{q}}(T)[:,j] \tag{14}\]
The unified multi-modal representation, say \(\mathbf{F}\in\mathbb{R}^{d}\) is obtained by fusing \(\mathbf{I}_{f}\) and \(\mathbf{Q}_{f}\) through element-wise multiplication.
\[\mathbf{F}=\mathbf{I}_{f}\odot\mathbf{Q}_{f} \tag{15}\]
The fused embedding \(\mathbf{F}\) is fed to a fully connected network for answer prediction.
### Answer Prediction
The fused embedding \(\mathbf{F}\) is fed to a fully connected network with a single hidden layer of dimension \(d_{hp}\). The number of labels at the output layer is \(n_{c}\) (\(n_{c}=\mid\mathcal{A}\mid\)). The output answer vector, say \(\mathbf{\hat{a}}\), is predicted as follows.
\[\mathbf{\hat{a}}=\mathrm{FCNet}\left(\mathbf{F};d_{hp};n_{c}\right) \tag{16}\]
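A minimal sketch of the pooling, fusion and prediction steps of Eqs. (13)-(16) is shown below. The hidden size \(d_{hp}=1024\) follows the implementation details given later, while the number of answer classes is dataset dependent and is set to a placeholder value here; all names are illustrative.

```python
import torch
import torch.nn as nn

def fuse_and_predict(img_feats, qst_feats, classifier):
    """Eqs. (13)-(16): average the attended embeddings of each modality,
    fuse by element-wise multiplication, and predict answer logits."""
    i_f = img_feats.mean(dim=1)        # (B, d), average over the n_v regions
    q_f = qst_feats.mean(dim=1)        # (B, d), average over the n_w words
    fused = i_f * q_f                  # element-wise product, Eq. (15)
    return classifier(fused)           # FCNet output, Eq. (16)

# Hidden layer of 1024 units; 3000 answer classes is only a placeholder.
classifier = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 3000))
```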
### Model Learning
Let the respective ground truth and predicted answers be \(a\) and \(\hat{a}\) (\(a,\hat{a}\in\mathcal{A}\)) for an input image \(I\) and question \(q\). The model uses the cross-entropy loss for answer prediction, defined as
\[\mathcal{L}_{c}=-\sum_{j=1}^{n_{c}}a[j]log(\hat{a}[j]) \tag{17}\]
The combined set of parameters for the proposed model includes those for feature extraction, the dense attention blocks, and the fusion mechanism.
## 4 Experiment Design
This section discusses the datasets used to benchmark the proposed model, the three evaluation metrics and the necessary implementation details.
### Dataset
The proposed model is evaluated through experiments performed on the datasets VQA2.0 [12] and TDIUC [14]. The VQA2.0 [12] dataset is widely used for the VQA task. There are three question categories in VQA2.0. These are _'Yes/No'_ (\(37.6\%\)), _'Number'_ (\(13.03\%\)) and _'Other'_ (\(49.37\%\)). The dataset is divided into _train_, _validation_ and _test_ sets with \(443757\), \(214354\) and \(447793\) image, question and answer triplets respectively.
The Task-Directed Image Understanding Challenge (TDIUC) [14] is another large VQA dataset of real images. Questions are categorized into \(12\) types. These are _'Scene Recognition'_ (\(4.03\%\)), _'Sport Recognition'_ (\(1.91\%\)), _'Color'_ (\(11.82\%\)), _'Other Attributes'_ (\(1.73\%\)), _'Activity Recognition'_ (\(0.52\%\)), _'Positional Reasoning'_ (\(2.32\%\)), _'Object Recognition'_ (\(5.66\%\)), _'Absurd'_ (\(22.16\%\)), _'Utility & Affordance'_ (\(0.03\%\)), _'Object Presence'_ (\(39.73\%\)), _'Counting'_ (\(9.96\%\)) and _'Sentiment Understanding'_ (\(0.13\%\)). Total \(1.6\) million question, image and answer triplets are split into _train_ and _validation_ sets. The _train_ set consists of \(1.1\) million triplets and \(0.5\) million triplets are in the validation split. To deal with language prior issues, TDIUC consists of a special category 'Absurd', where an input question is not related to the visual content of a given image.
### Evaluation Metrics
For evaluation of the TDIUC dataset, _Arithmetic-Mean Per Type (AMPT)_ and _Harmonic-Mean Per Type (HMPT)_ are proposed in [14] as fair evaluation metrics along with _Overall Accuracy_. The AMPT is the average of question category-wise accuracies with uniform weight to each category. On the other hand, HMPT measures the ability of the model to have a high score across all question types.
The VQA2.0 dataset evaluation is performed using the following metric defined in [2].
\[\mathbf{Accuracy(\hat{a})}=min\Big{\{}\frac{\textbf{\#humans that said \(\hat{a}\)}}{\textbf{3}},\textbf{1}\Big{\}} \tag{18}\]
Each question in the VQA2.0 dataset was answered by \(10\) annotators. The above evaluation metric considers a predicted answer correct if it matches the answers given by at
least \(3\) annotators.
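As an illustration, the metric of Eq. (18) can be computed with the short sketch below (the function name and signature are ours).

```python
def vqa_accuracy(predicted_answer, human_answers):
    """VQA2.0 accuracy (Eq. 18): full credit if at least 3 of the 10
    annotators gave the predicted answer, partial credit otherwise."""
    n_matching = sum(ans == predicted_answer for ans in human_answers)
    return min(n_matching / 3.0, 1.0)

# Example: 2 of 10 annotators agree, so the accuracy is 2/3.
print(vqa_accuracy("blue", ["blue", "blue", "green"] + ["red"] * 7))
```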
### Implementation Details
Visual feature representation \(\mathbf{rI}\) is constructed by extracting \(n_{v}=36\) (for TDIUC) and \(n_{v}=100\) (for VQA2.0) image regions. The use of ResNet-101 embeddings provide image region features of \(d_{v}=2048\) dimensions. The question length in terms of number of tokens (\(n_{w}\)) is set to \(14\) by trimming or padding. The GloVe word embeddings of \(d_{w}=300\) dimensions are considered. The image and word features are projected to same dimensions \(d=512\). For self- and co-attention computations, the key, query and value vector dimensions are set to \(64\), i.e., \(d_{KQ}=d_{VS}=64\). The model uses \(n_{h}=8\) heads for multi-head attention. The model is trained for \(15\) epochs with a batch size of \(64\) samples for both experiments and analysis. The hidden layer dimension of answer prediction FCNet is set to \(d_{hp}=1024\). The Adamax optimizer [18] is used with a decaying step learning rate. The initial learning rate is set to \(0.002\), and it decays by \(0.1\) after every \(5\) epochs. The proposed model CSCA is built on the PyTorch framework and is trained on NVIDIA-GTX \(1080\) GPU.
## 5 Results and Discussion
### Quantitative Results
**Overall Performance & Category-wise Performance Comparison on TDIUC Dataset -** Tables 1 and 2 present the respective class-wise and overall performance for the TDIUC dataset. In terms of the overall accuracy, Arithmetic-MPT (AMPT) and Harmonic-MPT
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \hline
**Question Type** & **SAN** & **RAU** & **MCB** & **QTA** & **BAN** & **CSCA** \\ & **[36]** & **[14]** & **[11]** & **[28]** & **[16]** & \\ \hline Scene Recognition & 92.3 & 93.96 & 93.06 & 93.80 & 93.1 & **94.48** \\ Sport Recognition & 95.5 & 93.47 & 92.77 & 95.55 & 95.7 & **95.85** \\ Color Attributes & 60.9 & 66.86 & 68.54 & 60.16 & 67.5 & **75.51** \\ Other Attributes & 46.2 & 56.49 & 56.72 & 54.36 & 53.2 & **60.89** \\ Activity Recognition & 51.40 & 51.60 & 52.35 & 60.10 & 54.0 & **61.00** \\ Positional Reasoning & 27.9 & 35.26 & 35.40 & 34.71 & 27.9 & **42.14** \\ Object Recognition & 87.50 & 86.11 & 85.54 & 86.98 & 87.5 & **89.11** \\ Absurd & 93.4 & 96.08 & 84.82 & **100.0** & 94.47 & 97.28 \\ Utility \& Affordance & 26.3 & 31.58 & 35.09 & 31.48 & 24.0 & **40.35** \\ Object Presence & 92.4 & 94.38 & 93.64 & 94.55 & 95.1 & **96.34** \\ Counting & 52.1 & 48.43 & 51.01 & 53.25 & 53.9 & **60.70** \\ Sentiment Und. & 53.6 & 60.09 & 66.25 & 64.38 & 58.7 & **67.19** \\ \hline
**Overall Accuracy** & 82.0 & 84.26 & 81.86 & 85.03 & 85.5 & **88.12** \\
**Harmonic Mean** & 53.7 & 59.00 & 60.47 & 60.08 & 54.9 & **67.05** \\
**Arithmetic Mean** & 65.0 & 67.81 & 67.90 & 69.11 & 67.4 & **73.34** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Category-wise comparison of CSCA with previous state-of-the-art methods on the TDIUC dataset
(HMPT) measures, the proposed model CSCA exhibits better performance compared to most of the baseline methods. Also, in terms of class-wise accuracy, CSCA leads in all except one class. A significant relative gain of 12.6% is observed compared to the next best performing model for the _'Counting'_ category of questions. Table 3 presents the results for different models trained without the _'Absurd'_ category of questions. It is observed that CSCA performs better than the existing ones for all three defined evaluation metrics.
**Overall Performance & Category-wise Performance Comparison on VQA2.0 Dataset** - Table 4 demonstrates the results on test-dev and test-std splits of the VQA2.0 dataset. Performance of the proposed model CSCA is comparable with that of the best among the existing methods. The models LXMERT [32], ViLBERT [20] are pre-trained for multiple vision and language based tasks and are fine-tuned for VQA. Here, CSCA has obtained 67.36% accuracy on the validation set. This is around 1% improvement over the best performance among the existing methods.
### Basic Analysis
**Effect of Training Data Size on Performance -** An analysis is performed to observe the effect of the variation of training dataset size on model performance. The primary objective of this experiment was to ascertain whether a model trained on a smaller dataset can provide performance similar to one learned from the complete set. To explore this, the model is trained with four different datasets obtained from the original VQA2.0 dataset. The first three datasets are obtained by randomly shuffling all samples of the VQA2.0 dataset and extracting 25%, 50% and 75% of the samples. The fourth one is the complete VQA2.0 dataset (i.e. 100%). Other experimental settings, such as the hidden
\begin{table}
\begin{tabular}{l|c|c} \hline
**Model** & **Overall Accuracy** & **Arithmetic Mean** \\ \hline
**BTUP[1]** & 82.91 & 68.82 \\
**QCG[24]** & 82.05 & 65.67 \\
**BAN2-CTI[7]** & 87.00 & 72.5 \\
**DFAF[9]** & 85.55 & **NA** \\
**RAMEN[30]** & 86.86 & 72.52 \\
**MLIN[10]** & 87.60 & NA \\ \hline
**CSCA** & **88.12** & **73.34** \\ \hline \end{tabular}
\end{table}
Table 2: Comparing _Overall Accuracy_ of CSCA for TDIUC dataset
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline
**Metrics** & **MCB** & **QTA** & **BAN** & **BAN2-CTI** & **CSCA** \\ & **[11]** & **[28]** & **[16]** & **[7]** & \\ \hline
**Overall Accuracy** & 78.06 & 80.95 & 81.9 & 85.0 & **85.30** \\
**Arithmetic-MPT** & 66.07 & 66.88 & 64.6 & 70.6 & **71.21** \\
**Harmonic-MPT** & 55.43 & 58.82 & 52.8 & 63.8 & **65.40** \\ \hline \end{tabular}
\end{table}
Table 3: Performance of CSCA on TDIUC data (except Absurd category samples) trained without ‘Absurd’ Category samples
dimension and the number of answer classes, are kept the same as in the original setup for all variants of the dataset. The epoch-wise performances for the four different datasets are shown in Figure 6(a). As expected, the model performance improved with an increase in training dataset
\begin{table}
\begin{tabular}{l|c|c c c c|c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c|}{**Val**} & \multicolumn{4}{c|}{**Test-Dev**} & **Test-Std** \\ \cline{2-7} & **Overall** & **Yes / No** & **Number** & **Other** & **Overall** \\ \hline
**MCB [8]** & 59.14 & 78.46 & 38.28 & 57.80 & 62.27 & 53.36 \\
**MLB [17]** & 62.98 & 83.58 & 44.92 & 56.34 & 66.27 & 66.62 \\
**MUTAN [4]** & 62.71 & 82.88 & 44.54 & 56.50 & 66.01 & 66.38 \\
**MFH [40]** & 62.98 & 84.27 & 49.56 & 59.89 & 68.76 & – \\
**BLOCK [5]** & 64.91 & 83.14 & 51.62 & 58.97 & 68.09 & 68.41 \\ \hline
**SAN [36]** & 61.70 & 78.40 & 40.71 & 54.36 & 61.70 & – \\
**BTUP [1]** & 63.20 & 81.82 & 44.21 & 56.05 & 65.32 & 65.67 \\
**BAN [16]** & 65.81 & 82.16 & 45.45 & 55.70 & 64.30 & – \\
**v-VRANet [37]** & – & 83.31 & 45.51 & 58.41 & 67.20 & 67.34 \\
**ALMA [19]** & – & 84.62 & 47.08 & 58.24 & 68.12 & 66.62 \\
**ODA [41]** & 64.23 & 83.73 & 47.02 & 56.57 & 66.67 & 66.87 \\
**BAN2-CTI [7]** & 66.00 & – & – & – & – & 67.4 \\
**CRANet [25]** & – & 83.31 & 45.51 & 58.41 & 67.20 & 67.34 \\
**CoR [34]** & 65.14 & 84.98 & 47.19 & 58.64 & 68.19 & 68.59 \\
**MUREL [6]** & 65.14 & 84.77 & 49.84 & 57.85 & 68.03 & 68.41 \\ \hline
**DFAF [9]** & 66.66 & 86.09 & 53.32 & 60.49 & 70.22 & 70.34 \\
**MLIN [10]** & 66.53 & 85.96 & 52.93 & 60.40 & 70.18 & 70.28 \\
**LXMERT [32]** & – & – & – & – & – & **72.5** \\
**ViLBERT [20]** & – & – & – & – & 70.55 & 70.92 \\ \hline
**CSCA** & **67.36** & 86.57 & **53.58** & **61.06** & **70.72** & 71.04 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Model performance on VQA 2.0 dataset: Validation, Test-Dev & Test-Std splits. CSCA is compared with several state-of-the-art methods including _Fusion based_, _Visual Attention_, _Dense Attention_ based methods separated with lines.
Figure 6: Illustrating the learning curves on training datasets formed with different amounts of instances from VQA2.0. (a) Performance on validation set of VQA2.0 with respect to the number of epochs. (b) Overall accuracy for VQA2.0 dataset with different proportion of the training data.
size. It can be observed that in all four settings, the model performance evolves with the number of epochs in a similar fashion. However, Figure 6(b) indicates that the relative gain achieved by increasing the training dataset size from \(25\%\) to \(50\%\) is significant compared to that achieved by increasing from \(50\%\) to \(75\%\) or from \(75\%\) to \(100\%\). This observation may be attributed to the fact that, in a collection of randomly shuffled datasets, not many novel instances were encountered during the subsequent increases of the training data.
**Effect of Number of SCA Blocks -** In one pass, it is difficult for a model to grasp all relevant information through a single representation. Thus, attention blocks in cascade extract fine-grained information and pass it on to the next block for further refinement. A set of experiments is performed to identify the optimal number of blocks in the cascade. Additionally, the effect of the different attention mechanisms (SA only, CA only, SCA) on answer prediction is also analyzed. In Figure 7(a), the overall performance on the validation split of the VQA2.0 dataset is given with respect to the number of blocks. Figure 7(b) shows the parameter counts with respect to the number of blocks. As expected, the models perform poorly with a single attention block (SA only, CA only, SCA). However, the performance rises only up to four blocks; increasing the number of blocks beyond four does not lead to any further performance improvement, while adding more blocks increases the number of model parameters (Figure 7(b)). Furthermore, one can observe that the CA-only model performs better than the SA-only model, which is in line with expectation. Similarly, Figure 8 shows that for the TDIUC dataset the model performance keeps improving until the fourth SCA block and starts deteriorating with a further increase in the number of blocks.
### Ablation Analysis
The proposed model performs self-attention on the two modalities to obtain intra-modality correlated features. Then the co-attention module uses respective representations of the two modalities to obtain cross-modality correlated features by performing attention for
Figure 7: Number of attention blocks incorporated. (a) Validation accuracy for VQA2.0 _‘val’_ split with respect to attention blocks. (b) Parameter counts with respect to attention blocks
one modality in the context of another. In this ablation analysis, we examine the impact of the individual attention modules in various combinations to understand their importance. We also analyze the set of correct predictions obtained in these settings.
Tables 5 and 6 present the results of the ablation analysis experiments in terms of performance and complexity. The complexity is expressed in terms of the number of model parameters. The first row of the table shows the model performance when neither attention mechanism is incorporated. The features of both modalities are fused directly via element-wise multiplication without applying self- or co-attention. The second row shows the performance when _only self-attention_ (SA only) is incorporated on both modalities and answer prediction is based on the fused embedding of the self-attended representations of the individual modalities. Here, the fused representation is obtained via element-wise multiplication. The third row shows the results when _only co-attention_ (CA only) is incorporated, with image and question each attended in the context of the other. The last row shows the results from the proposed model that comprises both _self-attention_ and _co-attention_ in cascade (SCA).
\begin{table}
\begin{tabular}{c c|c c} \hline \hline
**SA** & **CA** & \begin{tabular}{c} **Overall** \\ **Accuracy** \\ \end{tabular} &
\begin{tabular}{c} **Parameter** \\ **(in Millions)** \\ \end{tabular} \\ \hline \(\mathcal{X}\) & \(\mathcal{X}\) & 69.18 & 7 \\ \(\mathcal{X}\) & \(\mathcal{V}\) & 70.46 & 21 \\ \(\mathcal{V}\) & \(\mathcal{X}\) & 87.42 & 25 \\ \(\mathcal{V}\) & \(\mathcal{V}\) & **88.12** & **36** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluating model performance on TDIUC dataset to investigate the effect of _number of attention blocks_ and self-attention & cross attention.
Figure 8: _Validation Accuracy_ and Parameter Count (in Millions) for _TDIUC_ dataset with respect to the number of SCA blocks incorporated in the VQA model.
\begin{table}
\begin{tabular}{c c|c c c c c} \hline \hline
**SA** & **CA** & \begin{tabular}{c} **Yes / No** \\ \end{tabular} & \begin{tabular}{c} **Number** \\ \end{tabular} & \begin{tabular}{c} **Other** \\ **Accuracy** \\ \end{tabular} &
\begin{tabular}{c} **Overall** \\ **(in Millions)** \\ \end{tabular} \\ \hline \(\mathcal{X}\) & \(\mathcal{X}\) & 69.95 & 36.42 & 50.19 & 55.80 & 22 \\ \(\mathcal{V}\) & \(\mathcal{X}\) & 79.08 & 40.75 & 49.96 & 59.69 & 15 \\ \(\mathcal{X}\) & \(\mathcal{V}\) & 81.17 & 44.63 & 56.34 & 64.13 & 25 \\ \(\mathcal{V}\) & \(\mathcal{V}\) & **84.92** & **49.51** & **58.71** & **67.36** & **42** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluating model performance on VQA2.0 dataset to investigate the effect of _different basic attention modules_ of the proposed model
As per expectation, the model without any attention mechanism provides the lowest performance (first row). The "SA only" model provides lower performance as it lacks the interaction of two modalities and learns a comparatively poor representation (second row). Co-attention is the crucial component for multi-modality that is found to perform better than _self-attention_. In terms of computational complexity, a simple fusion-based model uses the least number of parameters, while the proposed model (SCA) requires the highest number of parameters. However, the performance improvement, especially for VQA2.0 dataset, overcomes the complexity issue. We observe that the change in model performance is similar for both datasets in this analysis.
Figure 9 shows the model's performance with the various attention mechanisms for the different question categories of the VQA2.0 dataset. The following observations hold for the _'Number'_ category of questions. With the SA-only and CA-only blocks, the respective models achieve overall performances of \(65\%\) and \(73\%\). The models using SA and CA individually each predict \(7\%\) of samples correctly that are not correctly classified by any of the other models. Similarly, the model using SCA blocks classifies \(12\%\) of samples correctly that are not correctly classified by either the SA-only or the CA-only model. Thus, the model using SCA blocks achieves the best performance. The same pattern is observed for the other question types, i.e., _'Yes/No'_ and _'Other'_. The detailed results for all question types are shown in Figure 9.
### Qualitative Results
The qualitative results are presented in Figure 10 to demonstrate the efficacy of the proposed model. For this, the two salient regions of a given image with the highest attention scores are highlighted. These are the attention scores obtained after cascading \(T=4\) SCA blocks. The question words that obtain the highest attention scores are also highlighted. As evident from Figure 10(a), the proposed model CSCA is able to focus on relevant image
Figure 9: SCA: Self-Attention & Co-attention, SA: Only self-attention is applied on text and visual features, CA: Cross-Modality Attention on text as well as on visual features guided by each other.
regions and question words. The top-2 salient regions corresponding to the binary question _"Are there any cows in the picture?"_ are the ones that capture the _cows_ and hence, the model responds with the answer _'Yes'_. Similarly, Figures 10(b)-10(h) show that the model identifies the salient image regions and relevant question words to predict the appropriate answer.
However, the model made errors as well. One of the reasons is incorrect attention to image regions. As shown in Figure 11(a), the model's focus is primarily on the region that makes the room look like a kitchen; if attention were given to other regions, the answer would likely change to _'living room'_. In Figure 11(b), for the question _'What color is the wall in back of the desk?'_, the model focuses on the side of the desk instead of the back, and the predicted answer is 'green', the color of the side wall of the desk.
Figure 10: Qualitative results for our proposed method CSCA. Attention for the image obtained with a cascade of \(T=4\) SCA blocks is presented. The (top1, top2) attention score values correspond to the top two attention weights obtained for the top-2 salient regions that are relevant to infer the answer. The question words shown in blue are the ones that get the highest attention score.
## 6 Conclusion
This work proposes a dense attention mechanism-based VQA model. Dense attention is incorporated by exploiting both self-attention and co-attention. The self-attention mechanism helps in obtaining improved representation within a single modality. With self-attention, a salient region (in the case of image) interacts with every other region. The final representation inherits the contextual information for all regions. Similarly, for the input questions, self-attention provides the representation of every single word that captures the contextual information for other words as well. The proposed model also exploits the cross-modal interaction of two modalities which is further strengthened by self-attention of two modalities. Attention blocks are cascaded multiple times to facilitate refined cues of visual and textual features. The model's capability is justified by detailed experiments and analysis performed on the two benchmark VQA datasets.
The proposed method can be extended in several ways. The present proposal may be subjected to bias and consistency analysis. For example, this may be performed by rephrasing questions and flipping (or rotating) associated images. Also, the current proposal can be extended with translated question-answer pairs to validate its applicability in multi-lingual VQA.
|
2309.06001 | Measuring relative humidity from evaporation with a wet-bulb
thermometer: the psychrometer | Measuring the relative humidity of air is an important challenge for
meteorological measurements, food conservation, building design, and
evaporation control, among other applications. Relative humidity can be
measured with a psychrometer, which is a hygrometer composed of two identical
thermometers. The bulb of one thermometer is covered by a wick soaked with
water so that evaporative cooling makes it indicate a lower temperature than
the dry-bulb thermometer; it is possible to determine the relative humidity
from the difference between these readings. We describe both a model and an
experimental setup to illustrate the principle of a psychrometer for a
pedagogical laboratory. The science of psychrometry could be more broadly
taught at the undergraduate level to help introduce students to aspects of
measurement techniques, fluid mechanics, heat transfer, and non-equilibrium
thermodynamics. | Marie Corpart, Frédéric Restagno, François Boulogne | 2023-09-12T06:58:16Z | http://arxiv.org/abs/2309.06001v2 | # Measuring relative humidity from evaporation with a wet-bulb thermometer: the psychrometer
###### Abstract
Measuring the relative humidity of air is an important challenge for meteorological measurements, food conservation, building design, and evaporation control, among other applications. Relative humidity can be measured with a psychrometer, which is a hygrometer composed of two identical thermometers. The bulb of one thermometer is covered by a wick soaked with water so that evaporative cooling makes it indicate a lower temperature than the dry-bulb thermometer; it is possible to determine the relative humidity from the difference between these readings. We describe both a model and an experimental setup to illustrate the principle of a psychrometer for a pedagogical laboratory. The science of psychrometry could be more broadly taught at the undergraduate level to help introduce students to aspects of measurement techniques, fluid mechanics, heat transfer, and non-equilibrium thermodynamics.
## 1 Introduction
Humidity is a general concept referring to the water content of a gas as defined, for example, on page 302 of Ref. [1]. In practice, the relative humidity \(\mathcal{R}_{H}\) is the ratio, often expressed as a percentage, of the partial pressure of water in the atmosphere \(p\) at temperature \(T\) to the saturation vapor pressure \(p_{\text{sat}}(T)\) at the same temperature:
\[\mathcal{R}_{H}(T)=\frac{p(T)}{p_{\text{sat}}(T)}. \tag{1}\]
The measurement and control of relative humidity is of importance in a diversity of fields, including weather forecasting, health care, building design, conservation, and food processing and preservation. [2, 3, 4]
Historically, the measurement of relative humidity has been challenging. A broad range of techniques have been developed, varying from the hair hygrometer invented by Saussure in 1780 to modern electronic sensors based on impedance measurements due to absorption in thin films as described in part II of Ref. [5]. These modern hygrometers need calibration standards, and the most fundamental standard used by national calibration laboratories is the gravimetric hygrometer, as explained on page 185 of Ref. [6]. Using this method, a certain amount of dry gas is weighed and compared with the weight of the test gas in the same volume. From this, the amount of water is determined and the vapor pressure calculated. This method can provide accurate measurements, but such systems are cumbersome, expensive, and impractical for student use. In view of these limitations, it is common to use alternative standards to calibrate commercial hygrometers. According to the World Meteorological Organization (WMO), a fundamental such standard is the psychrometer, as described on page 185 of Ref. [6].
A psychrometer consists of two thermometers placed side-to-side. The surface of the sensing element, the wet bulb, is covered with a soaked muslin to maintain a thin film of water. The sensing element of the second thermometer, the dry bulb, is simply exposed to the air. The principle of the psychrometer was discovered by James Hutton in 1792, and soon thereafter the significance of the role of air flow in its operation was recognized and quantified [7, 8, 9, 10, 11]. However, mathematically modeling the principle of the psychrometer remained a challenge until the beginning of the 20th century as it requires understanding of evaporation, boundary-layer theory, and radiative heat flux.
Recently, Caporalini _et al._ described the use of a home-made psychrometer in atmospheric physics courses [12]. Their approach was configured to ensure that their psychrometer is in strict accordance with official WMO recommendations. In this article we describe an approach more oriented to physics students. Our intent is twofold: (1) to propose a simple model for predicting the relative humidity from temperature measurements, with particular emphasis on elucidating the interplay between radiative and convective heat fluxes and the air velocity; and (2) to describe an affordable apparatus to illustrate the key points of the model and demonstrate how it can be used to measure the relative humidity. Our model is described in Sect. 2 and the apparatus and measurements in Sect. 3. Section 4 offers a few summary remarks.
## 2 Model
### Problem description
The analysis in this section will involve a number of quantities which may be unfamiliar to many readers; these are summarized in Table 1.
We model the psychrometer as two spheres of radius \(R\) representing the two bulbs placed in an air flow of velocity \(U\) as depicted in Fig. 1. The atmosphere is characterized by its temperature \(T_{\rm dry}\) measured with a dry bulb thermometer, and by its total pressure \(P\). The wet bulb has a temperature \(T_{\rm wet}\). We denote \(\Delta p^{\star}=p-p_{\rm sat}(T_{\rm wet})\) as the difference of partial vapor pressure between the atmosphere \(p(T_{\rm dry})\) and the saturated vapor pressure at the surface of the wet bulb \(p_{\rm sat}(T_{\rm wet})\). In practice, the saturated vapor pressure can be calculated with the phenomenological Antoine's equation
\[p_{\rm sat}(T)=p^{\circ}\,10^{A-\frac{B}{C+T}}, \tag{2}\]
where \(p^{\circ}=10^{5}\) Pa and \(A\), \(B\), \(C\) are constants. This can be used to calculate both \(p(T_{\rm dry})\) and \(p(T_{\rm wet})\). For water at \(T\in[0,30]\)\({}^{\circ}\)C, \(A,B,C\) are obtained by fitting data extracted from [13], which gives \(A=5.341\pm 0.003\), \(B=1807.5\pm 1.6\) K, and \(C=-33.9\pm 0.1\) K.
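As an illustration, Eq. (2) with these fitted constants can be evaluated with a few lines of Python. The function name is ours, and the sketch assumes that the temperature entering the fit is the absolute temperature, which reproduces the expected value of about 2.3 kPa at 20 \({}^{\circ}\)C.

```python
def p_sat(T_celsius, A=5.341, B=1807.5, C=-33.9):
    """Saturation vapor pressure of water (Pa) from Antoine's equation (2),
    with the constants fitted in the text (valid for 0-30 deg C).
    The fit uses the absolute temperature, so convert from Celsius first."""
    T = T_celsius + 273.15          # K
    return 1e5 * 10 ** (A - B / (C + T))

# Sanity check: p_sat(20) is about 2.3 kPa, the tabulated value at 20 deg C.
print(p_sat(20.0))
```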
The difference of temperatures \(\Delta T^{\star}=T_{\rm dry}-T_{\rm wet}\) originates from the evaporation of water on the wet bulb. Since the molar enthalpy of vaporization \(\Delta_{\rm vap}H\) is positive, evaporation cools the wet bulb, so \(T_{\rm wet}\leq T_{\rm dry}\). As a consequence, the vapor concentration at the interface is \(p_{\rm sat}(T_{\rm wet})\leq p_{\rm sat}(T_{\rm dry})\). More precisely, the temperature \(T_{\rm wet}\) is the result of the balance between the enthalpy of vaporization and thermal exchanges with the environment.
The psychrometric equation relates the difference of vapor pressures \(\Delta p^{\star}\) and the difference of temperatures \(\Delta T^{\star}\). This has the form
\[\Delta p^{\star}=-{\cal A}P\Delta T^{\star}, \tag{3}\]
where \({\cal A}\) is the psychrometer coefficient and \(P\) is the atmospheric pressure. Historically, this equation was introduced in the pioneering works of Ivory, August, and Apjohn from considerations of gas expansion, but is now used phenomenologically. [8, 14, 15, 16].
The purpose of this model is to derive the psychrometric equation (3) with modern concepts, and also to describe the physical origin of the coefficient \({\cal A}\). The psychrometric equation allows a direct calculation of the relative humidity \({\cal R}_{H}(T_{\rm dry})\) based on the definition given by equation 1:
\[{\cal R}_{H}(T_{\rm dry})=\frac{p_{\rm sat}(T_{\rm wet})-{\cal A}P\Delta T^{ \star}}{p_{\rm sat}(T_{\rm dry})}. \tag{4}\]
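For a quick numerical illustration, Eq. (4) can be evaluated directly from the two temperature readings. The helper below is self-contained (it re-implements Antoine's equation), and the value \(\mathcal{A}\approx 6.5\times 10^{-4}\) K\({}^{-1}\) used in the example is only a typical order of magnitude for a well-ventilated psychrometer, not a value taken from this article.

```python
def relative_humidity(T_dry, T_wet, A_psy, P=101325.0):
    """Relative humidity from Eq. (4), given the dry- and wet-bulb
    temperatures (deg C), the psychrometer coefficient A_psy (1/K)
    and the total pressure P (Pa)."""
    psat = lambda T: 1e5 * 10 ** (5.341 - 1807.5 / (T + 273.15 - 33.9))
    return (psat(T_wet) - A_psy * P * (T_dry - T_wet)) / psat(T_dry)

# Example: T_dry = 22 C and T_wet = 16 C give a relative humidity near 54%.
print(relative_humidity(22.0, 16.0, A_psy=6.5e-4))
```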
### Mass and heat transfers
The air flow around the psychrometer is characterized by the dimensionless Reynolds number \({\rm Re}=2UR/\nu_{\rm air}\), where \(U\) is the air velocity and \(\nu_{\rm air}\) is the kinematic viscosity of the air, which is the ratio between the dynamic viscosity and the fluid density; this is also known as the momentum diffusivity.
The transport of water vapor is driven by the difference of vapor concentrations between the environment and the surface of the wet bulb, _i.e._\(\Delta c^{\star}=c_{\infty}-c_{\rm sat}(T_{\rm wet})<0\). This transport is established by diffusion across the hydrodynamic boundary layer and is characterized by the diffusion coefficient \({\cal D}_{\rm w}\) of the water vapor in air, which has the value \(2.4\times 10^{-5}\) m\({}^{2}\)/s at 20 \({}^{\circ}\)C [13]. We also define the Schmidt number as \({\rm Sc}=\nu_{\rm air}/{\cal D}_{\rm w}\simeq 0.6\). The dimensionless Schmidt number is the ratio of momentum diffusivity to mass diffusivity, and is a measure of the relative thickness of the hydrodynamic and mass-transfer boundary layers. The evaporation rate is
\[\Phi_{\rm ev}=4\pi R{\cal D}_{\rm w}\Delta c^{\star}f_{\rm ev}. \tag{5}\]
The so-called ventilation coefficient \(f_{\rm ev}\) accounts for the effect of the air flow. There is no exact expression for this coefficient, but Frossling as well as Ranz and Marshall proposed a semi-empirical one:
\[f_{\rm ev}=1+\beta_{\rm ev}{\rm Re}^{1/2}{\rm Sc}^{1/3}, \tag{6}\]
where \(\beta_{\rm ev}\simeq 0.3\) is a numerical prefactor [17, 18, 19]. This expression is suitable for describing experimental results for Reynolds numbers up to 1,280. The interested reader can find a more general expression in Whitaker [20] which holds for Reynolds numbers up to \({\rm Re}=7,000\). The dependence of the ventilation coefficient on the Reynolds number is reminiscent of the boundary layer thickness that scales as \({\rm Re}^{-1/2}\)[21].
Figure 1: Schematic of the dry and wet bulbs in a steady state regime. Bulbs are spherical with radius \(R\). The dry bulb is at the environmental temperature \(T_{\rm dry}\) and the evaporating wet bulb is at \(T_{\rm wet}<T_{\rm dry}\). Evaporation is driven by the vapor pressure difference \(p_{\rm sat}(T_{\rm wet})-p(T_{\rm dry})\). The heat flux \(Q_{\rm ev}\) due to evaporation compensates the heat fluxes \(Q_{\rm h}\) and \(Q_{\rm rad}\) by conduction and radiation. The system is placed in an air flow characterized by its velocity \(U\).
With Eq. (5), the heat flux due to evaporation is
\[Q_{\rm ev}=\Delta_{\rm vap}H\,\Phi_{\rm ev}, \tag{7}\]
where \(\Delta_{\rm vap}H\) is the enthalpy of vaporization.
Due to the temperature difference between the wet bulb and the environment, heat transfers take place. Two contributions can be identified. First, the transfer due to air flow is analogous to the mass transfer formerly described, since it occurs across a thermal boundary layer. This heat flux can be expressed as
\[Q_{\rm h}=4\pi R\lambda_{\rm air}\Delta T^{\star}f_{\rm h}, \tag{8}\]
where \(\lambda_{\rm air}\) is the air thermal conductivity and \(f_{\rm h}\) is the ventilation coefficient [18, 19]:
\[f_{\rm h}=1+\beta_{\rm h}{\rm Re}^{1/2}{\rm Pr}^{1/3}. \tag{9}\]
The coefficient \(\beta_{\rm h}=0.3\) is a numerical prefactor, and the Prandtl number is defined as \({\rm Pr}=\nu_{\rm air}/\alpha_{\rm air}\simeq 0.7\), with \(\alpha_{\rm air}\) being the thermal diffusivity of air.[18, 19] The Prandtl number is analogous to the Schmidt number for heat transfer.
Second, temperature differences with the environment leads to an energy transfer \(Q_{\rm rad}\) (in W) from radiation given by Stefan's law,
\[Q_{\rm rad}=4\pi R^{2}\epsilon\sigma(T_{\rm dry}^{4}-T_{\rm wet}^{4}), \tag{10}\]
where \(\sigma\simeq 5.67\times 10^{-8}\,{\rm W}\cdot{\rm m}^{-2}\cdot{\rm K}^{-4}\) is the Stefan-Boltzmann constant and \(\epsilon\) is the emissivity. The emissivity \(\epsilon\) is close to unity for common materials; for water \(\epsilon=0.96\).[22]
As a result, the energy balance \(Q_{\rm ev}+Q_{\rm h}+Q_{\rm rad}=0\) can be written as
\[Q_{\rm ev}=-Q_{\rm h}\left(1+\frac{Q_{\rm rad}}{Q_{\rm h}}\right), \tag{11}\]
which highlights the significance of the radiative flux compared to the conductive one.
From a historical perspective, it is worth noting that Maxwell wrote equations (5) and (8) in his description of the psychrometer.[23] He also included the effect of radiation, but since the Stefan-Boltzmann law was not yet known, he assumed the radiative flux to be simply linear in the temperature difference.[24, 25] This led to a qualitative description of the psychrometer but not a quantitative analysis.
### Psychrometer coefficient
To obtain the form of the psychrometric equation (3), two additional steps are necessary.
First, the difference of vapor concentrations \(\Delta c^{\star}\) in Eq. (5) must be expressed as a function of \(\Delta p^{\star}\). Assuming that vapor is an ideal gas, the difference of vapor molar concentrations can be related to the difference of vapor pressure as
\[\Delta c^{\star}=\left(\frac{p}{T_{\rm dry}}-\frac{p_{\rm sat}(T_{\rm wet}) }{T_{\rm wet}}\right)\frac{1}{\cal R}, \tag{12}\]
with the molar gas constant \({\cal R}\simeq 8.314\) J\(\cdot\)mol\({}^{-1}\cdot\)K\({}^{-1}\). This can be approximated by \(\Delta c^{\star}\approx\Delta p^{\star}/({\cal R}T_{\rm dry})\) for \(\Delta T^{\star}/T_{\rm dry}\ll 1\). Then we can use the ideal gas law to obtain \(\Delta c^{\star}\) as
\[\Delta c^{\star}=\frac{\Delta p^{\star}\rho_{\rm air}}{P\,M_{\rm air}}, \tag{13}\]
where \(M_{\rm air}\) and \(\rho_{\rm air}\) are respectively the molar mass and the mass density of dry air.
Second, the ratio of radiative and convective heat fluxes
\[\frac{Q_{\rm rad}}{Q_{\rm h}}=\frac{\epsilon\sigma}{\lambda_{\rm air}f_{\rm h }}\frac{T_{\rm dry}^{4}-T_{\rm wet}^{4}}{T_{\rm dry}-T_{\rm wet}}R \tag{14}\]
can be simplified for \(|T_{\rm wet}-T_{\rm dry}|/T_{\rm dry}\ll 1\) to the form
\[\frac{Q_{\rm rad}}{Q_{\rm h}}\simeq\frac{T_{\rm dry}^{3}}{T_{\rm c}^{3}}, \tag{15}\]
where we have introduced the characteristic temperature
\[T_{\rm c}(R,U)=\left(\frac{\lambda_{\rm air}f_{\rm h}}{4R\epsilon\sigma} \right)^{1/3}, \tag{16}\]
that depends on both the wet bulb size \(R\) and the air velocity \(U\) through the ventilation coefficient \(f_{\rm h}\).
From equation (11), we can obtain the psychrometric equation (3) where we used equations (5), (7), and (13) to express \(Q_{\rm ev}\) as function of \(\Delta p^{\star}\); equation (8) for the heat flux proportional to \(\Delta T^{\star}\); and equation (15) for the ratio \(Q_{\rm rad}/Q_{\rm h}\). Then, we identify the psychrometer coefficient:
\[{\cal A}=\frac{\lambda_{\rm air}M_{\rm air}}{\Delta_{\rm vap}H\,{\cal D}_{\rm w }\rho_{\rm air}}\frac{f_{\rm h}}{f_{\rm ev}}\left(1+\frac{T_{\rm dry}^{3}}{T_ {\rm c}^{3}}\right). \tag{17}\]
The psychrometer coefficient is plotted in Fig. 2 as a function of the air velocity for four wet bulb sizes. For wet bulbs of sizes larger than millimeter scale, we observe a strong decrease of the coefficient with the air velocity, with all curves converging to the same limit. The physical origin of this behavior is analyzed in the following paragraph.
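The curves of Fig. 2 can be reproduced numerically from Eqs. (6), (9), (16), and (17). The sketch below is a minimal illustration (not the code of the Supplementary Material); the quantities not quoted in this section (\(\nu_{\rm air}\), \({\cal D}_{\rm w}\), \(\rho_{\rm air}\), \(M_{\rm air}\), \(\Delta_{\rm vap}H\)) are assumed typical values for air and water vapor at room temperature, and the Schmidt number is derived from them.

```python
import numpy as np

# Quantities quoted in the text
lam_air = 0.026      # air thermal conductivity, W/(m K)
T_dry   = 293.0      # K
eps     = 0.96       # emissivity of water
sigma   = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2 K^4)
Pr, beta = 0.7, 0.3

# Assumed typical values (not given explicitly in this section)
nu_air  = 1.5e-5     # kinematic viscosity of air, m^2/s
D_w     = 2.5e-5     # diffusivity of water vapor in air, m^2/s
rho_air = 1.2        # kg/m^3
M_air   = 29e-3      # kg/mol
dvap_H  = 44.0e3     # enthalpy of vaporization, J/mol
Sc      = nu_air / D_w   # Schmidt number, about 0.6

def psychro_coefficient(R, U):
    """Psychrometer coefficient A(R, U) of Eq. (17), in 1/K."""
    Re   = 2.0 * R * U / nu_air                                 # Reynolds number
    f_ev = 1.0 + beta * np.sqrt(Re) * Sc ** (1 / 3)             # Eq. (6)
    f_h  = 1.0 + beta * np.sqrt(Re) * Pr ** (1 / 3)             # Eq. (9)
    T_c  = (lam_air * f_h / (4.0 * R * eps * sigma)) ** (1 / 3)  # Eq. (16)
    prefactor = lam_air * M_air / (dvap_H * D_w * rho_air)
    return prefactor * (f_h / f_ev) * (1.0 + (T_dry / T_c) ** 3)

for U in (0.1, 1.0, 5.0, 10.0):
    print(f"U = {U:5.1f} m/s  ->  A = {psychro_coefficient(1e-2, U):.2e} 1/K")
```

With these assumed values, the coefficient for a centimeter-scale bulb decreases from roughly \(9\times 10^{-4}\) K\({}^{-1}\) at low velocity toward \(\mathcal{A}^{\rm lim}\simeq 6\times 10^{-4}\) K\({}^{-1}\) at large \(U\), consistent with the trend shown in Fig. 2.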
### Effect of bulb size and air velocity
As shown by Eq. (17), \({\cal A}\) depends both on the wet bulb size and the air velocity through \(f_{\rm h}\) and \(f_{\rm ev}\) although it is commonly called the psychrometric "constant." Here we analyze the effects of these two parameters.
The ratio \(f_{\rm h}/f_{\rm ev}\) can be easily computed from the expressions proposed in Eqs. (6) and (9). It appears that this ratio depends only weakly on both \(R\) and \(U\). The underlying reason is that the Schmidt and Prandtl numbers are
nearly equal for gases, \(\mathrm{Sc}\approx\mathrm{Pr}\approx 1\), hence we can expect only minute variations of \(f_{\mathrm{h}}/f_{\mathrm{ev}}\). The basis of this similarity is that kinetic theory shows that the microscopic mechanisms of momentum and thermal diffusion in a gas have the same origin.
Next, we can expect the term \(T_{\mathrm{dry}}^{3}/T_{\mathrm{c}}^{3}\) to be negligible if \(T_{\mathrm{dry}}^{3}/T_{\mathrm{c}}^{3}\ll 1\). In practice, we consider that the contribution is small if the correction is smaller than \(10\%\), _i.e._, \(T_{\mathrm{dry}}^{3}/T_{\mathrm{c}}^{3}<0.1\). In the limit of vanishing air velocities (\(U\to 0\)), the ratio is proportional to \(R\) and is negligible for \(R<0.5\) mm, where we used \(\lambda_{\mathrm{air}}=0.026\)\(\mathrm{W\cdot m^{-1}\cdot K^{-1}}\) and \(T_{\mathrm{dry}}=293\) K. This size is much smaller than the bulbs of typical thermometers, so that in the absence of air flow the psychrometer coefficient is sensitive to the bulb size. In practice, ensuring the absence of air flow is difficult because of the presence of natural convection, so this limit is not really applicable.
As for the effect of the air velocity, we observe that \(T_{\mathrm{dry}}^{3}/T_{\mathrm{c}}^{3}\) is a decreasing function of \(U\). Thus, for a given wet bulb size, there exists a typical air velocity above which the ratio is negligible, _i.e._\(T_{\mathrm{dry}}^{3}/T_{\mathrm{c}}^{3}\ll 1\). For \(R=1\) cm we find a typical velocity of about \(5\) m\(\cdot\)s\({}^{-1}\).
### The importance of psychrometer ventilation
The model developed above highlights how the psychrometer coefficient depends on the geometry of the wet bulb and on the air flow velocity. The dependency on the wet bulb size arises from a competition between the heat fluxes by radiation and by convection, where the former is proportional to the surface area of the bulb while the latter is proportional to the characteristic size. For small objects, the radiative heat transfer is negligible whatever the air velocity. For increasing air flow velocities, this typical size increases because the heat transfer by convection increases, whereas the radiative flux does not change. For this reason, ventilating the psychrometer reduces the contribution of radiative heat transfers and makes the psychrometer coefficient weakly dependent on the bulb size.
Forcing the air flow also minimizes the effect of flows from natural air convection, which causes the variation of the psychrometer coefficient with air velocity to be particularly important for a centimeter-scale bulb, as illustrated in Fig. 2. In the limit \(U\to\infty\), the effect of radiation becomes negligible, and the psychrometer coefficient reduces to
\[\mathcal{A}^{\mathrm{lim}}=\frac{\lambda_{\mathrm{air}}M_{\mathrm{air}}}{ \Delta_{\mathrm{vap}}H\,\mathcal{D}_{\mathrm{w}}\rho_{\mathrm{air}}}\left( \frac{\mathrm{Pr}}{\mathrm{Sc}}\right)^{1/3}. \tag{18}\]
In our conditions, \(\mathcal{A}^{\mathrm{lim}}=6.0\times 10^{-4}\)\(\mathrm{K}^{-1}\). In textbooks one can find the expression \(\mathcal{A}=\lambda_{\mathrm{air}}M_{\mathrm{air}}/(\Delta_{\mathrm{vap}}H\, \mathcal{D}_{\mathrm{w}}\rho_{\mathrm{air}})\). This is sometimes known as the psychrometric ratio, and is nearly equal to equation (18) since \((\mathrm{Pr}/\mathrm{Sc})^{1/3}\simeq 1.04\).
### Carrier chart
Now that we have some understanding of the physical origin of the psychrometer coefficient, we are prepared to determine the relative humidity from the joint measurement of the dry and wet bulb temperatures. While one could produce tables which give values of the relative humidity corresponding to pairs of \((T_{\mathrm{dry}},T_{\mathrm{wet}})\) values, this would require several pages of output. Instead, we reproduce a psychrometric chart as originally designed by Carrier, which makes it easy to deduce the relative humidity from the temperatures.[26] This chart also makes
Figure 3: Carrier chart presenting the specific humidity \(w\) as a function of the dry bulb temperature in solid lines. The dashed lines correspond to the humidity for the specified wet bulb temperature as a function of the dry bulb temperature. The chart is produced by using the psychrometer coefficient \(\mathcal{A}^{\mathrm{lim}}\). The red lines show the reading for \(T_{\mathrm{dry}}=20\)\({}^{\circ}\mathrm{C}\) and \(T_{\mathrm{wet}}=15\)\({}^{\circ}\mathrm{C}\). The specific humidity can be read by following the green line and the purple line indicates the relative humidity.
Figure 2: Psychrometer coefficient \(\mathcal{A}\) from equation 3 as a function of the air speed \(U\) and for different radii \(R\). The black solid line represents \(\mathcal{A}^{\mathrm{lim}}\) obtained from the limit \(U\to\infty\).
it possible to determine other quantities such as the dew point and the absolute humidity, which are relevant in air conditioning.[27] A source code in Python for generating this chart is provided in Supplementary Materials.
Figure 3 shows a simplified Carrier chart on which we illustrate the following circumstance. Suppose that a psychrometer indicates a dry bulb temperature of 20 \({}^{\circ}\)C and a wet bulb temperature of 15 \({}^{\circ}\)C. The two red lines originating from each temperature intersect at a point from which the specific humidity (defined in more detail below) can be read from the green dotted line and, more importantly, the relative humidity from the purple dotted line. This leads to a relative humidity \(\mathcal{R}_{\mathrm{H}}\simeq 60\) % and a specific humidity \(w\simeq 8.5\) g/kg. For practical use, the chart must be refined by plotting additional curves to obtain a better accuracy.
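This chart reading can be checked numerically. The snippet below is a hedged check that assumes Eq. (3) takes the standard form \(p=p_{\rm sat}(T_{\rm wet})-\mathcal{A}\,P\,(T_{\rm dry}-T_{\rm wet})\), uses \(\mathcal{A}^{\rm lim}=6.0\times 10^{-4}\) K\({}^{-1}\), and takes tabulated saturation pressures of water (about 2.34 kPa at 20 \({}^{\circ}\)C and 1.70 kPa at 15 \({}^{\circ}\)C).

```python
# Numerical check of the worked example of the chart reading.
A = 6.0e-4          # psychrometer coefficient, 1/K (value quoted in the text)
P = 101325.0        # atmospheric pressure, Pa
M_w, M_air = 18e-3, 29e-3        # molar masses, kg/mol

p_sat_20, p_sat_15 = 2339.0, 1705.0   # tabulated saturation pressures, Pa
T_dry, T_wet = 20.0, 15.0             # deg C

p   = p_sat_15 - A * P * (T_dry - T_wet)   # vapor partial pressure, Pa
R_H = p / p_sat_20                         # relative humidity
w   = p * M_w / (P * M_air)                # specific humidity, kg/kg

print(f"R_H ~ {100 * R_H:.0f} %   w ~ {1e3 * w:.1f} g/kg")
# -> about 60 % and 8.6 g/kg, consistent with the chart reading above
```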
To construct this chart, the first step is to plot the water content of air as a function of its temperature \(T_{\mathrm{dry}}\). The water content of air depends on temperature since it can vary between zero and the saturating value, which itself depends on the air temperature; see Eq. (2). The water content is called specific humidity when defined as the ratio of weight of vapor to the weight of humid air. As a good approximation, the specific humidity is close to the mixing ratio, which is the weight of vapor normalized by the weight of dry air. This approximation originates in the low water vapor pressure compared to the atmospheric pressure, \(p_{\mathrm{sat}}/P\ll 1\). [28] From the ideal gas law, the specific humidity can then be written as \(w=pM_{\mathrm{w}}/(PM_{\mathrm{air}})\). Furthermore, from the definition of the relative humidity (Eq. 1) and Antoine's equation (Eq. 2), we have
\[w(T)=\frac{\mathcal{R}_{H}M_{\mathrm{w}}p^{\circ}10^{A-\frac{B}{C+T}}}{PM_{ \mathrm{air}}}, \tag{19}\]
which can readily be plotted as a function of the dry bulb temperature for different relative humidity values \(\mathcal{R}_{H}\); this is illustrated in Fig. 3 by the solid lines.
Next, for different given wet bulb temperatures and relative humidity values, we solve the psychrometric equation (Eq. 4), for which saturating pressures are calculated from Eq. (2) and the psychrometric coefficient is \(\mathcal{A}^{\mathrm{lim}}\) given in Eq. (18), \(\mathcal{A}^{\mathrm{lim}}=6.0\times 10^{-4}\) K\({}^{-1}\) in our conditions. Equation (4) does not directly provide the dry bulb temperature, so we use a Newton-Raphson root-finding algorithm, with the wet bulb temperature as a starting estimate. The root finding returns the dry bulb temperatures, which are represented by the dashed lines in Fig. 3. A source code written in Python and using scipy that generates this chart is provided in Supplementary Material.
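A minimal version of this chart-construction step is sketched below. It is not the Supplementary Material code: the Antoine constants for water (\(A=8.07131\), \(B=1730.63\), \(C=233.426\), with \(T\) in \({}^{\circ}\)C and the reference pressure \(p^{\circ}=133.322\) Pa) are assumed here, as is the same form of the psychrometric equation used in the check above.

```python
import numpy as np
from scipy.optimize import newton

A_ANT, B_ANT, C_ANT = 8.07131, 1730.63, 233.426   # assumed Antoine constants for water
p0, P = 133.322, 101325.0                          # reference and atmospheric pressures, Pa
A_PSY = 6.0e-4                                     # psychrometer coefficient, 1/K
M_w, M_air = 18e-3, 29e-3                          # molar masses, kg/mol

def p_sat(T):
    """Antoine's law, Eq. (2); T in deg C, result in Pa."""
    return p0 * 10 ** (A_ANT - B_ANT / (C_ANT + T))

def w_curve(T_dry, RH):
    """Solid lines of the chart: specific humidity from Eq. (19)."""
    return RH * M_w * p_sat(T_dry) / (P * M_air)

def T_dry_from_wet(T_wet, RH):
    """Dashed lines: solve the psychrometric equation for T_dry at given (T_wet, RH)."""
    f = lambda T_dry: p_sat(T_wet) - A_PSY * P * (T_dry - T_wet) - RH * p_sat(T_dry)
    return newton(f, x0=T_wet)          # wet bulb temperature as starting estimate

T = np.linspace(0.0, 40.0, 200)
solid  = {RH: w_curve(T, RH) for RH in (0.2, 0.4, 0.6, 0.8, 1.0)}
dashed = {T_wet: [T_dry_from_wet(T_wet, RH) for RH in np.linspace(0.05, 1.0, 20)]
          for T_wet in (10.0, 15.0, 20.0)}
print(f"T_wet = 15 C, RH = 0.60  ->  T_dry = {T_dry_from_wet(15.0, 0.60):.1f} C")
```

Plotting the `solid` curves against \(T_{\rm dry}\) and the `dashed` families against the corresponding \(w\) values reproduces the structure of Fig. 3.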
We emphasize two important points here. First, the diagram is plotted for a given atmospheric pressure, which is often indicated on charts available in the literature with a "sea level" indication. Second, the chart assumes that radiation effects are negligible. Thus, deviations between the predicted wet bulb temperature and the temperature of an evaporating surface may exist, depending on conditions.
## 3 Measurements and analysis
In this section we test the predictions made in Section 2 against experimental measurements. We choose to use a simple apparatus inspired by the design proposed in [12] to measure the dry and wet bulb temperatures with a tunable air flow velocity. This device is affordable for demonstrations and practical student experiments. A table giving estimated costs of the components and a photograph of the system is provided as Supplementary Material.
### Experimental setup
The psychrometer comprises two Testo type T temperature probes (waterproof, accuracy of \(\pm 0.2\)\({}^{\circ}\)C), both connected to a RS PRO 1314 digital interface. One could also use classical alcohol thermometers. One of the two thermometers is covered with a water-soaked gauze compress. In our experiments, the wet-bulb has a typical dimension of 8 mm in diameter and 4 cm in length. While our model assumes a spherical bulb, we show in the following section that the model gives good results in comparison to experimental data upon adopting an effective bulb diameter close to the actual one. We checked at the start and end of the experiment that the gauze was wet. The first thermometer measures the temperature in the box \(T_{\mathrm{dry}}\) and the second one measures the wet bulb temperature \(T_{\mathrm{wet}}\). As shown in Fig. 4, the two thermometers are placed in a plastic tube equipped with an electric fan at one end; both are within the tube, although the dry one is shown outside in the figure. The fan has a diameter of 12 cm and is rated by the manufacturer at 234 m\({}^{3}\)/h (Sunon, EEC0381B1-A99). The fan is connected to the tube with an adapter made with a 3D-printer to achieve good sealing, although this is not a strict requirement. By adjusting the voltage applied to the fan, the air flow can be varied, and is measured with a RS PRO AM-4204 hot wire anemometer (maximum measurable velocity 20 m/s; resolution 0.1 m/s). With this setup we are able to explore air velocities up to 10 m/s, which corresponds to a Reynolds number \(\mathrm{Re}=2RU/\nu_{\mathrm{air}}\simeq 5{,}000\) for \(R=4\) mm.
This device is placed in a transparent glove box measuring \(80\times 50\times 45\) cm, in which the relative humidity can be adjusted as described below. An inlet and an outlet allow flushing the box with dry nitrogen. The relative humidity in the glove box is measured by using a Velleman DEM501 digital hygrometer with a capacitance sensor (this is not shown in Fig. 4). We denote \(\mathcal{R}_{H}^{\mathrm{hygro}}\) as the value measured by the hygrometer. A separate fan
placed in the box ensures a rapid homogenization, and experiments are done at ambient temperature.
Starting from ambient humidity, flushing with nitrogen allows drying the air in the box and, with our setup, allows us to reach humidity values lower than 5 % in less than 10 minutes. To increase the humidity we replace the dry nitrogen by ambient air by slightly opening the box. To reach higher humidities, we put wet towels in the glove box and close the air entrance. In this way, the relative humidity in the box increases slowly to reach about 95 %.
### Measurements
Our first experiment explores the effect of the air flow on temperatures. Here we measured the temperatures for different voltages applied to the fan once the equilibrium is reached, with the relative humidity held constant. The dry bulb thermometer proved insensitive to the air flow, so we plot in Fig. 5(a) the temperature difference \(\Delta T^{\star}\) as a function of the air velocity. The air flow increases the cooling effect dramatically over the first few meters per second, after which a slower variation is observed. The solid curve is the prediction of the model of Sect. 2, which shows a good agreement for a wet bulb size \(R=1\) cm. This size is characteristic of the actual bulb (8 mm diameter), so we did not attempt a more detailed analysis of the effect of the shape of the bulb. Clearly, for sufficiently large air velocities, the temperature difference becomes independent of the air flow due to the negligible contribution of the radiative heat flux. This regime of nearly constant temperature difference thus allows a robust measurement of the relative humidity.
To quantify the effect of the humidity on the wet bulb measurement, we measured the temperature difference for humidity conditions ranging from \(\mathcal{R}_{H}^{\text{hygro}}=3\) % to \(\mathcal{R}_{H}^{\text{hygro}}=100\) % as measured by the commercial hygrometer. The results are shown in the inset of Fig. 5(b). Measurements are performed from low to high humidity values (green circles) and high to low humidity values (blue squares). A small hysteresis of the order of \(\delta T=0.5\)\({}^{\circ}\)C is observed, which is possibly due to the finite response time of both the thermometer and hygrometer and to the fact that the measurements are done continuously to avoid a closed-loop setup and ensure a humidity-controlled chamber.
We observe a systematic deviation between the curve \(\mathcal{R}_{H}^{\text{psychro}}=\mathcal{R}_{H}^{\text{hygro}}\) and the experimental data. Thus, we calibrated the commercial hygrometer by measuring the relative humidity above solutions saturated with various salts, for which the expected relative humidity is given in [29]. We performed measurements for KOH solutions for
Figure 4: Setup used to measure the temperature difference in air flow produced by an electric fan. The tube diameter is about 10 cm.
Figure 5: (a) Measured temperature difference \(\Delta T^{\star}\) as a function of the airspeed \(U\) for \(T_{\text{dry}}=20.3\pm 0.1\)\({}^{\circ}\)C and \(\mathcal{R}_{H}^{\text{hygro}}=27.8\pm 2.0\) % measured with a digital hygrometer. The solid curve corresponds to the predicted temperature difference obtained with equations 3 and 17 with a characteristic size \(R=1\) cm. The green shaded area corresponds to the range \(R\in[0.1,5]\) cm. (b) Comparison between the humidity deduced from the psychrometer and the commercial hygrometer. The inset shows the data before calibration of the commercial hygrometer and the main plot after. The solid black line indicates the equality between axes. The main figure is obtained after calibrating the commercial hygrometer with salt solutions as detailed in the text.
which we expect \({\cal R}_{H}=9\) %, for MgCl\({}_{2}\) (\({\cal R}_{H}=33\) %), and for NaCl (\({\cal R}_{H}=75\) %). The values measured by the hygrometer are \({\cal R}_{H}^{\rm exp}\approx 15\) % for KOH, \({\cal R}_{H}^{\rm exp}\approx 44\) % for MgCl\({}_{2}\), and \({\cal R}_{H}^{\rm exp}\approx 83\) % for NaCl. The relative humidities measured with the commercial hygrometer are all higher than the expected relative humidity, the difference being about 8 %. Consequently, we calibrated the digital hygrometer by shifting the measurements down by 8 %, which yields excellent agreement with our experimental results as shown in Fig. 5(b). This procedure indicates that a careful eye must be kept on all measuring equipment: Digital hygrometers require calibrations and are particularly prone to deviations. Students should always be prompted to consider calibration methods.
After these measurements and calibrations, it is now possible to propose different teaching situations.
* In a course, the teacher can perform a single-point experiment, show the temperature difference between the dry and wet bulbs, and present the full theoretical calculation (and/or explain how to use a Carrier chart).
* In a practical session, the students can perform all the experiments leading to Figure 5, simply using a fan so as to work directly in the regime where the air velocity becomes unimportant.
* All the experiments, including the measurement of the effect of the air velocity, can be reproduced.
The choice between the different pedagogical scenarios will depend on the level of the students and the purpose of the teaching. The emphasis can be placed either on theory or on experiments, depending on the students' level.
## 4 Conclusion
We have described a model for the psychrometer which allows us to quantify the significance of radiative and convective heat transfers and the role of the air velocity on the psychrometer coefficient. An experimental implementation suitable for student laboratory use gives results in good accord with the model, as verified by independent measurements with a commercial hygrometer. Measurements of the relative humidity conducted with air velocities of about a meter per second ensure reproducible results independent of the air flow.
Quantitative understanding of the relation between evaporation and cooling is an application of practical thermodynamics. Given concern with the effects of global warming on human health, disasters such as wildfires and hurricanes, and effects on food and water supplies, we urge that the concepts of psychrometry be more broadly taught to undergraduates to inform them of how humidity is determined.
## Acknowledgments
We kindly thank Vincent Klein for creating the 3D-printed adapter, Saint-Gobain and ANRT for funding this study. FR would like to thank Suzanne Lafon for her careful correction of the manuscript.
## Author Declarations
The authors have no conflicts to disclose. |
2309.09332 | A Zigbee Based Cost-Effective Home Monitoring System Using WSN | WSNs are vital in a variety of applications, including environmental
monitoring, industrial process control, and healthcare. WSNs are a network of
spatially scattered and dedicated sensors that monitor and record the physical
conditions of the environment.Significant obstacles to WSN efficiency include
the restricted power and processing capabilities of individual sensor nodes and
the issues with remote and inaccessible deployment sites. By maximising power
utilisation, enhancing network effectiveness, and ensuring adaptability and
durability through dispersed and decentralised operation, this study suggests a
comprehensive approach to dealing with these challenges. The suggested
methodology involves data compression, aggregation, and energy-efficient
protocol. Using these techniques, WSN lifetimes can be increased and overall
performance can be improved. In this study we also provide methods to collect
data generated by several nodes in the WSN and store it in a remote cloud such
that it can be processed and analyzed whenever it is required. | Garapati Venkata Krishna Rayalu, Paleti Nikhil Chowdary, Manish Nadella, Dabbara Harsha, Pingali Sathvika, B. Ganga Gowri | 2023-09-17T17:42:15Z | http://arxiv.org/abs/2309.09332v1 | # A Zigbee Based Cost-Effective Home Monitoring System Using WSN
###### Abstract
WSNs are vital in a variety of applications, including environmental monitoring, industrial process control, and healthcare. WSNs are networks of spatially scattered and dedicated sensors that monitor and record the physical conditions of the environment. Significant obstacles to WSN efficiency include the restricted power and processing capabilities of individual sensor nodes and the issues with remote and inaccessible deployment sites. By maximising power utilisation, enhancing network effectiveness, and ensuring adaptability and durability through dispersed and decentralised operation, this study suggests a comprehensive approach to dealing with these challenges. The suggested methodology involves data compression, aggregation, and energy-efficient protocols. Using these techniques, WSN lifetimes can be increased and overall performance can be improved. In this study we also provide methods to collect data generated by several nodes in the WSN and store it in a remote cloud such that it can be processed and analyzed whenever it is required.
Wireless Sensor Networks, Zigbee, Energy efficiency, Cost Effective.
## I Introduction
Wireless Sensor Networks (WSNs) are networks of inexpensive, low-power, small devices with wireless communication, sensors, and microcontrollers. According to [1] these networks have various applications, including environmental monitoring, industrial process control, and healthcare. WSNs play a crucial role in wildlife conservation by detecting and tracking animal movements, monitoring crop health in agriculture [2], and ensuring the stability of buildings and bridges [3]. Additionally, WSNs can assist in home monitoring and support elderly individuals.
Small, battery-operated sensor nodes form the basis of Wireless Sensor Networks (WSNs), which use wireless communication to monitor and collect data from their surroundings. These sensor nodes continuously sense environmental conditions and gather data. The nodes form a network with different topologies, such as star, mesh, or cluster-tree, and communicate with one another via wireless protocols. Energy-efficient communication protocols such as Zigbee and LoWPANs, along with techniques like data compression, have been used to improve network efficiency [4]. The distributed and decentralized nature of WSNs, enabling local decision-making based on gathered data, enhances adaptability and robustness.
Considering the existing research, studies have focused on developing and enhancing energy-efficient protocols for WSNs [1]. Other research has explored techniques such as data compression, aggregation, and hierarchical routing to improve overall network effectiveness [3]. The impact of distributed and decentralized operation on adaptability and robustness has also been investigated [4]. However, there is a lack of a comprehensive strategy that simultaneously addresses power and computational constraints, remote deployment challenges, energy efficiency, and decentralized operation. Therefore, the Zigbee protocol is chosen as a solution.
The objectives of our work are to employ an energy-efficient protocol that optimizes computing resources and power usage in WSNs, address challenges associated with remote and inaccessible deployment sites, improve network efficiency through data aggregation, and design a distributed and decentralized strategy to enhance the robustness and adaptability of WSNs.
To achieve these goals, we propose a unique methodology that combines multiple approaches and readily available algorithms specifically tailored for WSNs. This methodology takes into account factors such as remote deployment sites, limitations of individual sensor nodes, and the need for energy-efficient operations. It leverages the benefits of distributed and decentralized decision-making to enhance adaptability and resilience.
The remainder of this paper is organized as follows. Section 2 outlines the setup, utilizing XBee modules and Arduino-integrated sensors for data collection. In Section 3, we develop a user-friendly GUI for efficient data visualization. Section 4 focuses on collecting and securely storing the data on the cloud, while implementing continuous monitoring for detecting data changes.
## II Related Works
Table I provides a summary of the key findings from the relevant research literature.
In [5], Altaf et al. describe a wireless monitoring system for banana ripening using knowledge-level artificial intelligence algorithms and an XBee-based WSN architecture. The system incorporates XBee modules for wireless communication between sensor nodes and a central controller, providing real-time data collection. Throughout the ripening process, a variety of sensors are used to measure the temperature, humidity, and gas concentration levels. In order to determine the ideal ripening conditions, the collected data is processed
using knowledge-level artificial intelligence techniques that incorporate professional knowledge and rules. The proposed system offers an efficient and optimized approach to monitor and control the banana ripening process wirelessly, facilitating improved quality and productivity in the industry.
In [6], Ling et al. describe an XBee wireless protocol-based infrared body temperature tele-monitoring system for the elderly. The technology uses infrared body temperature sensors to remotely check on elderly people's body temperatures. The XBee protocol is used to wirelessly send the temperature data obtained, allowing for real-time monitoring and analysis. With the aid of this device, older patients' health can be monitored discreetly and conveniently, enabling early identification of aberrant temperature levels and fast intervention when required. However, detailed information regarding the affiliations of the authors or more data regarding the reference are not available.
In [7] Sean et al. examines the idea of ambient assisted living (AAL) and how the Internet of Things (IoT) is integrated
into it. For the purpose of creating an intelligent environment for assisted living, the authors suggest a system that makes use of a range of sensors combined with the XBee platform. These sensors keep an eye on a number of variables, including temperature, humidity, light, motion, and gas concentrations, and they provide real-time data for assessing people's safety and well-being. With the help of the XBee platform, wireless connectivity and communication are made possible, allowing for smooth data transfer between the sensors and the main system. The system aims to improve people's quality of life by offering a smart living environment that fosters independence, security, and comfort, especially for the elderly or people with disabilities.
In [8] Allahham et al. provides a clever monitoring system for university settings that makes use of Zigbee WSNs. The system incorporates a network of wireless sensors placed throughout the campus to gather information on the climate, security, and energy use. These sensors enable real-time monitoring and analysis by wirelessly transmitting the data to a centralised monitoring system. The solution offers effective and dependable wireless communication while minimising power consumption by utilising Zigbee technology. The suggested system intends to better campus administration, boost safety precautions, maximise energy use, and offer useful information for making decisions. However, the reference does not provide any detailed information regarding the affiliations of the writers.
In [4], Desnanjaya et al. present a performance evaluation of the XBee Pro Series 2B RF module-based WSN for data transmission. The study focuses on measuring the efficiency and reliability of data transmission in a WSN context. The XBee Pro Series 2B RF module is used in tests to measure important performance parameters like packet loss, throughput, and latency. They examine how several elements, including distance, hop count, and interference, affect the effectiveness of the WSN. The analysis's findings shed light on the XBee Pro Series 2B RF module's advantages and disadvantages in terms of data transmission in a WSN configuration.
In [9] by Maneesha V. Ramesh et al. and [10] by M. Shyama et al., the hindrances faced by WSNs were explored, which gave us insight into how significantly these factors affect a WSN.
In [11], Sanjeev Kumar Shah et al. gave insights into a flexible and extensible architecture for integrating WSN and IoT, and in [12], V. Sanjay Kumar et al. describe the effectiveness of sensor networks in safeguarding humans and implement a smart surveillance system.
## III Methodology
Hardware elements such as the Jetson Nano, Arduino microcontrollers, XBee modules, and various sensors are combined to build a smart home application. The Arduino microcontrollers operate as bridges between the Jetson Nano and the sensor nodes, while the Jetson Nano serves as the main controller. The Jetson Nano, Arduino microcontrollers, and sensor nodes can all communicate wirelessly with each other using XBee modules, which also makes it easier to send and receive control signals and data. The system's sensors gather information on environmental variables, including vibration, soil moisture, water levels, human presence, distance, temperature, humidity, flame, gas concentration, light intensity, and sound intensity. Within the smart home system, LED lights, buzzers, and a 7-color LED provide visual and aural feedback. The collected data can be processed and analyzed by the Jetson Nano for decision-making and control of the connected devices.
The hardware setup includes the following components:
* **Jetson Nano**: The Jetson Nano serves as the main controller and interface for the smart home system. It provides powerful computing capabilities and acts as the central hub for data processing and decision-making.
* **Arduino**: Four Arduino microcontrollers are utilized in the project. These microcontrollers serve as intermediaries between the Jetson Nano and the different sensor nodes, facilitating data acquisition and communication.
* **Breadboard**: Four breadboards are used to provide a platform for connecting and prototyping the various hardware components, enabling their integration into the system.
* **Jumper Wires**: Jumper wires are employed to establish electrical connections between different components, ensuring the proper flow of data and signals within the system.
* **XBee Modules**: Three XBee modules are utilized to enable wireless communication between the Jetson Nano, Arduino microcontrollers, and the sensor nodes. These modules facilitate the transmission of control signals and data exchange within the smart home application.
* **Sensors**: A variety of sensors are incorporated into the system to gather data on different environmental parameters. These include:
* **Flame sensor**: Detects the presence of fire or flames.
* **Gas sensor**: Measures the concentration of gases in the environment.
* **Photoresistor sensor**: Detects and measures light levels.
* **Big sound sensor**: Captures and analyzes sound intensity.
* **Temperature and humidity sensor**: Measures temperature and humidity levels.
* **PIR sensor**: Detects human presence based on infrared radiation.
* **Ultrasonic sensor**: Measures distance by emitting and receiving ultrasonic waves.
* **Shock sensor**: Detects sudden vibrations or movements.
* **Soil and moisture sensor**: Measures moisture levels in the soil.
* **Water level sensor**: Monitors the water level in tanks or containers.
* Additional Components: The setup also includes LED lights, buzzers, and a 7-color LED for visual and auditory
feedback.
### _Xbee and its Role in WSN_
XBee is a wireless communication module that uses the Zigbee protocol. Zigbee is a low-power, short-range wireless protocol suitable for home automation. It operates in the 2.4 GHz ISM band and provides low power consumption and high security. XBee modules can be used for wireless communication, creating networks, and remotely controlling devices. In a wireless sensor network (WSN), XBee modules connect sensor nodes to a central controller like a Raspberry Pi or Jetson Nano. They enable data transmission and remote control of devices within the smart home. XBee modules can also be configured as a mesh network, ensuring robust and fault-tolerant communication. The XBee S2C documentation and datasheet are available at the link given here ([https://www.digi.com/resources/library/data-sheets/ds_xbee-s2c-802-15-4](https://www.digi.com/resources/library/data-sheets/ds_xbee-s2c-802-15-4)).
The methodology involves connecting and configuring these hardware components according to the desired system architecture. The Arduino microcontrollers interface with the sensors and XBee modules to collect sensor data and transmit it to the Jetson Nano. The Jetson Nano then processes and analyzes the data, utilizing machine learning algorithms if necessary, to make decisions and control the connected devices within the smart home.
The XBee modules enable wireless communication between the Jetson Nano and the Arduino microcontrollers, as well as between the Arduino microcontrollers and the sensor nodes. This wireless communication ensures seamless data exchange and control signal transmission throughout the smart home system.
It is important to note that, unlike traditional datasets, the data in this project is collected in real time from the interconnected Zigbee modules. The collected data is then utilized for analysis and decision-making within the smart home application.
This methodology involves setting up the hardware components, configuring the Arduino microcontrollers and XBee modules, and establishing wireless communication between them. The sensors are integrated into the system to collect real-time data, which is processed by the Jetson Nano for decision-making and control. This hardware-based approach enables the development of a functional smart home application with the capability to monitor and control various aspects of the home environment.
### _Setup Description_
In this section, we describe the implementation of our work. The proposed project involves using two Xbee modules to transmit data wirelessly from one location to another. The Xbee modules will be connected to multiple sensors and Arduino microcontrollers, which will collect data from the sensors and send it to the Xbee modules for transmission. Figure 2 shows the architecture of a smart home system in which an XBee is connected to sensors in every room. These sensors collect data on temperature, humidity, air quality, and other environmental factors.
We use the SoftwareSerial library in our project, which enables us to use any digital pin on the microcontroller as an RX or TX pin. This provides flexibility in terms of pin usage and allows us to communicate with other devices using serial communication without being limited to the hardware serial pins. Additionally, the library provides a number of useful functions for reading and writing data to the software serial port, making it easy to implement serial communication in our project.
* **Living Room** : a sound sensor to measure sound levels, a photoresistor to measure ambient light levels, and two pins for a disco light effect. The sound level is read using the sound sensor, the photoresistor is read to measure ambient light levels, and the DHT sensor is used to measure temperature and humidity. The "send message" function takes a string as input and sends it over the serial connection by breaking it down into bytes and sending each byte one at a time. Figure 3 depicts the sensors used in this room. If the ambient light level is greater than 500, the disco light effect is activated and the two pins are set to opposite states. If the sound level is greater than 30, the LED pin is
Fig. 1: XBee Module
Fig. 2: Setup Architecture
turned on for 2 seconds. Finally, the loop function waits for 1 second before repeating.
* **Kitchen** : uses two analog sensors, a flame sensor and a gas sensor, and a buzzer for auditory output. The flame sensor and gas sensor are used to measure the levels of flame and gas in the surroundings. If the flame sensor value is less than 800 or the gas sensor value is greater than 600, the buzzer sounds for 1 second. Finally, the loop function waits for 1 second before repeating. Figure 4 depicts the sensors used in this room.
* **Porch** : the sensors used are an ultrasonic sensor to measure distance, a PIR sensor to detect motion, and a shock sensor to detect shocks. It also uses an LED and a buzzer for visual and auditory output. The distance is measured using the ultrasonic sensor, the PIR sensor is read to detect motion, and the shock sensor is utilized to detect shocks. If a shock is detected, the LED is turned on. The distance, motion detection, and shock-sensor state are then printed to the Serial console and sent over the serial connection to another device using the "send message" function. The function takes a string as input and sends it over the serial connection by breaking it down into bytes and sending each byte one at a time. Figure 5 depicts the sensors used in this room.
* **Terrace garden** : We use a DHT11 sensor to measure temperature and humidity, an analog sensor to measure the moisture level of the soil, and another sensor to measure the water level in a tank. It also uses a buzzer for an auditory output. The moisture sensor and water level sensor are read to measure the values of moisture
Fig. 4: Kitchen
Fig. 3: Living Room
and water level. The DHT sensor is used to measure the temperature and humidity. If the water level sensor value is greater than 600, the buzzer sounds for 1 second. Finally, the loop function waits for 1 second before repeating. Figure 6 depicts the sensors used in this room.
The data will then be received by the second Xbee module, which can be connected to another Jetson Nano for further processing or storage (MongoDB Cluster).
Overall, the system will allow for wireless monitoring and data collection from multiple sensors in different locations using Xbee wireless communication technology and Arduino microcontroller boards.
### _Cloud Integration_
In this scenario, a Wireless Sensor Network (WSN) is being set up using XBee radios, a Jetson Nano single-board computer, and MongoDB Atlas. The XBee radios are used to wirelessly transmit sensor data from the sensor nodes to the Jetson Nano, which acts as a gateway to the MongoDB Atlas database. MongoDB Atlas is a cloud-based, fully managed version of MongoDB, which is a popular NoSQL database. The sensor data collected by the XBee radios is stored in MongoDB Atlas, where it can be easily queried, analyzed, and visualized.
The Jetson Nano is programmed to communicate with the XBee radios and collect sensor data. It also uses the PyMongo library to interact with MongoDB Atlas. The Jetson Nano can be configured to establish a connection with the MongoDB Atlas server and insert the sensor data into a specific database and collection. The sensor data stored in MongoDB Atlas can be accessed and visualized using various tools such as MongoDB Compass, a graphical user interface for MongoDB, or by using the MongoDB query language (MQL) to create custom queries and aggregations.
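A minimal sketch of such a gateway script is shown below. The serial port, the assumed message format (one line of comma-separated key:value pairs per reading), the connection string, and the database/collection names are placeholders chosen for illustration; the actual firmware and schema may differ.

```python
import serial                      # pyserial: reads lines forwarded by the XBee
from datetime import datetime, timezone
from pymongo import MongoClient

# Assumed values -- replace with the real port, Atlas URI, and collection names.
PORT, BAUD = "/dev/ttyUSB0", 9600
ATLAS_URI = "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net"

client = MongoClient(ATLAS_URI)
db = client["smart_home"]

def parse_line(line: str) -> dict:
    """Parse a line such as 'room:living_room,temp:24.5,humidity:41' into a document."""
    fields = dict(item.split(":", 1) for item in line.strip().split(",") if ":" in item)
    doc = {"timestamp": datetime.now(timezone.utc)}
    for key, value in fields.items():
        try:
            doc[key] = float(value)     # numeric sensor readings
        except ValueError:
            doc[key] = value            # keep labels such as the room name as strings
    return doc

with serial.Serial(PORT, BAUD, timeout=2) as ser:
    while True:
        raw = ser.readline().decode(errors="ignore")
        if not raw.strip():
            continue                            # nothing received within the timeout
        doc = parse_line(raw)
        room = doc.pop("room", "unknown")       # one collection per room
        db[room].insert_one(doc)                # store the reading in MongoDB Atlas
```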
### _Gui_
Streamlit is a Python library that allows you to create interactive web apps for machine learning and data science. To deploy a Streamlit app that uses the data collected from the XBee, transmitted to the Jetson board, and stored in MongoDB Atlas, the following steps are followed. The app (House.py) starts by connecting to the MongoDB database using the pymongo library and the MongoDB connection string. It then presents the user with a dropdown menu to select a room to analyze, which sets the collection to retrieve data from. The app then presents the user with another dropdown menu to select a field to analyze. It then displays the current time when the data was retrieved, and a line chart of the time series data for the selected field using the st.line_chart() function. Next, we define a function get_data() that retrieves data from the selected collection, converts the timestamp field to a datetime object, sets it as the index, and returns the current time and the dataframe. This function is called to initialize the data when the app starts. In summary, we create a Streamlit app that connects to a MongoDB database, allows the user to select a room and field to analyze, retrieves and displays the data in a line chart, and provides the ability to refresh the data.
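The description above corresponds roughly to the sketch below. The connection string, collection names, and field names are placeholders, and the actual House.py app may differ in detail.

```python
import datetime
import pandas as pd
import streamlit as st
from pymongo import MongoClient

ATLAS_URI = "mongodb+srv://<user>:<password>@cluster0.example.mongodb.net"  # placeholder
client = MongoClient(ATLAS_URI)
db = client["smart_home"]

room = st.selectbox("Room to analyze", ["living_room", "kitchen", "porch", "terrace_garden"])
field = st.selectbox("Field to analyze", ["temperature", "humidity", "sound", "light"])

def get_data(collection_name: str):
    """Fetch the collection, index it by timestamp, and return (retrieval time, dataframe)."""
    docs = list(db[collection_name].find({}, {"_id": 0}))
    df = pd.DataFrame(docs)
    if not df.empty:
        df["timestamp"] = pd.to_datetime(df["timestamp"])
        df = df.set_index("timestamp").sort_index()
    return datetime.datetime.now(), df

fetched_at, df = get_data(room)
st.write(f"Data retrieved at {fetched_at:%Y-%m-%d %H:%M:%S}")
if field in df.columns:
    st.line_chart(df[field])            # time series of the selected field
else:
    st.warning("No data for this field yet.")
st.button("Refresh data")               # any button press reruns the script and refreshes the query
```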
## IV Results and Discussions
We have successfully combined Jetson Nano, Arduino microcontrollers, XBee modules, and numerous sensors. The main controller and data processing component is a Jetson Nano, while the bridges to the sensor nodes are Arduino microcontrollers. Wireless connection is made possible via XBee modules, making it easier to exchange control signals and send and receive data. A wide variety of sensors are used in the system to collect data on environmental factors, and LED lights, buzzers, and a 7-color LED are used to provide visual and acoustic feedback. Real-time monitoring and control are made possible by the Jetson Nano's processing and analysis of the sensor data. In order to link sensor nodes to the Jetson Nano and enable remote device control, XBee modules create a wireless sensor network. MongoDB Atlas is used to store and manage sensor data, making it simple to query, analyse, and visualise. MongoDB is accessed through a graphical user interface (GUI) created with Streamlit, which offers an interactive platform for real-time analysis and visualisation of the sensor data. Overall, using this methodology allows for the creation of a smart home system that is functional and has monitoring, controlling, and analytical capabilities.
Table II shows that Zigbee is suitable for home automation, with low-power mesh networks and a maximum of 65,000 nodes. WiFi offers high-speed data transmission and easy setup, but with shorter battery life and limited range. LoRa excels in low-power, long-range communication,
Fig. 5: Porch
Fig. 6: Terrace Garden
while Bluetooth provides short-range communication, ease of use, and compatibility. The choice of wireless technology depends on factors such as range, power consumption, data rate, security, and scalability.
## V Conclusion and Futureworks
We have successfully integrated Jetson Nano, Arduino microcontrollers, XBee modules, and various sensors to create a comprehensive smart home system. It enables real-time monitoring, control, and analysis of environmental variables through wireless communication, data processing, and cloud integration.
Potential future work includes implementing a more secure system: since Zigbee networks are vulnerable to various types of cyber-attacks, future work in this area will focus on developing more secure Zigbee protocols and systems. Zigbee networks can also be integrated with edge computing architectures to enable data processing and decision-making at the network edge, which can lead to lower latency and more efficient use of network resources. The collected data can further be used for machine-learning-based prediction. Finally, since Zigbee networks generate large amounts of data, future work will focus on developing efficient algorithms to process, analyze, and extract useful information from these data.
|
2309.03179 | SLiMe: Segment Like Me | Significant strides have been made using large vision-language models, like
Stable Diffusion (SD), for a variety of downstream tasks, including image
editing, image correspondence, and 3D shape generation. Inspired by these
advancements, we explore leveraging these extensive vision-language models for
segmenting images at any desired granularity using as few as one annotated
sample by proposing SLiMe. SLiMe frames this problem as an optimization task.
Specifically, given a single training image and its segmentation mask, we first
extract attention maps, including our novel "weighted accumulated
self-attention map" from the SD prior. Then, using the extracted attention
maps, the text embeddings of Stable Diffusion are optimized such that, each of
them, learn about a single segmented region from the training image. These
learned embeddings then highlight the segmented region in the attention maps,
which in turn can then be used to derive the segmentation map. This enables
SLiMe to segment any real-world image during inference with the granularity of
the segmented region in the training image, using just one example. Moreover,
leveraging additional training data when available, i.e. few-shot, improves the
performance of SLiMe. We carried out a knowledge-rich set of experiments
examining various design factors and showed that SLiMe outperforms other
existing one-shot and few-shot segmentation methods. | Aliasghar Khani, Saeid Asgari Taghanaki, Aditya Sanghi, Ali Mahdavi Amiri, Ghassan Hamarneh | 2023-09-06T17:39:05Z | http://arxiv.org/abs/2309.03179v4 | # SLiMe: Segment Like Me
###### Abstract
Significant advancements have recently been made using Stable Diffusion (SD) for a variety of downstream tasks, e.g., image generation and editing. This motivates us to investigate SD's capability for image segmentation at any desired granularity using as few as only _one_ annotated sample, which has remained largely an open challenge. In this paper, we propose _SLiMe_, a segmentation method which frames this problem as a one-shot optimization task. Given a single image and its segmentation mask, we propose to first extract our novel _weighted accumulated self-attention map_ along with the cross-attention map from text-conditioned SD. Then, we optimize the text embeddings to highlight areas in these attention maps corresponding to the segmentation mask foregrounds. Once optimized, the text embeddings can be used to segment unseen images. Moreover, leveraging additional annotated data when available, i.e., few-shot, improves _SLiMe_'s performance. Through broad experiments, we examined various design factors and showed that _SLiMe_ outperforms existing one- and few-shot segmentation methods. The project code is publicly available1.
Footnote 1: [https://github.com/aliasgharkhani/SLiMe](https://github.com/aliasgharkhani/SLiMe)
## 1 Introduction
Image segmentation is a multifaceted problem, with solutions existing at various levels of granularity. For instance, in applications like expression recognition or facial alignment, segmenting images of faces into basic regions like nose and eyes might suffice. However, in visual effects applications, more detailed segments such as eye bags, forehead, and chin are necessary for tasks like wrinkle
Figure 1: _SLiMe._ Using just one user-annotated image with various granularity (as shown in the leftmost column), _SLiMe_ learns to segment different unseen images in accordance with the same granularity (as depicted in the other columns).
Figure 2: **Part segmentation results on different objects.**_SLiMe_ exhibits strong performance across a wide variety of objects. The images, along with their corresponding annotations used for optimization, are displayed on the left.
removal. Moreover, from the perspective of an end-user, a straightforward and effective approach to guide a segmentation method is determining what to segment and the desired level of detail across a broad set of images by providing only one or a few segmented examples for the method to use for training. Meanwhile, the user should not need to curate a large dataset with segmentation masks, train a large segmentation model, or encode elaborate and specific properties of target objects into the model. As a result, a customizable segmentation method that can adapt to different levels of granularity, using a few annotated samples, and provide users with the ability to intuitively define and refine the target segmentation according to their specific requirements, is of high importance.
Recent research has tackled the lack of segmentation data by delving into few-shot learning, introducing promising methods such as ReGAN (Tritrong et al., 2021). ReGAN first trains a GAN (Goodfellow et al., 2014) on the data of a specific class they aim to segment. Following this, they generate data by this GAN and the user manually annotates the generated data. Then both the generated data's features from the GAN and the annotations are utilized to train a segmentation model. In contrast, SegDDPM (Baranchuk et al., 2021) extracts features from a pre-trained diffusion model (DM) and trains an ensemble of MLPs for segmentation using few labeled data. Both excel in segmentation with 10-50 examples but struggle with extremely limited samples. Furthermore, these models require training on data specific to each category. For instance, to segment horses, it is necessary to collect a large dataset of horse images, a task that can be inherently cumbersome. Whereas, SegGPT (Wang et al., 2023) employs one-shot learning, training on color-randomized segmentation data which includes both instance and part-level masks. During inference, it segments only one region in a target image using a reference image and its binary segmentation mask. While SegGPT is effective, it demands a significant amount of annotated segmentation data for initial training, keeping the challenge of training effectively with a single annotation still unaddressed.
In this paper, we propose Segment Like Me (_SLiMe_), which segments any object/part from the same category based on a given image and its segmentation mask with an arbitrary granularity level in a one-shot manner, avoiding the need for extensive annotated segmentation data or training a generative model like a GAN for a specific class (see Figure 1 and Figure 2 for some examples). For this purpose, we leverage the rich knowledge of an existing large-scale pre-trained vision/language model, Stable Diffusion (SD) (Rombach et al., 2022). Recent studies like (Hertz et al., 2022) have shown that the cross-attention maps of models like SD highlight different regions of the image when the corresponding text changes. This property has been utilized to modify generated images (Hertz et al., 2022) and to achieve image correspondence (Hedlin et al., 2023). Expanding on this idea, we present two key insights. First, the multifaceted segmentation problem can be framed as a one-shot optimization task where we fine-tune the text embeddings of SD to capture semantic details such as segmented regions guided by a reference image and its segmentation mask, where each text embedding corresponds to a distinct segmented region. Second, we observed that using standalone cross-attention maps leads to imprecise segmentations, as depicted in Figure 3. To rectify this, we propose a novel weighted accumulated self (WAS)-attention map (see Section 4). This attention map incorporates crucial semantic boundary information and employs higher-resolution self-attention maps, ensuring enhanced segmentation accuracy.
Based on these insights, _SLiMe_ uses a single image and its segmentation mask to fine-tune SD's text embeddings through cross- and WAS-attention maps. These refined embeddings emphasize segmented regions within these attention maps, and are used to segment real-world images during inference, mirroring the granularity of the segmented region from the image used for optimization. Through various quantitative and qualitative experiments, we highlight the efficacy of our approach.
Figure 3: **Our proposed weighted accumulated self-attention maps’ sample results. Employing cross-attention naively without the self-attention for segmentation leads to inaccurate and noisy output (a and c). Using self-attention map along with cross-attention map to create WAS-attention map enhances the segmentation (b and d).**
_SLiMe_, even when reliant on just one or a handful of examples, proves to be better than or comparable to supervised counterparts demanding extensive training. Furthermore, despite not being trained on a specific category, _SLiMe_ outperforms other few-shot techniques on average and on most parts, across almost all the datasets. For instance, we outperform ReGAN (Tritrong et al., 2021) by nearly \(10\%\) and SegDDPM (Baranchuk et al., 2021) by approximately \(2\%\) in a 10-sample setting. Additionally, in a 1-sample context, we exceed SegGPT by around \(12\%\) and SegDDPM by nearly \(11\%\).
## 2 Related Work
**Semantic Part Segmentation.** In computer vision, semantic segmentation, wherein a class label is assigned to each pixel in an image, is an important task with several applications such as scene parsing, autonomous systems, medical imaging, image editing, environmental monitoring, and video analysis (Sohail et al., 2022; He et al., 2016; Chen et al., 2017; Zhao et al., 2017; He et al., 2017; Chen et al., 2017; Sandler et al., 2018; Chen et al., 2018). A more fine-grained derivative of semantic segmentation is semantic part segmentation, which endeavors to delineate individual components of objects rather than segmenting the entirety of the objects. Algorithms tailored for semantic part segmentation find applications in subsequent tasks such as pose estimation (Zhuang et al., 2021), activity analysis (Wang and Yuille, 2015), object re-identification (Cheng et al., 2016), autonomous driving and robot navigation (Li et al., 2023). Despite notable advancements in this domain (Li et al., 2023; 2022), a predominant challenge faced by these studies remains the substantial need for annotated data, a resource that is often difficult to procure. Hence, to address these challenges, research has pivoted towards exploring alternative inductive biases and supervision forms. However, a limitation of such methodologies is their reliance on manually curated information specific to the object whose parts they aim to segment. For example, authors of (Wang and Yuille, 2015) integrate inductive biases by harnessing edge, appearance, and semantic part cues for enhanced part segmentation. Compared to these approaches, our method only necessitates a single segmentation mask and doesn't rely on ad-hoc inductive biases, instead leveraging the knowledge embedded in SD.
**Few-shot Semantic Part Segmentation.** One approach to reduce the need for annotated data is to frame the problem within the few-shot part segmentation framework. There is a large body of work on few-shot semantic segmentation (Catalano and Matteucci, 2023; Xiong et al., 2022; Johnander et al., 2022; Zhang et al., 2022; Li et al., 2022), however, they mostly focus on the object- (not part-) level. A recent paper, ReGAN (Trittrong et al., 2021), proposed a few-shot method for part segmentation. To achieve this, the researchers leveraged a large pre-trained GAN, extracting features from it and subsequently training a segmentation model using these features and their associated annotations. While this approach enables the creation of a semantic part segmentation model with limited annotated data, it suffers from a drawback. Specifically, to train a model to segment parts of a particular object category, first a GAN is required to be trained from scratch on data from the same category. For instance, segmenting parts of a human face would necessitate a GAN trained on generating human face images. Thus, even though the method requires minimal annotated data, it demands a substantial amount of images from the relevant category. Following that, a few images, which are generated by the GAN, need to be manually annotated to be used for training the segmentation model. Afterward, a multitude of images should be generated by the GAN and segmented by the trained segmentation model. Finally, all the annotated data and pseudo-segmented data are used for training a segmentation model from scratch. Instead, we leverage pre-trained DMs that are trained on large general datasets, eliminating the need to curate category-specific datasets.
**Diffusion models for semantic part segmentation.** DMs (Sohl-Dickstein et al., 2015) are a class of generative models that have recently gained significant attention because of their ability to generate high-quality samples. DMs have also been used for discriminative tasks such as segmentation, as shown in SegDDPM (Baranchuk et al., 2021). Given a few annotated images, they use the internal features of a DM to train several MLP modules for semantic part segmentation. Compared to SegDDPM, we utilize the semantic knowledge of text-conditioned SD and only optimize the text embeddings. This way, we have to optimize fewer parameters for the segmentation task, which makes it possible to optimize using just one segmentation sample.
SD (Rombach et al., 2022a) has been used for several downstream tasks such as generating faithful images (Chefer et al., 2023), inpainting, outpainting (Rombach et al., 2022a), generating 3D shapes
using text (Tang, 2022), and editing images guided by a text prompt (Brooks et al., 2023). In addition to these, a large body of work fine-tunes SD or uses its cross-attention modules to perform interesting tasks. For instance, (Gal et al., 2022) fine-tunes SD's text embeddings to add a new object or style to its image generation space. As another example, (Hertz et al., 2022) uses SD's cross-attention modules to impose more control over the generation process. Moreover, the authors of (Mokady et al., 2023) edit a real image using SD's cross-attention modules. SD's cross-attention maps have also been used for image correspondence by (Hedlin et al., 2023). Lastly, a recent paper (Patashnik et al., 2023) uses SD's self-attention and cross-attention modules for object-level shape variations. Although these papers explore the applicability of SD to different tasks, its utilization in semantic part segmentation is not fully explored. Therefore, in this work, we take advantage of SD's self-attention and cross-attention modules and fine-tune its text embeddings through these attention mechanisms to perform semantic part segmentation even with just one annotated image.
## 3 Background
**Latent Diffusion Model (LDM)**. LDMs are a category of generative models that model the data distribution by efficiently compressing it into the latent space of an autoencoder and utilizing a DM to model this latent space. An appealing feature of LDMs is that their DM, denoted as \(\epsilon(.;\theta)\), can be extended to represent conditional distributions, conditioned on text or category. To train a text-conditioned LDM, a natural language prompt is tokenized to obtain \(P\). Then \(P\) is passed to a text encoder \(\mathcal{G}(.;\theta)\) to get \(\mathcal{P}=\mathcal{G}(P;\theta)\). Afterward, the input image \(I\) is encoded to obtain \(\mathcal{I}\), and standard Gaussian noise \(\epsilon\) is added to it with respect to time step \(t\) to get \(\mathcal{I}_{t}\). Finally, the following objective is used to optimize the parameters of both \(\mathcal{G}(.;\theta)\) and \(\epsilon(.;\theta)\), with the aim of enabling the model to predict the added noise \(\epsilon\):
\[\mathcal{L}_{\textit{LDM}}=\mathbb{E}_{\mathcal{I},\epsilon\sim\mathcal{N}(0, 1),t}[\|\epsilon-\epsilon(\mathcal{I}_{t},t,\mathcal{P};\theta)\|_{2}^{2}]. \tag{1}\]
In this work, we use text-conditioned SD (Rombach et al., 2022), as our LDM, for two reasons. First, SD is conditioned on the text using the cross-attention modules, which have shown to exhibit rich semantic connections between the text and the image embeddings (Hertz et al., 2022). Second, the internal features of SD are semantically meaningful and preserve the visual structure of the input image, enhancing the interrelation between text and image.
**Attention Modules**. SD's DM employs a UNet structure, which has two types of attention modules (Vaswani et al., 2017): self-attention and cross-attention. The self-attention module calculates attention across the image embedding, capturing relationships between a specific element and other elements within the same image embedding. On the other hand, the cross-attention module computes relationships between the latent representations of two different modalities, like text and image in the case of text-conditioned SD.
An attention module comprises three components: query, key, and value. It aims to transform the query into an output using the key-value pair. Therefore, given query \(Q\), key \(K\), and value \(V\) vectors with the dimension of \(d\), the output \(O\) of an attention module is defined as follows:
\[O=\text{Softmax}\left(\frac{QK^{\intercal}}{\sqrt{d}}\right)\cdot V. \tag{2}\]
In the self-attention module, the query, key, and value vectors are derived from the image embedding, while in the cross-attention module, the query vector is derived from the image embedding, and the key and value vectors are derived from the text embedding. In our scenario, we extract the normalized attention map denoted as \(S=\text{Softmax}\left(\frac{QK^{\intercal}}{\sqrt{d}}\right)\), which is applicable to both the self-attention and cross-attention modules, and denote them as \(S_{sa}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times H^{\prime}\times W^{\prime}}\) and \(S_{ca}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times T}\), respectively. In this context, \(H^{\prime}\) and \(W^{\prime}\) represent the height and width of the image embedding and \(T\) denotes the total number of text tokens. \(S_{sa}\) shows the pairwise similarity of the elements in its input image embedding. Hence, each element \(p\) in its input is associated with an activation map highlighting the elements similar to \(p\) (Patashnik et al., 2023). Moreover, the intensity of the similar elements decreases as we move farther away from \(p\). On the other hand, for
each text token, \(S_{ca}\) has an activation map, which effectively spotlights elements within the image embedding that align with that token within the model's semantic space. For example, if the model is instructed to generate an image of a bear with the text prompt "a bear", the activation map associated with the "bear" token within \(S_{ca}\) will emphasize the elements that correspond to the bear object within the generated image.
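As an illustrative sketch (not part of the official implementation; the tensor layout and the hook mechanism for reading \(Q\) and \(K\) out of SD's UNet are our assumptions), the normalized attention map \(S\) of Equation 2 can be computed as follows:

```python
import torch

def normalized_attention_map(Q: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    """Return S = softmax(Q K^T / sqrt(d)) for one attention module.

    Q: (N_q, d) queries derived from the image embedding.
    K: (N_k, d) keys derived from the image embedding (self-attention)
       or from the text embedding (cross-attention).
    """
    d = Q.shape[-1]
    scores = Q @ K.transpose(-2, -1) / d ** 0.5
    return scores.softmax(dim=-1)  # (N_q, N_k)

# Self-attention:  N_q = N_k = H' * W'  ->  reshape S to (H', W', H', W') to obtain S_sa.
# Cross-attention: N_k = T text tokens  ->  reshape S to (H', W', T) to obtain S_ca.
```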
## 4 Method
We introduce _SLiMe_, a method that enables us to perform segmentation at various levels of granularity, needing only one image and its segmentation mask. Prior research has demonstrated that SD's cross-attention maps can be used in detecting coarse semantic objects during the generation process for more control in generation (Hertz et al., 2022) or finding correspondence between images (Hedlin et al., 2023). However, there remains uncertainty regarding the applicability of cross-attention maps for finer-grained segmentation of objects or parts, especially within real-world images. To resolve this, we frame the segmentation problem as a one-shot optimization task where we extract the cross-attention map and our novel WAS-attention map to fine-tune the text embeddings, enabling each text embedding to grasp semantic information from individual segmented regions (Figure 4). During the inference phase, we use these optimized embeddings to obtain the segmentation mask for unseen images. In what follows, we will first delve into the details of the text embedding optimization and then the inference process.
### Optimizing Text Embedding
Given a pair of an image (\(I\in\mathbb{R}^{H\times W\times 3}\)) and a segmentation mask (\(M\in\{0,1,2,...,K-1\}^{H\times W}\)) with \(K\) classes, we optimize the text embeddings using three loss terms. The first loss term is a cross-entropy loss between the cross-attention map and the ground truth mask. The second is the Mean Squared Error (MSE) loss between the WAS-attention map and the ground truth mask. These loss terms refine the text embeddings and enable them to learn to emphasize segmented regions within both cross- and WAS-attention maps. Additionally, there is a subsequent SD regularization term to ensure that the optimized text embeddings remain within the trained distribution of SD.
To optimize the text embeddings, it is necessary to extract the cross-attention and self-attention maps. These maps are derived from SD's UNet by initially encoding the training image \(I\) into the image embedding, \(\mathcal{I}\). Subsequently, a standard Gaussian noise is added to this embedding with respect to the time step \(t_{\text{opt}}\), resulting in \(\mathcal{I}_{t}\). Next, a text prompt is converted to a sequence of text embeddings denoted as \(\mathcal{P}\). We then take the first \(K\) text embeddings and optimize them. The corresponding text embedding of each class is denoted by \(\mathcal{P}_{k}\). It is essential to note that SD is configured to handle 77 text tokens. Consequently, our method can accommodate up to 77 segmentation classes, which is sufficient for most applications. Finally, \(\mathcal{P}\) and \(\mathcal{I}_{t}\) are fed into the UNet to obtain the denoised image embedding \(\mathcal{I}^{\prime}\) and extract the cross- and self-attention maps.
SD has multiple cross-attention modules distributed across various layers. We denote the normalized cross-attention map of the \(l^{th}\) layer as \(\{S_{ca}\}_{l}\in\mathbb{R}^{H^{\prime}_{l}\times W^{\prime}_{l}\times T}\) and average them over different layers, as we have empirically observed that this averaging improves the results. However, since \(H^{\prime}_{l}\) and \(W^{\prime}_{l}\) vary across different layers, we resize all \(\{S_{ca}\}_{l}\) to a consistent size for all the utilized layers.
Figure 4: **Optimization step.** After extracting text and image embeddings, adding noise to the image embedding, and passing both through the UNet to obtain cross- and WAS-attention maps, we calculate two losses using these maps and the ground truth mask. Additionally, we incorporate SD’s loss, derived from the comparison between the added noise and the UNet’s predicted noise.
Finally, the attention map employed in our loss function is calculated as follows:
\[A_{ca}=Average_{l}(Resize(\{S_{ca}\}_{l})), \tag{3}\]
where \(A_{ca}\in\mathbb{R}^{H^{\prime\prime}\times W^{\prime\prime}\times T}\), \(Average_{l}\) computes the average across layers, and \(Resize\) refers to bilinear interpolation for resizing to dimensions \(H^{\prime\prime}\times W^{\prime\prime}\). Figure 5 visually depicts this procedure. Finally, we compute the cross-entropy loss between the resized ground truth mask \(M\) to \(H^{\prime\prime}\times W^{\prime\prime}\) (referred to as \(M^{\prime}\)) and first \(K\) channels in the resized cross-attention map \(A_{ca}\) for \(k=\{0,...,K-1\}\), as outlined below:
\[\mathcal{L}_{\textit{CE}}=\textit{CE}(A_{ca}^{[0:K-1]},M^{\prime}), \tag{4}\]
where _CE_ refers to cross-entropy. Using this loss, we optimize the \(k^{th}\) text embedding such that \(A_{ca}^{k}\) highlights the \(k^{th}\) class's region in the segmentation mask, for \(k=\{1,...,K-1\}\). Note that we do not optimize the first text embedding and assign \(A_{ca}^{0}\) to the background class, as empirically we have found that optimizing it yields suboptimal performance.
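The following PyTorch-style sketch illustrates Equations 3 and 4; it is a simplified reading of the procedure rather than the released code, and treating the averaged attention values directly as class scores for the cross-entropy is our assumption:

```python
import torch
import torch.nn.functional as F

def cross_attention_loss(S_ca_layers, M, K, out_size):
    """Eq. 3-4: average resized cross-attention maps and compare them to the mask.

    S_ca_layers: list of per-layer maps, each of shape (H'_l, W'_l, T)
    M:           ground-truth mask of shape (H, W), values in {0, ..., K-1}
    out_size:    (H'', W''), the common resolution used for the loss
    """
    resized = [
        F.interpolate(s.permute(2, 0, 1)[None], size=out_size,
                      mode="bilinear", align_corners=False)[0]
        for s in S_ca_layers
    ]                                               # each (T, H'', W'')
    A_ca = torch.stack(resized).mean(dim=0)         # Eq. 3: (T, H'', W'')
    M_res = F.interpolate(M[None, None].float(), size=out_size,
                          mode="nearest")[0, 0].long()
    scores = A_ca[:K][None]                         # first K channels as class scores
    return F.cross_entropy(scores, M_res[None]), A_ca
```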
However, as the resolution of the \(\{S_{ca}\}_{l}\) we use is lower than that of the input image, object edges are vague in them. To enhance segmentation quality, we propose the WAS-attention map, which integrates both self-attention and cross-attention maps. Besides encoding pairwise similarity between the image embedding's elements, the self-attention map has two additional features that make it suitable for improving the segmentation results. First, the self-attention maps that we use have a higher resolution than the cross-attention maps we utilize. Second, they show the boundaries in more detail. Like the cross-attention maps, we extract self-attention maps from multiple layers and compute their average as follows:
\[A_{sa}=\textit{Average}_{l}(\{S_{sa}\}_{l}), \tag{5}\]
where \(A_{sa}\in\mathbb{R}^{H^{\prime}_{l}\times W^{\prime}_{l}\times H^{\prime}_{l}\times W^{\prime}_{l}}\) and \(Average_{l}\) calculates the average across layers. In Equation 5, there is no need for a \(Resize\) function, as the self-attention maps that we use all have the same size.
To calculate WAS-attention map, we first resize \(A_{ca}^{k}\) to match the size of \(A_{sa}\) using bilinear interpolation and call it \(R_{ca}^{k}\). Consequently, for each element \(p\) in \(R_{ca}^{k}\) we have a channel in \(A_{sa}\) that highlights relevant elements to \(p\). Finally, we calculate the weighted sum of channels of \(A_{sa}\) to obtain \(S_{\textit{WAS}}^{k}\) (WAS-attention map). The weight assigned to each channel is the value of the corresponding element of that channel in \(R_{ca}^{k}\) (Figure 5). This process can be outlined as follows:
\[S_{\textit{WAS}}^{k}=sum(flatten(R_{ca}^{k})\odot A_{sa}). \tag{6}\]
This refinement enhances the boundaries because \(A_{sa}\) possesses rich understanding of the semantic region boundaries (see the cross-attention and WAS-attention maps in Figure 4). At the end, we
Figure 5: **Attention-Extraction module. To extract WAS-attention map of \(k^{th}\) text embedding with respect to an image, we follow these three steps: (1) We feed the \(k^{th}\) text embedding (\(\mathcal{P}_{k}\)) together with the noised embedding of the image (\(\mathcal{I}_{t}\)) to the UNet. Then calculate \(A_{ca}^{k}\) by extracting the cross-attention maps of \(\mathcal{P}_{k}\) from several layers, resizing and averaging them. (2) We extract the self-attention maps from several layers and average them (\(A_{sa}\)). (3) Finally, we flatten \(A_{ca}^{k}\) to get \(F_{ca}^{k}\) and calculate a weighted sum of channels of \(A_{sa}\), by weights coming from \(F_{ca}^{k}\), and call it “Weighted Accumulated Self-attention map” (\(S_{\textit{WAS}}^{k}\)). The UNet also produces an output that represents the predicted noise, which is used for calculating the loss of the SD.**
resize \(S_{\text{WAS}}^{k}\) to \(H^{\prime\prime}\times W^{\prime\prime}\) and calculate the _MSE_ loss as follows:
\[\mathcal{L}_{\text{MSE}}=\sum_{k=0}^{K-1}\|Resize(S_{\text{WAS}}^{k})-M_{k}^{ \prime}\|_{2}^{2}, \tag{7}\]
where \(M_{k}^{\prime}\) is a binary mask coming from the resized ground truth mask \(M^{\prime}\), in which only the pixels of the \(k^{th}\) class are 1.
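A compact sketch of Equations 5-7 is given below, as an illustration under our reading of the text; the function and variable names are ours, not the official implementation:

```python
import torch
import torch.nn.functional as F

def was_attention_map(A_ca_k, A_sa):
    """Eq. 6: weighted sum of self-attention channels, weighted by cross-attention.

    A_ca_k: cross-attention map of class k, shape (H'', W'')
    A_sa:   layer-averaged self-attention map, shape (H', W', H', W')
    """
    H, W = A_sa.shape[:2]
    R = F.interpolate(A_ca_k[None, None], size=(H, W),
                      mode="bilinear", align_corners=False)[0, 0]   # R_ca^k, (H', W')
    weights = R.flatten()                      # one weight per image-embedding element
    channels = A_sa.reshape(H * W, H, W)       # one activation map per element p
    return (weights[:, None, None] * channels).sum(dim=0)           # S_WAS^k, (H', W')

def was_mse_loss(S_was_per_class, M_binary_per_class, out_size):
    """Eq. 7: MSE between resized WAS-attention maps and per-class binary masks."""
    loss = 0.0
    for S_k, M_k in zip(S_was_per_class, M_binary_per_class):
        S_r = F.interpolate(S_k[None, None], size=out_size,
                            mode="bilinear", align_corners=False)[0, 0]
        loss = loss + ((S_r - M_k.float()) ** 2).sum()
    return loss
```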
The last loss we use is SD's loss function (\(\mathcal{L}_{LDM}\)), which is the _MSE_ loss between the added noise and the predicted noise. We use this loss to prevent the text embeddings from drifting too far from the space that SD understands. Finally, our objective for optimizing the text embeddings is defined as:
\[\mathcal{L}=\mathcal{L}_{CE}+\alpha\mathcal{L}_{\text{MSE}}+\beta\mathcal{L}_ {\text{LDM}}, \tag{8}\]
where \(\alpha\) and \(\beta\) are the coefficients of the loss functions.
### Inference
During inference, our objective is to segment unseen images at the same level of details as the image used during optimization. To achieve this, we begin with the unseen image and encode it into the latent space of SD. Following this, a standard Gaussian noise is introduced to the encoded image, with the magnitude determined by the time parameter \(t_{\text{test}}\). Subsequently, we use the optimized text embeddings along with the encoded image to derive corresponding cross-attention and self-attention maps from the UNet model. These attention maps, as shown in Figure 5, enable us to obtain WAS-attention maps for each text embedding. Afterward, we select the first \(K\) WAS-attention maps that correspond to \(K\) classes. These selected maps are then resized using bilinear interpolation to match the dimensions of the input image and are stacked along the channel dimension. Subsequently, we generate a segmentation mask by performing an argmax across the channels. It is important to note that this process can be repeated for multiple unseen images during inference, without requiring a new optimization.
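The inference procedure can be summarized by the following sketch; `encode_and_add_noise`, `extract_attention_maps`, and `was_attention_map` are hypothetical helpers wrapping the steps described above, and the code is illustrative rather than the released implementation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def segment_image(image, optimized_text_embeddings, K, t_test):
    """Segment an unseen image with the already-optimized text embeddings."""
    noisy_latent = encode_and_add_noise(image, t_test)     # SD VAE encode + noise at t_test
    A_ca, A_sa = extract_attention_maps(noisy_latent, optimized_text_embeddings)
    maps = [was_attention_map(A_ca[k], A_sa) for k in range(K)]   # one map per class
    maps = torch.stack([
        F.interpolate(m[None, None], size=image.shape[-2:],
                      mode="bilinear", align_corners=False)[0, 0]
        for m in maps
    ])                                                     # (K, H, W)
    return maps.argmax(dim=0)                              # per-pixel class indices
```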
An analysis of the selection of various parameters used in our method is provided in the Appendix A.2.
## 5 Experiments
In this section, we demonstrate the superiority of _SLiMe_ in semantic part segmentation. We use mIoU to compare our approach against three existing methods: ReGAN (Tritrong et al., 2021), SegDDPM (Baranchuk et al., 2021), and SegGPT (Wang et al., 2023) on two datasets: PASCAL-Part (Chen et al., 2014) and CelebAMask-HQ (Lee et al., 2020). ReGAN and SegDDPM utilize pre-trained GAN and DDPM models, respectively, training them on FFHQ and LSUN-Horse datasets for face and horse part segmentation. Additionally, ReGAN employs a pre-trained GAN from the LSUN-Car dataset for car part segmentation. We present the results for both 10-sample and 1-sample settings, utilizing a single validation sample for 10-sample experiments of _SLiMe_. Also, all experiments are conducted three times with different initializations, reporting their mean and standard
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & Body & Light & Plate & Wheel & Window & Background & Average \\ \hline CNN\({}^{*}\) & 73.4 & 42.2 & 41.7 & 66.3 & 61.0 & 67.4 & 58.7 \\ CNN+CRF\({}^{*}\) & 75.4 & 36.1 & 35.8 & 64.3 & 61.8 & 68.7 & 57 \\ \hline ReGAN & 75.5 & 29.3 & 17.8 & 57.2 & 62.4 & 70.7 & 52.15 \\ _SLiMe_ & **81.5 \(\pm\) 1.0** & **56.8 \(\pm\) 1.2** & **54.8 \(\pm\) 2.7** & **68.3 \(\pm\) 0.1** & **70.3 \(\pm\) 0.9** & **78.4 \(\pm\) 1.6** & **68.3 \(\pm\) 1.0** \\ \hline SegGPT\({}^{*}\) & 62.7 & 18.5 & 25.8 & **65.8** & **69.5** & **77.7** & 53.3 \\ _SLiMe_ & **79.6 \(\pm\) 0.4** & **37.5 \(\pm\) 5.4** & **46.5 \(\pm\) 2.6** & 65.0 \(\pm\) 1.4 & 65.6 \(\pm\) 1.6 & 75.7 \(\pm\) 3.1 & **61.6 \(\pm\) 0.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Segmentation results for class car. _SLiMe_ consistently outperforms ReGAN, even though ReGAN utilized generated data alongside 10 annotated data for training. Furthermore, our method exhibits superior performance to SegGPT on average, despite SegGPT being supervised. The first two rows show the supervised methods, for which we use the reported numbers in ReGAN. The second two rows show the 10-sample setting and the last two rows, refer to the 1-sample scenario. \({}^{*}\) indicates the supervised methods.**
deviation. We conduct experiments for SegDDPM and SegGPT using custom versions of the test sets of the above-mentioned datasets, which are based on the ReGAN settings, and report their results accordingly. For the remaining methods, we reference the results reported by ReGAN. Note that ReGAN and SegDDPM are not universally applicable to arbitrary classes unless a large dataset for the given class is collected and a generative model is trained on it. In contrast, _SLiMe_ does not require collecting large category-specific data or training an additional generative model, because of the inherent semantic knowledge embedded in SD (Figure 2). SegGPT, meanwhile, requires a large segmentation dataset to be trained initially.
**PASCAL-Part.** This dataset provides detailed annotations of object parts. For our experiments, we focus on car and horse classes (for more details, please refer to Appendix B.1). Table 1 presents results for the car class. As there is no available pre-trained model for the car class in SegDDPM, we couldn't make a comparison with this model for this category. As evident from Table 1, _SLiMe_ outperforms ReGAN in the 10-sample setting on average and all the part segments by a significant margin. Moreover, in the 1-sample setting, _SLiMe_ either outperforms SegGPT by a large margin or performs comparably. Likewise, Table 2 displays our results for the horse class, where it is evident that our method, _SLiMe_, outperforms ReGAN, SegDDPM, and SegGPT on average and for most of the parts. It is worth noting that, even though SegGPT only requires a single segmentation sample for inference, it is a fully supervised method and demands a large segmentation dataset for training. In contrast, _SLiMe_ is truly a _one-shot_ technique, where only a single sample is needed for optimization.
**CelebAMask-HQ.** This is a dataset of the facial part segmentation, and we report results on the parts used in ReGAN for comparison (for more details, please consult Appendix B.1). Figure 6 and Table 3 showcase our qualitative and quantitative results. In the 1-sample setting, _SLiMe_ outperforms other methods on average and for the majority of parts, demonstrating its superiority in 1-sample scenario. On the other hand, in the 10-sample setting, except for three parts, our method either performs better or comparably to other methods. As mentioned earlier, note that SegGPT benefits from training on a large segmentation dataset. Also, the other two methods employ class-specific pre-trained models. In contrast, _SLiMe_ utilizes a model pre-trained on general data, equipping it with the ability to work across a wide range of categories rather than being limited to a specific class.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & Head & Leg & Neck+Torso & Tail & Background & Average \\ \hline Shape+Appearance\({}^{*}\) & 47.2 & 38.2 & 66.7 & - & - & - \\ CNN+CRF\({}^{*}\) & 55.0 & 46.8 & - & 37.2 & 76 & - \\ \hline ReGAN & 50.1 & 49.6 & **70.5** & 19.9 & 81.6 & 54.3 \\ SegDDPM & 41.0 & 59.1 & 69.9 & 39.3 & **84.3** & 58.7 \\ _SLiMe_ & **63.8 \(\pm\) 0.7** & **59.5 \(\pm\) 2.1** & 68.1 \(\pm\) 4.4 & **45.4 \(\pm\) 2.4** & 79.6 \(\pm\) 2.5 & **63.3 \(\pm\) 2.4** \\ \hline SegGPT\({}^{*}\) & 41.1 & 49.8 & **58.6** & 15.5 & 36.4 & 40.3 \\ SegDDPM & 12.1 & 42.4 & 54.5 & 32.0 & 74.1 & 43.0 \\ _SLiMe_ & **61.5 \(\pm\) 1.0** & **50.3 \(\pm\) 0.7** & 55.7 \(\pm\) 1.1 & **40.1 \(\pm\) 2.9** & **74.4 \(\pm\) 0.6** & **56.4 \(\pm\) 0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Segmentation results for class horse.**_SLiMe_ outperforms ReGAN, SegDDPM, and SegGPT on average and most of the parts. The first two rows show the supervised methods, for which we use the reported numbers in ReGAN. The middle three rows show the 10-sample setting and the last three rows, are the results of the 1-sample scenario. \({}^{*}\) indicates the supervised methods.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & Cloth & Eyebrow & Ear & Eye & Hair & Mouth & Neck & Nose & Face & Background & Average \\ \hline ReGAN & 15.5 & **68.2** & 37.3 & **75.4** & 84.0 & **85.5** & **89.3** & **84.6** & **90.0** & 84.7 & 69.9 \\ SegDDPM & 41.6 & 60.3 & **71.3** & 75.5 & **84.0** & 83.5 & 79.2 & 81.9 & 89.2 & 84.5 & 69.9 \\ _SLiMe_ & **63.1 \(\pm\) 1.6** & 62.0 \(\pm\) 1.6 & 64.2 \(\pm\) 1.9 & 65.3 \(\pm\) 0.8 & 85.3 \(\pm\) 0.4 & 82.1 \(\pm\) 1.6 & 79.4 \(\pm\) 2.2 & 79.1 \(\pm\) 1.4 & 88.3 \(\pm\) 0.2 & **87.1 \(\pm\) 0.0** & 75.7 \(\pm\) 0.4 \\ \hline ReGAN & & 57.8 & 71.1 & 76.0 & & & & & & \\ SegGPT & 24 & **48.5** & 32.3 & 51.7 & **82.7** & 66.7 & 77.3 & 73.6 & 85.7 & 28.0 & 57.1 \\ SegDDPM & 28.9 & 46.6 & **57.3** & **61.5** & 72.3 & 44.0 & 66.6 & 69.4 & 77.5 & 76.6 & 60.1 \\ _SLiMe_ & **52.6 \(\pm\) 1.4** & 44.2 \(\pm\) 2.1 & 57.1 \(\pm\) 3.6 & 61.3 \(\pm\) 4.6 & 80.9 \(\pm\) 0.5 & **74.8 \(\pm\) 2.9** & **78.9 \(\pm\) 1.3** & **77.5 \(\pm\) 1.8** & **86.5 \(\pm\) 0.3** & **81.6 \(\pm\) 0.8** & **69.6 \(\pm\) 0.3** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Segmentation results of CelebAMask-HQ10.** Our method consistently outperforms ReGAN, SegDDPM, and SegGPT in the majority of parts in 1-sample setting in the last four rows. Additionally, _SLiMe_ either outperforms or performs comparably to ReGAN and SegDDPM in 10-sample setting in the first three rows. \({}^{*}\) is used to denote supervised methods.
**Additional Results**. We also showcase the versatility of our method, which can be optimized on an occluded object and infer images without the occlusion, or conversely, be optimized on a fully visible object and make predictions on occluded objects. This shows our method's capability to comprehend part and object semantics. Figure 7 illustrates that despite occlusion of the target region caused by the person in the image used for optimization, our method performs well. It is also possible to segment occluded objects using a visible reference object (see Figure 12). Moreover, in Figure 8, we compare our method against SegGPT (Wang et al., 2023) using two camouflaged animals, namely a crab and a lizard. Remarkably, _SLiMe_ achieves precise segmentation of these animals, even in situations where they are challenging to detect with the naked eye. This shows that _SLiMe_ learns rich semantic features about the target object that do not fail easily due to the lack of full perception.
## 6 Conclusion
We proposed _SLiMe_, a _one-shot_ segmentation method capable of segmenting _various objects/parts_ in _various granularity_. Through an extensive set of experiments and by comparing it to state-of-the-art few-shot and supervised image segmentation methods, we showed its superiority. We showed that, although _SLiMe_ does not require training on a specific class of objects or a large segmentation dataset, it outperforms other methods. On the other hand, _SLiMe_ has some limitations. For example, it may result in noisy segmentations when the target region is tiny. This can be attributed to the fact that the attention maps, which we extract from SD for segmentation mask generation, have a
Figure 8: **Segmentation results of camouflaged objects.** The larger images are used for optimizing _SLiMe_, and as the source image for SegGPT. Notably, _SLiMe_ outperforms SegGPT.
Figure 6: **Qualitative face segmentation results.** Results of _SLiMe_ optimized with 10 samples.
Figure 7: **Segmentation results of occluded objects.** Although _SLiMe_ is optimized using an occluded car’s image (the leftmost image), it demonstrates proficiency in car segmentation on unseen images (the remaining images on the right). Particularly noteworthy is its ability to accurately segment all three cars in the top-right image.
smaller size than the input image. To counter this, we employed bilinear interpolation for upscaling. Nonetheless, due to scaling, some pixels might be overlooked, leading to undesired noisy outcomes. For visual examples of this case, please refer to Appendix A.1. Resolving this limitation, and extending the method to 3D data and videos, would be interesting future directions. |
2309.14405 | Joint Audio and Speech Understanding | Humans are surrounded by audio signals that include both speech and
non-speech sounds. The recognition and understanding of speech and non-speech
audio events, along with a profound comprehension of the relationship between
them, constitute fundamental cognitive capabilities. For the first time, we
build a machine learning model, called LTU-AS, that has a conceptually similar
universal audio perception and advanced reasoning ability. Specifically, by
integrating Whisper as a perception module and LLaMA as a reasoning module,
LTU-AS can simultaneously recognize and jointly understand spoken text, speech
paralinguistics, and non-speech audio events - almost everything perceivable
from audio signals. | Yuan Gong, Alexander H. Liu, Hongyin Luo, Leonid Karlinsky, James Glass | 2023-09-25T17:59:05Z | http://arxiv.org/abs/2309.14405v3 | # Joint Audio and Speech Understanding
###### Abstract
Humans are surrounded by audio signals that include both speech and non-speech sounds. The recognition and understanding of speech and non-speech audio events, along with a profound comprehension of the relationship between them, constitute fundamental cognitive capabilities. For the first time, we build a machine learning model, called LTU-AS, that has a conceptually similar universal audio perception and advanced reasoning ability. Specifically, by integrating Whisper [1] as a perception module and LLaMA [2] as a reasoning module, LTU-AS can _simultaneously_ recognize and _jointly_ understand spoken text, speech paralinguistics, and non-speech audio events - almost everything perceivable from audio signals.
Yuan Gong\({}^{1}\), Alexander H. Liu\({}^{1}\), Hongyin Luo\({}^{1}\), Leonid Karlinsky\({}^{2}\), James Glass\({}^{1}\)\({}^{1}\)MIT CSAIL, USA
{yuangong,alexhliu,hyluo,glass}@mit.edu, [email protected]
Interactive demo available at huggingface.co/spaces/yuangongfdu/ltu-2.
## 1 Introduction
Humans live in a multifarious environment of audio signals, encompassing both speech and a wide variety of non-speech sounds. The ability to accurately discern, interpret, and integrate these speech and non-speech audio elements, in conjunction with a profound understanding of the interrelationships they entail, represents a fundamental cognitive capability of humans. When we hear "watch out!" and a car horn simultaneously, we can infer the danger. If we hear birds chirping and someone says "that's a rare one," we know there is an unusual bird nearby. Understanding music usually requires paying attention to both the lyrics and the melody.
However, most existing machine learning models can only recognize either speech or audio events. Further, while being strong in audio or speech perception, these models possess limited reasoning and understanding capabilities. This motivates us to build a _joint audio and speech understanding_ model that is able to simultaneously recognize and jointly understand speech and audio events. Particularly, as shown in Figure 1, our model integrates pretrained Whisper [1] automatic speech recognizer (ASR) and a time and layer-wise Transformer (TLTR) [3] as the perception module and LLaMA [2] large language model (LLM) as the reasoning module. In addition, we formulate the training data as (audio, question, answer) (AQA) tuples, which allows us to combine 13 audio and speech datasets of various tasks with different label sets into a single 9.6M Open-ASQA dataset, among which 6.9 million data are open-ended AQA tuples generated by GPT [4] with _audio instruction tuning_[5]. We call our model LTU-AS (listen to, think of, and understand audio and speech). Performance-wise, we show LTU-AS is strong on all audio/speech tasks. But more importantly, as shown in Fig. 1 and Table 6, LTU-AS can answer free-form open-ended questions about the audio and speech with an instruction following rate over 95% (evaluated by GPT-4), and exhibits emerging joint audio and speech reasoning ability.
**Related Work:** LTU-AS substantially improves upon the recent audio large language model LTU [5], which only understands non-speech audio. Particularly, LTU-AS adopts Whisper [1] and TLTR [3] as the audio encoder instead of the AST [6] audio encoder in LTU. This change enables LTU-AS to recognize both speech and audio events. We also augment the LTU OpenAQA-5M dataset with 4 million speech and audio/speech understanding AQAs in creating the 9.6M Open-ASQA dataset. There are a few recent efforts on joint audio and speech recognition [7, 8, 9, 10], but none of them exhibit advanced joint reasoning ability. Other recent audio LLMs [11, 12, 13, 14] primarily focus on speech only. To the best of our knowledge, LTU-AS is the first joint audio and speech understanding model.
Figure 1: Illustration of the LTU-AS model and real samples showing its _joint_ audio and speech understanding ability.
## 2 LTU-AS model architecture
### Design Overview
The architecture of LTU-AS is depicted in Fig. 1. The system input is a pair of audio and question in natural language form. The audio is first input to the Whisper audio encoder. Then, the output of the Whisper encoder is fed to the Whisper decoder to transcribe it to _discrete_ spoken text (if there is no speech, then the output of the decoder will be empty, which is as expected). Meanwhile, we feed the output of all 32 Whisper encoder intermediate layers to an AudioSet-pretrained Time and Layer-Wise Transformer (TLTR) [3] to encode "soft" audio events and speech paralinguistic information, and then project to a series of _continuous_ audio tokens \(\{A\}\) with a linear layer.
During training, the entire Whisper model is frozen. Only the TLTR model and projection layer are trainable. This design is due to a few reasons: First, training a large language model as an automatic speech recognizer (ASR) can be very expensive but the benefit is unclear [14]; we thus freeze the entire Whisper model to inherit its strong and robust ASR ability. Second, although the Whisper encoder encodes rich audio event and speech paralinguistic information [3, 15, 16], this information is spread across the representations of different layers. Since we anticipate LTU-AS being a universal perception model, we use the TLTR model to apply attention mechanisms over both time and layers.
The key advantage of this setting is that the audio is encoded to both text and continuous tokens, so both linguistic and non-linguistic information is kept. We then tokenize and embed the spoken text and the input question into sequences of text tokens \(\{S\}\) and \(\{Q\}\), respectively. Finally, we concatenate and input \(\{A\}\), \(\{S\}\), and \(\{Q\}\) to the LLaMA LLM. Due to computational limits, we trim the length of the audio token sequence \(\{A\}\) to 25 (corresponding to 10 seconds of audio), but allow \(\{S\}\) and \(\{Q\}\) to be of variable length.
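A minimal PyTorch-style sketch of this trainable audio path is shown below; the class and attribute names are illustrative (they are not taken from the released code), and the frozen Whisper and LLaMA components are assumed to be provided elsewhere:

```python
import torch
import torch.nn as nn

class AudioTokenizer(nn.Module):
    """Sketch of the trainable audio path of LTU-AS (names are illustrative)."""

    def __init__(self, tltr: nn.Module, dim_audio: int = 1280,
                 dim_llama: int = 4096, n_audio_tokens: int = 25):
        super().__init__()
        self.tltr = tltr                             # AudioSet-pretrained TLTR (trainable)
        self.proj = nn.Linear(dim_audio, dim_llama)  # 1280 -> 4096 projection layer
        self.n_audio_tokens = n_audio_tokens         # 10 s of audio at 2.5 Hz

    def forward(self, whisper_layer_feats: torch.Tensor) -> torch.Tensor:
        # whisper_layer_feats: (32 layers, time, 1280) from the frozen Whisper encoder
        a = self.tltr(whisper_layer_feats)           # (time / 40, 1280)
        return self.proj(a)[: self.n_audio_tokens]   # continuous audio tokens {A}

# The LLM input is the concatenation [{A}; embed({S}); embed({Q})], where {S} is the
# Whisper transcript and {Q} the question, both embedded with LLaMA's token embeddings.
```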
### Audio Encoder
**Whisper**[1] is a recently proposed robust ASR model that features a standard Transformer [17]-based encoder-decoder architecture trained with a massive 680k hour labeled speech corpus recorded in diverse conditions. Notably, it was found that the Whisper encoder features not only encode linguistic information, but also encode rich general background sound information [3] and paralinguistic and other information (e.g., emotion [15] and language development [16]). In this paper, we use the Whisper-large model whose encoder and decoder are both 32-layer, 1280-dimensional Transformer networks.
**Time and Layer-Wise Transformer (TLTR)**: We use the AudioSet pretrained TLTR for Whisper proposed in [3], originally for audio event detection. We empirically find there is no need to pretrain it further on speech classification tasks before training together with LTU-AS. Whisper and TLTR pool the audio with a factor of 40, i.e., for each 10-second audio (1000 frames), the length of the TLTR output is 25 (2.5Hz).
**Projection Layer**: We use a single linear layer to project the TLTR output from 1280-dimensional to 4096-dimensional to match the embedding dimension of the LLaMA LLM.
### LLaMA Large Language Model
We use the LLaMA-7B LLM [2] with Vicuna [18] instruction following for fine-tuning. To mitigate catastrophic forgetting [19] and save computation, we freeze the entire LLaMA model and adopt Low-rank Adaptation [20] (LoRA), which introduces a small set of auxiliary learnable weights on top of the pre-trained LLaMA model. Specifically, we inject LoRA adapters (rank=8 and \(\alpha\)=16) to the projection layers for all keys and queries in all LLaMA self-attention layers [17].
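With the HuggingFace `peft` library, this adapter setup can be written roughly as follows; the checkpoint path and the dropout value are placeholders rather than values from the paper:

```python
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM

base = LlamaForCausalLM.from_pretrained("path/to/vicuna-7b")  # placeholder checkpoint
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj"],  # queries and keys of every self-attention layer
    lora_dropout=0.05,                    # assumption: not specified in the paper
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)    # base weights stay frozen; only adapters train
model.print_trainable_parameters()        # on the order of 4.2M trainable LoRA parameters
```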
**Training Objective:** As an audio LLM, LTU-AS is trained on the next token prediction task conditioning on the past tokens and the reference audio, i.e., maximizing \(P(O_{t}\mid O_{1:t-1},A,S,Q)\), through cross-entropy for all \(1<t\leq T\) given the tokenized ground truth text sequence (i.e., output) \(O_{1:T}\) and the reference audio token \(A\), spoken text \(S\), and question \(Q\). This training objective allows us to unify nearly all audio and speech tasks except audio/speech generation into a single training framework.
**Generation Setting:** We use a plain generation setting of Temperature\(=\)0.1, Top K\(=\)500, and Top P\(=\)0.95 with a repetition penalty of 1.1 [21, 22] for all tasks.
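In HuggingFace `transformers` terms, this corresponds roughly to the configuration below; the `max_new_tokens` cap and the use of `inputs_embeds` to pass the concatenated prompt are our assumptions:

```python
from transformers import GenerationConfig

gen_cfg = GenerationConfig(
    do_sample=True,
    temperature=0.1,
    top_k=500,
    top_p=0.95,
    repetition_penalty=1.1,
    max_new_tokens=400,   # assumption: output length cap not stated in the paper
)
answer_ids = model.generate(inputs_embeds=prompt_embeds, generation_config=gen_cfg)
```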
**Model Parameters:** As a LLM, LTU-AS has about 8.5 billion parameters. However, only 49 million parameters are actually trainable (40M for TLTR, 4.2M for LoRA adapters, and 5M for the projection layer), which is only about 0.6% of the total number of parameters. This significantly lowers the computation requirement to train LTU-AS. Practically, LTU-AS is trained on 4\(\times\) A6000 GPUs for about 80 hours.
## 3 The Open-ASQA dataset
We aim to build LTU-AS to address a wide range of open-ended audio and speech tasks, and understand the audio and speech jointly. To achieve this objective, we need a training dataset to provide such joint audio and speech supervision. Unfortunately, there is no existing dataset that meets our needs. The closest one is the OpenAQA dataset used to train LTU [5], which is an audio question-answering dataset consisting of 1.9 million closed-ended and 3.7 million open-ended AQAs. However, OpenAQA lacks speech related, and joint audio-speech questions. Therefore, on the basis of OpenAQA-5M, we add an additional 2.7 million speech-related AQAs (0.9 million closed-ended and 1.8 million open-ended) and 1.2 million joint audio and speech AQAs (almost all open-ended), and build a new 9.6M Open-ASQA dataset. Note that we do not collect new audio and speech data, but instead relabel 13 existing public datasets summarized in Table 2. For all these datasets, we only include data marked as training and validation samples and exclude any data marked as test or evaluation.
As with OpenAQA, all Open-ASQA samples are formatted as (audio, question, answer) tuples, where "audio" and "question" are the model inputs, and "answer" is the ground truth label. By unifying all training samples in this format, we not only map all labels to a semantic space, but are also able to train LTU-AS with a variety of different tasks easily.
### Closed-Ended AQA Generation
For each task and dataset, we paraphrase the question (e.g., "What is the audio event") with GPT-3.5-Turbo assistance to generate a diverse question set, so LTU-AS won't overfit to a specific question for a task. However, the answers are generated with a rule-based algorithm based on the original label of the dataset, and thus have a fixed format. We thus call such AQAs closed-ended AQAs. The upper section of Table 1 shows samples of closed-ended AQA pairs.
**Closed-Ended Audio AQA**: Closed-ended audio questions are from the original OpenAQA dataset, which consists of 1.9 million AQAs about the audio event labels, acoustic features, audio captioning, and audio temporal analysis. The audio tracks are from 8 audio datasets. Please refer to Table 2 and [5] for more details.
**Closed-Ended Speech AQA**: We created 941k closed-ended speech AQAs based on 4 commonly used speech datasets. The first category of questions asks the original labels of the datasets, e.g., speaker emotion/gender for IEMOCAP [23], speaker emotion and sentiment score for MOSEI [24], speaker gender for LibriTTS [25], and speaker age and gender for VoxCeleb2 [26, 27]. In addition to these original labels, we further annotate the speaker style of speech speed (computed with Whisper time stamps), pitch, and energy (computed with librosa [28]), and generate AQAs asking the speaker style. Finally, we also mix in about 150k ASR AQAs that have questions asking about the spoken text, and the answers are the transcriptions. Note that since LTU-AS has an internal Whisper model feeding the transcribed text to LLaMA, the ASR task is no more than an identity mapping for LTU-AS, which is fundamentally different from SpeechGPT [14]. We include ASR AQAs just to guide the model following ASR instructions.
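As a rough sketch of how such style labels can be derived (the exact feature definitions and thresholds used for Open-ASQA are not given in the paper, so the choices below are assumptions):

```python
import librosa
import numpy as np

def speech_style_features(wav_path, whisper_segments):
    """Per-utterance pitch, energy, and speed estimates (illustrative settings)."""
    y, sr = librosa.load(wav_path, sr=16000)

    # Pitch: median F0 over voiced frames via pYIN
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch = float(np.nanmedian(f0))

    # Energy: mean RMS over the utterance
    energy = float(librosa.feature.rms(y=y).mean())

    # Speed: words per second from Whisper segment timestamps
    n_words = sum(len(seg["text"].split()) for seg in whisper_segments)
    duration = whisper_segments[-1]["end"] - whisper_segments[0]["start"]
    speed = n_words / max(duration, 1e-6)

    return {"pitch": pitch, "energy": energy, "speed": speed}
```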
**Closed-Ended Joint Audio and Speech AQA**: Most joint audio and speech AQAs in this project are open-ended. The only 93k closed-ended joint audio and speech AQAs are of the music genre prediction task on the FMA [29] dataset, which requires LTU-AS to consider both lyrics (text) and acoustic information to make the prediction.
### Open-Ended AQA Generation
Generating diverse open-ended AQA pairs at a large scale poses challenges with human-based efforts being impractical, and rule-based methods limiting output diversity. We thus use _Audio Instruction Tuning_ (AIT) proposed in [5] to generate open-ended AQAs with GPT-3.5-Turbo assistance. Specifically, since GPT does not take audio or speech as input, we input the meta information of the audio (e.g., audio events, speech style, emotion, and spoken text) to the GPT-3.5-Turbo model in the form of pure text as a surrogate, and then use the prompt shown in Table 1 to let the GPT model generate AQA pairs. As shown in Table 1, the generated open-ended QA pairs are diverse and of high quality.
Note that AIT is only used for data generation; during model training, only the raw audio and generated QA pairs are input to the LTU-AS model. Thus, the model is forced to learn directly from the raw audio that contains richer and more fine-grained information compared to the extracted meta-information. Similarly, during inference, LTU-AS solely uses raw audio to answer the question.
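A minimal sketch of this generation step is shown below; the sampling temperature and the one-JSON-dictionary-per-line output convention are assumptions for illustration, while the idea of passing the meta information as text comes from the paper:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate_open_ended_aqa(meta_text: str, prompt: str):
    """Audio Instruction Tuning: text meta information in, QA pairs out."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{prompt}\n\n{meta_text}"}],
        temperature=1.0,  # assumption: sampling setting not given in the paper
    )
    qa_pairs = []
    for line in resp.choices[0].message.content.splitlines():
        try:
            qa_pairs.append(json.loads(line))  # each line: one JSON QA dictionary
        except json.JSONDecodeError:
            continue                           # skip malformed lines
    return qa_pairs
```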
**Open-Ended Audio AQA**: We use the approximately 3.7 million AQAs about non-speech audio events from the original OpenAQA dataset [5].
\begin{table}
\begin{tabular}{l} \hline \hline
**Closed-Ended QA Samples** \\ \hline Q: Identify the sound from the clip? A: Dog, Animal. \\ Q: Write an audio caption describing the sound? \\ A: An engine revving and then tires squealing. \\ Q: What emotion does the speaker convey? \\ A: Happy, with a sentiment score of 2 (positive). \\ Q: Can you determine the music genre? A: Folk, Pop. \\ \hline \hline
**Open-Ended QA Samples** \\ \hline
**GPT Prompt*:** Based on the following audio/speech, generate 10 different types of complex open-ended questions that require step-by-step thinking, and corresponding answers. Questions can be e.g., How are speech content and background sounds related?... etc. Format each QA pair in a single line as a JSON dictionary. (varies for each task/dataset) \\ \hline
**GPT-Input**: In the recording, background sound of domestic animals, dog and speech of “Get him going, get him going with it.” is heard. \\
**GPT-Output**: Q: What can be inferred from the audio? A: It can be inferred that the speech is addressing the dog in the background. \\ \hline
**GPT-Input**: In the recording, background sound of explosion and speech of “Flames engulfed the tank and began to heat the propane inside. Twenty volunteer firefighters...” is heard. \\
**GPT-Output**: Q: Why do the speech content and explosion sound create a tense atmosphere? A: The speech content describes a tragic event and the explosion sound adds intensity and suspense to the narrative. \\ \hline
**GPT-Input**: Speech: “Hi, my name is Jamie and I’m a Master Sales Consultant here at Northine Volkswagen...”; Speaker gender: Male; Pitch: low; Volume: very high; Speed: very slow; Emotion: happy. \\
**GPT-Output**: Q: What can we infer from the speech content and emotion? A: The speaker seems happy to introduce himself and his company, and he is delivering his words in a slow and clear manner to make sure everyone understands what he is saying. \\ \hline
**GPT-Input**: Music genre: Folk; Music Lyrics: “The man of his dreams is all that the seems Flowers in sunshine...” \\
**GPT-Output**: Q: What mood does this music convey? \\ A: The music conveys a cheerful and relaxed mood because of the lively melody and lyrics about flowers and sunshine. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Sample closed- and open-ended training QA pairs. Open-ended QA pairs are generated by _Audio Instruction Tuning_ (AIT) with the shown GPT prompt and input. *GPT prompt is shortened due to space limitation.
**Open-Ended Speech AQA**: We generate open-ended AQAs about speech using the four commonly used datasets IEMOCAP [23], MOSEI [24], LibriTTS [25], and VoxCeleb2 [26, 27]. We input all speech meta information including the original dataset labels (e.g., speaker emotion, gender, and age), extracted speech style features (e.g., pitch, speed, and volume), and Whisper transcribed spoken text, all in text form, to GPT-3.5-Turbo with the prompt shown in Table 1. For age, pitch, speed, and volume, we also quantize each of them into 5 categories (e.g., very low - very high) to help GPT understand the value. The input meta information to GPT of each dataset is marked as "x" in Table 2. Our intent was to input as much information as possible to GPT to generate high-quality AQAs.
**Open-Ended Joint Audio and Speech AQA**
We use two datasets containing both speech and non-speech audio to generate joint audio and speech AQAs. The first dataset we use is AudioSet [31]. Although AudioSet-2M has about 1M samples containing speech, and it has already been used in the original OpenAQA dataset, the spoken text was ignored. Specifically, a single label "speech" rather than the actual spoken text is input to GPT-3.5-Turbo for OpenAQA generation. In this work, we first sample a 500k subset from AudioSet-2M using the sound class balancing algorithm proposed in [38] to guarantee the diversity of non-speech audio events. We then use Whisper to transcribe the 500k AudioSet subset and select samples having no_speech_prob\(<\)0.2 and spoken text length over 5. This heuristic made it quite likely that the spoken text was transcribed correctly and had sufficient length to encompass substantive content. This resulted in 82k samples meeting the requirement. They were used to generate joint audio and speech AQAs with GPT assistance. As shown in Table 1, GPT can generate AQAs for joint audio and speech understanding, e.g., in the first sample, GPT outputs an answer explaining the speech is addressing the dog by understanding the speech content and the dog sound.
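The selection heuristic can be sketched as follows; whether the length threshold counts words or characters, and how `no_speech_prob` is aggregated over segments, are not specified in the paper, so those details are assumptions:

```python
import whisper

asr_model = whisper.load_model("large")

def keep_for_joint_aqa(wav_path, min_words=5, max_no_speech_prob=0.2):
    """Keep clips whose transcript is likely real speech with enough content."""
    result = asr_model.transcribe(wav_path)
    segments = result.get("segments", [])
    if not segments:
        return False, ""
    no_speech = min(seg["no_speech_prob"] for seg in segments)  # assumed aggregation
    text = result["text"].strip()
    keep = no_speech < max_no_speech_prob and len(text.split()) > min_words
    return keep, text
```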
The second dataset we use is the FMA [29] music dataset. We input the list of music genres, title (if provided), and Whisper transcribed lyrics of each music clip to GPT and let it generate AQAs about music understanding with joint lyrics and melody analysis. In total, we generated about 1.1 million open-ended joint audio and speech AQAs.
## 4 Training LTU-AS
As with the LTU model [5], we use the three-stage training curriculum shown in Table 3 to train LTU-AS. In the first stage, only the randomly initialized projection layer is trainable. The TLTR and LoRA adapters are unfrozen in the second and third stages to stabilize training. In addition, in the first and second stages, we only train LTU-AS with AQAs of classification tasks, where the model receives a high penalty for wrong predictions. The model is thus forced to attend to the audio input rather than using its language ability to hallucinate.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Dataset & Audio & Audio & Spoken & Speaker & Speaker & Speech & Speaker & Music & \# Audio & \# Closed- & \# Open- \\ Event & Caption & Text* & Gender & Age & Style & Emotion & Genre & Clips & Ended QAs & Ended QAs \\ \hline \multicolumn{12}{l}{_Audio Datasets (OpenQA)_[5]} \\ \hline AS-Strong [30] & x & x & x & x & - & - & - & - & 102k & 683k & 901k \\ AudioSet [31] & x & - & x & x & - & - & - & x & 500k & 538k & 184k \\ VGGSound [32] & x & - & x & x & - & - & - & x & 184k & 367k & 907k \\ FSD50K [33] & x & - & x & x & - & - & - & x & 41k & 82k & 403k \\ AudioCaps [34] & x & x & x & - & - & - & - & x & 46k & 97k & 478k \\ FreeSound [35] & - & x & x & - & - & - & - & - & 91k & 91k & 791k \\ Clotho [36] & - & x & x & - & - & - & - & - & 5k & 48k & 89k \\ Sound Bible [37] & - & x & x & - & - & - & - & - & 1.2k & 12k & 10k \\ Sum & & & & & & & & & 845k & 1,918k & 3,763k \\ \hline \multicolumn{12}{l}{_Speech Datasets_} \\ \hline IEMOCAP [23] & - & - & x & x & - & x & x & - & 4.3k & 26k & 83k \\ LibriTTS [25] & - & - & x & x & - & x & - & - & 22k & 167k & 418k \\ VoxCeleb2 [26] & - & - & x & x & x & x & - & - & 107k & 194k & 926k \\ MOSEI [24] & - & - & x & - & - & x & x & - & 18k & 554k & 355k \\ Sum & & & & & & & & & 151k & 941k & 1,784k \\ \hline \multicolumn{12}{l}{_Joint Audio and Speech Datasets_} \\ \hline AudioSet [31] & x & - & x & x & - & - & - & x & 82k & - & 747k \\ FMA [29] & - & - & x & - & - & - & x & 93k & 93k & 396k \\ Sum & & & & & & & & & 175k & 93k & 1,143k \\ \hline \hline \multicolumn{12}{l}{_Total_} \\ \hline \hline \end{tabular}
\end{table}
Table 2: The statistics of the 9.6-million Open-ASQA dataset. “x” denotes the corresponding label is used.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Stage & Tr. Params & Tr. Task & Tr. Samples & LR & \# Epochs \\ \hline
1 & Proj. & Cla. & 2.1M & 1e-3 & 2 \\
2 & Proj. + TLTR + LoRA & Cla. & 2.1M & 2e-4 & 2 \\
3 & Proj. + TLTR + LoRA & All & 9.6M & 2e-4 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The LTU-AS training curriculum.
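A sketch of the staged freezing schedule in Table 3 is given below; the attribute names (`whisper`, `tltr`, `proj`, `llama`) are illustrative placeholders rather than names from the released code:

```python
def configure_stage(model, stage: int):
    """Freeze/unfreeze parameter groups following the curriculum in Table 3."""
    for p in model.whisper.parameters():
        p.requires_grad = False               # Whisper stays frozen in all stages
    for p in model.proj.parameters():
        p.requires_grad = True                # projection layer is trained in every stage
    unfreeze_extra = stage >= 2               # TLTR and LoRA adapters unfreeze from stage 2
    for p in model.tltr.parameters():
        p.requires_grad = unfreeze_extra
    for name, p in model.llama.named_parameters():
        p.requires_grad = unfreeze_extra and "lora" in name
```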
## 5 Experiments
### Closed-Ended Tasks Evaluation
Although the main novelty of LTU-AS is open-ended audio and speech understanding, we first rigorously evaluate its performance on seven standard closed-ended audio and speech tasks because these tasks serve as the foundation for advanced reasoning. Specifically, for each task, we use a fixed prompt (e.g., "write an audio caption describing the sound." for audio classification) and either apply a regular expression to the LTU-AS output to get the prediction (for ASR, audio captioning, gender classification, and age prediction), or compute the cosine similarity between the text embedding (gpt-text-embedding-ada) of the LTU-AS output and each label, and use the label with the highest similarity score as the prediction (for other classification tasks).
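For the classification tasks, the mapping from a free-form answer to a label can be sketched as below; `embed` stands for any text-embedding function (the paper uses gpt-text-embedding-ada) and is passed in as a callable so the sketch stays backend-agnostic:

```python
import numpy as np

def classify_by_similarity(model_answer: str, label_names, embed):
    """Map a free-form answer to the label with the highest embedding cosine similarity."""
    a = np.asarray(embed(model_answer))
    sims = []
    for label in label_names:
        v = np.asarray(embed(label))
        sims.append(a @ v / (np.linalg.norm(a) * np.linalg.norm(v) + 1e-9))
    return label_names[int(np.argmax(sims))]
```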
The results are summarized in Table 4. First, as a foundational model, LTU-AS performs well on both audio and speech tasks. It works particularly well on tasks requiring both audio and speech understanding, which exactly meets our expectations. E.g., the accuracy of LTU-AS is nearly twice that of CLAP [48] on the zero-shot GTZAN music genre classification task; the MAE of speaker age prediction is even lower than the SOTA specialized model that only works for the task. Compared with CLIP-like models [48, 47], LTU-AS does not require any pre-defined label set and directly outputs predictions in natural language, which makes it a more practical system for real-world applications.
Second, training with both non-speech audio and speech data is crucial for LTU-AS to become a unified sound perception model. In Ablation Study 1, we compare LTU-AS with LTU models trained with only audio and only speech datasets. Though audio- and speech-specialized LTUs perform slightly better on tasks in their respective training domain, they almost fail on tasks in the domain they are not trained on.
Third, to take a closer look at how LLaMA attends to continuous audio token input \(\{A\}\) and spoken text token input \(\{S\}\) on different tasks, we manually remove one input modality for Ablation Study 2. For most tasks, a missing modality leads to a performance drop, indicating that LLaMA takes both \(\{A\}\) and \(\{S\}\) into its decision-making. Even on audio classification and gender classification tasks where \(\{S\}\) is not useful, including \(\{S\}\) leads to only a slight performance drop, demonstrating that LTU-AS can correctly attend to \(\{A\}\) and \(\{S\}\) based on the input audio and question. Finally, we observe the ASR performance of LTU-AS (4.9% WER) is worse than its internal Whisper model (3.5% WER) due to occasionally not following instructions and changing spelling.
### Open-Ended Audio Question Answering Evaluation
In addition to the good performance on closed-ended tasks, LTU-AS also exhibits superior performance when it comes to answering open-ended questions. We quantitatively measure the instruction following rate of LTU-AS on audio and speech questions and compare it with LTU models trained with only audio data or only speech data. Specifically, we use GPT-4 to generate 100 audio and speech questions based on the AudioSet and VoxCeleb evaluation sets, respectively, input the questions and corresponding audios to the LTU models, and collect their answers. Finally, we use GPT-4 to evaluate whether the LTU model output answers the given questions using the prompt "Below is a pair of question and response. Identify if the response directly answers the question and give a clear
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Model} & Audio & Audio & Speech & Emotion & Gender & Age & Music Genre \\ & Classif. & Caption & Recognition & Recognition & Classif. & Pred. & Classif. \\ \hline \multirow{3}{*}{Model} & ESC-50 [39] & AudioCaps & Librispeech [40] & IEMOCAP & Voxceleb2 & Voxceleb2 & GTZAN [41] \\ & (ACC \(\uparrow\)) & (SPICE \(\uparrow\)) & (test-clean WER \(\downarrow\)) & (ACC \(\uparrow\)) & (macro-F1 \(\uparrow\)) & (MAE \(\downarrow\)) & (ACC \(\uparrow\)) \\ \hline \hline \multicolumn{8}{l}{Best specialized models trained supervisedly on each dataset. Not generalizable to unseen label sets and tasks.} \\ Best Supervised \& Specialized & 97.0 [42] & 17.7 [43] & 1.4 [44] & 70.6 [45] & 98.3 [27] & 9.4 [27] & 93.9 [46] \\ \hline \multicolumn{8}{l}{CLIP-like audio-text model. Generalizable to unseen labels, but a pre-defined label set is required for inference} \\
**AudioCLIP [47]** & **69.4** & - & - & - & - & - & - \\
**CLAP [48]** & **82.6** & - & - & - & - & - & 25.2 \\ \hline \hline \multicolumn{8}{l}{(Proposed) One single model for all tasks. Directly output label names, no pre-defined label set is needed at inference.} \\
**LTU-AS** & **80.8\({}^{\text{ZS-}}\)** & **15.0** & **4.9** & **65.2** & **90.8** & **7.3** & **50.3\({}^{\text{ZS}}\)** \\ \hline \hline \multicolumn{8}{l}{Ablation Study 1 - Train with only speech or audio data} \\
**LTU (Audio Training Only) [5]** & **82.8** & **17.0** & 104.2 & 38.2 & 77.0 & Fail* & 29.8 \\
**LTU (Speech Training Only)** & 10.9 & 0.5 & 12.9 & **69.8** & 90.1 & 7.9 & 23.5 \\ \hline \multicolumn{8}{l}{Ablation Study 2 - Inference with missing modality} \\
**LTU-AS (Audio Input Only)** & **81.9** & **14.9** & **97.2** & **58.6** & **95.6** & 8.2 & 48.2 \\
**LTU-AS (Spoken Text Input Only)** & 7.7 & 3.5 & 20.0 & 45.4 & 42.0 & 11.9* & 21.5 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Closed-ended task performance. ZS: Zero-shot evaluation; ZS-: The dataset is not used in training, but it is sourced from the same project as part of the training data. * Model does not follow instructions on part of or all of the dataset.
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & Audio Question & Speech Question \\ \hline LTU-Audio Training Only & 96\% & 69\% \\ LTU-Speech Training Only & 65\% & 93\% \\ LTU-AS & **96\%** & **94\%** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The instruction following rate of LTU model trained with only audio, only speech, and both audio and speech data.
answer." As shown in Table 5, LTU-AS has an instruction following rate over 94% for both audio and speech questions, while LTU trained with only audio/speech dataset does not follow instruction well on questions out of its training domain. In Table 6, we show four real samples of LTU-AS on unseen audios. Key findings are as follows:
**LTU-AS understands the world by combining audio and speech information**: In example 1, LTU-AS correctly identifies the job of the speaker as a basketball coach because the spoken text is about instructing while bouncing basketballs are heard in the background. Without understanding the spoken text, the speaker could be a basketball player, while without understanding the audio, the speaker could be a football coach. Similarly, in example 2, LTU-AS knows the speaker is anxious because the spoken content expresses concern about public speaking while the speaker speaks fast with a high pitch. **LTU-AS exhibits remarkable reasoning ability and connects sounds to actions**: In Sample 2, LTU-AS can provide suggestions to the speaker based on his situation; in Sample 3, LTU-AS can suggest a title for the music, and does not recommend playing it in a primary school because the lyrics and music tone are not suitable for children; in Sample 4, LTU-AS not only correctly extracts the information about the boarding platform and transfer line, but also suggests boarding the next train when we hear the announcement. All these demonstrate that LTU-AS can not only listen, but also think and understand.
## 6 Conclusions
In this paper, we present LTU-AS, a novel joint audio and speech understanding model that can simultaneously recognize and jointly understand spoken text, speech paralinguistics, and non-speech audio events. We identify three key components in successfully building LTU-AS. First, LTU-AS uses a strong audio encoder (Whisper) and a strong reasoning model (LLaMA). The former provides precise perception ability while the latter provides advanced reasoning ability. Second, LTU-AS is trained on the new large-scale Open-ASQA dataset with a wide range of diverse audio and speech tasks. In particular, the open-ended questions generated with GPT assistance are crucial for empowering LTU-AS to answer free-form questions. Third, LTU-AS is trained with a multi-stage training curriculum to mitigate hallucination. Our experiments demonstrate the advantages of LTU-AS, as it achieves good performance on all tested audio and speech benchmarks, particularly on tasks requiring joint audio and speech understanding. More importantly, when answering free-form, open-ended questions, LTU-AS effectively combines information from audio and speech, and exhibits emerging reasoning abilities.
**Ethics Statement**: To prevent the potential misuse of the proposed audio LLM, we intentionally do not include speaker identification in our task list. The music used in model training is Creative Commons-licensed [29].
\begin{table}
\begin{tabular}{l} \hline \hline
**Sample 1 Basketball Coaching (AS-Eval)** \\ Transcribed speech: To the left, the second time through they’ll do the same move to the right. We’ll do a series of about five moves. Here we’ve got a 6’9 kid, I expect him to make the same move. \\ \hline Q: What can be inferred from the audio events and spoken text? \\ A: The audio event is a basketball bounce while the spoken text suggests that someone is giving instructions. \\ Q: What is the job of the speaker? \\ A: The speaker is coaching or instructing someone on how to play basketball based on their speech content and the background sounds of basketball bounce heard. \\ Q: Where could this audio be recorded? \\ A: This audio clip could have been recorded in a basketball court or gymnasium where people are playing basketball and speaking. \\ \hline Sample 2 Interview (VoxCeleb2-Eval) \\ Transcribed speech:... I was so anxious about getting up and talking in front of people that I thought I still sort of lose... \\ \hline Q: What is the emotion of the speaker? Why? \\ A: Based on the speech content, the speaker seems to express anxiety or nervousness about public speaking. The high pitch and fast speed suggest excitement or tension as well. Therefore, the emotion of the speaker can be inferred as nervousness or anxiety. \\ Q: Is the speaker a good public speaker? \\ A: No, the speaker feels anxious about speaking in front of people and thinks they will still feel lose-lose about it. \\ Q: Provide the speaker 5 suggestions. \\ A: 1. Take a deep breath before speaking. 2. Practice public speaking regularly. 3. Use humor or lightheartedness to ease nerves. 4. Focus on the audience rather than yourself during speeches. 5. Seek feedback from others to improve your delivery. \\ \hline Sample 3 Rock Music (GTZAN) \\ Transcribed Lyrics: Gotta be tonight, my days are blue, babe Mama says you don’t, daddy says you won’t. And I’m falling off her side No way I’m gonna lose out this time. Tie your mother down... \\ \hline Q: What did you hear? A: Music, punk rock. \\ Q: Why the music is rock? Consider both lyrics and sounds. \\ A: The music has a strong beat, distorted guitar riffs, and aggressive vocals that are typical characteristics of rock music. The lyrics also contain themes of rebellion and defiance which align with the genre’s cultural significance. \\ Q: Can you suggest a title for this music? \\ A: Tie Your Mother Down - A Punk Rock Anthem. \\ Q: Is this music appropriate to play in a primary school? \\ A: No, the lyrics and overall tone are not suitable for children. \\ Q: What mood does this music convey? \\ A: The music conveys a rebellious and energetic mood with its fast-paced tempo and aggressive guitar riffs. The lyrics also add to the sense of rebellion. \\ \hline Sample 4 Subway Station (FSD50K) \\ Transcribed speech: This is the transfer point for the Metro Purple Line to Wishlive Western. Boarding platform is located downstairs... \\ \hline Q: What did you hear? A: Subway, metro, underground. \\ Q: What spoken text is heard? \\ A: The spoken text is “This is the transfer point for the Metro Purple Line to Wishlive Western. Boarding platform is located downstairs...” \\ Q: What can be inferred from the sound and announcement? \\ A: The subway sound suggests that it is a busy station with many people coming and going, while the announcement provides information about the train schedule. \\ Q: Where is the boarding platform located? \\ A: The boarding platform is located downstairs.
\\ Q: Which line does this station transfer to? \\ A: The transfer point is to the Metro Purple Line. \\ Q: What should I do if I hear this? \\ A: You should board the next train and proceed to your destination. \\ \hline \hline \end{tabular}
\end{table}
Table 6: Real samples of LTU-AS open-ended question answering on evaluation samples. Acoustic and speech information in the answers are marked in green and red, respectively, to highlight how LTU-AS understand them jointly. |
2309.04616 | Knowledge Distillation-Empowered Digital Twin for Anomaly Detection | Cyber-physical systems (CPSs), like train control and management systems
(TCMS), are becoming ubiquitous in critical infrastructures. As safety-critical
systems, ensuring their dependability during operation is crucial. Digital
twins (DTs) have been increasingly studied for this purpose owing to their
capability of runtime monitoring and warning, prediction and detection of
anomalies, etc. However, constructing a DT for anomaly detection in TCMS
necessitates sufficient training data and extracting both chronological and
context features with high quality. Hence, in this paper, we propose a novel
method named KDDT for TCMS anomaly detection. KDDT harnesses a language model
(LM) and a long short-term memory (LSTM) network to extract contexts and
chronological features, respectively. To enrich data volume, KDDT benefits from
out-of-domain data with knowledge distillation (KD). We evaluated KDDT with two
datasets from our industry partner Alstom and obtained the F1 scores of 0.931
and 0.915, respectively, demonstrating the effectiveness of KDDT. We also
explored individual contributions of the DT model, LM, and KD to the overall
performance of KDDT, via a comprehensive empirical study, and observed average
F1 score improvements of 12.4%, 3%, and 6.05%, respectively. | Qinghua Xu, Shaukat Ali, Tao Yue, Zaimovic Nedim, Inderjeet Singh | 2023-09-08T22:13:03Z | http://arxiv.org/abs/2309.04616v2 | # KDDT: Knowledge Distillation-Empowered Digital Twin for Anomaly Detection
###### Abstract.
Cyber-physical systems (CPSs), like train control and management systems (TCMS), are becoming ubiquitous in critical infrastructures. As safety-critical systems, ensuring their dependability during operation is crucial. Digital twins (DTs) have been increasingly studied for this purpose owing to their capability of runtime monitoring and warning, prediction and detection of anomalies, etc. However, constructing a DT for anomaly detection in TCMS necessitates sufficient training data and extracting both chronological and context features with high quality. Hence, in this paper, we propose a novel method named KDDT for TCMS anomaly detection. KDDT harnesses a language model (LM) and a long short-term memory (LSTM) network to extract contexts and chronological features, respectively. To enrich data volume, KDDT benefits from out-of-domain data with knowledge distillation (KD). We evaluated KDDT with two datasets from our industry partner Alstom and obtained the F1 scores of 0.931 and 0.915, respectively, demonstrating the effectiveness of KDDT. We also explored individual contributions of the DT model, LM, and KD to the overall performance of KDDT, via a comprehensive empirical study, and observed average F1 score improvements of 12.4%, 3%, and 6.05%, respectively.
digital twin, knowledge distillation, anomaly detection, Train Control and Management System
## 1. Introduction
Cyber-physical Systems (CPSs) play a vital role in Industry 4.0. Various CPSs have been deployed in critical infrastructures, e.g., water treatment plant (Shi et al., 2017; Wang et al., 2018) and elevator systems (Krishnan et al., 2018; Wang et al., 2018; Wang et al., 2018), whose safe operation concerns our daily lives. One such safety-critical CPS is railway systems (RWS). Our industrial partner Alstom manufactures intricate RWS, which involves not only vehicles and tracks (the physical part) but also network communication inside the vehicles themselves and across them (the cyber part). RWS get increasingly complex when they become more heterogeneous and integrated, i.e., with more software systems added to provide rich functionalities. However, such complexity renders RWS susceptible to broader threats (e.g., anomalies (Bahdan et al., 2018) and software faults (Krishnan et al., 2018)). In particular, anomalies are one of the most severe threats that might lead to system failures or even catastrophic consequences (Krishnan et al., 2018).
Inspired by their success in computer vision and natural language processing (NLP) (Krishnan et al., 2018), machine learning methods have been widely applied to address various anomaly detection tasks in RWS (Krishnan et al., 2018; Wang et al., 2018; Wang et al., 2018). However, training these methods involves collecting abundant data from RWS, which might interfere with the safe operation of RWS. To reduce such interference, digital twin (DT), as a novel technology, has been intensively studied for CPSs anomaly detection (Krishnan et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). Specifically, a DT can be considered as a digital replica of a CPS and enables rich functionalities with this replica instead of the real CPS. Early DTs are predominantly based on software/system models, requiring manual effort from domain experts (Krishnan et al., 2018). To mitigate this challenge, machine learning methods are increasingly used to enable data-driven DT constructions, which require much less manual effort and domain knowledge while performing accurate anomaly detection in CPSs (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018).
However, to the best of our knowledge, no previous works have focused on building data-driven DTs for network anomaly detection in RWS. In this paper, we focus on detecting packet loss anomalies on IP networks in the Train Control and Management System (TCMS). When a packet loss anomaly occurs, a characteristic pattern is exhibited where signal values abruptly drop to 0 and rebound quickly. Such network anomalies can induce misjudgment and incorrect reactions in the control unit of RWS, leading to unexpected and risky behaviours of RWS. Despite its significance, constructing a DT for network anomaly detection remains unsolved due to the following two challenges.
**Challenge 1: High data complexity.** Datasets for network anomaly detection are packets arranged in chronological order. The packets' contents and order both reflect the network state, and neglecting either aspect diminishes the performance of the DT. Sequential models such as recurrent neural networks can extract chronological features, while the packets' content features are intrinsically more difficult to extract due to their textual format. Unlike structured data, textual data does not follow a fixed format or schema and can vary greatly in length. To extract high-quality features from textual data, both the syntax and semantics of the data should be understood by the feature extractor without ambiguity.
**Challenge 2: Training data insufficiency.** Data-driven DT construction entails training with abundant in-domain labelled data, i.e., data related to the network packet loss. However, such datasets are difficult to collect since anomalies during the operation of safety-critical systems like RWS rarely occur. Moreover, detecting anomalies manually is non-trivial. The presence of packet dropping can be difficult to detect without rich domain knowledge since normal network fluctuations might resemble an anomaly to a great extent. Such resemblance prevents non-experts, without sufficient training, from manually labelling the packets.
To address the aforementioned challenges, we propose a novel DT-based method KDDT for network anomaly detection in TCMS. To tackle the first challenge, KDDT extracts features from packets' contents and order with a contextualized language model (LM) and an LSTM, respectively (Zhu et al., 2018; Wang et al., 2019). The second challenge is the insufficiency of training data. We circumvent this challenge by distilling knowledge from the out-of-domain (OOD) dataset as a supplement, which is relatively cheaper than the in-domain (ID) dataset acquisition. Specifically, we pretrain a Variational Autoencoder (VAE) to encode and reconstruct a network packet from OOD data. We posit a well-trained VAE possesses rich knowledge of encoding network packets with high quality, which can be leveraged by KD to supplement the ID dataset. Similar to the popular pretraining+fine-tuning paradigm, KD is also a transfer learning technique. However, KD asks the pretrained VAE to act like a teacher rather than merely providing a set of better-initialized parameters for KDDT. The key advantage of KD is a reduction in model complexity, which requires fewer resources and can potentially mitigate the risk of overfitting.
We evaluated KDDT with two TCMS network packet datasets from Alstom. KDDT achieves average F1 scores: 0.931 and 0.915. Moreover, we posit that the benefits of KDDT's sub-components extend beyond the scope of network anomaly detection. To investigate their individual contributions, we studied the effectiveness of DTM, LM, and KD. Evaluation results show that the absence of DTM, LM or KD leads to decreases of 12.4%, 3%, and 6.05%, respectively, in terms of the average F1 score on both datasets.
## 2. Industrial Context
Alstom Rail Sweden AB is the second largest company in the rail industry, with over 150 000 vehicles in service. One essential train component is TCMS, a high-capacity infrastructure that controls communications among different subsystems on the train and between the train and the ground(Wang et al., 2019). Figure 1 depicts the overview of the TCMS. Four buses and two gateways constitute the backbone of TCMS, namely Multi-function Vehicle Bus (MVB), Ethernet Consist Network (ECN), Ethernet Train Bus (ETB), Wired Train Bus (WTB), WTB GW, and ETB GW. The MVB and ECN buses connect various internal devices and other subsystems, while the WTB and ETB buses connect multiple vehicles. Conventional Train Control (CTC) implements hard-wire logic in the control system. The Modular Input/Output (MIO) devices tackle Input/Output analog signals. Human Machine Interfaces (HMIs) provide control interfaces for train drivers. TCMS relies on Central Control Units (CCUs) to implement various train functions. CCUO, CCUS, and CCUD control the basic, safety-critical, and diagnostic functions, respectively. Each function is performed with one or multiple devices and buses in TCMS.
Take the standstill determination process as an example. Multiple train functions rely on accurately assessing standstill, such as door control. Concretely, the TCMS communicates with two sub-systems: the Brake Control Units (BCUs) and the Traction Control Units (TCUs). The BCUs measure the axles' speed and inform the CTC about the standstill condition by setting specific digital inputs on the MIO. In addition to the BCU's hardware-based standstill determination, the TCMS uses the speed of each driven axle, which is sent by the TCUs via the Multifunction Vehicle Bus (MVB), for the software-based standstill determination. This signal is then transmitted to the Door Control Units (DCUs) and other functions via MVB/IP. Such processes entail packet exchanges in the TCMS network, which is vulnerable to packet loss anomalies. This paper aims to develop an effective DT for anomaly detection, facilitating the safety check of RWS and reducing future packet loss risks.
Figure 1. Overview of TCMS. MVB, ECN, ETB, and WTB stand for Multi-function Vehicle Bus, Ethernet Consist Network, Ethernet Train Bus, and Wired Train Bus, respectively. WTB GW and ETB GW are two gateways. CTC, MIO, and HMI represent Conventional Train Control, Modular Input/Output, and Human Machine Interface, respectively. CCUs are Central Control Units, including CCUO, CCUS, and CCUD.
## 3. Methodology
As depicted in the center circle of Figure 2, KDDT has three major components: a pretrained LM, a pretrained VAE, and a DT. _A pretrained LM_ extracts contextualized features \(f\) from textual data (i.e., network packet), which are leveraged by \(DT\) for anomaly detection. \(DT\) comprises a digital twin model (DTM) and a digital twin capability (DTC). DTM simulates the TCMS, while DTC implements one or more functionalities of DT, e.g., anomaly detection in our context. The interaction between DTM and DTC is bidirectional, where DTM sends information about the system state to DTC and receives feedback from DTC, e.g., anomaly alerts. To further improve the DT performance, we utilize the pretrained VAE as a teacher model to guide DTM's training.
The four boxes of Figure 2 illustrate the training workflow of KDDT: data preparation, LM pretraining, VAE pretraining, and DT training. In _Stage 1_, we first prepare both out-of-domain \(\mathcal{D}_{OOD}\) and in-domain \(\mathcal{D}_{ID}\) datasets. As mentioned in Section 2, we collect anomaly-unrelated data as \(\mathcal{D}_{OOD}\), which can be leveraged to pretrain some universal models such as LM (Stage 2) and VAE (Stage 3). In-domain dataset \(\mathcal{D}_{ID}\), on the other hand, are directly related to anomalies, which can be used to train DT for anomaly detection (Stage 4). In _Stage 2_, we pretrain a contextualized LM with the \(\mathcal{D}_{OOD}\) dataset to extract inner-packet context features by predicting the next token. For instance, given the context of "RWS is an important Cyber-physical", a well-trained LM can extract a discrete vector to represent the context and use the vector to predict the next token, i.e., "system" with a high probability. In _Stage 3_, we design a VAE comprising an encoder and a decoder and pretrain it with packets from the \(\mathcal{D}_{OOD}\) dataset. The encoder extracts context features with the pretrained LM and encodes them into a hidden vector (\(h\)). The decoder then takes \(h\) as the input and attempts to reconstruct the original network packets. A well-trained VAE can effectively encode a network packet into a high-quality hidden vector and reconstruct the packet with high fidelity. In _Stage 4_, we train DTM and DTC of KDDT with the \(\mathcal{D}_{ID}\) dataset but under the guidance of the pretrained VAE. In particular, DTM and the pretrained VAE both aim to encode a given packet with high quality. VAE contains richer knowledge due to its complex architecture and pretraining on the \(\mathcal{D}_{OOD}\) dataset. Consequently, the hidden vectors produced by the pretrained VAE convey richer information, which can be used as a soft target (as opposed to a hard target, i.e., ground truth target) to guide the DTM training process.
### Data Preparation
As depicted in Figure 2, Stage 1, we sequentially perform two processes in the data preparation stage: data collection and domain expert labelling.
**Data collection.** During operation, TCMS communicates with various devices by exchanging large numbers of packets per second. Engineers at Alstom deployed Wireshark (Wireshark, 2017) to monitor such exchanges and capture packets as _.PCAP_ files. Let \(X_{i}\) be the packet captured at timestep \(i\). We collect the out-of-domain dataset \(\mathcal{D}_{OOD}=[X_{0},X_{1},...,X_{N_{OOD}-1}]\) and the in-domain dataset \(\mathcal{D}_{ID}=[X_{0},X_{1},...,X_{N_{ID}-1}]\), where \(N_{OOD}\) and \(N_{ID}\) represent the dataset sizes. The \(\mathcal{D}_{ID}\) dataset refers to any data related to the anomaly detection task, i.e., suspicious anomalous data. The \(\mathcal{D}_{OOD}\) dataset is task-agnostic, which includes any network data collected in TCMS. **Domain expert labelling.** KDDT, as a supervised learning method, requires labelled data for accurate network anomaly detection. In our context, we study the phenomenon of packet loss. Hence, we assign an abnormal label to a packet if it is experiencing a packet loss incident. We ask Alstom to provide us with labelled packet data. They manually examined the signal logs with the help of an in-house built monitoring tool named DCUTerm. DCUTerm displays signal changes over time, providing a rough time boundary for packet loss anomalies, which was further analyzed and narrowed down by checking the Wireshark packets directly. We formally denote the label for \(X_{i}\) as \(y_{i}\in\{0,1\}\), which takes the value of 1 when a packet loss incident occurs.
### LM Pretraining
The quality of feature engineering dramatically influences the performance of machine learning models. One of the most salient features of the packets is the inner-packet context features, which represent the semantics and syntax of the packet content. We aim to pretrain a contextualized LM to extract such features from packet contents. The LM treats the raw packet content as natural language text and extracts semantic and syntactic features automatically. We will demonstrate the model structure of LM in Section 3.2.1 and the loss function in Section 3.2.2.
#### 3.2.1. LM model structure
As shown in Figure 2, the LM consists of three layers:
**Embedding layer.** We first build a vocabulary \(\mathcal{V}\) for all the packet tokens and randomly initialize an N-dimensional embedding vector for each token. Given a training sample \((ctx,tgt)\), the embedding layer takes the context sequence \(ctx\) as input and fetches the corresponding embedding for each token inside this sequence, as shown in Equation 1. \(L\) denotes the length of the context.
\[X=Embed(ctx)=[x_{0},x_{1},...,x_{L-1}] \tag{1}\]
Figure 2. KDDT’s model structure and training workflow. \(\mathcal{D}_{OOD}\) and \(\mathcal{D}_{ID}\) are out-of-domain and in-domain datasets.
**Bi-directional LSTM layer.** We feed the embedded context \(X\) into a bi-directional LSTM (Wang et al., 2017) for context information extraction:
\[h=LSTM(\overleftarrow{X})+LSTM(\overrightarrow{X}) \tag{2}\]
**Softmax layer.** This layer transforms the output of the bi-directional LSTM into logits \(z\) that sum to 1 as in Equation 3:

\[z_{i}=\text{softmax}(h)_{i}=\frac{\exp(h_{i})}{\sum_{j=1}^{N}\exp(h_{j})} \tag{3}\]
#### 3.2.2. Loss Calculation
The output of the LM is a probability distribution vector, whose elements represent each token's probability of appearing at the next position. In Equation 4, we calculate the Cross Entropy Loss \(\mathcal{L}_{CE}\) by comparing the logits \(z\) (Equation 3) with the true values \(tgt\); this loss is minimized to train the LM. \(|\mathcal{V}|\) denotes the vocabulary size.
\[\mathcal{L}_{CE}=-\sum_{i=1}^{|\mathcal{V}|}\text{tgt}_{i}\log z_{i} \tag{4}\]
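For concreteness, the three layers above can be sketched as a small PyTorch module; the class name, layer sizes, and the pooling of the last timestep are illustrative assumptions, not the exact implementation used in KDDT.

```python
import torch
import torch.nn as nn

class PacketLM(nn.Module):
    """Sketch of the LM: embedding (Eq. 1) -> bi-directional LSTM (Eq. 2) -> next-token logits (Eq. 3)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, ctx):                               # ctx: (batch, L) token ids
        x = self.embed(ctx)                               # (batch, L, embed_dim), Eq. 1
        out, _ = self.lstm(x)                             # (batch, L, 2 * hidden_dim)
        h = out[..., :out.size(-1) // 2] + out[..., out.size(-1) // 2:]  # sum of both directions, Eq. 2
        return self.proj(h[:, -1, :])                     # next-token logits; softmax is folded into the loss

lm = PacketLM(vocab_size=1000)
ctx = torch.randint(0, 1000, (8, 16))                     # a toy batch of 8 contexts of length 16
tgt = torch.randint(0, 1000, (8,))                        # next-token targets
loss = nn.CrossEntropyLoss()(lm(ctx), tgt)                # Eq. 4 (log-softmax + negative log-likelihood)
```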
### VAE Pretraining
To leverage the \(\mathcal{D}_{OOD}\) dataset, we train VAE as illustrated in Stage 3, Figure 2. The underlying hypothesis of VAE is that each packet is an observed data point generated based on a hidden vector that summarises this packet. VAE aims to find this hidden vector with an encoder and evaluate its quality by reconstructing the packet with the decoder. Higher reconstruction quality indicates better encoding capability. Given a packet \(X_{i}\in\mathcal{D}_{OOD}\), the encoder extracts a hidden state vector, with which the decoder reconstructs a packet similar to (but not necessarily the same as) \(X_{i}\). We will present the details about the encoder and decoder in Section 3.3.1 and Section 3.3.2, respectively.
#### 3.3.1. VAE Encoder
The encoder aims to induce the hidden vector from \(\mathcal{D}_{OOD}\) packets, with an assumption that the hidden vector follows a Gaussian distribution \(N(\mu,\sigma^{2})\). The high expressiveness of the Gaussian distribution allows it to describe many phenomena in the real world. According to (Wang et al., 2017), the Gaussian distribution assumption allows VAE to utilize the reparameterization trick, which enhances training efficiency without reducing its fitting capability. VAE first approximates the Gaussian distribution \(N(\mu,\sigma^{2})\) by computing \(\mu\) and \(\sigma\) with two neural network models. Then we sample a hidden vector from \(N(\mu,\sigma^{2})\) with the reparameterization trick. The upper part of the purple box in Figure 3 illustrates the internal layers of the VAE encoder:
**LM layer.** We utilize the trained LM from Stage 2 to convert the packet input \(X\) into embedding vectors as in Equation 5.
\[X=LM(X) \tag{5}\]
**Linear layers for \(\mu\) and \(\sigma\).** We perform two separate linear transformations for \(\mu\) and \(\sigma\) as in Equations 6 and 7 respectively, where \(W\) and \(b\) are weight and bias matrices.
\[\mu=W_{\mu}X+b_{\mu} \tag{6}\]
\[\sigma=W_{\sigma}X+b_{\sigma} \tag{7}\]
**Activation layers for \(\mu\) and \(\sigma\).** We introduce non-linearity into the model by adding an activation layer. Following the common practice in (Wang et al., 2017), we choose ReLU and sigmoid as the activation function for \(\mu\) and \(\sigma\), respectively (Equations 8 and 9).
\[\mu=ReLU(\mu) \tag{8}\]

\[\sigma=sigmoid(\sigma) \tag{9}\]
**Sampling layer.** Given the expectation \(\mu\) and standard deviation \(\sigma\) for a Gaussian distribution, it is trivial to sample a hidden vector \(h_{\text{oE}}\) from it. However, the practice of sampling breaks the gradient propagation chain, which is the foundation of deep learning model training. Therefore, VAE uses a reparametrization trick to address this problem as in Equation 10, where \(\epsilon\) is random noise.
\[h_{\text{oE}}=\mu+\epsilon*\sigma \tag{10}\]
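The following sketch illustrates Equations 5-10 for one packet; it assumes the pretrained LM has been wrapped so that it returns a single pooled embedding per packet, which is an illustrative simplification rather than the actual interface.

```python
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    """Sketch of the VAE encoder with the reparameterization trick (Eqs. 5-10)."""
    def __init__(self, lm, embed_dim=64, hidden_dim=32):
        super().__init__()
        self.lm = lm                                       # pretrained LM from Stage 2 (Eq. 5)
        self.to_mu = nn.Linear(embed_dim, hidden_dim)      # Eq. 6
        self.to_sigma = nn.Linear(embed_dim, hidden_dim)   # Eq. 7

    def forward(self, packet_tokens):
        x = self.lm(packet_tokens)                         # pooled packet embedding (assumed interface)
        mu = torch.relu(self.to_mu(x))                     # Eq. 8
        sigma = torch.sigmoid(self.to_sigma(x))            # Eq. 9
        eps = torch.randn_like(sigma)                      # random noise epsilon
        h = mu + eps * sigma                               # reparameterization trick (Eq. 10)
        return h, mu, sigma
```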
#### 3.3.2. VAE Decoder
The decoder aims to reconstruct the packet from the hidden vector \(h_{\text{oE}}\). We build it as a convolutional neural network consisting of the following layers:
**Linear layer.** Equation 11 shows that we first perform a linear transformation on the hidden vector \(h_{\text{oE}}\), where \(W_{d}\) and \(b_{d}\) are weight and bias matrices.
\[h_{\text{oD}}=W_{d}h_{\text{oE}}+b_{d} \tag{11}\]
**Convolution layer.** After the linear transformation, we use a convolution layer to capture spatial features in \(h_{\text{oD}}\):

\[h_{\text{oD}}=Conv(h_{\text{oD}}) \tag{12}\]
**Pooling layer.** We add a maximum pooling layer to increase the fitting capability of the decoder as in Equation 13.
\[h_{\text{oD}}=max\_pool(h_{\text{oD}}) \tag{13}\]
**Softmax layer.** Finally, we project the reconstructed vectors into the packet space with a softmax layer as in Equation 14.
\[\hat{X}=softmax(h_{\text{oD}}) \tag{14}\]
#### 3.3.3. Loss Calculation
As pointed out in the literature (Wang et al., 2017), an overall loss should be minimized to train VAE, consisting of a KL divergence loss for the encoder and a maximum likelihood loss (MLL) for the decoder (Equation 15).
\[\mathcal{L}_{VAE}=\mathcal{L}_{\text{KL}}+\mathcal{L}_{MLL} \tag{15}\]
We compute the KL divergence loss by comparing the hidden vector distribution with a standard Gaussian distribution:
\[\mathcal{L}_{KL}=KL(N(\mu,\sigma^{2})||N(0,1))\] \[=-\frac{1}{2}\sum_{i=1}^{k}\left(1+\log\sigma_{i}^{2}-\mu_{i}^{2} -\sigma_{i}^{2}\right) \tag{16}\]
The maximum likelihood loss aims to assess how close the reconstructed packet \(\hat{X}_{i}\) is to the real \(X_{i}\). As shown in Equation 17, we calculate the cross entropy as the MLL loss.

\[\mathcal{L}_{MLL}=-\sum_{j=1}^{n}X_{i}^{j}\log(\hat{X}_{i}^{j}) \tag{17}\]
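A minimal sketch of the combined objective in Equations 15-17, assuming the packet is represented as a one-hot (or probability) matrix over tokens; the small constant is only for numerical stability and is an implementation assumption.

```python
import torch

def vae_loss(x_true, x_recon, mu, sigma, eps=1e-8):
    """Sketch of Eq. 15: KL term for the encoder (Eq. 16) + cross-entropy reconstruction term (Eq. 17)."""
    kl = -0.5 * torch.sum(1 + torch.log(sigma.pow(2) + eps) - mu.pow(2) - sigma.pow(2))  # Eq. 16
    mll = -torch.sum(x_true * torch.log(x_recon + eps))                                   # Eq. 17
    return kl + mll                                                                       # Eq. 15
```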
### DT Training
The last stage in Figure 2 is DT training, which takes advantage of the collected data, pretrained LM and VAE from Stages 1, 2 and 3, respectively. Figure 3 presents model details of DT, comprising a DTM, a DTC, and a KD module.
#### 3.4.1. DTM Structure
DTM simulates the TCMS network by predicting the subsequent packet. As mentioned in Section 1, the predictive performance of anomaly detection hinges on extracting inner-packet context and inter-packet chronological features. A well-trained VAE can effectively extract the inner-packet context feature, but the inter-packet chronological feature is not considered in the VAE structure. Therefore, we adapt the VAE from Stage 3 by combining VAE and LSTM to extract both features.
At timestep \(i-1\), DTM sequentially invokes three sub-models to predict a packet \(X_{i}\): an encoder, an LSTM and a decoder (Figure 3). **DTM Encoder.** The internal structure of the encoder is identical to the VAE encoder. At timestep \(i-1\), the encoder encodes the input packet from the previous timestep, \(X_{i-1}\), into a hidden state vector \(h_{i-1}^{dtmE}\) (Equation 18). See details of the encoder in Section 3.3.

\[h_{i-1}^{dtmE}=encoder(X_{i-1}) \tag{18}\]
However, we reduce the complexity of the DTM encoder by using a smaller hidden vector size compared to the VAE encoder. KD can benefit from a complexity discrepancy between the teacher and student models (Chen et al., 2018).
**DTM LSTM.** We utilize an LSTM to capture the inter-packet chronological features. The LSTM takes the hidden vector \(h_{i-1}^{dtmE}\) as input and outputs a hidden vector \(h_{i}^{dtmE}\) for timestep \(i\) as in Equation 19.

\[h_{i}^{dtmE}=LSTM(h_{i-1}^{dtmE}) \tag{19}\]

**DTM Decoder.** The decoder aims to generate a packet \(\hat{X}_{i}\) with the hidden vector \(h_{i}^{dtmE}\) as in Equation 20.

\[\hat{X}_{i}=dtm\_decoder(h_{i}^{dtmE}) \tag{20}\]
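Putting Equations 18-20 together, the DTM can be sketched as the composition of an encoder, a single-layer LSTM, and a decoder; the encoder and decoder are assumed to be modules shaped like the VAE sketches above, only with a smaller hidden size, and the state threading is an illustrative choice.

```python
import torch.nn as nn

class DTM(nn.Module):
    """Sketch of the DTM: encode the previous packet, advance the hidden state, decode the next packet."""
    def __init__(self, encoder, decoder, hidden_dim=16):
        super().__init__()
        self.encoder = encoder                                  # same structure as the VAE encoder (Eq. 18)
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.decoder = decoder                                  # convolutional decoder, as in the VAE

    def forward(self, x_prev, state=None):
        h_enc, _, _ = self.encoder(x_prev)                      # h_{i-1}^{dtmE} (Eq. 18)
        h_next, state = self.lstm(h_enc.unsqueeze(1), state)    # h_i^{dtmE} (Eq. 19)
        h_next = h_next.squeeze(1)
        x_hat = self.decoder(h_next)                            # predicted next packet (Eq. 20)
        return x_hat, h_next, state
```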
#### 3.4.2. DTC Structure
DTC performs anomaly detection with the input from the packet \(X_{i}\) and the hidden vector \(h_{i}^{dtmE}\) from DTM. \(h_{i}^{dtmE}\) is an indicator of the system state based on historical data, whereas \(X_{i}\) contains information about the signal values in the TCMS. As shown in Figure 3, DTC consists of the following layers: **LM layer.** Similar to the LM layer in DTM, we use the pretrained LM to convert packet \(X_{i}\) into embedding vectors as in Equation 21.
\[X_{i}=LM(X_{i}) \tag{21}\]
**Linear layer.** This layer linearly transforms the embeddings into the hidden space of DTM as in Equation 22, where \(W_{l1}\) and \(b_{l1}\) are weight and bias matrices.

\[h_{i}^{dtc}=W_{l1}X_{i}+b_{l1} \tag{22}\]
**Convolution layer.** We then concatenate the hidden vectors from DTM \(h_{i}^{dtmE}\) and DTC \(h_{i}^{dtc}\) and feed them into a convolution layer as in Equation 23.

\[h_{i}^{dtc}=Conv([h_{i}^{dtmE},h_{i}^{dtc}]) \tag{23}\]
**Sigmoid layer.** Non-linearity is introduced with a sigmoid activation function as in Equation 24.
\[h_{i}^{dtc}=Sigmoid(h_{i}^{dtc}) \tag{24}\]
Figure 3. Model Structure of KDDT
**Pooling layer.** We reduce the dimensionality of \(h_{i}^{dtc}\) with a pooling layer as in Equation 25.
\[h_{i}^{dtc}=max\_pool(h_{i}^{dtc}) \tag{25}\]
**Linear layer.** We transform the hidden vector \(h_{i}^{dtc}\) into the output space as in Equation 26, where \(W_{l2}\) and \(b_{l2}\) are weight and bias matrices.

\[o_{i}^{dtc}=W_{l2}h_{i}^{dtc}+b_{l2} \tag{26}\]
**Softmax layer.** Finally, we perform a softmax operation to convert the output \(o_{i}\) into logits as in Equation 27.
\[z_{i}^{dtc}=softmax(o_{i}^{dtc}) \tag{27}\]
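The DTC layers (Equations 21-27) can be sketched as follows; the channel count, kernel size, and pooled width are illustrative choices, and the LM is again assumed to return one pooled embedding per packet.

```python
import torch
import torch.nn as nn

class DTC(nn.Module):
    """Sketch of the DTC classifier that fuses the packet embedding with the DTM state."""
    def __init__(self, lm, embed_dim=64, hidden_dim=16, n_classes=2):
        super().__init__()
        self.lm = lm                                            # pretrained LM (Eq. 21)
        self.lin1 = nn.Linear(embed_dim, hidden_dim)            # Eq. 22
        self.conv = nn.Conv1d(1, 6, kernel_size=3, padding=1)   # Eq. 23
        self.pool = nn.AdaptiveMaxPool1d(8)                     # Eq. 25
        self.lin2 = nn.Linear(6 * 8, n_classes)                 # Eq. 26

    def forward(self, packet_tokens, h_dtm):
        h_dtc = self.lin1(self.lm(packet_tokens))               # project the packet into the DTM hidden space
        h = torch.cat([h_dtm, h_dtc], dim=-1).unsqueeze(1)      # concatenate DTM and DTC hidden vectors
        h = torch.sigmoid(self.conv(h))                         # Eqs. 23-24
        h = self.pool(h).flatten(1)                             # Eq. 25
        return torch.softmax(self.lin2(h), dim=-1)              # anomaly probabilities (Eqs. 26-27)
```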
#### 3.4.3. Knowledge Distillation
The pretrained VAE learns from extra OOD data about extracting inner-packet context features. To make use of the pretrained models, many researchers adopt a rather intuitive strategy to use the pretrained parameters directly and fine-tune them with ID datasets. KD, however, tailors the complex teacher model into a less complex student model to focus on a specific context, i.e., anomaly detection in our context. Instead of direct parameter sharing, the student model uses the hidden vectors produced by the teacher model as a soft target and optimizes its own parameters to encode vectors similar to the teacher's. Figure 3 shows the pretrained VAE takes the packet from the previous timestep \(X_{i-1}\) as input and produces a hidden vector \(h_{i-1}^{\text{oE}}\). The DTM encoder uses \(h_{i-1}^{\text{oE}}\) as a soft target and calculates a cosine similarity loss:

\[\mathcal{L}_{KD}=\frac{h_{i-1}^{\text{oE}}\cdot h_{i-1}^{dtmE}}{|h_{i-1}^{\text{oE}}|\times|h_{i-1}^{dtmE}|} \tag{28}\]
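As a sketch of Equation 28, the cosine similarity between the frozen teacher vector and the student vector can be computed directly; turning it into a quantity to minimize (e.g., 1 − similarity) is an illustrative convention, not prescribed by the equation itself.

```python
import torch.nn.functional as F

def kd_loss(h_vae, h_dtm):
    """Distillation signal: cosine similarity between the pretrained VAE encoder's hidden vector
    (soft target, kept frozen via detach) and the DTM encoder's hidden vector (Eq. 28)."""
    sim = F.cosine_similarity(h_vae.detach(), h_dtm, dim=-1).mean()
    return 1.0 - sim            # minimizing this pushes the student's encoding toward the teacher's
```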
#### 3.4.4. Loss Calculation
The overall loss to train KDDT comprises the ground truth loss and KD loss as depicted in Equation 29.
\[\mathcal{L}=\mathcal{L}_{GT}+\mathcal{L}_{KD} \tag{29}\]
\(\mathcal{L}_{KD}\) is calculated as in Equation 28, while \(\mathcal{L}_{GT}\) entails loss calculation on both DTM and DTC (Equation 30).

\[\mathcal{L}_{GT}=\mathcal{L}_{DTM}+\mathcal{L}_{DTC} \tag{30}\]
Since DTM resembles VAE in the model structure, we calculate the DTM loss \(\mathcal{L}_{DTM}\) similarly as a sum of the KL loss and MLL loss (Equation 31). Detailed calculation is in Equations 16 and 17.
\[\mathcal{L}_{DTM}=\mathcal{L}_{KL}+\mathcal{L}_{MLL} \tag{31}\]
We compute a commonly-used cross entropy loss for DTC as:
\[\mathcal{L}_{DTC}=-\sum_{i=1}^{N}y_{i}\log z_{i}^{dtc} \tag{32}\]
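A sketch of how the terms in Equations 28-32 could be assembled into one training objective, reusing the `vae_loss` and `kd_loss` helpers sketched earlier; the unweighted sum mirrors Equations 29 and 30, and the argument names are illustrative.

```python
import torch

def kddt_loss(x_true, x_recon, mu, sigma, y_true, z_dtc, h_vae, h_dtm, eps=1e-8):
    """Overall loss: DTM loss (Eq. 31) + DTC cross entropy (Eq. 32) + KD loss (Eq. 28)."""
    l_dtm = vae_loss(x_true, x_recon, mu, sigma)          # KL + MLL, as sketched for the VAE
    l_dtc = -torch.sum(y_true * torch.log(z_dtc + eps))   # cross entropy on the anomaly labels
    l_kd = kd_loss(h_vae, h_dtm)                          # soft-target term from the pretrained VAE
    return l_dtm + l_dtc + l_kd                           # Eqs. 29-30
```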
## 4. Experiment Design
To evaluate KDDT, we propose four research questions (RQs) in Section 4.1. Section 4.2 presents details on the subject system and collected dataset. In Section 4.3 and Section 4.4, we discuss the evaluation metrics and experiment setups.
### Research Questions
**RQ1:** Is KDDT effective in detecting anomalies in TCMS? **RQ2:** Is DTM effective in improving DTC performance? **RQ3:** Is LM effective in extracting inner-packet features? **RQ4:** Is KD effective in improving DTM's encoding capability?
In RQ1, we investigate the effectiveness of KDDT in anomaly detection. With RQ2 - RQ4, we delve into the contribution of each sub-component (i.e., DTM, LM and KD) of KDDT to its overall effectiveness. Specifically, in RQ2, we compare the effectiveness of KDDT with/without DTM. In RQ3, we study the effectiveness of LM and compare it to Word2vec, another text feature extraction method effective for packet-related tasks (Hu et al., 2019). In RQ4, we compare the effectiveness of KDDT with/without KD.
### Industrial Subject System
Our industrial partner Alstom provided us with packet data captured in their TCMS. Anomalies, i.e., packet loss incidents, occur on different devices at different times due to various environmental uncertainties. They collected packet data and manually identified two periods of time when anomalies tend to appear more frequently, i.e., 2021/02/10 09:00 pm - 09:20 pm and 2021/02/11 05:35 am - 05:42 am. As mentioned in Section 3.1, we collect both the OOD dataset \(\mathcal{D}_{OOD}\) and the ID dataset \(\mathcal{D}_{ID}\). We further divide \(\mathcal{D}_{ID}\) into a training dataset and a testing dataset for the evaluation purpose with a ratio of 0.8:0.2. Table 1 reports key statistics about the dataset.
### Evaluation metrics and statistical tests
#### 4.3.1. Evaluation metrics
We evaluate the effectiveness of KDDT with packet-level metrics in RQ1 - RQ4. We also demonstrate the practical implications of KDDT by reporting incident-level metrics in RQ1. Moreover, the perplexity metric is presented in RQ3 to illustrate the quality of LM.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Metric & Dataset & \(\mathcal{D}_{OOD}\) & \(\mathcal{D}_{ID}\)-Train & \(\mathcal{D}_{ID}\)-Test \\ \hline \multirow{2}{*}{\(\mathcal{N}\)} & Day 10 & 625525 & 750631 & 187658 \\ & Day 11 & 313007 & 375609 & 93903 \\ \hline \multirow{2}{*}{\(\mathcal{N}_{NP}\)} & Day 10 & 625525 & 713990 & 137808 \\ & Day 11 & 313007 & 339395 & 70225 \\ \hline \multirow{2}{*}{\(\mathcal{N}_{AI}\)} & Day 10 & - & 186 & 122 \\ & Day 11 & - & 64 & 21 \\ \hline \multirow{2}{*}{\(\mathcal{N}_{AP}\)} & Day 10 & - & 36641 & 49850 \\ & Day 11 & - & 36214 & 23678 \\ \hline \multirow{2}{*}{\(\mathcal{L}_{AI}\)} & Day 10 & - & 197.00 & 407.23 \\ & Day 11 & - & 565.84 & 1127.52 \\ \hline \multirow{2}{*}{\(\mathcal{T}_{AI}\)} & Day 10 & - & 198353.88 \(\mu_{\text{s}}\) & 420050.60 \(\mu_{\text{s}}\) \\ \cline{1-1} & Day 11 & - & 616352.65 \(\mu_{\text{s}}\) & 1207371.71 \(\mu_{\text{s}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics about the dataset. The \(\mathcal{D}_{OOD}\) dataset contains only anomaly-unrelated data, while \(\mathcal{D}_{ID}\) contains anomaly-related data. \(N\) represents the total number of packets in the dataset. \(\mathcal{N}_{NP}\) and \(\mathcal{N}_{AP}\) are the numbers of normal and abnormal packets in the dataset. \(\mathcal{N}_{AI}\) is the number of anomaly incidents in the dataset. \(\mathcal{L}_{AI}\) and \(\mathcal{T}_{AI}\) denote the average length (number of packets in an anomaly) and time duration of anomalies in the dataset.
**Packet-level effectiveness metrics** evaluate the predictive performance of KDDT on a single packet. Following the common practices in the literature, we adopt three commonly used classification metrics: precision, recall, and F1 score. _Precision_ measures the accuracy of the positive predictions. _Recall_ measures the proportion of actual positive cases correctly classified by the model. _F1 score_ is the harmonic average of precision and recall, evaluating the model from both perspectives.
**Incident-level effectiveness metrics** aim to evaluate KDDT from a higher and more practical perspective by focusing on the effectiveness with the anomaly incidents as the minimum unit. We define the following four metrics.
_Packet-Incident Coverage \(C_{PI}\)_ calculates the percentage of abnormal packets identified in a single incident. Let \(\mathcal{N}_{totalP}\) denote the total number of packets of an anomaly incident and \(\mathcal{N}_{correctP}\) denote the number of packets correctly identified by KDDT. We formally define \(C_{PI}\) as in Equation 33.
\[C_{PI}=\frac{\mathcal{N}_{correctP}}{\mathcal{N}_{totalP}} \tag{33}\]
_Incident Coverage \(C_{I}\)_ measures how many anomaly incidents are identified. We assume an incident is identified if at least half of the abnormal packets are correctly classified. We denote the number of identified and total incidents as \(\mathcal{N}_{correctI}\) and \(\mathcal{N}_{totalI}\). We formally define \(C_{I}\) in Equation 34.
\[C_{I}=\frac{\mathcal{N}_{correctI}}{\mathcal{N}_{totalI}} \tag{34}\]
_Detection Time Rate \(\mathcal{DTR}_{I}\)_ assesses the time percentage needed to detect an anomaly incident. Let \(t_{s}\) be the starting time of an incident and \(\hat{t}_{s}\) be the time that KDDT identifies the first abnormal packet correctly. We denote the time duration for an anomaly incident as \(\mathcal{T}_{totalT}\). Formally, \(\mathcal{DTR}_{I}\) is defined in Equation 35.

\[\mathcal{DTR}_{I}=\frac{\hat{t}_{s}-t_{s}}{\mathcal{T}_{totalT}} \tag{35}\]
_Root Mean Square Error of Anomaly Length (\(\mathit{RMSE}_{L}\))_ evaluates how well KDDT can predict the anomaly length. Let the length of anomaly incident \(i\) be \(L_{i}\) and the predicted incident length be \(\hat{L}_{i}\), we formally define \(\mathit{RMSE}_{L}\) as:
\[\mathit{RMSE}_{L}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(L_{i}-\hat{L}_{i})^{2}} \tag{36}\]
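For illustration, the four incident-level metrics (Equations 33-36) can be computed from a list of per-incident records; averaging \(C_{PI}\) and \(\mathcal{DTR}_{I}\) across incidents is an assumption about how the reported values are aggregated, and the field names are hypothetical.

```python
import math

def incident_metrics(incidents):
    """Sketch of Eqs. 33-36. Each incident is a dict with packet counts, timestamps, and lengths."""
    n = len(incidents)
    c_pi = sum(i["correct_packets"] / i["total_packets"] for i in incidents) / n          # Eq. 33
    c_i = sum(i["correct_packets"] >= 0.5 * i["total_packets"] for i in incidents) / n    # Eq. 34
    dtr = sum((i["first_detection_time"] - i["start_time"]) / i["duration"]
              for i in incidents) / n                                                     # Eq. 35
    rmse_l = math.sqrt(sum((i["true_length"] - i["pred_length"]) ** 2 for i in incidents) / n)  # Eq. 36
    return c_pi, c_i, dtr, rmse_l
```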
**Perplexity of LM.** Perplexity is commonly used to evaluate probabilistic models, particularly LMs (Beng et al., 2017). In the context of an LM, perplexity measures the average likelihood of its predictions for the next token. Lower perplexity indicates that the model has more confidence in its predictions and vice versa. We calculate perplexity as in Equation 37, where \(n\) denotes the sequence length and \(w_{i}\) represents the \(i\)th token in the sequence.
\[\mathit{PPL}=\exp\left(-\frac{1}{n}\sum_{i=1}^{n}\log P(w_{i}|w_{1},\dots,w_{i -1})\right) \tag{37}\]
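Equation 37 amounts to exponentiating the negative mean token log-likelihood; a minimal sketch:

```python
import math

def perplexity(token_log_probs):
    """Eq. 37: token_log_probs holds log P(w_i | w_1..w_{i-1}) for each position of one sequence."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))
```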
#### 4.3.2. Statistical testing
Using deep learning models introduces randomness into KDDT, which might threaten our empirical study's validity. Therefore, we repeat each experiment 30 times and perform the Mann-Whitney U test with a significance level of 0.01 as suggested in (Beng et al., 2017). Furthermore, we evaluate the A12 effect size of the improvement (Beng et al., 2017). _Method A_ has a higher chance of getting better values if the A12 value is greater than 0.5 and vice versa. We consider the effect size in the range \([0.56,0.64)\) as _Small_, \([0.64,0.71)\) as _Medium_, and \([0.71,1]\) as _Large_.
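The statistical procedure can be sketched with SciPy: a two-sided Mann-Whitney U test on the repeated scores plus the standard rank-based formula for the Vargha-Delaney A12 effect size (the formula is the usual one from the literature, not taken from this paper).

```python
from scipy.stats import mannwhitneyu, rankdata

def compare_runs(scores_a, scores_b, alpha=0.01):
    """Significance (Mann-Whitney U, alpha = 0.01) and A12 effect size for two lists of repeated scores."""
    _, p_value = mannwhitneyu(scores_a, scores_b, alternative="two-sided")
    m, n = len(scores_a), len(scores_b)
    ranks = rankdata(list(scores_a) + list(scores_b))   # rank the pooled scores of both methods
    r1 = ranks[:m].sum()                                # rank sum of method A
    a12 = (r1 / m - (m + 1) / 2) / n                    # Vargha-Delaney A12
    return p_value < alpha, a12
```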
### Experiment settings and execution
KDDT involves several hyper-parameters that potentially introduce human biases. We performed a 10-fold cross validation for the hyper-parameter selection with less bias (Beng et al., 2017). The code is built with the Pytorch framework (Pasz, 2017). All the experiments were performed on one node from a national, experimental, heterogeneous computational cluster called eX3. This node is equipped with 2x Intel Xeon Platinum 8186, 1x NVIDIA V100 GPUs.
## 5. Experiment results
### RQ1-KDDT effectiveness
Figure 4 delineates KDDT's performance on the packet level effectiveness metrics. The average F1 scores on both datasets are above 0.91, which demonstrates that KDDT is effective comprehensively. The precision results on both datasets are above 0.9, which represents more than 90% of predicted anomalous packets are true anomalies. The recall results reach 0.927 on the Day 11 dataset and 0.937 on the Day 10 dataset, which indicates more than 92% anomalous packets are successfully detected by KDDT.
\begin{table}
\begin{tabular}{l c|l|c} \hline \hline Hyper-parameter & Value & Hyper-parameter & Value \\ \hline optimizer & AdamW & lr & 0.001 \\ betas & (0.9, 0.999) & Embedding size & 64 \\ \(h^{oE}\) size & 32 & \(h^{dtmE}\) size & 16 \\ \(h^{oD}\) size & 32 & \(h^{dtmD}\) size & 16 \\ CNN out channels & 6 & batch size & 12 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Hyperparameter values in KDDT. \(lr\) denotes the learning rate of the optimizer, and \(betas\) are also arguments of AdamW (Pasz, 2017). \(h^{oE}\), \(h^{oD}\), \(h^{dtmE}\), and \(h^{dtmD}\) represent the hidden vectors of the VAE encoder, VAE decoder, DTM encoder, and DTM decoder, respectively.
Figure 4. Results of the packet-level effectiveness metrics
We also observe that the performance of KDDT on the Day 11 dataset is generally inferior to that on the Day 10 dataset. One possible reason for this discrepancy is that the number of anomaly labels on the Day 11 dataset is smaller than that on the Day 10 dataset, indicating a higher training difficulty.
Table 3 presents the experiment results for the incident-level effectiveness metrics. \(\mathcal{C}_{PI}\) results indicate that, on average, 87.36% and 81.09% of the abnormal packets in each anomaly incident are detected on Day 10 and Day 11, respectively. The incident coverages are both 100%, showing that all incidents are successfully identified. \(\mathcal{DTR}_{I}\) on both datasets is relatively low (\(<7\%\)), implying that KDDT can detect an anomaly incident within the first 7% of its packets. A low \(\mathcal{DTR}_{I}\) indicates KDDT can detect anomalies near instantly after they take place, facilitating _live_ detection of various types of anomalies in TCMS, consequently curbing the potential damage to the whole system. \(RMSE_{L}\) signifies the standard deviation of the residuals, which represent the discrepancy between the predicted and the real incident length. We can observe from the last column of Table 3 that the \(RMSE_{L}\) on both datasets (i.e., 43.70 and 185.06) is substantially lower than the average lengths of incidents (407.23 and 1127.52 as reported in Table 1). This indicates that KDDT can predict the length of an anomaly incident with a relatively small residual. The advantage of a small residual can be harnessed for other applications, such as incident boundary determination, where a dedicated model can be established to predict the start and the end of an anomaly incident.
**Concluding remarks on RQ1:** Regarding the packet-level effectiveness metrics (precision, recall and F1 score), KDDT achieves more than 90% predictive performance on the Day 10 and Day 11 datasets. Regarding the incident-level effectiveness metrics, KDDT demonstrates high packet and incident coverages, low detection time rate, and relatively low RMSE of anomaly length prediction, implying KDDT's potential applications in incident detection, live detection, and incident boundary determination.
### RQ2-DTM Effectiveness
KDDT is a DT-based method, where DTM simulates TCMS and provides information about the system state to DTC. To evaluate the contribution of DTM, we compare KDDT and KDDT without DTM (denoted as KDDT-NoDTM).
**Concluding remarks on RQ2:** We observe a substantial decrease in the two datasets in terms of precision (12.7%), recall (12.1%), and F1 score (12.4%) when we remove the DTM from KDDT. The DTM extracts inter-packet chronological features from the packets, which is indispensable for accurate anomaly detection.
### RQ3-LM Effectiveness
KDDT employs LM to extract inner-packet context features. To evaluate its effectiveness, we employ the perplexity metric (Section 4.3.1) and compare the performance of KDDT's LM with the baseline Word2vec model. Results are shown in Figure 5. We can observe that the perplexity first decreases sharply and then gradually converges to around 30 at the end of training (after 400k batches). Low perplexity indicates that the LM has high confidence in predicting the next token and extracting inner-packet context features.
Table 5 shows the comparison results of KDDT and KDDT-Word2Vec. We can observe decreases in all three metrics on both datasets. The minimum decrease is 0.011 (\(0.926-0.915\)) on the precision of the Day 10 dataset. The decreases are all significant except for the precision of the Day 10 dataset (p-value = \(0.066>0.01\)). We also observe a large effect size in recall and F1 score on both datasets (\(A12>0.71\) as defined in Section 4.3.2).
**Concluding remarks on RQ3:** LM presents low perplexity after training, suggesting strong feature extraction capability. KDDT outperforms KDDT-Word2vec significantly on both the Day 10 and Day 11 datasets (average F1 score improvement of 0.03).
### RQ4-Knowledge Distillation Effectiveness
The training of KDDT is guided by the knowledge distilled from a teacher model, i.e., the VAE encoder. To demonstrate the effectiveness of KD, we compare KDDT and KDDT without KD (denoted as KDDT-NoKD). Table 6 depicts the comparison results. All metrics experienced a remarkable decrease with a minimum of 0.027
\begin{table}
\begin{tabular}{l|l c c c c} \hline \hline Dataset & Metric & KDDT & Word2Vec & p-value & A12 \\ \hline \multirow{3}{*}{Day 10} & Precision & 0.926 & 0.915 & 0.066 & 0.200 \\ & Recall & 0.937 & 0.894 & \textless{}0.01 & 0.733 \\ & F1 score & 0.931 & 0.904 & \textless{}0.01 & 0.867 \\ \hline \multirow{3}{*}{Day 11} & Precision & 0.903 & 0.882 & \textless{}0.01 & 0.600 \\ & Recall & 0.927 & 0.882 & \textless{}0.01 & 0.933 \\ \cline{1-1} & F1 score & 0.915 & 0.882 & \textless{}0.01 & 0.933 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Comparison of KDDT and KDDT-Word2Vec
Figure 5. Results of perplexity during LM training
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Dataset & \(\mathcal{C}_{PI}\) & \(\mathcal{C}_{I}\) & \(\mathcal{DTR}_{I}\) & \(RMSE_{L}\) \\ \hline Day 10 & 87.36\% & 100\% & 3.81\% & 43.70 \\ Day 11 & 81.09\% & 100\% & 6.22\% & 185.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results of the incident-level effectiveness metrics
(\(0.926-0.899\) on the precision of the Day 10 dataset). All the p-values in this table are smaller than 0.01, indicating the significance of the performance decrease brought by removing KD. We observe large effect sizes (\(A12>0.71\)) in all metrics except precision on the Day 10 dataset, indicating KDDT-NoKD is highly likely to yield worse results than KDDT. Such results are consistent with our expectations as we posit KD can incorporate extra knowledge extracted from OOD data. Specifically, the pretrained VAE has a larger capacity in terms of model complexity compared to DTM, allowing it to fit a large \(\mathcal{D}_{OOD}\) dataset without significant underfitting issues. The hidden vectors produced by the pretrained VAE epitomize its knowledge about feature extraction from a network packet. Therefore, using these hidden vectors as soft targets to guide the training of DTM entails a distillation process from the pretrained VAE to DTM. The extra knowledge distilled from the pretrained VAE supplements the training of DTM on the \(\mathcal{D}_{ID}\) dataset, hence the predictive performance boost with KD.
**Concluding remarks on RQ4:** Removing KD from KDDT leads to a remarkable decrease (\(>2.7\%\)) in terms of average precision (\(3.75\%\)), recall (\(8.3\%\)), and F1 score (\(6.05\%\)). Most of the decreases are significant (p-value\(<0.01\)) with large effect sizes (\(A12\geqslant 0.71\)). We conclude that KD is effective in improving the encoding capability of DTM.
### Threats to Validity
**Construct Validity** concerns whether our chosen metrics accurately represent the anomaly detection quality. To be comprehensive, we include two sets of metrics: packet-level effectiveness metrics (i.e., precision, recall and F1 score) commonly used for evaluating anomaly detection methods and the incident-level effectiveness metrics (i.e., the packet-incident coverage, incident coverage, length RMSE and detection time rate) focusing on practical implications and assess KDDT from a domain perspective. We argue that these metrics together enable a more holistic assessment of KDDT.
**Internal Validity** refers to the credibility between cause and effect. One possible threat to internal validity is the selection of hyperparameters. The performance of DT may diminish in different settings. To reduce such threats, we choose these hyperparameters with cross-validation, which yields generic and optimal hyperparameters. The selection of hyperparameters does not introduce any human bias against the experiment datasets.
**Conclusion Validity** pertains to the validity of the conclusions drawn from the experiments. One common threat to conclusion validity is the randomness introduced in the model. KDDT harnesses neural networks for anomaly detection, inevitably introducing randomness. To mitigate this threat, we repeated each experiment 30 times. We performed statistical testing to study the significance of each improvement to ensure that the conclusions derived from our study are reliable.
**External Validity** concerns the extent to which KDDT can generalize to other contexts. To reduce threats to external validity, we design our method to be generic, assuming no prior knowledge of the dataset distribution. Moreover, despite the high cost of collecting data from the real system, we obtained two separate datasets from different interfaces. Furthermore, we pretrain the VAE with the OOD dataset comprising anomaly-unrelated data samples, which is considered a universal task rather than bound to anomaly detection. The pretrained VAE retains knowledge of encoding a network packet into a high-quality hidden vector, benefitting various downstream tasks such as intrusion detection, network traffic analysis, and robustness analysis.
## 6. Practical Implications
**Automating anomaly detection.** Our experiment results show that KDDT is effective for anomaly detection of Alstom's TCMS network, comprehensively measured with commonly used metrics for evaluating predictive performance (e.g., precision) at the packet level and our newly proposed incident-level metrics such as anomaly incident coverage. Especially, we observed that KDDT reaches a 100% incident coverage for both datasets, indicating that all anomaly incidents can be successfully detected. The practical implication of this observation is that a high incident coverage can release domain experts from excessive manual work of pinpointing anomaly incidents from vast network packets.
**Enabling live monitoring.** Moreover, KDDT exhibited a low detection time rate, critical for enabling live monitoring of the TCMS network by providing near-instant alerts and warnings. An automatic protocol can be established to react to these warnings and alerts and, in turn, prevents the anomaly's further influence on the TCMS. Furthermore, we observe a relatively low RMSE of the anomaly length predicted by KDDT. Low RMSE demonstrates the effectiveness of utilizing KDDT for anomaly boundary determination. An accurate boundary determination facilitates the TCMS to react accordingly and appropriately. For example, the TCMS can neglect a short-length anomaly, thanks to the network's re-transmission mechanism, whereas a long-length anomaly can lead to function failures or even system failures, requiring resources and reactions from the TCMS to mitigate the impact of the anomaly.
**Leveraging DT for improving CPS dependability.** Our experiment results highlight that KDDT highly benefits from having DTM in the DT structure. First, by simulating the TCMS, DTM can provide valuable information about the operation state of the TCMS (e.g., unstable network states where frequent re-transmissions occur). Second, developing the DTM is cost-efficient since it is automatically developed to simulate the TCMS by predicting the next network packet (Section 3.4.1). Predicting the next packet requires encoding the packet into a high-quality hidden vector, which is a universal task that can benefit many downstream tasks, such as intrusion detection and network traffic analysis. In other words, we can reuse the same DTM to support different DTCs.
**Benefiting from language models.** Our results show that LM can provide contextualized features compared to Word2vec. Extracting
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Dataset & Metric & KDDT & NoKD & p-value & A12 \\ \hline \multirow{3}{*}{Day 10} & Precision & 0.926 & 0.899 & \(<\)0.01 & 0.333 \\ & Recall & 0.937 & 0.874 & \(<\)0.01 & 1.000 \\ & F1 score & 0.931 & 0.886 & \(<\)0.01 & 0.933 \\ \hline \multirow{3}{*}{Day 11} & Precision & 0.903 & 0.855 & \(<\)0.01 & 0.933 \\ & Recall & 0.927 & 0.824 & \(<\)0.01 & 1.000 \\ \cline{1-1} & F1 score & 0.915 & 0.839 & \(<\)0.01 & 1.000 \\ \hline \hline \end{tabular}
\end{table}
Table 6. Comparison of KDDT and KDDT-NoKD
features from text is extensively researched in NLP. Though we believe pretraining an LM with network packet data is sufficient and cost-efficient for the network anomaly detection tasks in the TCMS, the potential benefit of exploring Large Language Models (LLMs) such as Elmo (Elmo, 2019), Bert (Bert, 2019) and GPT (Zhu et al., 2020) is intriguing. As Wei et al. (Wei et al., 2020) pointed out, LLMs show emergent capabilities, representing high predictive performance and sample efficiency on downstream tasks that smaller LMs do not possess. We are motivated to investigate the future application of LLMs with network packet data.
**Using KD to alleviate the need for abundant in-domain data.** KDDT employs KD to make use of OOD data. The nature of supervised machine learning requires abundant labelled data for its training, which is expensive to collect. In some cases, collecting merely the ID dataset is non-trivial as well. For instance, collecting anomaly-related data in the TCMS network (denoted as \(\mathcal{D}_{ID}\)) requires manual examination and collection by domain experts with the help of an in-house signal analysis tool DCUTerm. Moreover, extra work is required for labelling the ID dataset. To alleviate the need for ID data, KD can distill knowledge from the OOD dataset to DTM. Incorporating the OOD dataset frees KDDT from sole dependency on the ID dataset at a relatively low cost, since the OOD dataset (i.e., any normal network packets) is easier to collect in the TCMS network.
## 7. Related Work
We discuss the related works from two aspects: DT for CPS and network anomaly detection.
**Digital Twin for CPS.** DTs have been investigated to enhance the security and safety of CPSs. DT originates from the Apollo program for mission training and support (Zhu et al., 2020). Later, DT was generalized to accommodate various CPSs, such as water treatment plants and power grids (Zhu et al., 2020; Zhu et al., 2020). Eckhart and Ekelhart designed a rule-based DT dedicated to CPS intrusion detection (Eckhart and Ekelhart, 2019), which constantly checks rules violations that a CPS must adhere to under normal conditions. Instead of relying on prior knowledge, Damjanovic-Behrendt (Damjanovic-Behrendt, 2019) proposed to build a machine learning-based DT to learn privacy-related features directly from automobile industry datasets, including both historical and real-time data.
Yue et al. (Zhu et al., 2020) proposed a DT conceptual model and further decoupled the two sub-components of DT: DTM and DTC. They define the DTM as a live replica of the CPS, while the DTC is the functionality of the DT. Xu et al. (Xu et al., 2020) realized this conceptual model by building the DTM as a timed automaton and the DTC as a Generative Adversarial Network (GAN). Experimental results on public testbeds demonstrate the effectiveness of such decoupled DT structures. Latsou et al. (Latsou et al., 2020) extended the concept of DT and proposed a multi-agent DT architecture for anomaly detection and bottleneck identification in complex manufacturing systems. In addition to DT construction for a specific CPS, Xu et al. (Xu et al., 2020) and Lu et al. (Lu et al., 2020) introduced transfer learning for DT evolution to synchronize with the changes in CPSs and software systems, respectively. Despite the success of these methods, building a data-driven DT inevitably requires sufficient ID data. In contrast, KDDT adapts a VAE as the DTM structure and utilizes KD to distill knowledge from OOD data. Doing so significantly increases the data volume, improving the performance of the DT.
**Network Anomaly detection.** Many approaches exist for network anomaly detection (Beng et al., 2019), where the task is generally formulated as classification. Early research on network anomaly detection originates in statistical or rule-based approaches. Statistical methods estimate the data distribution and employ statistical testing to detect anomalies (Kon et al., 2019). A rule-based network anomaly detector was proposed in (Kon et al., 2019) to detect violations of predefined rules and treat them as anomalies.
Statistical and rule-based approaches require extensive effort from domain experts. To mitigate such problems, neural networks have been widely explored for network anomaly detection due to their capability of automatic feature extraction (Kon et al., 2019). Kwon et al. (Kon et al., 2019) evaluated three CNN architectures for network anomaly detection and concluded that a shallow CNN performs best. Li et al. (Li et al., 2020) divided network fields into categorical and numerical fields and extracted features separately. The feature vectors are then converted into 8×8 grayscale images and fed into a CNN to extract features automatically.
Few works have considered network packets as raw text and utilized NLP for feature extraction. Goodman et al. (Goodman et al., 2019) modified the Word2vec model and generated embeddings for each token in a network packet. The proposed method then takes these embeddings as input features to classify network traffic as benign or malicious. Our method KDDT follows this line of research and uses a contextualized LM as an alternative to Word2vec. The LM provides valuable context information that helps extract high-quality features from intricate packets. Our results show that the contextualized LM outperforms Word2vec and leads to high predictive performance.
## 8. Conclusion and Future Work
In this study, we introduced a digital twin (DT) based approach, referred to as KDDT, for tackling the anomaly detection task within Train Control and Management System (TCMS) networks. KDDT leverages a language model (LM) and an LSTM to extract in-packet context and inter-packet chronological features. To capitalize on the information present in out-of-domain (OOD) data, we train a Variational Autoencoder with OOD data and use it to guide the training process of the digital twin model (DTM). We evaluate KDDT with two datasets from Alstom. Experimental results show the effectiveness of KDDT regarding packet-level and incident-level metrics. We also investigate the individual contribution of each component. Experimental results demonstrate the effectiveness of DTM, LM and KD, which can be leveraged to address various tasks other than anomaly detection. We plan to explore more contextualized LMs, such as ELMo, BERT, and GPT. We are particularly interested in the potential application of ChatGPT in this domain despite the obvious drawbacks mentioned in Section 5.5.
## 9. Acknowledgements
The project is supported by the security project funded by the Norwegian Ministry of Education and Research, the Horizon 2020 project ADEPTNESS (871319) funded by the European Commission, and the Co-tester (#314544) project funded by the Research Council of Norway (RCN). This work has benefited from the Experimental Infrastructure for Exploration of Exascale Computing (eX3), which is financially supported by RCN under contract 270053. |
2309.05508 | Lie-Yamaguti Algebra Bundle | We introduce the notion of Lie-Yamaguti algebra bundle, define its cohomology
groups with coefficients in a representation and show that such bundles
appeared naturally from geometric considerations in the work of M. Kikkawa,
which motivates us to introduce this object in the proper mathematical
framework. We also study abelian extensions of Lie-Yamaguti algebra bundles and
investigate their relationship with suitable cohomology group. | Saikat Goswami, Goutam Mukherjee | 2023-09-11T14:51:32Z | http://arxiv.org/abs/2309.05508v1 | # Lie-Yamaguti algebra bundle
###### Abstract.
We introduce the notion of Lie-Yamaguti algebra bundle, define its cohomology groups with coefficients in a representation and show that such bundles appeared naturally from geometric considerations in the work of M. Kikkawa, which motivates us to introduce this object in the proper mathematical framework. We also study abelian extensions of Lie-Yamaguti algebra bundles and investigate their relationship with suitable cohomology group.
Key words and phrases:Vector bundle, Lie-Yamaguti algebra, Non-associative algebra, Cohomology 2020 Mathematics Subject Classification: 53B05, 58A05, 16E99, 17A30, 17A40
## 1. Introduction
Triple systems in algebra may be traced back to the works of P. Jordan, J. v. Neumann and E. Wigner [9] in quantum mechanics, and N. Kemmer [10, 11] in particle physics. The notion of Lie triple system was formally introduced as an algebraic object by Jacobson [8] in connection with problems which arose from quantum mechanics.
Nomizu [18] proved that affine connections with parallel torsion and curvature are locally equivalent to invariant connections on reductive homogeneous spaces, and that each such space has a canonical connection for which parallel translation along geodesics agrees with the natural action of the group.
Let \(M\) be a smooth manifold equipped with a linear connection \(\nabla.\) Let \(e\in M\) be a given fixed point. Then there is a local multiplication \(\mu\) at \(e\) compatible with \(\nabla,\) which is given by
\[\mu(x,y)=exp_{x}\circ\tau_{e,x}\circ exp_{e}^{-1}(y),\]
where \(exp_{x}\) denotes the exponential mapping at \(x\) and \(\tau_{e,x}\) denotes the parallel displacement of tangent vectors along the geodesic joining \(e\) to \(x\) in a normal neighbourhood of \(e\)[13].
If \(M\) is a reductive homogeneous space \(A/K\) with the canonical connection due to K. Nomizu, then the local multiplication \(\mu\) given above satisfies some special properties (cf. [18]). In particular, if \(M\) is a Lie group \(A\) itself, then the canonical connection reduces to the connection of [3] and the local multiplication \(\mu\) coincides locally with the multiplication of \(A\).
Motivated by this fact, M. Kikkawa [13] investigated the problem of the existence of a global differentiable binary system on a reductive homogeneous space \(A/K,\) which coincides locally with the above geodesic local multiplication \(\mu,\) and observed that the problem is related to the canonical connection and to the general Lie triple system defined on the tangent space \(T_{e}M.\) In that paper, Kikkawa renamed the notion of general Lie triple system
as _Lie triple algebra_. Kinyon and Weinstein [15] observed that Lie triple algebras, which they called Lie-Yamaguti algebras in their paper, can be constructed from Leibniz algebras. Leibniz algebras are a non-antisymmetric analogue of Lie algebras, introduced by J. L. Loday [16].
**Organization of the paper**: In SS2, we set up notations, recall some known definitions and results. In SS3, we introduce the main object of study of the present paper, namely, the notion of a _Lie-Yamaguti algebra bundle_, illustrate examples of such bundles and describe a general method of constructing such bundles. In SS4, we introduce the concept of representation of Lie-Yamaguti algebra bundles which is required to introduce cohomology groups of Lie-Yamaguti algebra bundles. In SS5, we define cohomology groups of a Lie-Yamaguti algebra bundle with coefficients in a given representation. Finally, in SS6, we study (abelian) extensions of Lie-Yamaguti algebra bundles and establish its connection to cohomology.
## 2. Preliminaries
The aim of this section is to recall some basic definitions and set up notations to be followed throughout the paper. Let \(\mathbb{K}\) be a given field.
**2.1 Definition**.: A Lie algebra is a vector space \(\mathfrak{g}\) over \(\mathbb{K}\) equipped with a \(\mathbb{K}\)-bilinear operation \([\,\ ]:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\) satisfying
1. (Anti-symmetry): \([x,y]=-[y,x]\) for all \(x,y\in\mathfrak{g}\);
2. (Jacobi identity): \([[x,y],z]+[[y,z],x]+[[z,x],y]=0\) for all \(x,y,z\in\mathfrak{g}\).
**2.2 Definition**.: A Leibniz algebra is a vector space \(\mathfrak{g}\) over \(\mathbb{K}\) equipped with a \(\mathbb{K}\)-bilinear operation \(\cdot:\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\) satisfying the Leibniz identity
\[x\cdot(y\cdot z)=(x\cdot y)\cdot z+y\cdot(x\cdot z)\]
for all \(x,y,z\in\mathfrak{g}\).
It is easy to see that, in the presence of the anti-symmetry condition, the Leibniz identity reduces to the Jacobi identity. Thus, Lie algebras are examples of Leibniz algebras. See [16] for many other non-trivial examples of Leibniz algebras.
**2.3 Definition**.: A Lie triple system is a vector space \(\mathfrak{g}\) over \(\mathbb{K}\) equipped with a \(\mathbb{K}\)-trilinear operation
\[\{\,\,\ \}:\mathfrak{g}\times\mathfrak{g}\times\mathfrak{g}\to\mathfrak{g}\]
satisfying
1. \(\{x,y,z\}=-\{y,x,z\}\) for all \(x,y,z\in\mathfrak{g}\);
2. \(\{x,y,z\}+\{y,z,x\}+\{z,x,y\}=0\) for all \(x,y,z\in\mathfrak{g}\);
3. \(\{x,y,\{u,v,w\}\}=\{\{x,y,u\},v,w\}+\{u,\{x,y,v\},w\}+\{u,v,\{x,y,w\}\}\) for all \(x,y,u,v,w\in\mathfrak{g}\).
The following is an interesting example of a Lie triple system which arose from Physics [8].
### Example
We denote by \(M_{n+1}(\mathbb{R})\), the set of all \((n+1)\times(n+1)\) matrices over the field \(\mathbb{R}\), which is an associative algebra with respect to matrix multiplication. Let \(\delta_{ij}\) denote the Kronecker delta symbol
\[\delta_{ij}=\left\{\begin{array}{ll}0&i\neq j\\ 1&i=j\end{array}\right.\]
and \(e_{i,j}\) denote the elementary matrix which has \(1\) in the \((i,j)\)-entry as its only non-zero entry. Let \(\mathfrak{m}\) be the subspace of \(M_{n+1}(\mathbb{R})\) spanned by the matrices \(G_{i}\) for \(i=1,2,\cdots,n\), where \(G_{i}=e_{i,n+1}-e_{n+1,i}.\) As an example, for \(n=3\), the matrix \(G_{2}\in M_{4}(\mathbb{R})\) is given by
\[G_{2}=\begin{pmatrix}0&0&0&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&-1&0&0\end{pmatrix}.\]
Then, the subspace \(\mathfrak{m}\) is closed under the ternary product
\[\{A,B,C\}:=[[A,B],C],\ A,B,C\in\mathfrak{m}\]
where \([A,B]:=AB-BA\) is the commutator bracket. Explicitly, the trilinear product of the basis elements are given by
\[[[G_{i},G_{j}],G_{k}]=\delta_{ki}G_{j}-\delta_{kj}G_{i}.\]
It turns out that \((\mathfrak{m},\{\,\,\ \})\) is a Lie triple system, first used in [6] to provide a significant and elegant algebraic formalism of Meson equations. It was introduced formally as a Lie triple system by Jacobson [8] and hence known as Meson field.
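For readers who wish to experiment, the following short script verifies the displayed bracket identity numerically for a small value of \(n\); it is merely an illustration and plays no role in the formal development.

```python
import numpy as np

n = 3  # any n works; n = 3 keeps the check small

def G(i):
    """G_i = e_{i,n+1} - e_{n+1,i} as an (n+1) x (n+1) matrix (1-based index i)."""
    M = np.zeros((n + 1, n + 1))
    M[i - 1, n] = 1.0
    M[n, i - 1] = -1.0
    return M

def comm(A, B):
    return A @ B - B @ A

for i in range(1, n + 1):
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            lhs = comm(comm(G(i), G(j)), G(k))
            rhs = (1.0 if k == i else 0.0) * G(j) - (1.0 if k == j else 0.0) * G(i)
            assert np.allclose(lhs, rhs)
print("[[G_i, G_j], G_k] = delta_ki G_j - delta_kj G_i verified for n =", n)
```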
### Remark
Note that any Lie algebra \((\mathfrak{g},[\,\ ])\) can be viewed as a Lie triple system with the trilinear operation
\[\{x,y,z\}:=[[x,y],z]\]
for all \(x,y,z\in\mathfrak{g}\).
### Definition
A Lie-Yamaguti Algebra \((\mathfrak{g},[\,\ ],\{\,\,\ \})\) is a vector space \(\mathfrak{g}\) equipped with a \(\mathbb{K}\)-bilinear operation
\[[\,\ ]:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\]
and a \(\mathbb{K}\)-trilinear operation
\[\{\,\,\ \}:\mathfrak{g}\times\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g}\]
such that for all \(x,y,z,u,v,w\in\mathfrak{g}\) the following relations hold:
(LY1) \[[x,y]=-[y,x];\]
(LY2) \[\{x,y,z\}=-\{y,x,z\};\]
(LY3) \[\Sigma_{\circlearrowright(x,y,z)}([[x,y],z]+\{x,y,z\})=0;\]
(LY4) \[\Sigma_{\circlearrowright(x,y,z)}\{[x,y],z,u\}=0;\]
(LY5) \[\{x,y,[u,v]\}=[\{x,y,u\},v]+[u,\{x,y,v\}];\]
(LY6) \[\{x,y,\{u,v,w\}\}=\{\{x,y,u\},v,w\}+\{u,\{x,y,v\},w\}+\{u,v,\{x,y,w\}\}.\]
Here, \(\Sigma_{\circlearrowright(x,y,z)}\) denotes the sum over cyclic permutations of \(x\), \(y\), and \(z\).
_2.7 Remark_.: Notice that if the trilinear product in a Lie-Yamaguti algebra is trivial, that is, if \(\{\,\,\ \}=0\), then (LY2), (LY4), (LY5), and (LY6) are trivial, and (LY1) and (LY3) define a Lie algebra structure on \(\mathfrak{g}\). On the other hand, if the binary product is trivial, that is, \([\,\ ]=0\), then (LY1), (LY4), and (LY5) are trivial, and (LY2), (LY3), together with (LY6) define a Lie triple system on \(\mathfrak{g}\).
The following result is well-known.
_2.8 Lemma_.: _Let \((\mathfrak{g},[\,\ ])\) be a Lie algebra over \(\mathbb{K}\). Then, \(\mathfrak{g}\) has a Lie-Yamaguti algebra structure induced by the given Lie bracket, the trilinear operation being:_
\[\{a,b,c\}=[[a,b],c]\]
_for all \(a,b,c\in\mathfrak{g}\)._
_2.9 Example_.: Let \((\mathfrak{g},\cdot)\) be a Leibniz algebra. Define a bilinear operation and a trilinear operation as follows:
\[[\,\ ]:\mathfrak{g}\times\mathfrak{g}\rightarrow\mathfrak{g},\ [a,b]:=a \cdot b-b\cdot a,\ a,b\in\mathfrak{g};\]
\[\{\,\,\ \}:\mathfrak{g}\times\mathfrak{g}\times\mathfrak{g}\rightarrow \mathfrak{g},\ \{a,b,c\}:=-(a\cdot b)\cdot c,\ a,b,c\in\mathfrak{g}.\]
Then, \((\mathfrak{g},[\,\ ],\{\,\,\ \ \})\) is a Lie-Yamaguti algebra.
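For instance, (LY2) can be verified directly from the Leibniz identity: writing the identity once for the pair \((a,b)\) and once for \((b,a)\) and adding, one obtains \((a\cdot b+b\cdot a)\cdot c=0\), so that
\[\{a,b,c\}=-(a\cdot b)\cdot c=(b\cdot a)\cdot c=-\{b,a,c\}.\]
The remaining axioms follow from similar manipulations of the Leibniz identity [15].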
Let \((\mathfrak{g},\langle\,\ \rangle)\) be a Lie algebra. Recall that a reductive decomposition of \(\mathfrak{g}\) is a vector space direct sum \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\) satisfying \(\langle\mathfrak{h},\mathfrak{h}\rangle\subseteq\mathfrak{h}\) and \(\langle\mathfrak{h},\mathfrak{m}\rangle\subseteq\mathfrak{m}.\) In this case, we call \((\mathfrak{h},\mathfrak{m})\) a _reductive pair_.
_2.10 Example_.: Let \((\mathfrak{g},\langle\,\ \rangle)\) be a Lie algebra with a reductive decomposition \(\mathfrak{g}=\mathfrak{h}\oplus\mathfrak{m}\). Then, there exist a natural binary and a ternary product on \(\mathfrak{m}\) defined by
\[[a,b]:=\pi_{\mathfrak{m}}(\langle a,b\rangle),\ \{a,b,c\}:=\langle\pi_{ \mathfrak{h}}(\langle a,b\rangle),c\rangle,\]
where \(\pi_{\mathfrak{m}}\) and \(\pi_{\mathfrak{h}}\) are the projections on \(\mathfrak{m}\) and \(\mathfrak{h}\), respectively. These products endow \(\mathfrak{m}\) with the structure of a Lie-Yamaguti algebra [2].
_2.11 Example_.: Consider the vector space \(\mathfrak{g}\) over \(\mathbb{K}\) generated by \(\{e_{1},e_{2},e_{3}\}.\) Define a bilinear operation \([\,\ ]\) and a trilinear operation \(\{\,\,\ \}\) on \(\mathfrak{g}\) as follows.
\[[e_{1},e_{2}]=e_{3};\ \{e_{1},e_{2},e_{1}\}=e_{3}.\]
All other brackets of the basis elements are either determined by the definition of Lie-Yamaguti algebra or else are zero. Then, \(\mathfrak{g}\) with the above operations is a Lie-Yamaguti algebra.
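As a sanity check, the identities (LY1)-(LY6) for this example can also be verified numerically from the structure constants. The short script below does so on basis elements, which suffices by multilinearity; it is an illustration only and not part of the formal development.

```python
import numpy as np
from itertools import product

dim = 3
b = np.zeros((dim, dim, dim))       # binary structure constants: [e_i, e_j] = sum_k b[i, j, k] e_k
t = np.zeros((dim, dim, dim, dim))  # ternary structure constants: {e_i, e_j, e_k} = sum_l t[i, j, k, l] e_l
b[0, 1, 2], b[1, 0, 2] = 1.0, -1.0          # [e1, e2] = e3 = -[e2, e1]
t[0, 1, 0, 2], t[1, 0, 0, 2] = 1.0, -1.0    # {e1, e2, e1} = e3 = -{e2, e1, e1}

def br(x, y):
    return np.einsum('i,j,ijk->k', x, y, b)

def tr(x, y, z):
    return np.einsum('i,j,k,ijkl->l', x, y, z, t)

e = np.eye(dim)
ok = True
for x, y, z, u, v, w in product(e, repeat=6):
    cyc = [(x, y, z), (y, z, x), (z, x, y)]
    ok &= np.allclose(br(x, y), -br(y, x))                                              # (LY1)
    ok &= np.allclose(tr(x, y, z), -tr(y, x, z))                                        # (LY2)
    ok &= np.allclose(sum(br(br(p, q), r) + tr(p, q, r) for p, q, r in cyc), 0)         # (LY3)
    ok &= np.allclose(sum(tr(br(p, q), r, u) for p, q, r in cyc), 0)                    # (LY4)
    ok &= np.allclose(tr(x, y, br(u, v)), br(tr(x, y, u), v) + br(u, tr(x, y, v)))      # (LY5)
    ok &= np.allclose(tr(x, y, tr(u, v, w)),
                      tr(tr(x, y, u), v, w) + tr(u, tr(x, y, v), w) + tr(u, v, tr(x, y, w)))  # (LY6)
print("All Lie-Yamaguti identities hold on basis elements:", bool(ok))
```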
See [1] for classification of some low dimensional Lie-Yamaguti algebras.
**2.12 Definition**.: Let \((\mathfrak{g},[\,\ ],\{\,\,\ \})\), \((\mathfrak{g}^{\prime},[\,\ ]^{\prime},\{\,\,\ \}^{\prime})\) be two Lie-Yamaguti algebras. A homomorphism
\[\phi:(\mathfrak{g},[\,\ ],\{\,\,\ \})\rightarrow(\mathfrak{g}^{\prime},[\, \ ]^{\prime},\{\,\,\ \ \}^{\prime})\]
of Lie-Yamaguti algebras is a \(\mathbb{K}\)-linear map \(\phi:\mathfrak{g}\rightarrow\mathfrak{g}^{\prime}\) satisfying
\[\phi([x,y])=[\phi(x),\phi(y)]^{\prime},\ \phi(\{x,y,z\})=\{\phi(x),\phi(y), \phi(z)\}^{\prime}\]
for all \(x,y,z\in\mathfrak{g}.\)
A homomorphism
\[\phi:(\mathfrak{g},[\,\ ],\{\,\,\ \})\rightarrow(\mathfrak{g}^{\prime},[ \,\ ]^{\prime},\{\,\,\ \ \}^{\prime})\]
of Lie-Yamaguti algebras is an isomorphism if there exists a homomorphism
\[\phi^{\prime}:(\mathfrak{g}^{\prime},[\,\ ]^{\prime},\{\,\,\ \ \}^{\prime}) \rightarrow(\mathfrak{g},[\,\ ],\{\,\,\ \ \})\]
such that \(\phi^{\prime}\circ\phi=id_{\mathfrak{g}}\) and \(\phi\circ\phi^{\prime}=id_{\mathfrak{g}^{\prime}}.\) The set of all self-isomorphisms of a Lie-Yamaguti algebra \((\mathfrak{g},[\,\ ],\{\,\,\ \})\) is obviously a group under composition of maps and is denoted by \(Aut_{LY}(\mathfrak{g}).\)
The notion of Lie algebra bundle was introduced in [5]. For smooth Lie algebra bundles we refer to [17]. Other notions of algebra bundles are available in the literature and appear in various contexts.
Let \(M\) be a smooth manifold (Hausdorff and second countable, hence, paracompact). Let \(C^{\infty}(M)\) be the algebra of smooth functions on \(M\). Let \(TM\) be the tangent bundle of \(M\). Recall that a vector field on \(M\) is a smooth section of the tangent bundle \(TM.\) Let us denote the space of vector fields on \(M\) by \(\chi(M).\) It is well-known that \(\chi(M)\) is a \(C^{\infty}(M)\)-module. Moreover, \(\chi(M)\) is a Lie algebra with the commutator bracket:
\[[\alpha,\beta]:=\alpha\beta-\beta\alpha\]
for \(\alpha,\beta\in\chi(M).\) Here, for \(\alpha,\beta\in\chi(M)\) and \(p\in M,\) the action of \(\alpha\beta(p)\) on a smooth function \(f\in C^{\infty}(M)\) is given by
\[\alpha\beta(p)(f)=\alpha_{p}(\beta f),\]
where \(\beta f\in C^{\infty}(M)\) is given by \(\beta f(m)=\beta_{m}(f),\ m\in M.\)
For a (smooth) vector bundle \(p:L\to M,\) often denoted by \(\xi=(L,p,M),\) we denote the space of smooth sections of \(L\) by \(\Gamma L.\) It is well-known that \(\Gamma L\) is a \(C^{\infty}(M)\)-module. For any \(m\in M,\) we denote the fibre of the vector bundle \(\xi\) over \(m\) by \(L_{m}\) or sometimes by \(\xi_{m}.\)
Henceforth, we will work in the smooth category and with \(\mathbb{K}=\mathbb{R}.\)
**2.13 Definition**.: Let \((L,p,M)\) be a vector bundle and let \([\,\ ]\) be a section of the bundle \(Alt^{2}(L)\) such that for each \(m\in M,\)
\[[\,\ ]_{m}:L_{m}\times L_{m}\to L_{m}\]
is a Lie algebra bracket on \(L_{m}.\) We call such a section a field of Lie algebra brackets in \(L.\)
**2.14 Definition**.: A Lie algebra bundle is a vector bundle \((L,p,M)\) together with a field of Lie algebra brackets
\[m\mapsto[\,\ ]_{m},\ m\in M.\]
Thus, for a Lie algebra bundle \((L,p,M)\), each fibre \(L_{m}\) is a Lie algebra which varies smoothly as \(m\in M\) varies over \(M.\) In other words, the assignment \(m\mapsto[\,\ ]_{m},\ m\in M\) is smooth.
**2.15 Definition**.: Let \(\mathfrak{g}\) be a given Lie algebra. A locally trivial Lie algebra bundle with fibre \(\mathfrak{g}\) is a vector bundle \((L,p,M)\) together with a field of Lie algebra brackets
\[m\mapsto[\,\ ]_{m},\ m\in M\]
such that \(M\) admits an open covering \(\{U_{i}\}\) equipped with local trivializations \(\{\psi_{i}:U_{i}\times\mathfrak{g}\to p^{-1}(U_{i})\}\) for which each \(\psi_{i,m},\ m\in U_{i}\) (\(\psi_{i}\) restricted to \(\{m\}\times\mathfrak{g}\)) is a Lie algebra isomorphism onto \(L_{m}\).
A homomorphism \(\phi:(L,p,M)\rightarrow(L^{\prime},p^{\prime},M^{\prime})\) of Lie algebra bundles is a vector bundle morphism \((\phi,\phi_{0}),\) where \(\phi_{0}:M\to M^{\prime}\) such that \(\phi|_{L_{m}}:L_{m}\to L^{\prime}_{\phi_{0}(m)},\ m\in M\) is a Lie algebra homomorphism.
## 3. Lie-Yamaguti Algebra Bundle
In this section we introduce the notion of Lie-Yamaguti algebra bundle and related results. All vector bundles and vector bundle maps are assumed to be smooth and \(\mathbb{K}=\mathbb{R}.\)
**3.1 Definition**.: Let \(\xi=(L,p,M)\) be a (real) vector bundle. Let \(\langle\,\ \cdots,\ \rangle\) be a section of the bundle \(Hom(\xi^{\otimes k},\xi).\) We call such a section a \(k\)-field of (\(\mathbb{K}\)-multilinear) brackets in \(\xi.\) Thus, a \(k\)-field of brackets in \(\xi\) is a smooth assignment
\[m\mapsto(\langle\,\cdots,\ \rangle_{m}:\xi_{m}\times\cdots\times\xi_{m} \rightarrow\xi_{m})\]
of multilinear operation on \(\xi_{m},\)\(m\in M.\)
**3.2 Definition**.: A Lie-Yamaguti algebra bundle is a vector bundle \(\xi=(L,p,M)\) together with a \(2\)-field of brackets
\[m\mapsto[\,\ ]_{m},\ m\in M\]
and a \(3\)-field of brackets
\[m\mapsto\{\,\,\ \}_{m},\ m\in M\]
which make each fibre \(\xi_{m},\)\(m\in M\) a Lie-Yamaguti algebra.
**3.3 Definition**.: Let \((\mathfrak{g},[\,\ ]_{\mathfrak{g}},\{\,\,\ \}_{\mathfrak{g}})\) be a given Lie-Yamaguti algebra. A locally trivial Lie-Yamaguti algebra bundle is a vector bundle \(\xi=(L,p,M)\) together with a \(2\)-field of brackets
\[m\mapsto[\,\ ]_{m},\ m\in M\]
and a \(3\)-field of brackets
\[m\mapsto\{\,\,\ \}_{m},\ m\in M\]
such that \(M\) admits an open covering \(\{U_{i}\}\) equipped with local trivializations \(\{\psi_{i}:U_{i}\times\mathfrak{g}\to p^{-1}(U_{i})\}\) for which each \(\psi_{i,m},\ m\in U_{i}\) (\(\psi_{i}\) restricted to \(\{m\}\times\mathfrak{g}\)) is a Lie-Yamaguti algebra isomorphism onto \(\xi_{m}\).
_3.4 Remark_.: Thus, for a Lie-Yamaguti algebra bundle as defined above, each fibre \(\xi_{m}=p^{-1}(m),\ m\in M,\) together with the binary operation \([\,\ ]_{m}\) and the ternary operation \(\{\,\,\ \}_{m}\) is a Lie-Yamaguti algebra isomorphic to \(\mathfrak{g}\), and the assignments
\[m\mapsto[\,\ ]_{m},\ m\mapsto\{\,\,\ \}_{m}\]
vary smoothly over \(M.\)
In other words, a Lie-Yamaguti algebra bundle over \(M\) is a vector bundle over \(M\) such that each fibre of the bundle has a Lie-Yamaguti algebra structure isomorphic to \(\mathfrak{g}\).
An obvious example of a Lie-Yamaguti algebra bundle is the trivial bundle over a smooth manifold \(M\) with fibres a Lie-Yamaguti algebra.
_3.5 Example_.: Let \((\mathfrak{g},[\,\ ],\{\,\,\ \})\) be a given Lie-Yamaguti algebra and \(M\) be any smooth manifold. Then the trivial vector bundle \(\xi=M\times\mathfrak{g}\) with the projection onto the first factor \(\pi_{1}:M\times\mathfrak{g}\to M\) is a Lie-Yamaguti algebra bundle, called the product Lie-Yamaguti algebra bundle.
We have the following example, which follows from Lemma 2.8.
_3.6 Example_.: Any Lie algebra bundle \((L,p,M,[\,\ ])\) is a Lie-Yamaguti algebra bundle, where the 3-field of brackets on \(M\) induced by the 2-field of Lie brackets
\[m\mapsto[\,\ ]_{m},\ m\in M\]
is defined by
\[\{a,b,c\}_{m}:=[[a,b]_{m},c]_{m},\ m\in M,\]
for \(a,b,c\in L_{m},\ m\in M.\)
_3.7 Definition_.: Let \(\xi=(L,p,M)\) be a Lie algebra bundle with the field of Lie algebra bracket \(m\mapsto[\,\ ]_{m},\ m\in M.\) A reductive decomposition of \(\xi\) is a pair \((L^{1},L^{2})\) of subbundles of \(L\) such that \(L\) is a Whitney sum \(L=L^{1}\oplus L^{2}\) satisfying \([L^{1}_{m},L^{1}_{m}]_{m}\subseteq L^{1}_{m}\) and \([L^{1}_{m},L^{2}_{m}]_{m}\subseteq L^{2}_{m}.\) In this case, we call \((L^{1},L^{2})\) a reductive pair.
For a reductive pair as above, let \(\pi^{i}:L\to L^{i},\ i=1,2\) denote the vector bundle projection maps.
_3.8 Example_.: Let \((L^{1},L^{2})\) be a reductive decomposition of a Lie algebra bundle \(\xi=(L,p,M)\) as described in the above definition. Then, define a 2-field of brackets and a 3-field of brackets
\[m\mapsto\langle\,\ \rangle_{m},\ m\mapsto\{\,\,\ \}_{m},\ m\in M\]
on the vector bundle \((L^{2},p|_{L^{2}},M)\) as follows. Let \(a,b,c\in L^{2}_{m},\ m\in M.\)
\[\langle a,b\rangle_{m}:=\pi^{2}([a,b]_{m}),\ \{a,b,c\}_{m}:=[\pi^{1}([a,b]_{m}),c]_{m}.\]
Then, as in the case of Example 2.10, the vector bundle \((L^{2},p|_{L^{2}},M)\) is a Lie-Yamaguti algebra bundle equipped with the \(2\)-field of brackets and the \(3\)-field of brackets as defined above.
Next, we discuss an interesting example of a Lie-Yamaguti algebra bundle that arose from the work of M. Kikkawa [12, 13, 14] to characterize some local geometric properties. We recall some definitions which are necessary to describe our next example.
Recall that a linear connection on a smooth manifold \(M\) is an \(\mathbb{R}\)-bilinear map
\[\nabla:\chi(M)\times\chi(M)\to\chi(M)\]
written \(\nabla_{X}Y\) for \(\nabla(X,Y)\), satisfying two properties stated below: For all \(X,Y\in\chi(M)\)
* \(\nabla_{X}Y\) is a \(C^{\infty}(M)\)-linear in \(X\).
* (Leibniz rule) \(\nabla_{X}Y\) satisfies the Leibniz rule in \(Y\): For all \(f\in C^{\infty}(M)\), \[\nabla_{X}(fY)=(Xf)Y+f(\nabla_{X}Y).\]
Now, let \(M\) be a smooth manifold along with linear connection \(\nabla\).
Recall that
* a torsion tensor of the connection \(\nabla\) is a \(C^{\infty}(M)\)-bilinear map \(S:\chi(M)\times\chi(M)\to\chi(M)\) defined as \[S(X,Y):=\nabla_{X}Y-\nabla_{Y}X-[X,Y],\ X,Y\in\chi(M),\] where \([X,Y]\) is the Lie bracket of \(\chi(M)\) and
* a curvature tensor of the connection \(\nabla\) is a \(C^{\infty}(M)\)-trilinear map \(R:\chi(M)\times\chi(M)\times\chi(M)\to\chi(M)\) defined as \[R(X,Y)Z:=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z,\ X,Y,Z \in\chi(M).\]
Recall the following definitions [13].
**3.9 Definition**.: Let \(M\) be a smooth manifold with a connection \(\nabla\). Let \(S\) and \(R\) denote the torsion and curvature tensors of \(\nabla\), respectively. Then, \((M,\nabla)\) is said to be a locally reductive space if \(\nabla S=0\)\(\&\)\(\nabla R=0\); that is,
* for all \(X,Y,Z\in\chi(M)\); \(\nabla_{X}S(Y,Z)=0\);
* for all \(X,U,V,W\in\chi(M)\); \(\nabla_{X}R(U,V)W=0\).
**3.10 Definition**.: Let \(G\) be a connected Lie group and \(H\) be a closed subgroup of \(G.\) Then the homogeneous space \(M=G/H\) is said to be reductive if and only if \(G\) acts effectively on \(M\) and the Lie algebra \(\mathfrak{g}\) of \(G\) admits a direct sum decomposition as
\[\mathfrak{g}=\mathfrak{m}\oplus\mathfrak{h},\]
where \(\mathfrak{h}\) is the Lie algebra of \(H\) and \(\mathfrak{m}\) is a subspace of \(\mathfrak{g}\).
Next, we recall the notion of homogeneous Lie loops.
**3.11 Definition**.: Let \(G=(G,\mu)\) be a binary system with the binary operation
\[\mu:G\times G\to G\]
\(G\) is a loop if there is a (two-sided) identity \(e\in G\), \(xe=ex=x\) (\(x\in G\)), and the left and right translations of \(G\) by any element \(x\in G\), denoted by
\[L_{x},R_{x}:G\to G;\ L_{x}(y)=xy,\ R_{x}(y)=yx\ (y\in G),\]
are permutations of \(G\).
**3.12 Definition**.: A loop \(G\) is said to have the left inverse property, if for any \(x\in G\) there exists an element \(x^{-1}\in G\) such that
\[x^{-1}(xy)=y\ (y\in G)\]
**3.13 Definition**.: Let \(L_{0}(G)\) be the group generated by all left inner mappings, i.e.,
\[L_{x,y}=L_{xy}^{-1}\circ L_{x}\circ L_{y}\ (x,y\in G)\]
A loop \(G\) is called a left A-loop, if the left inner mapping group \(L_{0}(G)\) is a subgroup of the automorphism group \(AUT(G)\) of \(G\).
**3.14 Definition**.: A Loop \((G,\mu)\) is said to be a homogeneous loop, if it is a left A-loop with the left inverse property.
**3.15 Definition**.: A homogeneous Lie loop \(G\) is a homogeneous loop, and is also a smooth manifold such that the loop multiplication \(\mu:G\times G\to G\) is smooth.
The following construction produces examples of locally reductive spaces.
* Let \(G\) be a connected homogeneous Lie loop equipped with the canonical connection.
* Define \(K(G):=\) the closure of \(L_{0}(G)\) in the smooth automorphism group \(Aut(G)\) of \(G\), and consider the semi-direct product \(A(G)=G\times K(G)\). Since \(G\) is connected, \(L_{0}(G)\) is connected, and consequently \(K(G)\) is also connected. \(A(G)\) is also a connected Lie group with the product manifold structure. Further \(A(G)\) contains \(K(G)\) as a closed subgroup.
* The homogeneous space \(A(G)/K(G)\) is reductive.
Consider the reductive homogeneous space \(A(G)/K(G)\) equipped with the canonical connection. Then, we have the following results from [13].
**3.16 Theorem**.: _For a connected homogeneous Lie loop \(G\), the map_
\[i:G\to A(G)/K(G),\ i(x)=x\times K(G)\]
_is a connection preserving loop isomorphism onto \(A(G)/K(G)\) with multiplication_
\[(x\times K(G)).(y\times K(G))=(xy)\times K(G)\ (x,y\in G)\]
_with respect to the canonical connections on \(G\) and \(A(G)/K(G)\)._
As a result, any connected homogeneous Lie loop with canonical connection can be identified with a reductive homogeneous space with canonical connection. The following result of M. Kikkawa tells us that any reductive homogeneous space with canonical connection is locally reductive.
**3.17 Theorem**.: _Let \(S\) and \(R\) denote the torsion and curvature tensors of the canonical connection \(\nabla\) of a reductive homogeneous space \(M=G/H\), respectively. Then \(\nabla\) is locally reductive, i.e., \(\nabla S=0\) and \(\nabla R=0\)_
**3.18 Corollary**.: _Any connected homogeneous Lie loop with the canonical connection is a locally reductive space._
Below is a list of some examples of homogeneous Lie loops.
**3.19 Example**.: Any Lie group is a homogeneous Lie loop.
**3.20 Example**.: The set of all positive definite real symmetric matrices, denoted by \(P_{n}\), is a homogeneous Lie loop. Loop multiplication \(\mu\) being
\[\mu(X,Y)=X^{\frac{1}{2}}YX^{\frac{1}{2}},\ X,Y\in P_{n}.\]
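The loop structure in this example is easy to experiment with numerically. The snippet below, an illustration only, checks that \(\mu\) preserves \(P_{n}\), that the identity matrix is a two-sided identity, and that the left inverse property holds with \(X^{-1}\) as the inverse of \(X\).

```python
import numpy as np

rng = np.random.default_rng(0)

def sqrtm_spd(X):
    """Symmetric square root of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.sqrt(w)) @ V.T

def mu(X, Y):
    S = sqrtm_spd(X)
    return S @ Y @ S

def random_spd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T + n * np.eye(n)

n = 4
X, Y = random_spd(n), random_spd(n)
I = np.eye(n)

assert np.allclose(mu(I, X), X) and np.allclose(mu(X, I), X)   # I is a two-sided identity
assert np.all(np.linalg.eigvalsh(mu(X, Y)) > 0)                # mu(X, Y) lies again in P_n
assert np.allclose(mu(np.linalg.inv(X), mu(X, Y)), Y)          # left inverse property with X^{-1}
print("Loop multiplication on P_n illustrated numerically for n =", n)
```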
We are now in a position to describe a Lie-Yamaguti algebra bundle which arose from the work of M. Kikkawa.
Since any connected homogeneous Lie loop with canonical connection is a locally reductive space, we obtain the following example (cf. Theorem 7.2 [13]).
**3.21 Example**.: Let \(M\) be a connected homogeneous Lie loop with the canonical connection. Let the associated torsion and curvature tensors be \(S\) and \(R\), respectively. Let \(\xi=(TM,p,M)\) be the tangent bundle of \(M.\) Define a \(2\)-field of brackets and a \(3\)-field of brackets on \(M\) as follows:
\[m\mapsto[a,b]_{m}=S_{m}(a,b);\ m\mapsto\{a,b,c\}_{m}=R_{m}(a,b)c\ (a,b,c\in T_{m}M).\]
Then \(\xi\) is a Lie-Yamaguti algebra bundle.
Next, we discuss a general existence theorem for locally trivial Lie-Yamaguti algebra bundle.
**3.22 Definition**.: Let \((\mathfrak{g},[\,\ ],\{\,\,\ \})\) be a Lie-Yamaguti algebra and \(G\) be a Lie group. We say that \(G\) acts on \(\mathfrak{g}\) if there exists a smooth homomorphism
\[\phi:G\to Aut_{LY}(\mathfrak{g}),\ g\mapsto\phi_{g}.\]
Given such an action \(\phi\), we simply write \(ga=:\phi_{g}(a),\ g\in G,\ a\in\mathfrak{g}.\)
Note that any closed subgroup of \(Aut_{LY}(\mathfrak{g})\) acts smoothly on \(\mathfrak{g}\) and is a closed subgroup of the general linear group \(GL_{n}(\mathbb{R})\).
**3.23 Definition**.: Let \(G\) be a Lie group and \(M\) a smooth manifold. A family of smooth transition maps in \(M\) with values in \(G\) is an atlas \(\{U_{i}:i\in I\}\) of \(M\) together with a collection of smooth maps
\[g_{ij}:U_{i}\cap U_{j}\to G,\ i,j\in I,\]
where \(I\) is any index set, which we may assume to be countable, satisfying the following condition. For \(i,j,k\in I,\) with \(U_{i}\cap U_{j}\cap U_{k}\neq\emptyset,\)
\[g_{ij}(m)\cdot g_{jk}(m)=g_{ik}(m),\ m\in U_{i}\cap U_{j}\cap U_{k}.\]
It follows from the above condition, by taking \(i=j=k\), that for any \(i\in I\) and \(m\in U_{i},\)\(g_{ii}(m)\) is the identity of \(G.\) The above condition is known as the cocycle condition.
We have the following existence result for locally trivial Lie-Yamaguti algebra bundles, whose proof parallels the clutching construction in the theory of fibre bundles [19]. We sketch the proof.
**3.24 Theorem**.: _Let \((\mathfrak{g},[\ \,\ ],\{\,\,\ \})\) be a Lie-Yamaguti algebra equipped with a smooth action of a Lie group \(G.\) Let \(M\) be a smooth manifold with a given countable atlas \(\{U_{i}:\ i\in I\}\) together with a family of smooth transition maps_
\[g_{ij}:U_{i}\cap U_{j}\to G,\ i,j\in I,\]
_in \(M\) with values in \(G.\) Then, there exists a locally trivial Lie-Yamaguti algebra bundle over \(M,\) with \(\mathfrak{g}\) as the fibre, \(G\) as the structure group of the bundle and with \(\{g_{ij}\}\) as the associated transition maps._
Proof.: Consider the following space where \(I\) has the discrete topology
\[\tilde{L}:=\bigcup_{i\in I}\{(u,a,i)|u\in U_{i},\ a\in\mathfrak{g},\ i\in I\}.\]
Define an equivalence relation on \(\tilde{L}\) by \((u,a,i)\sim(v,b,j)\) if and only if \(u=v,\ b=g_{ij}(u)a.\) Let \(L=\tilde{L}/\sim.\) Let us denote the equivalence class of \((u,a,i)\) by \([u,a,i].\) Let \(q:\tilde{L}\to L,(u,a,i)\mapsto[u,a,i]\) be the quotient map and \(p:L\to M,\ [u,a,i]\mapsto u\) be the natural projection map.
If \(q_{i}=q|_{(U_{i}\times\mathfrak{g}\times\{i\})},\) then it is readily seen that \(q_{i}\) is injective, \((q_{i}(U_{i}\times\mathfrak{g}\times\{i\}),q_{i}^{-1})\) is a smooth chart on \(L\) and \(p:L\to M\) is a smooth vector bundle.
We now show that \(\xi=(L,p,M)\) is a Lie-Yamaguti algebra bundle. Let \(m\in M\) and \(\xi_{m}\) be the fibre over \(m.\) Define a \(2\)-field of brackets \(m\mapsto[\,\ ]_{m}\) and a \(3\)-field of brackets \(m\mapsto\{\,\,\ \}_{m}\) as follows. Note that for \(i\in I,\) the map
\[\{\psi_{i}:U_{i}\times\mathfrak{g}\to p^{-1}(U_{i})\}\]
defined by
\[\psi_{i}(u,a)=q(u,a,i),\ u\in U_{i},\ a\in\mathfrak{g}\]
gives the local trivialization of the vector bundle \(\xi.\) Let \(\psi_{i,m},\ m\in U_{i}\subset M\) denotes the restriction of \(\psi_{i}\) to \(\{m\}\times\mathfrak{g}.\)
Let \(a,b,c\in\xi_{m},\ m\in M.\) Choose \(i\in I\) such that \(m\in U_{i}.\) Define
\[[a,b]_{m}:=\psi_{i,m}([\psi_{i,m}^{-1}(a),\psi_{i,m}^{-1}(b)])\]
and
\[\{a,b,c\}_{m}:=\psi_{i,m}(\{\psi_{i,m}^{-1}(a),\psi_{i,m}^{-1}(b),\psi_{i,m}^{ -1}(c)\}).\]
Then, it is routine to verify that \(\xi\) is a locally trivial Lie-Yamaguti algebra bundle with fibre \(\mathfrak{g}.\)
**3.25 Remark**.: The above theorem provides a general method of constructing a locally trivial Lie-Yamaguti algebra bundle from any Lie group of symmetry of a given Lie-Yamaguti
algebra on a manifold, equipped with a family of smooth transition maps taking values in the group of symmetry. In particular, we may apply the above method for any Lie group of symmetry of the Lie-Yamaguti algebras discussed in the previous section to construct examples of Lie-Yamaguti algebra bundles.
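To make the construction of Theorem 3.24 concrete, the following illustrative script takes the fibre to be \(\mathbb{R}^{3}\) with the cross product, viewed as a Lie-Yamaguti algebra via Lemma 2.8, takes \(G=SO(3)\), and checks numerically that rotation-valued transition maps act by Lie-Yamaguti algebra automorphisms on the fibres. The two-chart cover of the circle and the particular transition map are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def binary(a, b):
    return np.cross(a, b)

def ternary(a, b, c):
    return np.cross(np.cross(a, b), c)   # {a, b, c} = [[a, b], c] as in Lemma 2.8

def g12(theta):
    """Transition map on a chart overlap: rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Each transition map acts by Lie-Yamaguti algebra automorphisms of the fibre,
# so the fibrewise brackets transported through the local trivializations agree.
for theta in np.linspace(0.0, 2.0 * np.pi, 7):
    R = g12(theta)
    a, b, c = rng.standard_normal((3, 3))
    assert np.allclose(R @ binary(a, b), binary(R @ a, R @ b))
    assert np.allclose(R @ ternary(a, b, c), ternary(R @ a, R @ b, R @ c))

# For a two-chart cover, the cocycle condition reduces to g_11 = g_22 = id and
# g_12(m) g_21(m) = g_11(m) on the overlap, i.e. g_21 = g_12^{-1}.
theta = 0.7
assert np.allclose(g12(theta) @ np.linalg.inv(g12(theta)), np.eye(3))
print("Rotation-valued transition maps preserve both brackets on each fibre.")
```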
**3.26 Definition**.: Let \(\xi=(L,p,M)\) and \(\xi^{\prime}=(L^{\prime},p^{\prime},M^{\prime})\) be two Lie-Yamaguti algebra bundles. A homomorphism \(\phi:(L,p,M)\to(L^{\prime},p^{\prime},M^{\prime})\) from \(\xi\) to \(\xi^{\prime}\) is a vector bundle morphism \((\tilde{\phi},\phi)\), where \(\tilde{\phi}:L\to L^{\prime}\) is the map between total spaces and \(\phi:M\to M^{\prime}\) is the map between base spaces, such that \(\tilde{\phi}|_{L_{m}}:L_{m}\to L^{\prime}_{\phi(m)}\) is a Lie-Yamaguti algebra homomorphism for each \(m\in M.\)
A homomorphism \(\phi:\xi\to\xi^{\prime}\) of two Lie-Yamaguti algebra bundles over the same base space \(M\) is a vector bundle morphism \(\phi:\xi\to\xi^{\prime}\) such that \(\phi|_{\xi_{m}}:\xi_{m}\to\xi^{\prime}_{m}\) is a Lie-Yamaguti algebra homomorphism for all \(m\in M.\) Moreover, if \(\phi|_{\xi_{m}}\) is a linear bijection then \(\xi=(L,p,M)\) is said to be isomorphic to \(\xi^{\prime}=(L^{\prime},p^{\prime},M).\)
**3.27 Definition**.: A Lie-Yamaguti algebra bundle \(\xi\) is said to be trivial if it is isomorphic to a product Lie-Yamaguti algebra bundle.
## 4. Representation of Lie-Yamaguti Algebra Bundles
The aim of this section is to introduce the notion of representation of Lie-Yamaguti algebra bundles.
Our definition of representation of a Lie-Yamaguti algebra bundle is based on the definition of representation of a Lie-Yamaguti algebra [20].
**4.1 Definition**.: Let \(\xi=(L,p,M)\) be a Lie-Yamaguti algebra bundle and \(\eta=(E,q,M)\) be a vector bundle. For any point \(m\in M,\) let \(\eta_{m}\) denote the fibre \(\eta_{m}=q^{-1}(m)\) of the bundle \(\eta\) over \(m.\)
A representation of the Lie-Yamaguti algebra bundle \(\xi\) on the vector bundle \(\eta\) consists of vector bundle morphisms
\[\rho:\xi\to\operatorname{End}(\eta),\ D,\ \theta:\xi\otimes\xi\to \operatorname{End}(\eta)\]
such that these maps restricted to each fibre satisfy the conditions (RLYB1) - (RLYB6) as described below, where the bilinear maps
\[D|_{\xi_{m}},\ \theta|_{\xi_{m}}:\xi_{m}\times\xi_{m}\to\operatorname{End}( \eta_{m}),\]
obtained by restricting \(D,\ \theta\) to a fibre \(\xi_{m}\) are denoted by \(D_{m}\) and \(\theta_{m},\) respectively and similarly, \(\rho_{m}\) is the linear map
\[\rho|_{\xi_{m}}:\xi_{m}\to\operatorname{End}(\eta_{m}).\]
For any \(m\in M\) and \(a,b,c,d\in\xi_{m}\),
(RLYB1) \[D_{m}(a,b)+\theta_{m}(a,b)-\theta_{m}(b,a)=[\rho_{m}(a),\rho_{m}(b) ]_{m}-\rho_{m}([a,b]);\] (RLYB2) \[\theta_{m}(a,[b,c]_{m})-\rho_{m}(b)\theta_{m}(a,c)+\rho_{m}(c) \theta_{m}(a,b)=0;\] (RLYB3) \[\theta_{m}([a,b]_{m},c)-\theta_{m}(a,c)\rho_{m}(b)+\theta_{m}(b,c )\rho_{m}(a)=0;\] (RLYB4) \[\theta_{m}(c,d)\theta_{m}(a,b)-\theta_{m}(b,d)\theta_{m}(a,c)- \theta_{m}(a,\{b,c,d\}_{m})+D_{m}(b,c)\theta_{m}(a,d)=0;\] (RLYB5) \[[D_{m}(a,b),\rho_{m}(c)]_{m}=\rho_{m}(\{a,b,c\}_{m});\] (RLYB6) \[[D_{m}(a,b),\theta_{m}(c,d)]_{m}=\theta_{m}(\{a,b,c\}_{m},d)+ \theta_{m}(c,\{a,b,d\}_{m}).\]
We shall denote a representation of a Lie-Yamaguti algebra bundle \(\xi\) on a vector bundle \(\eta\) as described above by \((\eta;\ \rho,\ D,\ \theta).\) A representation \((\eta;\ \rho,\ D,\ \theta)\) of a Lie-Yamaguti algebra bundle \(\xi\) is also called a \(\xi\)-module.
_4.2 Remark_.: Like a representation of a Lie-Yamaguti algebra [20], given a representation \((\eta;\ \rho,\ D,\ \theta)\) of a Lie-Yamaguti algebra bundle \(\xi\), we have for every \(m\in M\)
(RLYB7) \[D_{m}([a,b]_{m},c)+D_{m}([b,c]_{m},a)+D_{m}([c,a]_{m},b)=0,\]
for any \(a,\ b,\ c\in\xi_{m}\).
_4.3 Example_.: Given a Lie-Yamaguti algebra bundle \(\xi\) over \(M\), we may consider \(\xi\) as a \(\xi\)-module which gives us the adjoint representation of \(\xi\) on itself. Explicitly, for each \(m\in M\), \(\rho_{m},\ D_{m},\ \theta_{m}\) are given by
\[\rho_{m}(a):b\mapsto[a,b]_{m};\ D_{m}(a,b):c\mapsto\{a,b,c\}_{m};\ \theta_{m}(a,b):c\mapsto\{c,a,b\}_{m},\]
for any \(a,\ b,\ c\in\xi_{m}\).
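For instance, for the adjoint representation, (RLYB5) is an immediate consequence of (LY5): for \(a,b,c,v\in\xi_{m}\),
\[[D_{m}(a,b),\rho_{m}(c)](v)=\{a,b,[c,v]_{m}\}_{m}-[c,\{a,b,v\}_{m}]_{m}=[\{a,b,c\}_{m},v]_{m}=\rho_{m}(\{a,b,c\}_{m})(v),\]
and the remaining conditions follow similarly from (LY2)-(LY6).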
_4.4 Remark_.: Observe that for a \(0\)-dimensional manifold \(M=\{pt\}\), a Lie-Yamaguti algebra bundle \(\xi\) over \(M\) is simply a Lie-Yamaguti algebra and a representation \(\eta\) of \(\xi\) in this case, reduces to a representation of the Lie-Yamaguti algebra \(\xi.\) More generally, given any representation \((\eta;\rho,D,\theta)\) of a Lie-Yamaguti algebra bundle \(\xi=(L,p,M)\) on the vector bundle \(\eta=(E,q,M)\) over a smooth manifold \(M\), \((\eta_{m};\rho_{m},D_{m},\theta_{m})\) may be viewed as a representation of the Lie-Yamaguti algebra \(\xi_{m}\) for any \(m\in M\).
## 5. Cohomology of Lie-Yamaguti Algebra Bundle
In this section we introduce the cohomology of a Lie-Yamaguti algebra bundle with coefficients in a representation. The definition is motivated by the cohomology of a Lie-Yamaguti algebra as introduced in [21]. We use Remark 4.4 to introduce our definition.
_5.1 Definition_.: Let \(\xi=(L,p,M)\) be a Lie-Yamaguti algebra bundle and \((\eta;\rho,D,\theta)\) be a \(\xi\)-module. Let us denote the \(2\)-field and the \(3\)-field of brackets which make the vector bundle \(\xi\) a Lie-Yamaguti algebra bundle by
\[m\mapsto[\,\ ]_{m},\ m\mapsto\{\,\,\ \}_{m},\ m\in M.\]
Let
\[C^{1}(\xi;\eta)=\text{Hom }(\xi;\eta).\]
Let \(C^{0}(\xi;\eta)\) be the subspace spanned by the diagonal elements \((f,f)\in C^{1}(\xi;\eta)\times C^{1}(\xi,\eta)\). For \(n\geq 2\), let \(C^{n}(\xi;\eta)\) be the space of all vector bundle maps \(f:\xi^{\otimes n}\to\eta\), that is, \(f\in\text{Hom }(\xi^{\otimes n};\eta)\) such that the resulting \(n\)-linear maps \(f_{m}=f|_{\xi_{m}^{\otimes n}}:\xi_{m}\times\cdots\times\xi_{m}\to\eta_{m}\) satisfy
\[f_{m}(x_{1},\ldots,x_{2i-1},x_{2i},\ldots,x_{n})=0,\]
if \(x_{2i-1}=x_{2i}\) for some \(i=1,\ldots,[n/2]\), where \(x_{1},\ldots,x_{n}\in\xi_{m}\).
For \(p\geq 1\), set
\[C^{(2p,2p+1)}(\xi;\eta):=C^{2p}(\xi;\eta)\times C^{2p+1}(\xi;\eta).\]
Any element \((f,g)\in C^{(2p,2p+1)}(\xi;\eta)\) will be referred to as a \((2p,2p+1)\)-cochain.
For \(p\geq 1\), we define a coboundary operator
\[\delta=(\delta_{I},\delta_{II}):C^{(2p,2p+1)}(\xi;\eta)\to C^{(2p+2,2p+3)}(\xi; \eta),\]
\[(f,g)\mapsto\delta(f,g)=(\delta_{I}f,\delta_{II}g)\]
by defining it fibre-wise using the formula introduced by K. Yamaguti [21]. In other words, for any \(m\in M\),
\[\delta(f,g)_{m}=((\delta_{I})_{m}f_{m},(\delta_{II})_{m}g_{m}).\]
Explicitly, for \(m\in M\) and \(x_{1},\ldots,x_{2p+2}\in\xi_{m}\),
\[(\delta_{I})_{m}f_{m}(x_{1},\ldots,x_{2p+2})\] \[= (-1)^{p}[\rho_{m}(x_{2p+1})g_{m}(x_{1},\ldots,x_{2p},x_{2p+2})- \rho_{m}(x_{2p+2})g_{m}(x_{1},\ldots,x_{2p},x_{2p+1})\] \[- g_{m}(x_{1},\ldots,x_{2p},[x_{2p+1},x_{2p+2}]_{m})]\] \[+ \sum_{k=1}^{p}(-1)^{k+1}D_{m}(x_{2k-1},x_{2k})f_{m}(x_{1},\ldots, \hat{x}_{2k-1},\hat{x}_{2k},\ldots,x_{2p+2})\] \[+ \sum_{k=1}^{p+1}\sum_{j=2k+1}^{2p+2}(-1)^{k}f_{m}(x_{1},\ldots, \hat{x}_{2k-1},\hat{x}_{2k},\ldots,\{x_{2k-1},x_{2k},x_{j}\}_{m},\ldots,x_{2p+ 2}).\]
Let \(x_{1},\ldots,x_{2p+3}\in\xi_{m}.\) Then,
\[(\delta_{II})_{m}g_{m}(x_{1},\ldots,x_{2p+3})\] \[= (-1)^{p}[\theta_{m}(x_{2p+2},x_{2p+3})g_{m}(x_{1},\ldots,x_{2p+1})\] \[- \theta_{m}(x_{2p+1},x_{2p+3})g_{m}(x_{1},\ldots,x_{2p},x_{2p+2})]\] \[+ \sum_{k=1}^{p+1}(-1)^{k+1}D_{m}(x_{2k-1},x_{2k})g_{m}(x_{1},\ldots,\hat{x}_{2k-1},\hat{x}_{2k},\ldots,x_{2p+3})\] \[+ \sum_{k=1}^{p+1}\sum_{j=2k+1}^{2p+3}(-1)^{k}g_{m}(x_{1},\ldots, \hat{x}_{2k-1},\hat{x}_{2k},\ldots,\{x_{2k-1},x_{2k},x_{j}\}_{m},\ldots,x_{2p+ 3}).\]
Now observe that for any \(m\in M,\) the coboundary operator \(\delta_{m}\) is precisely the coboundary operator for the Lie-Yamaguti algebra \(\xi_{m}\) with coefficient in \(\eta_{m}\) (cf. Remark 4.4) and since \(\delta_{m}\circ\delta_{m}=0\)[21] we obtain the following result.
**5.2 Lemma**.: _For \(p\geq 1,\) the coboundary operator_
\[\delta=(\delta_{I},\delta_{II}):C^{2p}(\xi;\eta)\times C^{2p+1}(\xi;\eta) \to C^{2p+2}(\xi;\eta)\times C^{2p+3}(\xi;\eta)\]
_satisfy \(\delta\circ\delta=0.\)_
**5.3 Definition**.: For the case \(p\geq 2,\) let \(Z^{(2p,2p+1)}(\xi;\eta)\) be the subspace of \(C^{(2p,2p+1)}(\xi;\eta)\) spanned by \((f,g)\) such that \(\delta(f,g)=0\) and \(B^{(2p,2p+1)}(\xi;\eta)\) be the subspace \(\delta(C^{(2p-2,2p-1)}(\xi;\eta)).\) Then, the \((2p,2p+1)\)-cohomology group of the Lie-Yamaguti algebra bundle \(\xi\) with coefficients in \(\eta\) is defined by
\[H^{(2p,2p+1)}(\xi;\eta):=\frac{Z^{(2p,2p+1)}(\xi;\eta)}{B^{(2p,2p+1)}(\xi;\eta)}.\]
We next consider the case \(p=1,\) and define the cohomology group \(H^{(2,3)}(\xi;\eta).\) Define a coboundary operator
\[\delta=(\delta_{I},\delta_{II}):C^{0}(\xi;\eta)\to C^{(2,3)}(\xi;\eta),\ (f,f)\mapsto(\delta_{I}f,\delta_{II}f),\]
where for \(x_{1},\ x_{2},\ x_{3}\in\xi_{m},\ m\in M,\)
\[(\delta_{I})_{m}f_{m}(x_{1},x_{2}) =\rho_{m}(x_{1})f_{m}(x_{2})-\rho_{m}(x_{2})f_{m}(x_{1})-f_{m}([ x_{1},x_{2}]_{m})\] \[(\delta_{II})_{m}f_{m}(x_{1},x_{2},x_{3}) =\theta_{m}(x_{2},x_{3})f_{m}(x_{1})-\theta_{m}(x_{1},x_{3})f_{m} (x_{2})\] \[+D_{m}(x_{1},x_{2})f_{m}(x_{3})-f_{m}(\{x_{1},x_{2},x_{3}\}_{m}).\]
Furthermore, we define another coboundary operator
\[\delta^{*}=(\delta_{I}^{*},\delta_{II}^{*}):C^{(2,3)}(\xi;\eta) \to C^{(3,4)}(\xi;\eta)\]
as follows.
Let \(m\in M\) and \(x_{1},\ x_{2},\ x_{3},\ x_{4}\in\xi_{m}.\) Then for \((f,g)\in C^{(2,3)}(\xi;\eta),\)
\[(\delta_{I}^{*})_{m}f_{m}(x_{1},x_{2},x_{3})\] \[= -\rho_{m}(x_{1})f_{m}(x_{2},x_{3})-\rho_{m}(x_{2})f_{m}(x_{3},x_{1})-\rho_{m}(x_{3})f_{m}(x_{1},x_{2})\] \[+ f_{m}([x_{1},x_{2}]_{m},x_{3})+f_{m}([x_{2},x_{3}]_{m},x_{1})+f_{m}([x_{3},x_{1}]_{m},x_{2})\] \[+ g_{m}(x_{1},x_{2},x_{3})+g_{m}(x_{2},x_{3},x_{1})+g_{m}(x_{3},x_{1},x_{2}),\]
and
\[(\delta_{II}^{*})_{m}g_{m}(x_{1},x_{2},x_{3},x_{4})\] \[=\theta_{m}(x_{1},x_{4})f_{m}(x_{2},x_{3})+\theta_{m}(x_{2},x_{4} )f_{m}(x_{3},x_{1})+\theta_{m}(x_{3},x_{4})f_{m}(x_{1},x_{2})\] \[+g_{m}([x_{1},x_{2}]_{m},x_{3},x_{4})+g_{m}([x_{2},x_{3}]_{m},x_{1 },x_{4})+g_{m}([x_{3},x_{1}]_{m},x_{2},x_{4}).\]
Following [21], we have for each \(f\in C^{1}(\xi;\eta)\)
\[\delta_{I}\delta_{I}f=\delta_{I}^{*}\delta_{I}f=0\text{ and }\delta_{II}\delta_{II}f= \delta_{II}^{*}\delta_{II}f=0.\]
In general, for \((f,g)\in C^{(2p,2p+1)}(\xi;\eta)\)
\[(\delta\circ\delta)(f,g)=(\delta_{I}\circ\delta_{I}(f),\delta_{II}\circ\delta_ {II}(g))=0.\]
We define
**5.4 Definition**.: \[H^{1}(\xi;\eta):=\{f\in C^{1}(\xi;\eta)|\delta_{I}f=0,\ \delta_{II}f=0\}.\]
For \(p=1\), we define the cohomology \(H^{(2,3)}(\xi;\eta)\) as follows.
**5.5 Definition**.: Let \(Z^{(2,3)}(\xi;\eta)\) be the subspace of \(C^{(2,3)}(\xi;\eta)\) spanned by \((f,g)\) such that \(\delta_{I}f=\delta_{I}^{*}f=0\), and \(\delta_{II}g=\delta_{II}^{*}g=0.\) Let
\[B^{(2,3)}(\xi;\eta)=\{\delta(f,f)|f\in C^{1}(\xi;\eta)\}.\]
Then, the \((2,3)\)-cohomology group of the Lie-Yamaguti algebra bundle \(\xi\) with coefficients in \(\eta\) is defined by
\[H^{(2,3)}(\xi;\eta)=\frac{Z^{(2,3)}(\xi;\eta)}{B^{(2,3)}(\xi;\eta)}.\]
## 6. Extensions of Lie-Yamaguti algebra bundles
Chevalley and Eilenberg [4] showed that extensions of algebras can be interpreted in terms of a certain Hochschild cohomology group. Later, D. K. Harrison [7] showed that a certain Harrison cohomology group of commutative algebras can be related to extensions of commutative algebras. In the same spirit, Yamaguti showed that the \((2,3)\)-cohomology of a Lie-Yamaguti algebra with coefficients in a representation may be interpreted in terms of isomorphism classes of extensions of the Lie-Yamaguti algebra. The aim of this section is to introduce the notion of extension of Lie-Yamaguti algebra bundles and to relate isomorphism classes of extensions to the \((2,3)\)-cohomology of such bundles as introduced in the previous section. We begin with some definitions.
**6.1 Definition**.: Let \(\xi=(L,p,M)\) be a Lie-Yamaguti algebra bundle. Let us denote the associated \(2\)-field and \(3\)-field of brackets by
\[m\rightarrow[\,\ ]_{m}\text{ and }m\rightarrow\{\,\,\ \}_{m},\ m\in M,\]
respectively. An ideal of \(\xi\) is a sub-bundle \(\eta\) of the vector bundle \(\xi\) such that for all \(m\in M\), \(v\in\eta_{m},\,a,\ b\in\xi_{m}\)
\[[v,a]_{m}\in\eta_{m}\text{ and }\{v,a,b\}_{m}\in\eta_{m},\ \{a,b,v\}_{m}\in\eta_{m}.\]
An ideal \(\eta\) of \(\xi\) is said to be **abelian** if for all \(m\in M\), \(u,v\in\eta_{m}\) and \(a\in\xi_{m}\),
\[[u,v]_{m}=0,\ \{u,v,a\}_{m}=\{u,a,v\}_{m}=\{a,u,v\}_{m}=0.\]
**6.2 Definition**.: Let \(\tilde{\xi}=(\tilde{L},\tilde{p},M)\) and \(\eta=(E,q,M)\) be Lie-Yamaguti algebra bundles. An extension of Lie-Yamaguti algebra bundle over \(M\) is a short exact sequence in the category of Lie-Yamaguti algebra bundles over \(M\) (that is, restricted to each fibre yields a short exact sequence of vector spaces where the maps involved are Lie-Yamaguti algebra homomorphisms)
\[0\xrightarrow{\ \
Let us denote the \(2\)-field and the \(3\)-field of brackets of \(\tilde{\xi}\) by \(m\mapsto[\,\ ]_{m}^{\sim}\) and \(m\mapsto\{\,\,\ \}_{m}^{\sim},\ m\in M.\) Let \(\sigma:\xi\to\tilde{\xi}\) be a smooth section of \(j\), that is, \(j\circ\sigma=id_{\xi}.\) Define vector bundle morphisms \(\rho:\xi\to\operatorname{End}(\eta)\) and \(D,\theta:\xi\otimes\xi\to\operatorname{End}(\eta)\) fibre-wise as follows. Let \(m\in M.\) For any \(a,b\in\xi_{m}\) and \(v\in\eta_{m}\),
\[\rho_{m}(a)(v):=[\sigma_{m}(a),v]_{m}^{\sim} \tag{1}\]
\[D_{m}(a,b)(v):=\{\sigma_{m}(a),\sigma_{m}(b),v\}_{m}^{\sim} \tag{2}\]
\[\theta_{m}(a,b)(v):=\{v,\sigma_{m}(a),\sigma_{m}(b)\}_{m}^{\sim}. \tag{3}\]
**Proposition**.: _The above data yield a representation \((\eta;\ \rho,\ D,\ \theta)\) of \(\xi\). Furthermore,_
1. _The definition of_ \(\rho,D\)_, and_ \(\theta\) _does not depend on the choice of the section_ \(\sigma\)_, that is, given any two sections of_ \(j\)_, say_ \(\sigma^{1}\)_, and_ \(\sigma^{2}\)_, we have_ \[[\sigma_{m}^{1}(a),v]^{\sim}=[\sigma_{m}^{2}(a),v]^{\sim}\,\ \{\sigma_{m}^{1}(a),\sigma_{m}^{1}(b),v\}^{\sim}=\{ \sigma_{m}^{2}(a),\sigma_{m}^{2}(b),v\}^{\sim}\,\] \[\text{ and }\{v,\sigma_{m}^{1}(a),\sigma^{1}(b)\}^{\sim}=\{v, \sigma_{m}^{2}(a),\sigma_{m}^{2}(b)\}^{\sim}.\]
2. _Equivalent extensions induce the same representation on_ \(\eta\)_, that is, if_ \(Ext_{\tilde{\xi}}\) _and_ \(Ext_{\hat{\xi}}\) _are two equivalent extensions with induced representations_ \((\eta;\ \rho,\ D,\ \theta)\) _and_ \((\eta;\ \rho^{\prime},\ D^{\prime},\ \theta^{\prime})\) _respectively, then_ \(\rho=\rho^{\prime},\ D=D^{\prime},\ \theta=\theta^{\prime}\)_._
Proof.: Let \(m\in M,\)\(a,b,c,d\in\xi_{m}\) and \(v\in\eta_{m}.\) Let \(\sigma:\xi\to\tilde{\xi}\) be a given section of \(j:\tilde{\xi}\to\xi.\) Then, we have the following equalities.
From the condition (LY3) of \(\tilde{\xi}_{m}\) we get
\[\sum_{\circlearrowright(\sigma_{m}(a),\sigma_{m}(b),v)}[[\sigma_{m}(a),\sigma_ {m}(b)]^{\sim},v]^{\sim}+\sum_{\circlearrowright(\sigma_{m}(a),\sigma_{m}(b),v) }\{\sigma_{m}(a),\sigma_{m}(b),v\}^{\sim}=0\]
which reduces to (RLYB1):
\[D_{m}(a,b)+\theta_{m}(a,b)-\theta_{m}(b,a)=[\rho_{m}(a),\rho_{m}(b)]_{m}-\rho _{m}([a,b]_{m}).\]
By (LY5) of \(\tilde{\xi}\) we get
\[\{\sigma_{m}(a),v,[\sigma_{m}(b),\sigma_{m}(c)]_{m}^{\sim}\}_{m}^{\sim}\] \[=[\{\sigma_{m}(a),v,\sigma_{m}(b)\}_{m}^{\sim},\sigma_{m}(c)]_{m}^{\sim}+[\sigma_{m}(b),\{\sigma_{m}(a),v,\sigma_{m}(c)\}_{m}^{\sim}]_{m}^{\sim}\]
which reduces to (RLYB2):
\[\theta_{m}(a,[b,c]_{m})=\rho_{m}(b)\theta_{m}(a,c)-\rho_{m}(c)\theta_{m}(a,b).\]
By (LY4) of \(\tilde{\xi}_{m}\) we get
\[\sum_{\circlearrowright(\sigma_{m}(a),\sigma_{m}(b),v)}\{[\sigma_{m}(a),\sigma _{m}(b)]_{m}^{\sim},v,\sigma_{m}(c)\}_{m}^{\sim}=0\]
which reduces to (RLYB3):
\[\theta_{m}([a,b]_{m},c)=\theta_{m}(a,c)\rho_{m}(b)-\theta_{m}(b,c)\rho_{m}(a).\]
By (LY6) of \(\tilde{\xi}_{m}\) we get
\[\{v,\sigma_{m}(a),\{\sigma_{m}(b),\sigma_{m}(c),\sigma_{m}(d)\}_{m}^{\sim}\}_{m}^{\sim}\] \[=\{\{v,\sigma_{m}(a),\sigma_{m}(b)\}_{m}^{\sim},\sigma_{m}(c),\sigma_{m}(d)\}_{m}^{\sim}+\{\sigma_{m}(b),\{v,\sigma_{m}(a),\sigma_{m}(c)\}_{m}^{\sim},\sigma_{m}(d)\}_{m}^{\sim}\] \[+\{\sigma_{m}(b),\sigma_{m}(c),\{v,\sigma_{m}(a),\sigma_{m}(d)\}_{m}^{\sim}\}_{m}^{\sim}\]
which reduces to (RLYB4):
\[\theta_{m}(a,\{b,c,d\}_{m})=\theta_{m}(c,d)\theta_{m}(a,b)-\theta_{m}(b,d) \theta_{m}(a,c)+D_{m}(b,c)\theta_{m}(a,d).\]
By (LY5) of \(\tilde{\xi}_{m}\) we get
\[\{\sigma_{m}(a),\sigma_{m}(b),[\sigma_{m}(c),v]_{m}^{\sim}\}_{m}^{\sim}\] \[=[\{\sigma_{m}(a),\sigma_{m}(b),\sigma_{m}(c)\}_{m}^{\sim},v]_{m} ^{\sim}+[\sigma_{m}(c),\{\sigma_{m}(a),\sigma_{m}(b),v\}_{m}^{\sim}]_{m}^{\sim}\]
which reduces to (RLYB5):
\[D_{m}(a,b)\rho_{m}(c)=\rho_{m}(c)D_{m}(a,b)+\rho_{m}(\{a,b,c\}_{m }).\]
By (LY6) of \(\tilde{\xi}_{m}\) we get
\[\{\sigma_{m}(a),\sigma_{m}(b),\{v,\sigma_{m}(c),\sigma_{m}(d)\}_{ m}^{\sim}\}_{m}^{\sim}\] \[=\{\{\sigma_{m}(a),\sigma_{m}(b),v\}_{m}^{\sim},\sigma_{m}(c), \sigma_{m}(d)\}_{m}^{\sim}+\{v,\{\sigma_{m}(a),\sigma_{m}(b),\sigma_{m}(c)\}_ {m}^{\sim},\sigma_{m}(d)\}_{m}^{\sim}\] \[+\{v,\sigma_{m}(c),\{\sigma_{m}(a),\sigma_{m}(b),\sigma_{m}(d)\} _{m}^{\sim}\}_{m}^{\sim}\]
which reduces to (RLYB6):
\[D_{m}(a,b)\theta_{m}(c,d)=\theta_{m}(c,d)D_{m}(a,b)+\theta_{m}(\{a,b,c\}_{m },d)+\theta_{m}(c,\{a,b,d\}_{m}).\]
Therefore, \((\eta;\rho,\ D,\ \theta)\) is a representation of \(\xi\). Hence any extension of \(\xi\) by \(\eta\) gives a representation of \(\xi\) on \(\eta\).
Next we show that the definition of \(\theta\) is independent of the choice of the section. The proofs that the definitions of \(\rho\) and \(D\) do not depend on the choice of the section \(\sigma\) are similar, hence, we omit the details.
Let \(\sigma,\ \sigma^{\prime}:\xi\to\tilde{\xi}\) be two sections of \(j:\tilde{\xi}\to\xi.\) Let \(m\in M.\) Then, for any \(a\in\xi_{m}\)
\[j(\sigma_{m}(a)-\sigma^{\prime}_{m}(a))=0.\]
Therefore, \(\sigma_{m}(a)-\sigma^{\prime}_{m}(a)\in\text{Ker}(j)=\eta_{m}\), so that \(\sigma_{m}(a)=\sigma^{\prime}_{m}(a)+v_{a}\) for some \(v_{a}\in\eta_{m}\). Since we are considering an abelian extension, for any \(v\in\eta_{m},\,a,\ b\in\xi_{m}\) we have
\[\{v,\sigma_{m}(a),\sigma_{m}(b)\}_{m}^{\sim} =\{v,\sigma^{\prime}_{m}(a)+v_{a},\sigma^{\prime}_{m}(b)+v_{b}\}_ {m}^{\sim}\] \[=\{v,\sigma^{\prime}_{m}(a),\sigma^{\prime}_{m}(b)+v_{b}\}_{m}^{ \sim}+\{v,v_{a},\sigma^{\prime}_{m}(b)+v_{b}\}_{m}^{\sim}\] \[=\{v,\sigma^{\prime}_{m}(a),\sigma^{\prime}_{m}(b)\}_{m}^{\sim}+ \{v,\sigma^{\prime}_{m}(a),v_{b}\}_{m}^{\sim}\] \[=\{v,\sigma^{\prime}_{m}(a),\sigma^{\prime}_{m}(b)\}_{m}^{\sim}.\]
Finally, we show that two equivalent extensions of \(\xi\) by \(\eta\) induce the same representation. Suppose that \(Ext_{\tilde{\xi}}\) and \(Ext_{\hat{\xi}}\) are two equivalent extensions of \(\xi.\) Let us denote the associated \(2\)-field and \(3\)-field of brackets of the Lie-Yamaguti algebra bundle \(\hat{\xi}\) by
\[m\to[\,\ ]^{\wedge}_{m}\text{ and }m\to\{\,\,\ \}^{\wedge}_{m},\ m\in M,\]
respectively.
Let \(f:\tilde{\xi}\to\hat{\xi}\) be a Lie-Yamaguti algebra isomorphism satisfying \(f\circ i=\hat{i}\) and \(\hat{j}\circ f=j.\) Thus, the following diagram is commutative.
Let \(\sigma:\xi\to\tilde{\xi}\) and \(\sigma^{\prime}:\xi\to\hat{\xi}\) be sections of \(j\) and \(\hat{j}\) respectively. Then, for any \(a\in\xi_{m},\ m\in M\) we have
\[\hat{j}\circ f(\sigma_{m}(a))=j\circ(\sigma_{m}(a))=a=\hat{j}\circ(\sigma^{ \prime}_{m}(a))\]
\[\Rightarrow\hat{j}(f(\sigma_{m}(a))-\sigma^{\prime}_{m}(a))=0.\]
This implies \(f(\sigma_{m}(a))-\sigma^{\prime}_{m}(a)\in\operatorname{Ker}(\hat{j}_{m})=\eta _{m},\) that is, \(f(\sigma_{m}(a))=\sigma^{\prime}_{m}(a)+v_{a}\) for some \(v_{a}\in\eta_{m}\). Thus, we have for any \(a,b\in\xi_{m}\) and \(v\in\eta_{m}\)
\[f\big{(}\{v,\sigma_{m}(a),\sigma_{m}(b)\}^{\sim}_{m}\big{)}=\{f(v),f(\sigma_{m }(a)),f(\sigma_{m}(b))\}^{\wedge}_{m}=\{v,\sigma^{\prime}_{m}(a),\sigma^{ \prime}_{m}(b)\}^{\wedge}_{m}.\]
Note that \(f(v)=v\) follows from the commutativity of the first square of the diagram. Therefore, equivalent extensions induce the same \(\theta\). Similarly, one can show that equivalent extensions induce the same \(D\) and \(\rho\).
So far, we have seen that any extension \(Ext_{\tilde{\xi}}\)
of \(\xi\) by \(\eta\), induces a representation \((\eta;\rho,D,\theta)\) of \(\xi,\) where the vector bundle maps \(\rho,D,\) and \(\theta\) are defined by (1)- (3) in terms of a section \(\sigma:\xi\to\tilde{\xi}\) of \(j:\tilde{\xi}\to\xi\). Therefore, as discussed in the previous section, we have the following cochain complex of the Lie-Yamaguti algebra bundle \(\xi\) with coefficients in the induced representation \((\eta;\rho,D,\theta)\) of \(\xi\).
Our next goal is to attach a \((2,3)\)-cocycle of the above cochain complex to \(Ext_{\tilde{\xi}}.\)
Fix a section \(\sigma:\xi\to\tilde{\xi}\) of \(j:\tilde{\xi}\to\xi\). Define two maps; \(f:\xi\otimes\xi\to\eta\) and \(g:\xi\otimes\xi\otimes\xi\to\eta\) in the following way. Let \(m\in M.\) Denote by \(f_{m}\) and \(g_{m}\) the resulting bilinear and trilinear maps obtained by restricting \(f\) and \(g\) to the fibres \((\xi\otimes\xi)_{m}\) and \((\xi\otimes\xi\otimes\xi)_{m}\) respectively.
For all \(a_{1},\ a_{2},\ a_{3}\in\xi_{m}\), define
\[f_{m}(a_{1},a_{2}):=[\sigma_{m}(a_{1}),\sigma_{m}(a_{2})]_{m}^{\sim}-\sigma_{m}([a_{1},a_{2}]_{m}) \tag{4}\]
\[g_{m}(a_{1},a_{2},a_{3}):=\{\sigma_{m}(a_{1}),\sigma_{m}(a_{2}),\sigma_{m}(a_{3})\}_{m}^{\sim}-\sigma_{m}(\{a_{1},a_{2},a_{3}\}_{m}) \tag{5}\]
Note that \((f,g)\in C^{(2,3)}(\xi;\eta)\).
**6.5 Proposition**.: _For any given abelian extension \(Ext_{\tilde{\xi}}\) of \(\xi\) by \(\eta\), the cochain \((f,g)\in C^{(2,3)}(\xi;\eta)\) as defined above is a \((2,3)\)-cocycle._
Proof.: To show that \((f,g)\) is a \((2,3)\)-cocycle, we need to show
\[\delta(f,g)=0\text{ and }\delta^{*}(f,g)=0,\]
that is,
\[\delta_{I}f=0,\ \delta_{II}g=0\text{ and }\delta^{*}_{I}f=0,\ \delta^{*}_{II}g=0.\]
Recall that the representation induced by the given extension is given by the vector bundle morphisms \(\rho,D,\) and \(\theta\), where for \(a,b\in\xi_{m}\) and \(v\in\eta_{m}\), \(m\in M\),
\[\rho_{m}(a)(v) =[\sigma_{m}(a),v]_{m}^{\sim}\] \[D_{m}(a,b)(v) =\{\sigma_{m}(a),\sigma_{m}(b),v\}_{m}^{\sim}\] \[\theta_{m}(a,b)(v) =\{v,\sigma_{m}(a),\sigma_{m}(b)\}_{m}^{\sim}.\]
Let \(a_{i}\in\xi_{m}\), \(1\leq i\leq 5\).
By the definitions of \(\delta\) and \(\delta^{*}\) we get
\[(\delta_{I})_{m}f_{m}(a_{1},a_{2},a_{3},a_{4})\] \[=-\rho_{m}(a_{3})g_{m}(a_{1},a_{2},a_{4})+\rho_{m}(a_{4})g_{m}(a_{ 1},a_{2},a_{3})+g_{m}(a_{1},a_{2},[a_{3},a_{4}]_{m})\] \[+D_{m}(a_{1},a_{2})f_{m}(a_{3},a_{4})-f_{m}(\{a_{1},a_{2},a_{3}\} _{m},a_{4})-f(a_{3},\{a_{1},a_{2},a_{4}\}_{m})\] \[=0,\]
where we have used the definition of representation as given above and (LY5). Similarly, we obtain using (LY6)
\[(\delta_{II})_{m}g_{m}(a_{1},a_{2},a_{3},a_{4},a_{5})\] \[=-\theta_{m}(a_{4},a_{5})g_{m}(a_{1},a_{2},a_{3})+\theta_{m}(a_{ 3},a_{5})g_{m}(a_{1},a_{2},a_{4})\] \[+D_{m}(a_{1},a_{2})g_{m}(a_{3},a_{4},a_{5})-D_{m}(a_{3},a_{4})g_{ m}(a_{1},a_{2},a_{5})\] \[-g_{m}(\{a_{1},a_{2},a_{3}\}_{m},a_{4},a_{5})-g_{m}(a_{3},\{a_{1},a_{2},a_{4}\}_{m},a_{5})\] \[-g_{m}(a_{3},a_{4},\{a_{1},a_{2},a_{5}\}_{m})+g_{m}(a_{1},a_{2}, \{a_{3},a_{4},a_{5}\}_{m})\] \[=0.\]
Moreover, from the above definition of representation and (LY3) we get
\[(\delta_{I}^{*})_{m}f_{m}(a_{1},a_{2},a_{3})\] \[=-\sum_{\circlearrowright(a_{1},a_{2},a_{3})}\rho_{m}(a_{1})f_{m}(a_ {2},a_{3})+\sum_{\circlearrowright(a_{1},a_{2},a_{3})}f_{m}([a_{1},a_{2}]_{m},a_ {3})\] \[+\sum_{\circlearrowright(a_{1},a_{2},a_{3})}g_{m}(a_{1},a_{2},a_{3})\] \[=0,\]
and from (LY4) we obtain
\[(\delta_{II}^{*})_{m}f_{m}(a_{1},a_{2},a_{3},a_{4})\] \[=\theta_{m}(a_{1},a_{4})f_{m}(a_{2},a_{3})+\theta_{m}(a_{2},a_{4})f_{m}(a_{3},a_{1})+\theta_{m}(a_{3},a_{4})f_{m}(a_{1},a_{2})\] \[+g_{m}([a_{1},a_{2}]_{m},a_{3},a_{4})+g_{m}([a_{2},a_{3}]_{m},a_{1},a_{4})+g_{m}([a_{3},a_{1}]_{m},a_{2},a_{4})\] \[=0.\]
Thus, \((f,g)\in C^{(2,3)}(\xi;\eta)\) is a \((2,3)\)-cocycle.
By a routine calculation we obtain the following result.
**6.6 Corollary**.: _If \(\sigma,\sigma^{\prime}:\xi\to\tilde{\xi}\) are any two chosen sections of \(j:\tilde{\xi}\to\xi\) and \((f,g),\ (f^{\prime},g^{\prime})\) are the corresponding cocycles as obtained in Proposition 6.5, then \((f,g)\) and \((f^{\prime},g^{\prime})\) are cohomologous. Hence, the extension \(Ext_{\tilde{\xi}}\) of \(\xi\) by \(\eta\) determines uniquely an element of \(H^{(2,3)}(\xi;\eta).\)_
On the other hand, given a Lie-Yamaguti algebra bundle \(\xi\) equipped with a representation \((\eta;\rho,D,\theta),\) any \((2,3)\)-cocycle in \(Z^{(2,3)}(\xi;\eta)\) determines an abelian extension of \(\xi\) by \(\eta\) which is unique up to equivalence.
Let \(\xi\) be a given Lie-Yamaguti algebra bundle over \(M\) and \((\eta;\rho,D,\theta)\) be a given representation of \(\xi.\) Let \((f,g)\in Z^{(2,3)}(\xi;\eta).\) Then, we have the following result.
**6.7 Lemma**.: _Given \((f,g)\in Z^{(2,3)}(\xi;\eta),\) the vector bundle \(\tilde{\xi}=\xi\oplus\eta\) becomes a Lie-Yamaguti algebra bundle, where the associated \(2\)-field and \(3\)-field_
\[m\mapsto[\,\ ]_{m}^{\sim},\ m\mapsto\{\,\,\ \ \}_{m}^{\sim},\ m\in M\]
_are given by_
\[[a_{1}+w_{1},a_{2}+w_{2}]_{m}^{\sim}\] \[:=[a_{1},a_{2}]_{m}+f_{m}(a_{1},a_{2})+\rho_{m}(a_{1})(w_{2})- \rho_{m}(a_{2})(w_{1})\] \[\{a_{1}+w_{1},a_{2}+w_{2},a_{3}+w_{3}\}_{m}^{\sim}\] \[:=\{a_{1},a_{2},a_{3}\}_{m}+g_{m}(a_{1},a_{2},a_{3})+D_{m}(a_{1},a _{2})(w_{3})\] \[-\theta_{m}(a_{1},a_{3})(w_{2})+\theta_{m}(a_{2},a_{3})(w_{1})\]
_where \(a_{1},a_{2},a_{3}\in\xi_{m}\) and \(w_{1},w_{2},w_{3}\in\eta_{m}\). It is convenient to denote this Lie-Yamaguti algebra bundle by \(\xi\oplus_{(f,g)}\eta\) to emphasize that it is induced by the given cocycle._
Proof.: Clearly the assignments
\[m\mapsto[\,\ ]_{m}^{\sim},\ m\mapsto\{\,\,\ \}_{m}^{\sim},\ m\in M\]
as defined in the statement are smooth. So, it is enough to show that for any \(m\in M,\ \tilde{\xi}_{m}\) is a Lie-Yamaguti algebra. Let \(m\in M\). It is easy to see that (LY1) and (LY2) hold for \([\,\ ]_{m}^{\sim}\) and \(\{\,\,\ \}_{m}^{\sim}\) defined above. To verify (LY6), proceed as follows.
\[\big{\{}a_{1} +w_{1},a_{2}+w_{2},\{b_{1}+v_{1},b_{2}+v_{2},b_{3}+v_{3}\}_{m}^{ \sim}\big{\}}_{m}^{\sim}\] \[=\big{\{}a_{1}+w_{1},\ a_{2}+w_{2},\ \{b_{1},b_{2},b_{3}\}_{m}+g_{m}(b_{ 1},b_{2},b_{3})\] \[+D_{m}(b_{1},b_{2})(v_{3})-\theta_{m}(b_{1},b_{3})(v_{2})+\theta_ {m}(b_{2},b_{3})(v_{1})\big{\}}_{m}^{\sim}\] \[=\{a_{1},a_{2},\{b_{1},b_{2},b_{3}\}_{m}\}_{m}+g_{m}(a_{1},a_{2}, \{b_{1},b_{2},b_{3}\})+D_{m}(a_{1},a_{2})g_{m}(b_{1},b_{2},b_{3})\] \[+D_{m}(a_{1},a_{2})D_{m}(b_{1},b_{2})(v_{3})-D_{m}(a_{1},a_{2}) \theta_{m}(b_{1},b_{3})(v_{2})+D_{m}(a_{1},a_{2})\theta_{m}(b_{2},b_{3})(v_{1})\] \[-\theta_{m}(a_{1},\{b_{1},b_{2},b_{3}\}_{m})(w_{2})+\theta_{m}(a_{ 2},\{b_{1},b_{2},b_{3}\}_{m})(w_{1})\]
\[\big{\{}\{a_{1}+w_{1},a_{2}+w_{2},b_{1}+v_{1}\}_{m}^{\sim},b_{2}+v _{2},b_{3}+v_{3}\}_{m}^{\sim}\] \[=\big{\{}\{a_{1},a_{2},b_{1}\}_{m}+g_{m}(a_{1},a_{2},b_{1})+D_{m}( a_{1},a_{2})(v_{1})\] \[-\theta_{m}(a_{1},b_{1})(w_{2})+\theta_{m}(a_{2},b_{1})(w_{1}),\ b _{2}+v_{2},\ b_{3}+v_{3}\big{\}}_{m}^{\sim}\] \[=\{\{a_{1},a_{2},b_{1}\}_{m},b_{2},b_{3}\}_{m}+g_{m}(\{a_{1},a_{2 },b_{1}\}_{m},b_{2},b_{3})+D_{m}(\{a_{1},a_{2},b_{1}\}_{m},b_{2})(v_{3})\] \[-\theta_{m}(\{a_{1},a_{2},b_{1}\}_{m},b_{3})(v_{2})+\theta_{m}(b_ {2},b_{3})g_{m}(a_{1},a_{2},b_{1})+\theta_{m}(b_{2},b_{3})D_{m}(a_{1},a_{2})(v _{1})\] \[-\theta_{m}(b_{2},b_{3})\theta_{m}(a_{1},b_{1})(w_{2})+\theta_{m} (b_{2},b_{3})\theta_{m}(a_{2},b_{1})(w_{1})\]
\[\big{\{}b_{1} +v_{1},\{a_{1}+w_{1},a_{2}+w_{2},b_{2}+v_{2}\}_{m}^{\sim},b_{3}+ v_{3}\}_{m}^{\sim}\] \[=\big{\{}b_{1}+v_{1},\ \{a_{1},a_{2},b_{2}\}_{m}+g_{m}(a_{1},a_{2},b_{2})\] \[+D_{m}(a_{1},a_{2})(v_{2})-\theta_{m}(a_{1},b_{2})(w_{2})+\theta_ {m}(a_{2},b_{2})(w_{1}),\ b_{3}+v_{3}\big{\}}_{m}^{\sim}\] \[=\{b_{1},\{a_{1},a_{2},b_{2}\}_{m},b_{3}\}_{m}+g_{m}(b_{1},\{a_{1 },a_{2},b_{2}\}_{m},b_{3})+D_{m}(b_{1},\{a_{1},a_{2},b_{2}\}_{m})(v_{3})\] \[+\theta_{m}(\{a_{1},a_{2},b_{2}\}_{m},b_{3})(v_{1})-\theta_{m}(b _{1},b_{3})g_{m}(a_{1},a_{2},b_{2})-\theta_{m}(b_{1},b_{3})D_{m}(a_{1},a_{2})( v_{2})\] \[+\theta_{m}(b_{1},b_{3})\theta_{m}(a_{1},b_{2})(w_{2})-\theta_{m} (b_{1},b_{3})\theta_{m}(a_{2},b_{2})(w_{1})\]
\[\big{\{}b_{1} +v_{1},b_{2}+v_{2},\{a_{1}+w_{1},a_{2}+w_{2},b_{3}+v_{3}\}_{m}^{ \sim}\big{\}}_{m}^{\sim}\] \[=\big{\{}b_{1}+v_{1},\ b_{2}+v_{2},\ \{a_{1},a_{2},b_{3}\}_{m}+g_{m}(a_{1},a_{2},b_{3})\] \[+D_{m}(a_{1},a_{2})(v_{3})-\theta_{m}(a_{1},a_{3})(w_{2})+\theta_ {m}(a_{2},a_{3})(w_{1})\big{\}}_{m}^{\sim}\] \[=\{b_{1},b_{2},\{a_{1},a_{2},b_{3}\}_{m}\}_{m}+g_{m}(b_{1},b_{2}, \{a_{1},a_{2},b_{3}\}_{m})+D_{m}(b_{1},b_{2})g_{m}(a_{1},a_{2},b_{3})\] \[+D_{m}(b_{1},b_{2})D_{m}(a_{1},a_{2})(v_{3})-D_{m}(b_{1},b_{2}) \theta_{m}(a_{1},a_{3})(w_{2})+D_{m}(b_{1},b_{2})\theta_{m}(a_{2},a_{3})(w_{1})\] \[-\theta_{m}(b_{1},\{a_{1},a_{2},b_{3}\})(v_{2})+\theta_{m}(b_{2}, \{a_{1},a_{2},b_{3}\})(v_{1})\]
Using (RLYB6), (RLYB4) and the definition of coboundary maps we can show
\[\big{\{}a_{1}+w_{1}, a_{2}+w_{2},\{b_{1}+v_{1},b_{2}+v_{2},b_{3}+v_{3}\}_{m}^{ \sim}\big{\}}_{m}^{\sim}\] \[=\big{\{}\{a_{1}+w_{1},a_{2}+w_{2},b_{1}+v_{1}\}_{m}^{\sim},b_{2}+v _{2},b_{3}+v_{3}\}_{m}^{\sim}\] \[+\big{\{}b_{1}+v_{1},\{a_{1}+w_{1},a_{2}+w_{2},b_{2}+v_{2}\}_{m}^{ \sim},b_{3}+v_{3}\big{\}}_{m}^{\sim}\] \[+\big{\{}b_{1}+v_{1},b_{2}+v_{2},\{a_{1}+w_{1},a_{2}+w_{2},b_{3}+v _{3}\}_{m}^{\sim}\big{\}}_{m}^{\sim}\]
giving us (LY6). The other relations (LY3), (LY4), and (LY5) can be obtained in the same way, making \(\xi\oplus_{(f,g)}\eta\) a Lie-Yamaguti algebra bundle.
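In particular, for the trivial cocycle \((f,g)=(0,0)\) the above brackets reduce to
\[[a_{1}+w_{1},a_{2}+w_{2}]_{m}^{\sim}=[a_{1},a_{2}]_{m}+\rho_{m}(a_{1})(w_{2})-\rho_{m}(a_{2})(w_{1}),\]
\[\{a_{1}+w_{1},a_{2}+w_{2},a_{3}+w_{3}\}_{m}^{\sim}=\{a_{1},a_{2},a_{3}\}_{m}+D_{m}(a_{1},a_{2})(w_{3})-\theta_{m}(a_{1},a_{3})(w_{2})+\theta_{m}(a_{2},a_{3})(w_{1}),\]
so \(\xi\oplus_{(0,0)}\eta\) is the split extension determined by the representation \((\eta;\rho,D,\theta)\) alone.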
Observe that the Lie-Yamaguti algebra brackets on the fibres of \(\xi\oplus_{(f,g)}\eta\) make \(\eta\) an abelian ideal in \(\xi\oplus_{(f,g)}\eta\), and we have the following extension of \(\xi\) by \(\eta\):
\[0\to\eta\xrightarrow{i}\xi\oplus_{(f,g)}\eta\xrightarrow{j}\xi\to 0,\]
where \(i\) is the inclusion map and \(j\) is the projection map.
Let \((h,k)\in Z^{(2,3)}(\xi;\eta)\) be another cocycle. Then, we have the following result.
**6.8 Lemma**.: _Two extensions \(0\to\eta\to\xi\oplus_{(f,g)}\eta\to\xi\to 0\) and \(0\to\eta\to\xi\oplus_{(h,k)}\eta\to\xi\to 0\) are equivalent if and only if \((f,g),\ (h,k)\in Z^{(2,3)}(\xi;\eta)\) are cohomologous._
Proof.: Let the two extensions \(0\to\eta\xrightarrow{i}\xi\oplus_{(f,g)}\eta\xrightarrow{p}\xi\to 0\) and \(0\to\eta\xrightarrow{i}\xi\oplus_{(h,k)}\eta\xrightarrow{p}\xi\to 0\) be equivalent through a Lie-Yamaguti algebra isomorphism
\[\gamma:\xi\oplus_{(f,g)}\eta\to\xi\oplus_{(h,k)}\eta\]
Then, for each \(m\in M\), we have the corresponding equivalence of abelian extensions of Lie-Yamaguti algebras on the fibres.
To show that \((f,g)\) and \((h,k)\) are cohomologous it is enough to show for each \(m\in M\), \((f_{m},g_{m})\) and \((h_{m},k_{m})\) are cohomologous, that is,
\[(f_{m},g_{m})-(h_{m},k_{m})\in B^{(2,3)}(\xi_{m};\eta_{m})\]
We define a map \(\lambda_{m}:\xi_{m}\to\eta_{m}\) by \(\lambda_{m}(a)=\gamma_{m}(a)-a\), with which one can show
\[f_{m}-h_{m}=(\delta_{I})_{m}(\lambda_{m})\text{ and }g_{m}-k_{m}=(\delta_{ II})_{m}(\lambda_{m})\]
Conversely, assume that for each \(m\in M,\ (f_{m},g_{m})\) and \((h_{m},k_{m})\) are in the same cohomology class, that is, \((f_{m},g_{m})-(h_{m},k_{m})=(\delta)_{m}(\lambda_{m})\). Then, \(\gamma_{m}:\xi_{m}\oplus_{(f,g)}\eta_{m}\to\xi_{m}\oplus_{(h,k)}\eta_{m}\) defined by
\[\gamma_{m}(a+v)=a+\lambda_{m}(a)+v\]
gives the required isomorphism.
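Indeed, writing \([\,,\,]^{\sim}_{m}\) and \([\,,\,]^{\wedge}_{m}\) for the binary brackets of \(\xi\oplus_{(f,g)}\eta\) and \(\xi\oplus_{(h,k)}\eta\) respectively, unwinding the definitions makes the role of \(\lambda_{m}\) explicit:
\[\gamma_{m}\big([a_{1}+v_{1},a_{2}+v_{2}]^{\sim}_{m}\big)=[a_{1},a_{2}]_{m}+\lambda_{m}([a_{1},a_{2}]_{m})+f_{m}(a_{1},a_{2})+\rho_{m}(a_{1})(v_{2})-\rho_{m}(a_{2})(v_{1}),\]
\[[\gamma_{m}(a_{1}+v_{1}),\gamma_{m}(a_{2}+v_{2})]^{\wedge}_{m}=[a_{1},a_{2}]_{m}+h_{m}(a_{1},a_{2})+\rho_{m}(a_{1})(\lambda_{m}(a_{2})+v_{2})-\rho_{m}(a_{2})(\lambda_{m}(a_{1})+v_{1}),\]
so \(\gamma_{m}\) intertwines the binary brackets precisely when
\[f_{m}(a_{1},a_{2})-h_{m}(a_{1},a_{2})=\rho_{m}(a_{1})\lambda_{m}(a_{2})-\rho_{m}(a_{2})\lambda_{m}(a_{1})-\lambda_{m}([a_{1},a_{2}]_{m}),\]
that is, exactly the coboundary condition \(f_{m}-h_{m}=(\delta_{I})_{m}(\lambda_{m})\) used above; the ternary brackets are compared in the same way, yielding \(g_{m}-k_{m}=(\delta_{II})_{m}(\lambda_{m})\).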
By summarizing the above observations we have the following theorem.
**6.9 Theorem**.: _To each equivalence class of abelian extensions of \(\xi\) by \(\eta\) there corresponds an element of \(H^{(2,3)}(\xi;\eta)\). Conversely, suppose \(\xi\) is a given Lie-Yamaguti algebra bundle over \(M\) equipped with a representation \((\eta;\rho,\ D,\theta)\). To each cohomology class \([(f,g)]\in H^{(2,3)}(\xi;\eta)\), there corresponds an extension_
\[0\to\eta\to\xi\oplus_{(f,g)}\eta\to\xi\to 0\]
_of \(\xi\) by \(\eta\), which is unique up to equivalence of extensions._
We conclude by addressing the following question.
**Question: Does there exist a notion of Lie-Yamaguti algebroid which appears as the infinitesimal version of some groupoid with relevant structure?**
|
2310.00513 | Formal Probabilistic Methods for Combinatorial Structures using the
Lovász Local Lemma | Formalised libraries of combinatorial mathematics have rapidly expanded over
the last five years, but few use one of the most important tools: probability.
How can often intuitive probabilistic arguments on the existence of
combinatorial structures, such as hypergraphs, be translated into a formal
text? We present a modular framework using locales in Isabelle/HOL to formalise
such probabilistic proofs, including the basic existence method and first
formalisation of the Lov\'asz local lemma, a fundamental result in probability.
The formalisation focuses on general, reusable formal probabilistic lemmas for
combinatorial structures, and highlights several notable gaps in typical
intuitive probabilistic reasoning on paper. The applicability of the techniques
is demonstrated through the formalisation of several classic lemmas on the
existence of hypergraphs with certain colourings. | Chelsea Edmonds, Lawrence C. Paulson | 2023-09-30T22:28:31Z | http://arxiv.org/abs/2310.00513v2 | # Formal Probabilistic Methods for Combinatorial Structures in Isabelle/HOL
###### Abstract.
Formalised libraries of combinatorial mathematics have rapidly expanded over the last five years, but few use one of the most important tools: probability. How can often intuitive probabilistic arguments be translated into a formal text? We present a modular framework in Isabelle/HOL to formalise combinatorial proofs using probabilistic methods such as the Lovasz local lemma, a fundamental result in probability which is particularly important for existence proofs. We apply the framework to formalise several classic lemmas on hypergraph colourings, revealing how intuitive probabilistic reasoning can lead mathematicians astray.
Interactive theorem proving, proof assistants, formalisation of mathematics, Isabelle/HOL, hypergraph colourings, combinatorics, probability
Later sections apply the basic method and the Lovász local lemma to existence properties on hypergraphs; we conclude with a discussion of the formal probabilistic method and future work in Sect. 7.
## 2. Background
### Mathematical Background
Probability theory is built on top of the much broader field of measure theory. A _measure space_ is a triplet \((X,B,\mu)\) where \((X,B)\) is a measurable space and \(\mu:B\rightarrow[0,+\infty]\) is a countably additive measure. A probability space is a particularly important restricted measure space. Its definition, given below, adapts the triple's syntax to match the notation traditionally used in probability.
**Definition 2.1** (Probability Space).: A _probability space_ is a measure space \((\Omega,\mathcal{F},\mathbb{P})\) which has a total measure of 1: \(\mathbb{P}(\Omega)=1\).
Commonly, \(\Omega\) represents the sample space, the set of all possible states a random system could be in. \(\mathcal{F}\) is the set of all possible events the probability space can measure, using the probability measure \(\mathbb{P}\), where \(\mathbb{P}(E)\) is the probability of event \(E\in\mathcal{F}\) occurring. In a discrete context, \(\mathcal{F}=\text{Pow}(\Omega)\). It's assumed readers have a basic knowledge of probability. A particularly important concept for this formalisation is independent events.
**Definition 2.2** (Independent events).: A collection of events \(E\) is defined as _independent_ if and only if for all subsets \(F\subseteq E\), \(\mathbb{P}(\bigwedge F)=\prod_{f\in F}\mathbb{P}(f)\)
A related but weaker concept is that of a mutually independent set of events:
**Definition 2.3** (Mutually independent events).: Given an event \(A\) and a set of events \(E\), \(A\) is _mutually independent_ of \(E\) if for all subsets \(F\subseteq E\), \(\mathbb{P}(A\land(\bigwedge F))=\mathbb{P}(A)\mathbb{P}(\bigwedge F)\)
Combinatorial applications of probability theory commonly involve discrete probability measures, which are much simpler than continuous measures. For example, discrete measures use summations rather than integrals. Most probability spaces in combinatorics involve a point measure, which assigns a specific probability to each point in the space. A uniform count measure is a point measure where each point has the same probability, i.e. \(\mathbb{P}(\{x\})=1/|\Omega|\).
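For instance, for a uniform random 2-colouring of a finite vertex set \(V\) with \(|V|=k\), the space is
\[\Omega=\{f:V\to\{0,1\}\},\qquad\mathcal{F}=\text{Pow}(\Omega),\qquad\mathbb{P}(A)=\frac{|A|}{2^{k}},\]
so, for example, a fixed edge \(e\) with \(|e|=r\) vertices is monochromatic with probability \(2\cdot 2^{-r}=2^{1-r}\).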
The core idea behind the probabilistic method for combinatorics is to show the existence of a structure with certain features by showing its probability is strictly positive. There are many techniques which can be used to obtain this positive probability bound including (Belle and Arab, 1995): basic bounds; linearity of expectation; alterations; the second moment method (variance inequality); and the local lemma. More details on the basic bounds, and the local lemma will be presented alongside their formalisations.
Combinatorial structures are varied, but often based on an incidence set system. Using hypergraph language this is a set of _vertices_, and a collection of vertex subsets known as _edges_. Hypergraphs can intuitively be viewed as a generalisation of graphs where edges can be of any (non-empty) size. In this paper we will focus on finite non-trivial hypergraphs.
### Isabelle Background
Isabelle/HOL, henceforth referred to as Isabelle, is a proof assistant built on higher order logic. It has a number of features that make it ideal for formalising mathematics, including: the human-readable Isar proof language (Isabelle, 1995), strong automation through Sledgehammer (Isabelle, 1995), extensive foundational libraries in analysis and algebra, and the Archive of Formal Proofs (AFP) with nearly four million lines of code across entries in mathematics and computer science. As Isabelle was also used in the only prior formalisations of the probabilistic method, it was ideal to continue this type of work.
#### 2.2.1. Locales & Combinatorial Structures
This paper builds on our previous work formalising several combinatorial structures, such as design theory (Isabelle, 1995) and graph theory (Isabelle, 1995). These libraries use the _locale-centric_ approach for formalising mathematical hierarchies, based on ideas introduced by Ballarin (Ballarin, 1995).
Locales are Isabelle's module system, enabling flexible and extensible inheritance hierarchies and proof contexts. A basic locale consists of a set of parameters and assumptions. The inheritance hierarchy can be manipulated after construction through use of the **sublocale** command. It is also possible to **interpret** a locale instance at both a theory and proof level for use in proof and lemma statements.
#### 2.2.2. Existing Libraries
Isabelle has extensive libraries in measure theory, which the probability libraries are built on. A probability space is defined in the locale _prob_space_, which takes a measure space as a parameter. This locale contains formal definitions for common concepts such as probability measures, expectation, variance, space and events. These are often abbreviations, i.e. pretty syntax, from concepts in measure theory and analysis. Many important lemmas are similarly inherited from the measure theory libraries.
There are many pre-defined types of measures. This paper refers to (1) the _point_measure_, which takes an additional function \(p\in\Omega\rightarrow\mathbb{R}\), that "assigns a probability" to each object in the space, and (2) the _uniform_count_measure_ which is a uniform specialisation of a _point_measure_ where the function \(p\) is not required.
## 3. Background Formalisation Work
Several significant extensions to existing libraries were required for this project, focusing on hypergraphs and probability theory. This section presents the key additions.
### Probability
#### 3.1.1. General Event Extensions
The _Prob-Events-Extras_ theory contains many useful lemmas on manipulating combinations of events and calculating the resultant probabilities. This included lemmas showing properties such as event closure and basic probability bounds on the complement, intersection and union operations, with an example provided below. These were typically proven by induction.
```
lemma events-inter: assumes finite S assumes \(S\neq\{\}\) shows (\(\bigwedge\) A. A \(\in\) S \(\Longrightarrow\) A \(\in\) events) \(\Longrightarrow\) \(\bigcap S\in\) events
```
Note the non-empty assumption in the above lemma. On paper this is not required, as \(\bigcap\emptyset=\mathbb{U}\), and \(\mathbb{U}\) and \(\Omega\) are considered interchangeable. However, this was the first sign of the _universal set vs probability space_ challenge in Isabelle's probability library: the universal set \(\mathbb{U}\) (_UNIV_ in Isabelle) is not necessarily equal to \(\Omega\).
#### 3.1.2. Conditional Probability
There was surprisingly little support available in the existing Isabelle probability libraries for conditional probability. The most significant formalisation effort appears to be on Markov chains (Han et al., 2017). This introduced the _cond_prob_ definition, and _cond_pmf_ for working with conditional probabilities using probability mass functions (PMFs). Both are available in the main probability library; however, we found very few general lemmas.
The current notation in the _cond_prob_ definition is bulky to use, so we begin our formalisation by defining an abbreviation which mirrors mathematical notation.
```
abbreviation cond-prob-ev :: 'a set \(\Rightarrow\) 'a set \(\Rightarrow\) real (\(\mathcal{P}\)'(- \(\mid\) -')) where \(\mathcal{P}(B\mid A)\equiv\mathcal{P}(x\;\text{in}\;M.\;(x\in B)\mid(x\in A))\)
```
Conditional probability can also be viewed as the probability of \(A\) given a uniform measure on \(B\).
```
lemma_measure-uniform-measure-eq-cond-prob-ev3: assumes \(A\in\) events \(B\in\) events shows \(\mathcal{P}(A\mid B)=\text{measure}\;(\text{uniform-measure}\;M\;B)\) A
```
This proof enables existing lemmas to effectively be lifted to the conditional probability space. Amongst the lemmas formalised on conditional probability, the most prominent was _Bayes theorem_ and variations. Despite its relevance and simplicity, this doesn't appear to have been previously formalised in Isabelle.
```
theorem Bayes-theorem: assumes \(A\in\) events \(B\in\) events shows prob \(B\ast\mathcal{P}(A\mid B)=\text{prob}\;A\ast\mathcal{P}(B\mid A)\)
```
Particularly important to later formalisations was the formalisation of the general multiplication rule (chain rule) on events. The first challenge of this supposedly simple proof was working with an ordering on events. There are thus two versions of this proof, one on a list of events (which imposes an ordering), and the other using a bijective indexing function on an event collection. The latter was ultimately easier to use, as it is relatively easy to obtain an index function given a finite collection, using bijections on sets; the indexed version, _prob_cond_inter_fin_, assumes such a bijection (_bij_betw_) from an initial segment of the naturals onto the finite event collection.
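In textbook notation, the rule being formalised is the familiar chain rule (our paraphrase):
\[\mathbb{P}\Big(\bigcap_{i=1}^{n}A_{i}\Big)=\mathbb{P}(A_{1})\,\mathbb{P}(A_{2}\mid A_{1})\,\mathbb{P}(A_{3}\mid A_{1}\cap A_{2})\cdots\mathbb{P}\Big(A_{n}\,\Big|\,\bigcap_{i=1}^{n-1}A_{i}\Big),\]
valid whenever each conditioning event has positive probability.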
Mutual independence (Def. 2.3) is formalised as _mutual_indep_events_, where _mutual_indep_events A F I_ unfolds to
\[A\in\textit{events}\;\wedge\;(F\,^{\prime}I\subseteq\textit{events})\;\wedge\;\big(\forall\,J\subseteq I.\ J\neq\{\}\longrightarrow\textit{prob}\,(A\cap\bigcap(F\,^{\prime}J))=\textit{prob}\,A\ast\textit{prob}\,(\bigcap(F\,^{\prime}J))\big).\]
The theory contains numerous basic lemmas enabling easy reasoning on mutual independence. There are many commonalities between mutual independence and classical independence, with the latter being the stronger notion. In particular, we formalised a lemma showing that a set of events \(S\) is independent if and only if every event \(A\in S\) is mutually independent of the set \(S\setminus\{A\}\).
```
lemma mutual-indep-ev-set-all: assumes \(F\,^{\prime}I\subseteq\textit{events}\) assumes \(\bigwedge\,i.\ i\in I\Longrightarrow(\textit{mutual-indep-events}\ (F\,i)\ F\ (I-\{i\}))\) shows \(\textit{indep-events}\,F\,I\)
```
### Hypergraphs
Hypergraphs have the same underlying foundations as combinatorial designs, which as mentioned in Sect. 2.2, we have previously formalised (Bahdan et al., 2017). Both are simply incidence set systems; however, hypergraphs are often used in different ways with their own unique concepts. For example, hypergraph language is less limited to finite structures and is more commonly used in applications of the probabilistic method.
The locale-centric approach provides an easy way to use the existing design theory library while mirroring the hypergraph language. A full discussion on the hypergraph formalisation and this approach is out of scope of this paper, however, some basics required for Sect. 6 are highlighted.
#### 3.2.1. Designs to Hypergraphs
We first define a _hypersystem_ locale which directly inherits from the existing _incidence_set_system_ locale, but instantiates the parameters using hypergraph language.
```
locale hypersystem = incidence-system vertices :: 'a set edges :: 'a hyp-edge multiset
  for vertices (\(\mathcal{V}\)) and edges (E)
```
Note that _'a hyp_edge_ is a type synonym for _'a set_, and _'a hyp_graph_ for _'a set_\(\times\)'a hyp_edge multiset_. Within the locale we define numerous basic definitions such as neighbourhood, degree, adjacency and rank. Using **rewrites** in sublocale declarations is essential to automatically translating these definitions between hypergraph and design language.
From here we continue to define different variations of hypergraphs either by direct or indirect inheritance of design concepts. For example a _hypergraph_ inherits from both the _hypersystem_ locale and the _inf_design_ locale, which adds a non-empty edge condition. Additionally, we also formalised variations of uniform hypergraphs (constant size edges) and established inheritance with the _block_design_ locale, as well as regular hypergraphs (constant degree), and established inheritance with the _const_rep_design_ locale. These inheritances were established indirectly, as hypergraphs first establish the properties in a non-finite environment.
```
locale kuniform-hypergraph = hypergraph + fixes k :: nat
  assumes uniform: \(\bigwedge\,e.\ e\in\#\,E\Longrightarrow\textit{card}\ e=k\)
```
**sublocale**: _fin-kuniform-hypergraph-nt\(\subseteq\textit{block-design}\,\mathcal{V}\,E\,k\) rewritespoint-replication-numberE\(\,\nu=\textit{hedgeree}\,\nu\) andpoints-indexE\(\,\nu=\textit{hedgeree-set}\,\nu\) ```
#### 3.2.2. Colourings
Colourings are rarely reasoned on in design theory, but are one of the most common concepts in hypergraph (and graph) theory. This formalisation established a library on vertex colourings.
**Definition 3.1** (n-vertex colouring).: An _n_-vertex colouring is an assignment of up to \(n\) colours to the vertices of a hypergraph such that no edge is _monochromatic_, i.e. contains only vertices of the same colour.
The formalisation begins by formalising the concept of a monochromatic edge. Defining a vertex colouring as a set partition, as done for simple graphs by Noschinski (Noschinski, 2007), ultimately made it tricky to refer to an edge having a _particular_ colour due to the unordered nature of sets. It also only allowed for a colouring of precisely \(n\) colours, rather than the more general _up to n_ colours in Def. 3.1, a common inconsistency in the literature. As such, we formalise a colouring as a function in \(\mathcal{V}\rightarrow\{0..<n\}\), where colour is a type synonym for the natural numbers.
```
definition mono-edge :: "('a ⇒ colour) ⇒ 'a hyp-edge ⇒ bool" where
  "mono-edge f e ≡ ∃ c. ∀ v ∈ e. f v = c"
```
The lemma _is_proper_colouring_alt matches Def. 3.1 by unfolding the _proper_vertex_colouring_ definition. The _complete_vertex_colouring_ definition models a colouring using precisely \(n\) colours, and was shown to be equivalent to a partition definition.
Many lemmas are available in the hypergraph library on vertex colourings. These could easily be translated to a graph theoretic context using the locale-centric approach and existing undirected graph theory library (Bahdan et al., 2017).
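As a small executable illustration (a Python sketch of ours, separate from the Isabelle development, with all names hypothetical), the following checks Def. 3.1 directly: an edge is monochromatic when all its vertices receive the same colour, and a colouring is proper when no edge is monochromatic.

```python
from itertools import product

# A toy hypergraph: a vertex list and a list of edges (vertex subsets).
vertices = [0, 1, 2, 3]
edges = [{0, 1, 2}, {1, 2, 3}, {0, 2, 3}]

def mono_edge(f, e):
    """True if edge e is monochromatic under colouring f."""
    return len({f[v] for v in e}) == 1

def is_proper_colouring(f, edges):
    """A proper vertex colouring has no monochromatic edge (Def. 3.1)."""
    return not any(mono_edge(f, e) for e in edges)

# Colourings as functions V -> {0..n-1}, represented here as dictionaries.
n = 2
colourings = [dict(zip(vertices, cs))
              for cs in product(range(n), repeat=len(vertices))]
proper = [f for f in colourings if is_proper_colouring(f, edges)]
print(f"{len(proper)} of {len(colourings)} 2-colourings are proper")
```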
## 4. The Basic Method
The core idea behind the probabilistic method is to show the existence of a structure with certain features via a positive probability. There is a basic methodology for doing this, with the calculations growing more involved as the problems become more complex. This section explores the formalisation of a framework to mirror aspects of the basic method in a formal environment.
### The Basic Method Framework
The basic method, or pattern for applying the probabilistic method on paper, can be summarised by five steps: (i) introduce randomness to the problem domain; (ii) randomly construct/select an object in the problem domain; (iii) define the desired property of this object (or property to avoid); (iv) show the desired property has a positive probability (or probability less than 1 for avoidance); (v) obtain an example of an event in the probability space with the desired property.
We propose that a 4-step formal framework can help structure formal proofs to mirror these steps.
1. Define a probability space.
2. Define object properties
3. Calculate probabilities
4. Obtain exemplar object
Note the omission of the explicit selection/construction of an object. Given the more structured way we must introduce randomness in a formal environment, most of our probability proofs are quantified over all elements of the space, so selection is done implicitly. Furthermore, while (2) is an important step, it is very problem specific so little can be done to generalise it. The remainder of this section focuses on general techniques for the remaining three steps.
### Defining the Probability Space
Let's first look at step (1), defining a probability space. On paper, the first step introduces randomness to the problem domain in usually one informal sentence. It would be very rare that the probability space is actually defined, presenting the first challenge of formalising the probabilistic method. This framework aims to significantly simplify this step.
To establish a probability space in Isabelle, it is necessary to identify the probability measure you want to use and then interpret an instance of the _prob_space_ locale in each individual proof. Additionally, to easily apply simplification tactics later in the proof, it was useful to prove a number of additional facts around basic properties such as the space, events and measurability specific to that locale interpretation. When dealing with similar probability spaces in different proofs, this can result in notable duplication.
Noschinski's work (Noschinski, 2009) defined an _edge_space_ locale, a probability space over graph edges, which introduces some modularity solving some of the above issues. Our solution significantly extends on this by taking full advantage of the flexibility of inheritance patterns with locales to develop a framework not specific to a particular measure. Firstly, a basic vertex space locale was defined for probabilistic reasoning on any finite non-trivial incidence system:
```
locale vertex-fn-space = fin-hypersystem-vne +
  fixes F :: "'a set ⇒ 'b set"
  assumes fn-finite: finite (F \(\mathcal{V}\)) ...
```
The colouring space then fixes the number of colours \(n\) and, via the intermediate _vertex_prop_space_ locale, specialises this setup to vertex colourings:
**fixes** \(n::\) _nat_
**assumes** _n-lt-order_: \(n\leq\) _order_ **and** _n-not-zero_: \(n\neq 0\)
**sublocale**_vertex-colour-space_\(\subseteq\)_vertex-prop-space_\(\mathcal{V}\)_E_\(\{0..<n\}\)**
**rewrites**\(\Omega U=C^{n}\)
Again, the **rewrites** command is integral to automatically rewrite the standard notation for the probability space with the existing vertex colourings notation, \(C^{n}\). All the basic lemmas from the original _vertex_fn_space_ locale are still available, as well as other extensions from intermediate locales. Any proof involving a random colouring can now simply interpret this locale to set up the probability space and automatically access these properties.
This methodology naturally encourages increased modularity in proof, and thus reduces duplication. For example, general facts on vertex colouring probabilities can be formalised within the _vertex_colour_space_ locale, instead of in individual proofs. This is particularly valuable for lemmas that are often presented as intuitive facts on paper, but require fiddly proofs in a formal environment and would significantly increase the proof length if included in the main proof. For example, on paper, a uniform vertex colouring could be described by saying "colour each vertex red or blue with equal probability". In the formal probability space, this actually means each vertex colouring function is equally likely. However, it would also be useful to derive a result on the probability of each individual vertex having a specific colour (or other property). This is a simple lemma in _vertex_prop_space_, which is automatically rewritten in _vertex_colour_space_ to use \(\Omega U=C^{n}\).
**lemma**_prob-uniform-vertex_:
**assumes**\(b\in P\) and \(v\in\mathcal{V}\)
**shows**\(prob\)\(\{f\in\Omega U\cdot fv=b\}=1/(\textit{card}\,P)\)
While it is intuitive that a vertex would have a colour \(c\) with probability \(1/n\) given \(n\) colours, the formalisation involved reasoning on the cardinality of filtered sets. The _Pixel-Extras_ theory formalises a number of counting lemmas specific to the extensional function set relation.
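The content of _prob_uniform_vertex_ can also be sanity-checked numerically outside Isabelle. The sketch below (ours, purely illustrative) enumerates the function space \(\mathcal{V}\rightarrow\{0..<n\}\) and computes the probability of a vertex receiving a fixed colour under the uniform count measure.

```python
from itertools import product
from fractions import Fraction

vertices = [0, 1, 2, 3]
n = 3                                     # number of available colours
# The sample space Omega: every function from vertices to {0..n-1}.
omega = [dict(zip(vertices, cs))
         for cs in product(range(n), repeat=len(vertices))]

def prob(event):
    """Uniform count measure: P(A) = |A| / |Omega|."""
    return Fraction(sum(1 for f in omega if event(f)), len(omega))

v, c = 2, 1
assert prob(lambda f: f[v] == c) == Fraction(1, n)  # 1/n, as in prob_uniform_vertex
print(prob(lambda f: f[v] == c))
```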
### Basic Bounds
The main task of step (2) of the framework is typically defining the _bad events_ (events to be avoided), or alternatively, the desired properties of the structure. Identifying these can be a challenge in the textbook proof, but once identified should be straightforward to translate to a formal environment.
Once the properties have been identified, step (3) of the formalisation involves calculations to show the structure has the desired properties with a positive probability. These calculations can be complex, but there are a number of simple bounds which are a useful starting point. This framework formalises these basic bounds for easy applicability.
Firstly, the _union bound_ intuitively states that given a collection of bad events with a total probability less than one (usually smaller), it is possible to avoid all of them (Kolmogorov, 1959).
**Theorem 4.1** (Union Bound).: _Given events \(A=\{A_{1},\ldots,A_{n}\}\), then \(\mathbb{P}(\bigcup A)\leq\sum_{i=1}^{n}\mathbb{P}(A_{i})\). Therefore, if \(\sum_{i=1}^{n}\mathbb{P}(A_{i})<1\) then \(\mathbb{P}(\overline{\bigcup A})>0\)_
The lemma _finite_measure_subadditive_finite_ from the measure theory libraries formalised the first part of this statement. It was simple to extend this to show the avoidance version of the theorem for event complements.
**lemma**_Union-bound-avoid-fun_:
**assumes**_finite_\(A\) and \((\sum a\in A\). \(\textit{prob}\,(fa))<1\)_**and**\(f^{\prime}A\subseteq\textit{events}\)
**shows**_prob_ (_space_\(M-\bigcup(f^{\prime}A))>0\)
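As an illustration of how this bound is typically deployed (anticipating the hypergraph colouring applications later in the paper), consider a \(k\)-uniform hypergraph with \(m\) edges and a uniform random 2-colouring: each edge is monochromatic with probability \(2^{1-k}\), so writing \(A_{e}\) for these bad events,
\[\mathbb{P}\Big(\bigcup_{e}A_{e}\Big)\leq m\,2^{1-k}<1\qquad\text{whenever }m<2^{k-1},\]
and a proper 2-colouring must exist.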
The other bound is the _complete independence_ bound (Kolmogorov, 1959). Intuitively, this states that given an arbitrary number of independent bad events, each occurring with a probability less than one, then it is possible, often with a tiny probability, to avoid all of them.
**Theorem 4.2** (Complete Independence Bound).: _Given a set of independent events \(A=\{A_{1},\ldots,A_{n}\}\) if for all \(i,\mathbb{P}(A_{i})<1\), then \(\mathbb{P}(\overline{\bigcup A})>0\). Note \(\overline{\bigcup A}=\bigcap\limits_{i=1}^{n}\overline{A_{i}}\)._
This had not previously been formalised, and required the lemmas on independent event complements from Sect. 3.1. The formalisation was then relatively straightforward, requiring 10 Isar proof steps.
**lemma**_complete-indep-boundz-index_:
**assumes**_finite_\(A\)**and**\(F^{\prime}A\subseteq\textit{events}\)**and**_indep-events_\(FA\)**assumes**\(\bigwedge a\)_._\(a\in A\Longrightarrow\textit{prob}\,(Fa)<1\)**shows**\(prob\,(\textit{space}\,M-(\bigcup(F^{\prime}A)))>0\)_
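A toy numeric check of Theorem 4.2 (again outside the formal development, with all names ours): over independent fair bits, the bad events "bit \(i\) is 1" each have probability \(1/2<1\), and the probability of avoiding them all is the product of the complement probabilities, which is tiny but strictly positive.

```python
from itertools import product
from fractions import Fraction

k = 4
omega = list(product([0, 1], repeat=k))             # uniform space of k fair bits
prob = lambda ev: Fraction(sum(1 for w in omega if ev(w)), len(omega))

bad = [lambda w, i=i: w[i] == 1 for i in range(k)]  # independent bad events
assert all(prob(A) == Fraction(1, 2) for A in bad)

avoid_all = prob(lambda w: not any(A(w) for A in bad))
assert avoid_all == Fraction(1, 2) ** k and avoid_all > 0
print(avoid_all)                                    # 1/16: small but positive
```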
### Obtain Structure
The final step of the framework typically obtains an exemplar object from the space with the desired property. Intuitively, this follows from demonstrating a positive probability, and is often omitted entirely from a paper proof. However, it is a necessary step in a formalisation. The framework includes the formalisation of several existence lemmas, some based on a positive probability, and the others for a probability less than one when avoiding certain events.
**lemma**_prob-lt-one-obtain_:
**assumes**\(\{e\in\textit{space}\,M\cdot Q\,e\}\in\textit{events}\)
**and**_prob_\(\{e\in\textit{space}\,M\cdot Q\,e\}<1\)
**obtains**\(e\)**where**\(e\in\textit{space}\,M\)**and**\(\neg Q\,e\)
These obtain lemmas could be easily combined with the formalisation of the union and independence bound lemmas. This effectively combines steps (3) and (4) in the formal framework and simplifies the overall proof. One example of this is given below:
**lemma**_Union-bound-obtain-fun_:
**assumes**_finite_\(A\)**and**\((\sum a\in A\). \(\textit{prob}\,(fa))<1\)**and**\(f^{\prime}A\subseteq\textit{events}\)
**obtains**\(e\)**where**\(e\in\textit{space}\,M\)**and**\(e\notin\bigcup\{\,a\in A\,fa\}\)
## 5. Lovasz local lemma
The Lovasz local lemma is a fundamental tool from the probabilistic method. It (and its variations) enable the provision of tight bounds in situations dealing with rare events, i.e. events that occur with a small positive probability. As such, it is particularly useful in step (3) of the framework. The lemma had not previously been formalised in any system. The formalisation process focussed on the general lemma, which was then adapted to formalise several useful corollaries.
Theorem 5.1 (General Lovasz local lemma) ().: _Let \(A_{1},A_{2},\ldots,A_{n}\) be events in an arbitrary probability space. Suppose \(D=(V,E)\) is a dependency (di)graph for the above events, and suppose there are real numbers \(x_{1},\ldots,x_{n}\) such that \(0\leq x_{i}<1\) and \(\mathbb{P}[A_{i}]\leq x_{i}\prod_{(i,j)\in E}(1-x_{j})\) for all \(1\leq i\leq n\). Then_
\[\mathbb{P}\left[\bigwedge_{i=1}^{n}\overline{A_{i}}\right]\geq\prod_{i=1}^{n} (1-x_{i})\geq 0\]
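A frequently used special case in the literature takes all the \(x_{i}\) equal: if \(\mathbb{P}(A_{i})\leq p\) for every \(i\) and every vertex of the dependency graph has at most \(d\) neighbours, then choosing \(x_{i}=\tfrac{1}{d+1}\) the hypothesis of Theorem 5.1 reduces to
\[p\leq\frac{1}{d+1}\Big(1-\frac{1}{d+1}\Big)^{d},\]
and since \(\big(1-\tfrac{1}{d+1}\big)^{d}\geq 1/e\), the simpler condition \(e\,p\,(d+1)\leq 1\) already guarantees \(\mathbb{P}\big(\bigwedge_{i}\overline{A_{i}}\big)>0\).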
Our formalisation combines aspects of the paper proofs from several sources, primarily including the probabilistic method textbook (Bauer and Wies, 2011), which provides a good overview, and Zhao's probabilistic method lecture notes (Zhao, 2011), which provided further detail.
### Dependency Graphs
The first necessary concept for Thm. 5.1 is _dependency graphs_, a (di)graph \(D=(V,E)\) where events \(A_{1}\ldots A_{n}\) are represented by \(V\) and for each \(i\), \(1\leq i\leq n\), the event \(A_{i}\) is mutually independent of all the events \(\{A_{j}:(i,j)\not\in E\}\).
Interestingly, various texts will switch between using graphs and digraphs in the language. For example Zhao notes graphs are usually sufficient (Zhao, 2011), however Alon and Spencer only reference digraphs (Alon and Spencer, 2011). Ultimately dependency graphs are simply an intuitive representation of mutual independence, where any events _not_ in a specific event's neighbourhood are part of a mutually independent set.
As such, the formalisation could have been completed without dependency graphs. However, there can be advantages intuitively with mirroring the language used in the majority of texts, especially for formal lemmas which represent common proof techniques. Ideally, the formal environment should be set up such that it is easy to switch between versions of the lemma statement with and without dependency graph notation, as done on paper.
Early in the formalisation efforts, it was clear that using undirected graphs would highly restrict the ability to move to a set representation. Generally, just because event \(A_{j}\) is in a mutually independent set of \(A_{i}\), the reverse isn't automatically true. As such, our formalisation of dependency graphs used Noschinski's directed graph theory library (Noschinski, 2011).
**locale** _dependency-digraph_ = _pair-digraph_ \(G::\) _nat pair-pre-digraph_ + _prob-space_ \(M::\) _'a measure_ **for** \(G\) \(M\) + **fixes** \(F::\) _nat_ \(\Rightarrow\) _'a set_
**assumes**_vss:_\(F\ ^{\ (}(\textit{pverts}\ G)\ \mbox{\raisebox{-1.29pt}{$\Left$}}\ \textit{events}\)
**assumes**_mis:_\(\bigwedge\ i.\ i\in(\textit{pverts}\ G)\ \mbox{\raisebox{-1.29pt}{$\Left$}}\ \textit{ mutual-indep-events}\)
\((F\ i)\ F\ ((\textit{pverts}\ G)-(\{i\}\cup\textit{neighborhood}\ i))\)
A few minor additions were required to the digraph libraries. Specifically, a neighbourhood definition and related basic lemmas unusually was not in the original library. Within the new dependency digraph locale, several useful helper lemmas were also formalised. These were derived from the mutual independence assumption, and again aimed to avoid later duplication. For example, _dep_graph_indep_event_ establishes an independent event set based on vertices with a zero outdegree, making use of the _mutual_indep_ev_set_all_ lemma from Sect. 3.1 in its proof.
**lemma**_dep-graph-indep-events_:
**assumes**_A_\(\subseteq\) _pverts_\(G\) **and**\(\bigwedge\ Ai.\ Ai\in A\ \mbox{\raisebox{-1.29pt}{$\Left$}}\ \textit{out-degree}\ G\ Ai=\ 0\)
**shows**_indep-events_\(FA\)
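To make dependency graphs concrete, the following Python sketch (ours; the formal development works abstractly with _mutual_indep_events_) builds the standard dependency structure for hypergraph 2-colouring: the event "edge \(e\) is monochromatic" depends only on the colours inside \(e\), so its neighbours are exactly the events of intersecting edges, and the maximum neighbourhood size feeds directly into the symmetric local lemma condition.

```python
from math import e as euler

# A k-uniform hypergraph whose edges overlap their neighbours.
k = 8
edges = [frozenset(range(i, i + k)) for i in range(0, 40, 2)]

def neighbourhood(edge, edges):
    """Dependency-graph neighbours of A_edge: events of intersecting edges."""
    return [f for f in edges if f != edge and edge & f]

d = max(len(neighbourhood(g, edges)) for g in edges)  # max dependency degree
p = 2 ** (1 - k)                                      # P(edge monochromatic)

# Symmetric Lovasz local lemma condition: e * p * (d + 1) <= 1.
print(d, p, euler * p * (d + 1) <= 1)  # condition holds, so a proper
                                       # 2-colouring of this hypergraph exists
```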
### Formalising the General Lemma
Using the dependency digraph library, we could now formalise Thm. 5.1 in the _prob_space_ locale.
**theorem**_lowasz-local-general_:
**assumes**_A_\(\neq\{\}\) **and**_\(F\ ^{\ \
**assumes finA: finite \(A\) and _Aevents_: \(F\)\(A\subseteq\) events
**assumes** _bounds_: \(\bigwedge\)\(i\). \(i\in A\implies g\)\(i\geq 0\)\(\wedge\)\(g\)\(i<1\)
**assumes** _dep-graph: dependency-digraph \(G\)\(M\)F_
**and** _dep-graph-verts: perts \(G=A\)_
**assumes probA: \(\bigwedge\)\(Ai\). \(Ai\in A\implies\)prob \((F\,Ai)\leq(f\,Ai)*(\)\(\prod Aj\in\)pre-digraph.neighborhood \(G\)\(Ai\). \((1-(f\,Aj)))\)**
**assumes Ai-in \(Ai\)**. \(A\) and _S-subset: \(S\subseteq A-\{Ai\}\)_
**assumes S-nempty: \(S\neq\{\}\)\(\mathbf{\ }\)assumes prob2: prob \((\)\(\)\(\cap\)\(A\)\(j\in S\). \((\)_space \(M-(FA)))>0\) shows \(\mathcal{P}((FA)\)\(|\)\((\cap\)\(A\)\(j\in S\). \((\)space \(M-(FA))))\leq fAi\)**
The proof proceeds by induction on \(S\), stating the base case as trivial, before following the proof sketch below:
1. Split \(S\) into \(S_{1}=\{j\in S|A_{j}\in\mathrm{neighbourhood}(A_{i})\}\), and \(S_{2}=S-S_{1}\), i.e. a set of events mutually independent of \(A_{i}\).
2. Apply a version of Bayes rule to get the following fraction: \[\frac{\mathbb{P}\left[A_{i}\wedge\left(\bigwedge_{j\in S_{1}}\overline{A_{j}} \right)\big{|}\wedge_{l\in S_{2}}\overline{A_{l}}\right]}{\mathbb{P}\left[ \bigwedge_{j\in S_{1}}\overline{A_{j}}\right|\wedge_{l\in S_{2}}\overline{A_{ l}}\right]}\]
3. As \(A_{i}\) is mutually independent of \(S_{2}\), show the numerator has an upper bound of: \(x_{i}\prod_{(i,j)\in E}(1-x_{j})\).
4. Using the induction hypothesis, show the denominator is lower bounded by: \(\prod_{(i,j)\in E}(1-x_{j})\).
5. The lemma statement now follows by calculation.
The formalisation of this lemma was complicated by the universal set vs probability space challenge. As the universal set is not necessarily the probability space, its probability could be \(0\) rather than \(1\). The textbook proof routinely uses \(\mathbb{P}(\bigcap\emptyset)=\mathbb{P}(\Omega)=1\), whereas our formalisation must deal with any probabilities conditional on \(\bigcap\emptyset\) separately.
The original base case of the lemma (when \(S\) is empty) was thus formalised separately first in _lovasz_inductive_base_.
**lemma** _lovasz-inductive-base_:
**assumes** _dependency-digraph_ \(G\ M\ F\)
**assumes** \(\bigwedge Ai.\ Ai\in A\implies g\ Ai\geq 0\wedge g\ Ai<1\)
**assumes** \(\bigwedge Ai.\ Ai\in A\implies\) _prob_ \((F\ Ai)\leq(g\ Ai)*(\prod Aj\in\textit{pre-digraph.neighborhood}\ G\ Ai.\ (1-(g\ Aj)))\)
**assumes** \(Ai\in A\) **and** _pverts_ \(G=A\)
**shows** _prob_ \((F\ Ai)\leq g\ Ai\)
This was a straightforward formalisation requiring only four Isar proof steps. We now proceed with the main proof, which first establishes some notation. The variable \(\mathcal{T}\) represents a function mapping an event index to its complement event: \(\mathcal{T}\equiv\lambda i.\) _space_ \(M-F\ i\). A local instance of the digraph locale, _dg_, was also interpreted for easy use.
**interpret** _dg_: _dependency-digraph_ \(G\ M\ F\) **using** _assms(4)_ **by** _simp_
The proof required strong induction. Rather than inducting on the cardinality of the set \(S\) as done on paper, the pre-existing _finite_psubset_induct_ rule was ideal, resulting in an induction hypothesis which establishes the statement on any non-empty _proper_ subset of the set \(S\). Several assumptions were carefully selected as induction premises. The induction step of the formal proof is shown below:
**show** \(\mathcal{P}((F\ Ai)\ |\ (\bigcap Aj\in S.\ (\textit{space}\ M-(F\ Aj))))\leq f\ Ai\)
**using** _finS Ai-in S-subset S-nempty prob2_
**proof** (_induct_ \(S\) _arbitrary_: \(Ai\) _rule_: _finite-psubset-induct_)
After applying induction, the formalisation mirrored step (1) by defining \(S_{1}\) and \(S_{2}\), along with a number of useful facts (finiteness, event subsets etc). Next, the formalisation showed that if \(S_{1}=\emptyset\), the proof followed from _lovasz_inductive_base_, as \(A_{i}\) is mutually independent of \(S_{2}\) by definition.
Assuming \(S_{1}\neq\emptyset\), steps (2) to (4) vary slightly depending on whether \(S_{2}=\emptyset\) (due to the universal set challenge), requiring slightly different lemmas to establish the fraction and to apply the conditional multiplication rule. This case split is done on the following proof step, which encapsulates the result of steps (2) to (4) to avoid duplicated work for the calculations in step (5):
**moreover have** \(\exists P_{1}\ P_{2}.\ \mathcal{P}((F\ Ai)\ |\ (\bigcap Aj\in S.\ \textit{space}\ M-F\ Aj))=P_{1}/P_{2}\ \wedge\ P_{1}\leq\textit{prob}\ (F\ Ai)\ \wedge\ P_{2}\geq(\prod Aj\in S_{1}.\ (1-(f\ Aj)))\)
The cases first required slightly different lemmas to establish the fraction, as per step (2). Step (3) was easily formalised in both cases in one or two proof steps, as it was simple to apply the mutual independence assumption. The denominator bound in step (4), which used the induction hypothesis, required the most work. In both cases, the multiplication rule for conditional probability from Sect. 3.1 was used. The resulting product then needed to be manipulated, as done on paper. However, typical of a formal environment, the calculations required more work. While some calculations were unique, those that were shared between cases used a single helper lemma to reduce duplication in these fiddly proofs.
From here, the final calculation required in step (5) could be formalised using simple proof tactics.
#### 5.2.2. Applying the helper lemma
On paper, the main lemma typically follows directly from the helper. For example, Alon and Spencer [1] state "The assertion of Lemma 5.1.1 now follows easily".
However, this is not the case when the proof is broken down formally. In particular, to apply the helper lemma, a further induction step on the event set was required. A brief survey of lecture notes on the subject suggests this step is routinely skipped, although it was acknowledged as necessary when the problem was discussed.
The formalisation of _lovasz_local_general_ first obtained the required assumptions from both the base and general version of the helper lemma, then applied the non-empty finite set induction rule with these assumptions as induction premises.
**show** _prob_ \((\bigcap Ai\in A.\ (\mathcal{T}\ Ai))\geq(\prod Ai\in A.\ (1-f\ Ai))\)
**using** _assms(3) assms(1) assms(2) assms(4) general base_
**proof** (_induct_ \(A\) _rule_: _finite-ne-induct_)
The induction proof itself was relatively straightforward to formalise, requiring around 15 calculational Isar proof steps, which drew on several of the lemmas on conditional probability and independence from Sect. 3.1.
### Corollaries and Variations
There are many variant forms of the Lovasz local lemma. The simplest corollary states that the probability of none of the events occurring is positive. This was a one-line formalisation following immediately from the general lemma.
The symmetric Lovasz local lemma is another important variation and has several forms. While less general, it is more commonly used in practice.
**Corollary 5.3** (The Lovasz local lemma; symmetric case [(1)]).: _Let \(A_{1},\ldots,A_{n}\) be events in an arbitrary probability space. Suppose that each event \(A_{i}\) is mutually independent of a set of all the other events \(A_{j}\) but at most \(d\), and that \(\mathbb{P}[A_{i}]\leq p\) for all \(1\leq i\leq n\). If \(ep(d+1)\leq 1\), then \(\mathbb{P}\left[\bigwedge_{i=1}^{n}\overline{A_{i}}\right]>0\)._
One commonly seen symmetric variation in literature instead retains the dependency (di)graph notation, replacing the mutually independent set condition with one that states: given a dependency graph \(D=(V,E)\) where \(V=\{1,\ldots,n\}\), the outdegree of each vertex is at most \(d\). The second symmetric variation further replaces the \(ep(d+1)\leq 1\) condition with \(4pd\leq 1\), which is a tighter bound for \(d<3\).
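To make this comparison explicit (a one-line check, assuming \(p>0\)):
\[4pd\leq ep(d+1)\;\Longleftrightarrow\;4d\leq e(d+1)\;\Longleftrightarrow\;d\leq\frac{e}{4-e}\approx 2.1,\]
so for \(d\leq 2\) the condition \(4pd\leq 1\) is implied by \(ep(d+1)\leq 1\), making the second variation the sharper statement in that range.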
#### 5.3.1. The Symmetric Lemma; Dependency Graph
The formalised statement of the dependency graph variation mirrors Corollary 5.3, with the mutual independence assumption replaced by the requirement that each vertex of the dependency digraph has outdegree at most \(d\).

## 6. Hypergraph Colourings

The remaining sections apply the framework to hypergraph colouring problems. A hypergraph has _Property B_ if it is 2-colourable, that is, if its vertices can be coloured with two colours so that no edge is monochromatic; \(m(n)\) denotes the minimum number of edges of an \(n\)-uniform hypergraph which does not have Property B.
The probabilistic method can be used to establish existence conditions for hypergraphs which satisfy _Property B_, and in turn place bounds on \(m(n)\). Property B is simple to formalise, with \(m(n)\) being slightly more tricky -- requiring use of a dummy variable to avoid typing warnings (as we do not care about the hypergraph vertex type).
**abbreviation** (**in** _hypergraph_) _has-property-B_ :: _bool_ **where** _has-property-B_ \(\equiv\) _is-n-colourable 2_
**definition** _min-edges-colouring_ :: _nat_ \(\Rightarrow\) _'a itself_ \(\Rightarrow\) _enat_ **where**
_min-edges-colouring_ \(n\) = INF \(h\in\) ((_not-col-n-uni-hyps_ \(n\)) :: _'a hyp-graph set_). _enat_ (_size_ (_hyp-edges_ \(h\)))
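As a concrete instance of these definitions (an illustrative aside, not part of the formalisation): the Fano plane, the 3-uniform hypergraph whose seven edges are the lines of the projective plane of order 2, admits no proper 2-colouring, so it witnesses \(m(3)\leq 7\); in fact \(m(3)=7\), which sits comfortably above the lower bound \(m(3)\geq 2^{3-1}=4\) given by Thm. 6.2 below.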
### Monochromatic Edges and Independence
Basic probability properties on monochromatic edges were essential and repetitive. Building on the example in Sect. 4.2, these could be encapsulated in the _vertex_colour_space_ locale. For example, we first formalised the probability of an edge \(e\) being monochromatic with colour \(c\), given an \(n\)-colouring function \(f\).
**lemma** _prob-edge-colour_:
**assumes** \(e\in E\) **and** \(c\in\{0..<n\}\)
**shows** _prob_ \(\{f\in C^{n}.\ \textit{mono-edge-col}\ f\ e\ c\}\) = 1/(_n pow_ (_card_ \(e\)))
In lecture notes, the proof of this statement is typically either glossed over (Selton, 2015) or justified by noting that, as each vertex \(v\) clearly and independently has colour \(c\) with probability \(1/n\), the independence multiplication rule can be applied.
However, this is a classic example of circular reasoning based on real world intuition when using probability. Formally, events are independent only if they adhere to the above multiplication rule. Therefore, the multiplication rule can't be used unless independence has previously been established by other logical inferences. The formalisation instead directly counts the number of colourings where an edge is monochromatic via a helper lemma on the extensional function set relation. The probability was then directly calculated using the established uniform probability rule in _vertex_fn_space_. While no longer relevant to the proof, this would also establish independence on the vertex colouring events.
The monochromatic edge event for a particular colour is disjoint from the same event for a different colour, formalised in the _disjoint_family_on_colourings_ lemma. The formalisation of the probability of an edge being monochromatic with any colour follows.
**lemma**_prob-monochromatic-edge_:
**assumes**_e_\(\in\)E_
**shows** _prob_ \(\{f\in C^{n}.\ \textit{mono-edge}\ f\ e\}\) = _n pow_ (_1 - int_ (_card_ \(e\)))
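The counting behind these lemmas (sketched informally here, outside the formal development) is short: for a fixed edge \(e\) and colour \(c\), the colourings assigning \(c\) to every vertex of \(e\) are in bijection with the colourings of the remaining vertices, of which there are \(n^{|V|-|e|}\) out of \(n^{|V|}\) in total, so
\[\mathbb{P}(e\text{ monochromatic in colour }c)=\frac{n^{|V|-|e|}}{n^{|V|}}=n^{-|e|},\qquad\mathbb{P}(e\text{ monochromatic})=n\cdot n^{-|e|}=n^{1-|e|},\]
the second equality using that the \(n\) single-colour events are pairwise disjoint.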
### Property B: Uniform Hypergraphs
The following basic bound on uniform hypergraphs was proved by Erdos in 1963. This is a classic early example of the probabilistic method on paper. The formalisation of the proof is intended to be a simple exemplar for how to apply the formal probabilistic framework from Sect. 4.
**Theorem 6.2** (Property B: \(n\)-uniform hypergraphs).: _Every \(n\)-uniform hypergraph with fewer than \(2^{n-1}\) edges has property B. Therefore \(m(n)\geq 2^{n-1}\)._
The proof on paper is relatively simple, approximately 5 lines. Using the framework from Sect. 4, the final proof was only 11 Isar proof steps, as seen below:
**proposition** _erdos-propertyB_:
**assumes** _size_ \(E<2^{k-1}\) **and** \(k>0\)
**shows** _has-property-B_
**proof** -
**interpret** _P_: _vertex-colour-space_ \(V\) \(E\) \(2\)
**by** _unfold-locales_ (_auto simp add: order-ge-two_)
**define** \(A\) **where** \(A\equiv(\lambda e.\ \{f\in C^{2}.\ \textit{mono-edge}\ f\ e\})\)
**have** \((\sum e\in\textit{set-mset}\ E.\ \textit{prob}\ (A\ e))<1\)
(5-step calculation proof)
**moreover have** \(A\) ' (_set-mset_ \(E\)) \(\subseteq\) _P.events_
**unfolding** _A-def P.sets-eq_ **by** _blast_
**ultimately obtain** \(f\) **where** \(f\in C^{2}\) **and** \(f\notin\bigcup\) (\(A\) ' (_set-mset_ \(E\)))
**using** _P.Union-bound-obtain-fun_ [_of set-mset_ \(E\) \(A\)] _finite-set-mset P.space-eq_ **by** _auto_
**thus** _?thesis_ **using** _event-is-proper-colouring A-def is-n-colourable-def_ **by** _auto_
**qed**
This proposition is in the _fin_kuniform_hypergraph_nt_ locale. Notably, this lemma does not necessarily hold if the graph is trivial -- an assumption omitted from the original theorem statement.
The above proof clearly lines up with each step of the formal framework as well as the original proof:
1. The first step interprets the _vertex_colour_space_ locale to set up the probability space, in place of the paper proof stating "Color \(V\) randomly by two colours".
2. It then mirrors the paper proof and lets \(A_{e}\) be the event that \(e\in E\) is monochromatic (i.e. defines the event to avoid).
3. Next, the calculation step must show that the sum of the probabilities of the edges being monochromatic is strictly less than one. This used the lemma from Sect. 6.1, which the paper proof invokes without calculation; the inequality behind the 5-step Isar proof, summarised by a single line in the paper proof, is sketched after this list.
4. Finally, the _Union_bound_obtain_fun_ lemma (Sect. 4.3) was applied to obtain a colouring function not in the set of all possible monochromatic edge events (combining steps 3 and 4 of the framework).
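For the record, the inequality behind step (3) in the 2-colour, \(k\)-uniform setting is the one-line union-bound computation
\[\sum_{e\in E}\mathbb{P}(A_{e})=\sum_{e\in E}2^{1-k}=\operatorname{size}(E)\cdot 2^{1-k}<2^{k-1}\cdot 2^{1-k}=1.\]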
From here it was also possible to formalise the second part of the theorem in a few lines to establish a bound on \(m(n)\).
**corollary** _erdos-propertyB-min_:
**fixes** _z_ :: _'a itself_
**assumes** \(n>0\)
**shows** (_min-edges-colouring_ \(n\) _z_) \(\geq 2^{n-1}\)
### Property B: A more general bound
Thm 6.2 is only for \(k\)-uniform hypergraphs, which is a notable restriction. The Lovasz local lemma provides a more general condition.
**Theorem 6.3** (Property B).: _Let \(H=(V,E)\) be a hypergraph in which every edge has at least \(k\) elements, and suppose that each edge of \(H\) intersects at most \(d\) other edges. If \(e(d+1)\leq 2^{k-1}\), then \(H\) has property B._
The proof of this property on paper begins in the same way as Thm. 6.2. There is a slight alteration to the calculation of the probability of a monochromatic edge, since edges may now have different sizes. It then uses two lines to establish a mutual independence condition between the different edge events, before stating the result follows from the symmetric Lovasz local lemma. The paper proof totals only 5 lines. Again, this also implicitly assumes a non-trivial hypergraph. The formalised version of the lemma statement, in the _fin_hypergraph_nt_ locale, is given below, showing the application of the Lovasz local lemma.
**proposition** _erdos-propertyB-LLL_:
**assumes** \(\bigwedge e.\ e\in\#E\implies\textit{card}\ e\geq k\)
**assumes** \(\bigwedge e.\ e\in\#E\implies\textit{size}\ \{\#\,f\in\#E.\ f\neq e\wedge f\cap e\neq\{\}\,\#\}\leq d\)
**assumes** _exp_ \(1*(d+1)\leq 2^{k-1}\)
**shows** _has-property-B_

The bulk of the formal proof lies in constructing the dependency digraph on the monochromatic edge events and establishing that each such event is mutually independent of the events for non-intersecting edges, a fact the paper proof asserts with little justification, before applying the symmetric Lovasz local lemma from Sect. 5.3.

## 7. Discussion

Beyond the individual results, the formalisation uncovered several broader lessons on applying the probabilistic method in
combinatorics. This included an Isabelle specific challenge on universal sets, dealing with different types of intuitive proofs, and the advantages of a modular framework based approach for formalising more general techniques.
### The universal set challenge
A challenge specific to Isabelle that must be addressed is the disparity between the probability space \(\Omega\) and the universal set \(\mathbb{U}\). On paper, these two concepts are analogous in probability theory, which specifically enables the calculation \(\mathbb{P}(\bigcap\emptyset)=\mathbb{P}(\mathbb{U})=\mathbb{P}(\Omega)=1\). However, in Isabelle, the set of all vertex colouring functions is clearly not equal to the universal set (all functions from \('a\Rightarrow nat\)). Therefore, \(\mathbb{U}\) contains elements outside the probability space, so \(\mathbb{P}(\bigcap\emptyset)=0\). While possible to work around, as demonstrated in Sect. 5, the formal proofs were more complex. Another approach which could avoid this problem, while deviating from typical mathematical notation, would be to use the Isabelle PMF library. A _pmf_ can be shown to inherit from the more general _prob_space_ locale used in this paper. The definition also requires that \(\Omega=\mathbb{U}\). However, ideally a solution could be found for Isabelle's main probability library to avoid this problem, as initial investigations indicate it is a non-issue in other proof assistants such as Lean, where the probability space is identified with a type.
### Intuition in probability
Traditional combinatorial proof techniques such as counting rely heavily on human intuition. It was interesting to see how probability driven proofs relied on a different use of real-world intuition, often skipping over proofs of certain facts entirely. This was particularly evident in independence proofs where circular reasoning was surprisingly common, due to proofs that appealed to physical intuition. This intuition can perhaps be linked back to how this concept is taught early in mathematical education. For example, the Cambridge International A Level textbook (3, p.100) states "two events are said to be independent if either can occur without being affected by the occurrence of the other". It then proceeds to give the multiplication law for independent events, when in fact two events are only independent if they satisfy the multiplication law. The textbook example uses physical intuition to deduce independence, before using the multiplication law, which reinforces this circular reasoning.
In a formal setting, appealing to such physical intuition is not possible. In cases where independence was not previously established (either by calculation or assumption), the probability had to be calculated directly, which in turn required formal counting proofs. The clearest example of this was when calculating the probability of a monochromatic edge in Sect. 6.1. Proofs involving mutually independent sets relied on similar physical intuition in on-paper arguments. This was exemplified by the observation in Sect. 6.3, where the mutual independence principle was seldom referred to, let alone proven. This formalisation thus fills the significant gaps in the paper proof needed to establish mutual independence of monochromatic non-intersecting edge events, and makes the proof easier to find to begin with.
Another interesting aspect of intuition in probability is how randomness is introduced, and results are obtained. On paper, mathematicians will usually refer to natural intuition to establish this, such as specifying individual probabilities, rather than defining the full probability space the proof is working with. This motivated the development of the general framework to structure these steps in a formal environment.
### Using a formal framework
The framework presented in Sect. 4.1 successfully minimises both the setup and conclusion of formal probabilistic proofs. Our approach demonstrates a new application of locales; creating a hierarchy for proof contexts rather than structures (Bauer et al., 2007; Bauer et al., 2007). Through strategic use of **rewrites**, this significantly minimised duplication between proofs on the same vertex space in Sect. 6.2 and 6.3. By basing the hierarchy on a very general incidence system locale, it provides many exemplar formal probability space definitions which would be straightforward to apply in other contexts. To test this, we refactored a probabilistic proof from (Bauer et al., 2007) on bipartite graphs. The framework reduced the probability space setup required, and made several lines of proof significantly simpler with a higher level of automation. This additionally reinforced the power of the locale-centric approach for mathematical hierarchies (Bauer et al., 2007). Locales were easy to use to switch between different mathematical contexts such as probability and combinatorics, and even combine ideas, as in the case of dependency graphs.
The framework is intended to be a guide for future formalisations of the probabilistic method. In addition to the probability space set up benefits, the existence lemmas to do the final step (often omitted on paper), made it easy to move from a bound to a proof conclusion. Mirroring the on paper environment, the framework enables a user to focus on the middle steps of the formalisation which are more theorem specific. The addition of several general bounding techniques to the framework, such as the Lovasz local lemma, can further help structure and minimise these calculation steps, as demonstrated in Sect. 6.
## 8. Concluding Comments
This paper proposed a general formal framework for proofs using the probabilistic method in combinatorics, which makes it easier to translate intuitive aspects of probability proofs
to the formal environment, while reducing duplication between proofs. A significant aspect of this framework is the first formalisation of the Lovasz local lemma -- a fundamental technique in probability with wide application potential -- alongside other contributions to general libraries on probability and combinatorics which could be used in a wide range of future work. Exploring proofs on hypergraph colourings additionally uncovered some fascinating discrepancies in mathematical intuition in the probabilistic context. The formalisations are intended to be published via the Isabelle Archive of Formal Proofs for easy access for future work. The framework was kept intentionally general, opening the door to future extensions such as further probabilistic methods, and new applications.
## Acknowledgements
Chelsea Edmonds was jointly funded by the Cambridge Trust (Cambridge Australia Scholarship) and a Cambridge Department of Computer Science and Technology Premium Research Studentship.
Lawrence C. Paulson was funded by the ERC Advanced Grant ALEXANDRIA (Project GA 742178).
|
2309.14411 | Signatures of Parafermion Zero Modes in Fractional Quantum
Hall-Superconductor Heterostructures | Parafermion zero modes can arise in hybrid structures composed of $\nu=1/m$
fractional quantum Hall edges proximitized with an s-wave superconductor. Here
we consider parafermion and Cooper pair tunneling, and backscattering in a
junction formed in such hybrid structures. We find that the $4\pi m$
periodicity due to parafermion-only tunneling reduces, in the presence of
backscattering, to $4\pi$-periodic at zero temperature and $2\pi$-periodic at
finite temperature unless the fermion parity is fixed. Nevertheless, a clear
signature of parafermion tunneling remains in the shape of the current-phase
relation. | Junyi Cao, Angela Kou, Eduardo Fradkin | 2023-09-25T18:00:00Z | http://arxiv.org/abs/2309.14411v2 | # Signatures of Parafermion Zero Modes in Fractional Quantum Hall-Superconductor Heterostructures
###### Abstract
Parafermion zero modes can arise in hybrid structures composed of \(\nu=1/m\) fractional quantum Hall edges proximitized with an s-wave superconductor. Here we consider parafermion and Cooper pair tunneling, and backscattering in a junction formed in such hybrid structures. We find that the \(4\pi m\) periodicity due to parafermion-only tunneling reduces, in the presence of backscattering, to \(4\pi\)-periodic at zero temperature and \(2\pi\)-periodic at finite temperature unless the fermion parity is fixed. Nevertheless, a clear signature of parafermion tunneling remains in the shape of the current-phase relation.
Non-Abelian topologically ordered phases are among the most promising platforms for fault-tolerant quantum computation [1]. The excitations in these phases are non-Abelian anyons that have non-trivial fusion rules and braiding statistics [2]. These fusion rules provide a source of topological ground state degeneracy, which allows for non-local storage of information and anyon braiding that is topologically protected against decoherence [3; 4; 5; 6; 7]. While there has been significant interest in using Majorana zero modes (MZMs) for topological quantum computation [1], the \(\mathbb{Z}_{N}\) generalization of MZMs, parafermion zero modes (PZMs) [8], is necessary to perform universal topological quantum computation. It was shown in Ref. [9] that an array of PZMs provides a realization of Fibonacci anyons that is capable of universal topological quantum computation [1; 10; 11; 12].
It has been theoretically proposed that PZMs can arise in fractional topological superconductors (FTSC) [13]; a key example is a hybrid structure composed of edge states of a \(\nu=1/m\) fractional quantum Hall (FQH) system proximitized with an s-wave superconductor. A recent experiment has focused on implementing such a structure in graphene and observed crossed Andreev reflection (CAR), which was suggested to indicate the presence of PZMs [14]. A renormalization group (RG) analysis in Ref. [15] showed that CAR is a necessary but not a sufficient condition for the existence of PZMs. A need has therefore arisen for a closer investigation into FTSCs and the identification of additional observable signatures of PZMs in FTSCs. In this letter, we theoretically consider an FTSC and the experimental signatures that arise when two FTSCs are incorporated into a Josephson junction geometry. We identify the PZMs at each end of the junction and determine the energy spectra and current-phase relation of the junction in the presence of parafermion tunneling, Cooper pair tunneling, and backscattering. We find, in the low-energy effective Hamiltonian, that backscattering explicitly breaks the \(\mathbb{Z}_{m}\) symmetry present in the junction, which results in the periodicity of the Josephson phase being the same for PZMs and MZMs. While the periodicity is an insufficient distinguishing metric, additional features arise in the thermally-averaged current-phase relation that discriminate between PZMs and MZMs. We additionally identify, in the parity-projected thermally-averaged current-phase relation, a temperature-dependent \(4\pi\)-periodic fractional Josephson effect that occurs only under the presence of parafermion tunneling.
_Model for FTSC and PZM.--_We first discuss our model for an FTSC, which consists of two edges from a (fully spin-polarized) \(\nu=1/m\) FQH system proximity coupled by an s-wave superconductor (SC) finger, as in Fig. 1. Since the edges at each finger are proximitized by the s-wave SC, we include a density-density interaction term and a superconducting pairing term \(\Delta\psi_{L}\psi_{R}+\text{h.c.}\) Here we denote by \(L\) and \(R\) the top and bottom FQH edge states in contact with the SC
Figure 1: The Josephson junction includes two copies of FTSCs, parafermion tunneling \(\Gamma_{P}\), Cooper pair tunneling \(\Gamma_{\Delta}\) and backscattering \(\Gamma_{B}\). Each FTSC consists of two edges from \(\nu=1/m\) FQH system (pink) proximity coupled by an s-wave SC (cyan). The spacing between the two FTSCs is exaggerated to emphasize the presence of a junction.
finger. The corresponding bosonized [16] Hamiltonian is,
\[\begin{split} H=&\int_{-L}^{0}\mathrm{d}x\;\bigg{\{} \frac{mv_{F}}{4\pi}\left[\left(\partial_{x}\phi_{R}\right)^{2}+\left(\partial_{x }\phi_{L}\right)^{2}\right]\\ &+\frac{mU}{2\pi}\partial_{x}\phi_{L}\partial_{x}\phi_{R}-\frac{ \Delta}{\ell_{0}^{2}}\cos\left[m\left(\phi_{R}-\phi_{L}\right)\right]\bigg{\}},\end{split} \tag{1}\]
where \(v_{F}\) is the Fermi velocity of the edges, \(\phi_{R/L}\) are the bosonic fields on the top/bottom edge, \(U\) is the interaction strength, \(\Delta>0\) is the (dimensionless) proximity gap, and \(\ell_{0}\) is the magnetic length which plays the role of a UV cutoff.
It is useful to rewrite the bosonic fields as \(\phi_{R/L}=\varphi\pm\vartheta\), which obey the commutation relation,
\[\left[\varphi(x),\vartheta(x^{\prime})\right]=i\frac{\pi}{m}\Theta(x-x^{ \prime}), \tag{2}\]
where \(\rho\equiv\partial_{x}\varphi/\pi=\rho_{R}+\rho_{L}\) is the total charge density operator. The Hamiltonian in the new basis is,
\[\begin{split} H=\int_{-L}^{0}\mathrm{d}x\bigg{\{}& \frac{mu}{2\pi}\left[K\left(\partial_{x}\varphi\right)^{2}+\frac{1}{K}\left( \partial_{x}\vartheta\right)^{2}\right]\\ &-\frac{\Delta}{\ell_{0}^{2}}\cos(2m\vartheta)\bigg{\}},\end{split} \tag{3}\]
with effective velocity \(mu/\pi\), where \(u=\sqrt{v_{F}^{2}-U^{2}}\) [17], and Luttinger parameter \(K=(v_{F}+U)/\sqrt{v_{F}^{2}-U^{2}}\), making this a sine-Gordon theory on a finite interval. Hence, in the vicinity of the critical point, the proximity gap \(\Delta\) and the distance of the Luttinger parameter from the critical point, \(x\equiv 2-mK\), follow the leading terms of the Kosterlitz RG [16, 18, 19],
\[\frac{\mathrm{d}\Delta}{\mathrm{d}l}=x\Delta,\;\;\;\;\;\frac{\mathrm{d}x}{ \mathrm{d}l}=128m^{2}\pi^{5}\Delta^{2}. \tag{4}\]
For the FTSC to be in the superconducting phase, either the Luttinger parameter must satisfy \(K<K_{c}=2/m\) [20], which requires a sufficiently large attractive interaction, or strong pairing \(\Delta>\Delta_{c}\) is required. Deep in the superconducting phase, the \(\vartheta\) field is pinned to one of the \(2m\) minima of the cosine term in the bulk, \(\vartheta(x)=\frac{\tilde{n}\pi}{m}\), where \(\tilde{n}\in\mathbb{Z}_{2m}\) is an integer-valued operator which is related to the clock operator in the \(N\)-state clock model on a single site with \(N=2m\)[2, 16].
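As an illustration of the flow in Eq. (4) (a minimal numerical sketch of ours, not taken from the paper; the function name and the initial values are arbitrary), a forward-Euler integration shows the pairing strength running to strong coupling whenever \(x>0\) or \(\Delta\) is sufficiently large:

```python
import numpy as np

def kosterlitz_flow(m, delta0, x0, dl=1e-3, l_max=100.0):
    """Forward-Euler integration of Eq. (4): dDelta/dl = x*Delta, dx/dl = 128*m^2*pi^5*Delta^2."""
    delta, x, l = delta0, x0, 0.0
    while l < l_max and delta < 1.0:  # stop once Delta reaches O(1), i.e. strong coupling
        delta, x = delta + dl * x * delta, x + dl * 128 * m**2 * np.pi**5 * delta**2
        l += dl
    return l, delta, x

# Example: m = 3, weak initial pairing, Luttinger parameter slightly below the critical value (x0 > 0)
print(kosterlitz_flow(m=3, delta0=1e-4, x0=0.1))
```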
This Hamiltonian has a \(\mathbb{Z}_{2m}\) symmetry, \(\vartheta\rightarrow\vartheta+\frac{\pi}{m}\), representing charge conservation modulo \(e/m\). The corresponding symmetry generator is a \(\mathbb{Z}_{2m}\) generalization of the fermion parity operator \((-1)^{F}\), which we call quasiparticle parity,
\[\hat{P}\equiv\exp\!\left(i\pi\hat{Q}\right)=\exp\left\{i\left[\varphi(0)- \varphi(-L)\right]\right\}, \tag{5}\]
where \(Q\equiv\int_{-L}^{0}\mathrm{d}x\;\rho(x)\) is the number operator. One can check that this is indeed the symmetry generator by,
\[\hat{P}^{-1}\vartheta\hat{P}=\vartheta+\frac{\pi}{m}. \tag{6}\]
The parafermion operator localized at one end of the FTSC is,
\[\alpha_{0}=\frac{1}{b}\int_{-b}^{0}\!\mathrm{d}x\;e^{i\vartheta(x)}\propto e ^{i\vartheta_{0}}, \tag{7}\]
where \(\vartheta_{0}\equiv\vartheta(x=0)\), \(b\) denotes the length of the region where a PZM is localized and is comparable to the coherence length \(\xi\). From solutions of the sine-Gordon equation, \(\vartheta\) has exponentially small fluctuation within \(x\in[-b,0]\); hence, we can treat \(e^{i\vartheta(x)}\) as a constant in this region as well. From the form of the PZM, we can interpret it as "half" of a quasiparticle pair, which reflects the fractionalized nature of the system. The commutator between the Hamiltonian and the PZM is,
\[[H,\alpha_{0}]=-K\partial_{x}\varphi_{0}e^{i\vartheta_{0}}=0. \tag{8}\]
The commutator equals zero because the total current density \(j=\partial_{x}\varphi/\pi\) is \(0\) at \(x=0\), representing no total current flowing from the edge to the FQH background. The commutation relation between the parafermion zero mode operator \(\alpha_{0}\) and the quasiparticle parity \(\hat{P}\) is,
\[\hat{P}\alpha_{0}=e^{i\frac{\pi}{m}}\alpha_{0}\hat{P}. \tag{9}\]
Physically this means that the parafermion operator changes the number operator by one.
_Josephson junction and tunneling._--One way to experimentally identify topologically nontrivial zero modes is the fractional Josephson effect [13, 21], i.e. a \(4m\pi\)-periodic signal in the current-phase relation. The Josephson junction of interest consists of two copies of FTSCs as in Fig. 1,
\[\begin{split} H=\int_{-L}^{0}&\mathrm{d}x\;\bigg{\{} \sum_{i=1,2}\frac{m}{2\pi}\left[K\left(\partial_{x}\varphi^{(i)}\right)^{2}+ \frac{1}{K}\left(\partial_{x}\vartheta^{(i)}\right)^{2}\right]\\ &-\frac{\Delta}{\ell_{0}^{2}}\cos\!\left(2m\vartheta^{(1)}\right)- \frac{\Delta}{\ell_{0}^{2}}\cos\!\left(2m\vartheta^{(2)}-\delta\phi_{sc}\right) \!\bigg{\}},\end{split} \tag{10}\]
where \(\delta\phi_{sc}\) is the Josephson phase between the two SCs, and \(i=1,2\) describes the edges near the left/right SC, respectively. This system has symmetries including an overall \(\mathbb{Z}_{2m}\) quasiparticle parity and charge conjugation \(\vartheta^{(i)}\rightarrow-\vartheta^{(i)}\), \(\varphi^{(i)}\rightarrow\varphi^{(i)}\). The allowed tunneling processes between the two FTSCs are parafermion tunneling (charge \(e/m\)), Cooper pair tunneling (charge \(2e\)), and backscattering (charge \(0\)) [22].
The parafermion tunneling Hamiltonian can be written as,
\[\begin{split} H_{P}&=\Gamma_{P}\alpha_{0}^{(1)\dagger} \alpha_{0}^{(2)}+\mathrm{h.c.}\\ &=\Gamma_{P}e^{i\left(\vartheta_{0}^{(1)}-\vartheta_{0}^{(2)}- \frac{\delta\phi_{sc}}{2m}\right)}+\mathrm{h.c.},\end{split} \tag{11}\]
where \(\Gamma_{P}\) is the parafermion tunneling amplitude. Similarly, the Cooper pair tunneling can be written as,
\[H_{\Delta}=\Gamma_{\Delta}e^{2mi\left(\vartheta_{0}^{(1)}-\vartheta_{0}^{(2)}- \frac{\delta\phi_{sc}}{2m}\right)}+\mathrm{h.c.} \tag{12}\]
We can see the Josephson phase \(\delta\phi_{sc}\) is \(4m\pi\) periodic. Another process consistent with symmetry is the tunneling of "half" of a quasiparticle-quasihole pair, which we call backscattering,
\[H_{B}=\Gamma_{B}e^{i\left(\varphi_{0}^{(1)}-\varphi_{0}^{(2)}\right)}+{\rm h.c.} \tag{13}\]
This tunneling term tunnels \(0\) charge; hence, it does not couple to the electromagnetic field and does not contribute to tunneling current.
_Effective Hamiltonian for tunneling through the Josephson junction.--_In this section, we consider energies below the superconducting potential, \(|E|\ll\Delta\), where the system can be described by an effective Hamiltonian including all of the tunneling processes. The effective Hamiltonian can then be written in the basis where \(e^{i\varphi_{0}^{(i)}}\) are diagonal, with eigenvalues \(e^{in^{(i)}\pi/m}\), where \(n^{(i)}\) can be thought of as the eigenvalue of the number operator, \(n^{(i)}\in\mathbb{Z}_{2m}\). In this basis, the parafermion operator \(e^{i\vartheta_{0}^{(i)}}\) shifts \(n^{(i)}\) by one and the Cooper pair tunneling term shifts \(n^{(i)}\) by \(2m\), which is equivalent to not changing \(n^{(i)}\). We find the effective Hamiltonian,
\[\begin{split} H_{\rm eff}=\sum_{n^{(1)},n^{(2)}=0}^{2m-1}& \left\{2|\Gamma_{B}|\cos\left[\frac{(n^{(1)}-n^{(2)})\pi}{m} \right]+2|\Gamma_{\Delta}|\cos(\delta\phi_{sc})\right\}\Big{|}n^{(1)},n^{(2)} \Big{\rangle}\!\Big{\langle}n^{(1)},n^{(2)}\Big{|}\\ &+|\Gamma_{P}|\left(e^{-i\frac{\delta\phi_{sc}}{2m}}\left|n^{(1) }+1,n^{(2)}-1\right\rangle\!\Big{\langle}n^{(1)},n^{(2)}\Big{|}+e^{i\frac{ \delta\phi_{sc}}{2m}}\left|n^{(1)},n^{(2)}\right\rangle\!\Big{\langle}n^{(1)} +1,n^{(2)}-1\Big{|}\right).\end{split} \tag{14}\]
Since \(n^{(i)}\in\mathbb{Z}_{2m}\) and all tunneling terms conserve the total quasiparticle parity \(n^{(1)}+n^{(2)}\mod 2m\), this Hamiltonian can be block diagonalized. In the following, we only consider the effective Hamiltonian in the sector where the total quasiparticle parity is zero. The wavefunction \(\Psi_{r}(\delta\phi_{sc})\) satisfies a Harper-like equation [23],
\[\begin{split}&|\Gamma_{P}|\left[e^{-i\frac{\delta\phi_{sc}}{2m}}\Psi_{r+1}(\delta\phi_{sc})+e^{i\frac{\delta\phi_{sc}}{2m}}\Psi_{r-1}\left(\delta\phi_{sc}\right)\right]\\ &+\left[2|\Gamma_{B}|\cos\left(2\pi\frac{r}{m}\right)+2|\Gamma_{\Delta}|\cos(\delta\phi_{sc})\right]\Psi_{r}(\delta\phi_{sc})\\ &=E_{r}(\delta\phi_{sc})\Psi_{r}\left(\delta\phi_{sc}\right),\end{split} \tag{15}\]
with \(1\leq r\leq 2m\). The eigenstates satisfy the periodic boundary condition \(\Psi_{r}\left(\frac{\delta\phi_{sc}}{2m}+2\pi k\right)=\Psi_{r}\left(\frac{ \delta\phi_{sc}}{2m}\right)\) with \(k\in\mathbb{Z}\).
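For concreteness, Eq. (15) is straightforward to diagonalise numerically (a minimal illustrative sketch of ours with arbitrary parameter values, not code from the paper); the matrix below is the zero total quasiparticle parity block of Eq. (14), and its eigenvalue branches reproduce spectra of the kind shown in Fig. 2:

```python
import numpy as np

def spectrum(m, G_P, G_B, G_Delta, phases):
    """Eigenvalues of the effective Hamiltonian, Eq. (15), versus the Josephson phase."""
    dim = 2 * m
    energies = []
    for phi in phases:
        H = np.zeros((dim, dim), dtype=complex)
        for r in range(dim):
            # diagonal: backscattering + Cooper pair tunneling
            H[r, r] = 2 * G_B * np.cos(2 * np.pi * r / m) + 2 * G_Delta * np.cos(phi)
            # off-diagonal: parafermion tunneling (periodic in r with period 2m)
            H[r, (r + 1) % dim] = G_P * np.exp(-1j * phi / (2 * m))
            H[(r + 1) % dim, r] = G_P * np.exp(+1j * phi / (2 * m))
        energies.append(np.linalg.eigvalsh(H))
    return np.array(energies)  # shape (len(phases), 2m)

m = 3
phases = np.linspace(0.0, 4 * np.pi * m, 601)  # one 4*pi*m period of delta phi_sc
E = spectrum(m, G_P=1.0, G_B=0.2, G_Delta=0.3, phases=phases)
```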
We can see the effect of each tunneling term from Eq. (14) and Fig. 2. The Cooper pair tunneling term is proportional to identity; hence it only provides a \(\delta\phi_{sc}\)-dependent shift to all states. With no backscattering, the eigenstates satisfy the boundary condition \(\Psi_{r}(\delta\phi_{sc}+2\pi)=\Psi_{r+1}(\delta\phi_{sc})\) and the Hamiltonian is invariant under \(\mathbb{Z}_{2m}\) transformation \(n^{(1)}\to n^{(1)}+1\mod 2m\). The backscattering term explicitly breaks the \(\mathbb{Z}_{2m}\) down to \(\mathbb{Z}_{2}\), corresponding to \(n^{(1)}\to n^{(1)}+m\mod 2m\). The eigenstates now satisfy a different boundary condition, \(\Psi_{r}(\delta\phi_{sc}+2\pi)=\Psi_{r+m}(\delta\phi_{sc})\).
These effects demonstrate that the \(\mathbb{Z}_{m}\) part of the symmetry is inherently different from the \(\mathbb{Z}_{2}\) fermion parity. The unbroken \(\mathbb{Z}_{2}\) represents the topologically protected fermion parity and can only be broken by nonlocal terms like creating a pair of parafermions at the two ends of the FTSC, \(\alpha_{-L}^{\dagger}\alpha_{0}\), whereas the \(\mathbb{Z}_{m}\) symmetry can be broken by local tunneling terms like tunneling of a quasiparticle and a quasihole \(\psi_{R,qp}^{(1)\dagger}\psi_{L,qp}^{(1)}\psi_{R,qp}^{(2)}\psi_{L,qp}^{(2) \dagger}\sim\cos[2(\varphi_{0}^{(1)}-\varphi_{0}^{(2)})]\). This suggests that in systems with backscattering, one cannot distinguish PZM from MZM
Figure 2: Spectra of tunneling effective Hamiltonian as functions of the Josephson phase \(\delta\phi_{sc}\) with (a) only parafermion tunneling, (b) parafermion and Cooper pair tunneling, (c) parafermion tunneling and backscattering and (d) backscattering, parafermion, and Cooper pair tunneling. The dashed lines can be mapped to the solid line by symmetry transformations, whereas the translucent lines cannot be mapped to the opaque ones.
tunneling from the periodicity of energy-phase relation since both of them have \(\mathbb{Z}_{2}\) symmetry.
_Tunneling current._--The difference between the \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{m}\) part of the quasiparticle parity can also be seen in the current-phase relation where the Josephson phase is \(4m\pi\)-periodic without the backscattering term and \(4\pi\)-periodic with it as shown in Fig. 3. From our effective Hamiltonian, the tunneling current operator is given by the commutator between tunneling Hamiltonian and the total number operator \(\hat{N}^{(2)}=\left(\varphi_{-L}^{(2)}-\varphi_{0}^{(2)}\right)/\pi\)[24],
\[\begin{split}\hat{I}(\delta\phi_{sc})=& e\frac{ \mathrm{d}\hat{N}^{(2)}}{\mathrm{d}t}=ie\Big{[}H_{\mathrm{eff}},\hat{N}^{(2)} \Big{]}\\ =&\frac{2e}{m}|\Gamma_{P}|\sin\!\left(\vartheta_{0} ^{(1)}-\vartheta_{0}^{(2)}-\frac{\delta\phi_{sc}}{2m}\right)\\ &+4e|\Gamma_{\Delta}|\sin\left[2m\left(\vartheta_{0}^{(1)}- \vartheta_{0}^{(2)}\right)-\delta\phi_{sc}\right].\end{split} \tag{16}\]
The tunneling current for each eigenstate \(|\Psi_{r}\rangle\) of Eq. (14) is then given by,
\[I_{r}(\delta\phi_{sc})=\frac{\langle\Psi_{r}|\hat{I}|\Psi_{r}\rangle}{\langle \Psi_{r}|\Psi_{r}\rangle}=2e\frac{\mathrm{d}E_{r}}{\mathrm{d}\delta\phi_{sc}}. \tag{17}\]
We can see from Fig. 3(c,d) that the backscattering term explicitly breaks the \(\mathbb{Z}_{m}\) symmetry and results in a \(4\pi\) periodicity in the Josephson current.
If there are no terms violating the total quasiparticle parity at the tunneling junction, e.g. terms proportional to \(\alpha_{L}^{(1)\dagger}\alpha_{0}^{(1)}\), there will be no transitions between different channels of the tunneling current. However, at finite temperature, the current thermalizes and becomes,
\[\left\langle\hat{I}(\delta\phi_{sc})\right\rangle_{\beta}=\frac{\mathrm{tr} \left(e^{-\beta H_{\mathrm{eff}}}\hat{I}\right)}{\mathrm{tr}\,e^{-\beta H_{ \mathrm{eff}}}}, \tag{18}\]
at inverse temperature \(\beta=1/T\). For \(\nu=1\) with \(|\Gamma_{\Delta}|=|\Gamma_{B}|=0\), Eq. (18) reduces to the known result [25; 26]. In the thermally-averaged current-phase relation Eq. (18), each of the tunneling terms has different contributions. The thermally-averaged current-phase relation for parafermion and Majorana fermion tunneling are shown in Fig. 4(a,b). Both the parafermion and Majorana fermion terms exhibit a zig-zag with a slope proportional to \(\beta\) whereas the Cooper pair tunneling term has a sine-wave contribution [27]. Backscattering does not contribute to the shape of the thermally-averaged current. All thermally-averaged currents exhibit \(2\pi\)-periodicity due to the contributions from states with different quasiparticle parity.
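Continuing the sketch above (again illustrative, not from the paper), the thermally averaged current of Eq. (18) follows from the eigenvalue branches via the single-channel currents \(I_{r}=2e\,\mathrm{d}E_{r}/\mathrm{d}\delta\phi_{sc}\) of Eq. (17), weighted by Boltzmann factors:

```python
import numpy as np

def thermal_current(E, phases, beta, e_charge=1.0):
    """<I>(delta phi_sc) of Eq. (18), computed from the branches E[:, r] returned by spectrum()."""
    I_r = 2 * e_charge * np.gradient(E, phases, axis=0)      # Eq. (17), by finite differences
    w = np.exp(-beta * (E - E.min(axis=1, keepdims=True)))   # Boltzmann weights (stabilised)
    return (w * I_r).sum(axis=1) / w.sum(axis=1)

I_avg = thermal_current(E, phases, beta=5.0)   # E, phases as computed in the previous sketch
```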
The fractional Josephson effect can arise at finite temperature if one is capable of projecting into individual states. We can define the projected current using a projection operator \(\hat{P}_{r}\) in the definition of trace in Eq. (18),
\[\left\langle I_{r}(\delta\phi_{sc})\right\rangle_{\beta}=\frac{\mathrm{tr} \Big{(}e^{-\beta H_{\mathrm{eff}}}\hat{P}_{r}\hat{I}\Big{)}}{\mathrm{tr}\Big{(} e^{-\beta H_{\mathrm{eff}}}\hat{P}_{r}\Big{)}}. \tag{19}\]
These projection operators \(\hat{P}_{r}\) are generalizations of the fermion parity projection operators \(\hat{P}_{\pm}=[\mathbb{1}\pm(-1)^{F}]/2\), and \(r\) depends on the parity symmetry of the system, i.e. it is \(\pm\) if there is finite backscattering and ranges from \(1\) to \(2m\) otherwise. The projection operators can be obtained by a linear combination of powers of the clock matrix \(\sigma\)[16]. In experiments, these projections can be realized by fixing the charge difference between the two FTSCs, represented by \(n^{(1)}-n^{(2)}\mod 2m\) in Eq. (14). If there is no backscattering, the projected currents are equivalent to the single-channel currents in Eq. (17), similar to the \(\nu=1\) case in Fig. 4(c). When there is backscattering, as in Fig. 4(d), the parafermion tunneling adds a \(4\pi\)-periodic contribution to the current with the amplitude that scales as a power-law of inverse temperature \(\beta\). This behavior is unique to parafermion tunneling, and therefore, is a fingerprint of PZMs.
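Explicitly (a standard discrete-Fourier construction; the normalisation shown is our own convention rather than a formula quoted from the text), with \(\omega=e^{i\pi/m}\) and \(\sigma\) the \(2m\)-state clock operator,
\[\hat{P}_{r}=\frac{1}{2m}\sum_{k=0}^{2m-1}\omega^{-rk}\,\sigma^{k},\]
which for \(m=1\), where \(\sigma=(-1)^{F}\), reduces to \(\hat{P}_{\pm}=[\mathbb{1}\pm(-1)^{F}]/2\).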
_Discussion._--We have presented a model for FTSC consisting of two edge states from a \(\nu=1/m\) FQH system proximity coupled by an s-wave superconductor and identified a PZM at one end of the FTSC. We have constructed an effective Hamiltonian and shown the roles played by parafermion tunneling, Cooper pair tunneling,
Figure 3: Single-channel current-phase relations \(I_{r}(\delta\phi_{sc})\) with \(\nu=1/3\) for (a) only parafermion tunneling, (b) parafermion and Cooper pair tunneling, (c) parafermion tunneling and backscattering, (d) backscattering, parafermion, and Cooper pair tunneling. Currents associated with additional parafermion states that are not shown are related to the currents in this figure by symmetry transformations of the eigenstates, represented by \(2\pi\) shifts in the Josephson phase. The currents have the same color as the corresponding energy states in Fig. 2.
and backscattering by identifying the symmetry of the ground states and Josephson periodicity in different tunneling currents. We showed that for a \(\mathbb{Z}_{2m}\) FTSC, only the \(\mathbb{Z}_{2}\) part of the symmetry is topologically protected while the \(\mathbb{Z}_{m}\) part can be explicitly broken by local tunneling terms like tunneling of a quasiparticle and a quasihole. We proposed using a projected thermal average of the current to detect parafermion tunneling, which has a \(4\pi\)-periodic fractional Josephson effect.
Although the results we have presented are true for general values of the tunneling amplitudes, the relative strength of the tunneling terms is important for experimental detection of the fractional Josephson effect, i.e., the parafermion tunneling term should be larger than the Cooper pair tunneling and the backscattering term. For a Josephson junction with gap distance \(l\), the backscattering amplitude is expected to scale as \(\Gamma_{B}\sim e^{-l/\ell_{0}}\). The parafermion and Cooper pair tunneling amplitudes are expected to scale as \(\Gamma_{P},\ \Gamma_{\Delta}\sim e^{-l/\xi}\). As a realistic example, we can consider niobium nitride: under a magnetic field of 6 T, at which \(\nu=1/3\) should arise, its coherence length (\(\sim\)50 nm) is larger than the magnetic length (\(\sim\)10 nm). Hence, we expect the backscattering to be much weaker than both Cooper pair and parafermion tunneling. We naively expect the Cooper pairing amplitude to be much smaller than the parafermion tunneling because the tunneling of multiple charges is suppressed through an FQH background. The effects described in this letter should therefore be observable in experiments. Additional screening layers may also be beneficial for entering the regime where parafermion tunneling dominates.
_Acknowledgments.--_We thank Erez Berg, Paul Fendley, Netanel Lindner, Yi-Zhuang You, and Lucas Wagner for insightful discussions. This work was supported in part by the US National Science Foundation (NSF) through the grant NSF DMR-2225920 at the University of Illinois (JC and EF) and through the NSF Quantum Leap Challenge Institute for Hybrid Quantum Architectures and Networks, NSF OMA-2016136 (AK).
|
2309.12060 | Axisymmetric Incompressible Viscous Plasmas: Global Well-Posedness and
Asymptotics | This paper is devoted to the global analysis of the three-dimensional
axisymmetric Navier--Stokes--Maxwell equations. More precisely, we are able to
prove that, for large values of the speed of light $c\in (c_0, \infty)$, for
some threshold $c_0>0$ depending only on the initial data, the system in
question admits a unique global solution. The ensuing bounds on the solutions
are uniform with respect to the speed of light, which allows us to study the
singular regime $c\rightarrow \infty$ and rigorously derive the limiting
viscous magnetohydrodynamic (MHD) system in the axisymmetric setting.
The strategy of our proofs draws insight from recent results on the
two-dimensional incompressible Euler--Maxwell system to exploit the
dissipative--dispersive structure of Maxwell's system in the axisymmetric
setting. Furthermore, a detailed analysis of the asymptotic regime $c\to\infty$
allows us to derive a robust nonlinear energy estimate which holds uniformly in
$c$. As a byproduct of such refined uniform estimates, we are able to describe
the global strong convergence of solutions toward the MHD system.
This collection of results seemingly establishes the first available global
well-posedness of three-dimensional viscous plasmas, where the electric and
magnetic fields are governed by the complete Maxwell equations, for large
initial data as $c\to\infty$. | Diogo Arsénio, Zineb Hassainia, Haroune Houamed | 2023-09-21T13:25:41Z | http://arxiv.org/abs/2309.12060v2 | # Axisymmetric incompressible viscous plasmas: global well-posedness and asymptotics
###### Abstract.
This paper is devoted to the global analysis of the three-dimensional axisymmetric Navier-Stokes-Maxwell equations. More precisely, we are able to prove that, for large values of the speed of light \(c\in(c_{0},\infty)\), for some threshold \(c_{0}>0\) depending only on the initial data, the system in question admits a unique global solution. The ensuing bounds on the solutions are uniform with respect to the speed of light, which allows us to study the singular regime \(c\to\infty\) and rigorously derive the limiting viscous magnetohydrodynamic (MHD) system in the axisymmetric setting.
The strategy of our proofs draws insight from recent results on the two-dimensional incompressible Euler-Maxwell system to exploit the dissipative-dispersive structure of Maxwell's system in the axisymmetric setting. Furthermore, a detailed analysis of the asymptotic regime \(c\to\infty\) allows us to derive a robust nonlinear energy estimate which holds uniformly in \(c\). As a byproduct of such refined uniform estimates, we are able to describe the global strong convergence of solutions toward the MHD system.
This collection of results seemingly establishes the first available global well-posedness of three-dimensional viscous plasmas, where the electric and magnetic fields are governed by the complete Maxwell equations, for large initial data as \(c\to\infty\).
Key words and phrases:Incompressible viscous three-dimensional fluids, Maxwell's system, plasmas, global well-posedness, axisymmetric structure
###### Contents
* 1 Introduction and main results
* 2 The axisymmetric structure, Hardy inequalities and paradifferential calculus
* 3 A priori estimates
* 4 Asymptotic analysis of electromagnetic fields
* 5 Closing the estimates and proof of Theorem 1.1
* 6 Convergence and proof of Theorem 1.3
## 1. Introduction and main results
In this paper, we consider the incompressible Navier-Stokes-Maxwell equations
\[\left\{\begin{aligned} \text{(Navier-Stokes's equation)}& \partial_{t}u+u\cdot\nabla u=\nu\Delta u-\nabla p+j\times B,& \operatorname{div}u=0,\\ \text{(Ampbre's equation)}&\frac{1}{c}\partial_{t}E- \nabla\times B=-j,&\operatorname{div}E=0,\\ \text{(Faraday's equation)}&\frac{1}{c}\partial_{t}B+ \nabla\times E=0,&\operatorname{div}B=0,\\ \text{(Ohm's law)}& j=\sigma\big{(}cE+P(u\times B) \big{)},&\operatorname{div}j=0,\end{aligned}\right. \tag{1.1}\]
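For orientation, the formal \(c\to\infty\) limit of (1.1) can be read off directly (this is only the heuristic computation; the rigorous uniform estimates are the subject of the paper): Ampère's equation forces \(j=\nabla\times B\), Ohm's law then gives \(cE=\frac{1}{\sigma}j-P(u\times B)\), and substituting into Faraday's equation (using that \(\nabla\times Pf=\nabla\times f\)) yields the viscous MHD system
\[\partial_{t}u+u\cdot\nabla u=\nu\Delta u-\nabla p+(\nabla\times B)\times B,\qquad\partial_{t}B-\frac{1}{\sigma}\Delta B=\nabla\times(u\times B),\qquad\operatorname{div}u=\operatorname{div}B=0.\]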
the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem and Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy 
problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy problem for the Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for the Cauchy Cauchy problem for Cauchy Cauchy problem for Cauchy Cauchy Cauchy problem for Cauchy Cauchy problem for Cauchy Cauchy problem for Cauchy Cauchy Cauchy problem for Cauchy Cauchy problem for Cauchy Cauchy problem for
global weak solution to (1.1). Apart from the energy estimate, the solution constructed therein does not satisfy any uniform bound with respect to the speed of light \(c\).
Later on, the first author, Ibrahim and Masmoudi [5] established a conditional convergence result which entails the convergence of weak solutions of (1.1) to (MHD).
Crucial progress was then achieved in [2] by the first author and Gallagher by showing the persistence of the \(H^{s}\)-regularity of the electromagnetic field, uniformly with respect to the speed of light \(c\).
As for the inviscid version of (1.1), i.e., the incompressible Euler-Maxwell system, the first global results on that model were established recently by the first and the third authors in [3, 4]. In that context, owing to known results on Euler equations, see for instance [14], working at the level of the energy (1.2), with \(\nu=0\), is insufficient to ensure the global well-posedness of the system. One is therefore compelled to study the existence and uniqueness of weak solutions in a higher-regularity space.
The main contribution from [3] is then the construction of a unique global solution of the Euler-Maxwell equations (1.1) (with \(\nu=0\)) in the spirit of Yudovich's work [40], where the electromagnetic field has some Sobolev regularity \(H^{s}(\mathbb{R}^{2})\), \(s\in(\frac{7}{4},2)\). Moreover, it is shown therein that the solution is uniformly bounded with respect to \(c\) in adequate spaces. The other work by the same authors [4] further establishes the strong convergence of that solution, as \(c\) goes to \(\infty\). We should point out that the results in [3, 4], for the Euler-Maxwell system, only hold under the assumption that \((u,E,B)\) has the _two-dimensional normal structure_
\[u(t,x)=\begin{pmatrix}u_{1}(t,x)\\ u_{2}(t,x)\\ 0\end{pmatrix},\qquad E(t,x)=\begin{pmatrix}E_{1}(t,x)\\ E_{2}(t,x)\\ 0\end{pmatrix}\qquad\text{and}\qquad B(t,x)=\begin{pmatrix}0\\ 0\\ b(t,x)\end{pmatrix}. \tag{1.4}\]
Extending the arguments from [3, 4] to general structures is a challenging open problem. As we shall see later on, a novelty of the present work is to permit the consideration of a new three-dimensional structure in the context of plasmas, namely, the axisymmetric structure.
From now on, we are going to focus on the three dimensional case by first considering the work by Ibrahim and Keraani [27] where it is proved that the system (1.1) is globally well-posed provided that the initial data \((u_{0},E_{0},B_{0})\) are small enough in
\[\dot{B}^{\frac{1}{2}}_{2,1}\times\dot{H}^{\frac{1}{2}}\times\dot{H}^{\frac{1}{ 2}}(\mathbb{R}^{3}).\]
This smallness condition is slightly weakened in the work of Germain, Ibrahim and Masmoudi [19], where the initial data lies in the space
\[\dot{H}^{\frac{1}{2}}\times\dot{H}^{\frac{1}{2}}\times\dot{H}^{\frac{1}{2}}( \mathbb{R}^{3}).\]
Note that the scaling of these spaces is at the critical level of the three-dimensional Navier-Stokes equations (i.e., in the case \(E\equiv B\equiv 0\)). Accordingly, we do not expect to maintain the uniqueness of the solution to (1.1) if the initial velocity lies in spaces that scale below \(\dot{H}^{\frac{1}{2}}\).
Besides [19, 27], the three-dimensional Navier-Stokes-Maxwell system (1.1) was studied in several papers, for instance [2, 41, 42, 43]. However, to the best of our knowledge, none of the existing results provide enough information to study the regime \(c\to\infty\) in the three-dimensional case.
Thus, an important novelty of our paper is achieved by shedding light on the regime \(c\to\infty\) and extending the techniques recently developed by the first and the third authors in [3, 4] to the three-dimensional axisymmetric setting, which we introduce next.
### The axisymmetric structure
Throughout this paper, we assume that the velocity and the electromagnetic fields are axisymmetric, \(u\) and \(E\) are without swirl, whereas, \(B\) has pure
swirl. That is to say
\[u(x)=u_{r}(r,z)e_{r}+u_{z}(r,z)e_{z},\quad E(x)=E_{r}(r,z)e_{r}+E_{z}(r,z)e_{z}, \quad B(x)=B_{\theta}(r,z)e_{\theta}, \tag{1.5}\]
where \((r,\theta,z)\) denotes the usual cylindrical coordinates and \((e_{r},e_{\theta},e_{z})\) is the corresponding orthonormal basis (see Section 2 for the explicit definition of these variables). The careful reader may now notice that this assumption shares some similarity with the _two-dimensional normal structure_ (1.4). Indeed, in (1.5), the magnetic field \(B\) remains orthogonal to \(u\) and \(E\).
This key observation will allow us to somewhat simplify the equations. However, this structure will also require some extra attention when performing estimates in Besov spaces, for the cylindrical basis \((e_{r},e_{\theta},e_{z})\) depends on the position variables.
Let us now state some crucial properties provided by (1.5). First of all, note that the vorticity
\[\omega\stackrel{{\rm def}}{{=}}\operatorname{curl}u\]
is acting only in the direction \(e_{\theta}\), that is,
\[\omega=\omega_{\theta}\,e_{\theta}\stackrel{{\rm def}}{{=}} \bigl{(}\partial_{z}u_{r}-\partial_{r}u_{z}\bigr{)}e_{\theta}.\]
On the other hand, straightforward computations show that
\[\nabla\times(j\times B)=B\cdot\nabla j-j\cdot\nabla B=j_{r}\frac{B}{r}-j\cdot \nabla B.\]
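Let us briefly indicate where this identity comes from, assuming (consistently with (1.5) and Ohm's law \(j=\sigma(cE+P(u\times B))\)) that \(j\) is axisymmetric without swirl. Since \(j\) and \(B\) are both divergence free, the general formula \(\nabla\times(a\times b)=a\operatorname{div}b-b\operatorname{div}a+b\cdot\nabla a-a\cdot\nabla b\) reduces to the first equality, while the second one follows from

\[B\cdot\nabla j=\frac{B_{\theta}}{r}\,\partial_{\theta}\big(j_{r}e_{r}+j_{z}e_{z}\big)=\frac{B_{\theta}}{r}\,j_{r}\,e_{\theta}=j_{r}\frac{B}{r},\]

where we used that the components of \(j\) are independent of \(\theta\), together with \(\partial_{\theta}e_{r}=e_{\theta}\) and \(\partial_{\theta}e_{z}=0\).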
Thus, taking the curl of the momentum equation in (1.1) yields
\[\partial_{t}\omega_{\theta}+u\cdot\nabla\omega_{\theta}-\nu\bigl{(}\Delta- \frac{1}{r^{2}}\bigr{)}\omega_{\theta}=\frac{u_{r}}{r}\omega_{\theta}+j_{r} \frac{B_{\theta}}{r}-j\cdot\nabla B_{\theta}. \tag{1.6}\]
Then, by introducing
\[\Omega\stackrel{{\rm def}}{{=}}\frac{\omega_{\theta}}{r},\qquad \Gamma\stackrel{{\rm def}}{{=}}\frac{B_{\theta}}{r},\]
we further obtain the equation
\[\partial_{t}\Omega+u\cdot\nabla\Omega-\nu\bigl(\Delta+\frac{2}{r}\partial_{r}\bigr)\Omega=-j\cdot\nabla\Gamma. \tag{1.7}\]
Note that the last equation is very similar to the vorticity equation of the two-dimensional Euler-Maxwell system treated in [3].
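The passage from (1.6) to (1.7) relies on the following elementary identities, which we record for the reader's convenience (recall that, on axisymmetric scalar functions, \(\Delta=\partial_{r}^{2}+\frac{1}{r}\partial_{r}+\partial_{z}^{2}\)):

\[\frac{1}{r}\Bigl(\Delta-\frac{1}{r^{2}}\Bigr)\omega_{\theta}=\Bigl(\Delta+\frac{2}{r}\partial_{r}\Bigr)\Omega,\qquad\frac{1}{r}\,u\cdot\nabla\omega_{\theta}=u\cdot\nabla\Omega+\frac{u_{r}}{r}\Omega,\qquad\frac{1}{r}\Bigl(j_{r}\frac{B_{\theta}}{r}-j\cdot\nabla B_{\theta}\Bigr)=-j\cdot\nabla\Gamma.\]

In particular, upon dividing (1.6) by \(r\), the stretching term \(\frac{u_{r}}{r}\omega_{\theta}\) is exactly absorbed.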
An equivalent formulation of (1.6) and (1.7) is obtained by employing Ampere's equation to eliminate the current density \(j\) and deduce that
\[\partial_{t}\omega_{\theta}+u\cdot\nabla\omega_{\theta}-\nu\bigl{(}\Delta- \frac{1}{r^{2}}\bigr{)}\omega_{\theta}=\frac{u_{r}}{r}\omega_{\theta}- \partial_{z}\bigl{(}\Gamma B_{\theta}\bigr{)}-\frac{1}{c}\partial_{t}E_{r} \frac{B_{\theta}}{r}+\frac{1}{c}\partial_{t}E\cdot\nabla B_{\theta}, \tag{1.8}\]
and
\[\partial_{t}\Omega+u\cdot\nabla\Omega-\nu\bigl(\Delta+\frac{2}{r}\partial_{r}\bigr)\Omega=-\partial_{z}\bigl(\Gamma^{2}\bigr)+\frac{1}{c}\partial_{t}E\cdot\nabla\Gamma. \tag{1.9}\]
Furthermore, by virtue of (1.5), combining Ampere and Faraday's equations yields
\[\frac{1}{c^{2}}\partial_{tt}B+\partial_{t}B+u\cdot\nabla B-\frac{1}{\sigma} \Delta B=\frac{u_{r}}{r}B,\]
and therefore
\[\frac{1}{c^{2}}\partial_{tt}\Gamma+\partial_{t}\Gamma+u\cdot\nabla\Gamma-\frac{1}{\sigma}\bigl(\Delta+\frac{2}{r}\partial_{r}\bigr)\Gamma=0.\]
The key observation behind the global estimates in the axisymmetric setting when \(E\equiv B\equiv 0\) is the fact that the stretching term \(\omega\cdot\nabla u\) is reduced to \(\frac{u_{r}}{r}\omega\). As shown in [37], this term can be controlled by the estimate1
Footnote 1: \(L^{p,q}\) stands for the usual Lorentz spaces.
\[\Big{\|}\frac{u_{r}}{r}\Big{\|}_{L^{\infty}}\lesssim\|\Omega\|_{L^{3,1}} \lesssim\|\Omega\|_{L^{2}}^{\frac{1}{2}}\,\|\nabla\Omega\|_{L^{2}}^{\frac{1}{ 2}}\,, \tag{1.10}\]
which then yields a global bound on the velocity in sub-critical spaces. Indeed, this is due to the fact that \(\Omega\) obeys a transport-diffusion equation (resp. transport equation) in the case of the Navier-Stokes equations (resp. Euler equations), which can be used to deduce a global bound for \(\Omega\) in \(L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}\) (resp. in \(L_{t}^{\infty}L^{3,1}\)), by means of standard energy estimates. Similar results can be extended to the MHD equations (see Corollary 1.2, below).
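In passing, we recall one standard way of obtaining the second inequality in (1.10) (the argument in [37] may be different): by real interpolation, \((L^{2},L^{6})_{\frac{1}{2},1}=L^{3,1}\), and the elementary bound \(K(t,\Omega;L^{2},L^{6})\leq\min\bigl\{\|\Omega\|_{L^{2}},t\|\Omega\|_{L^{6}}\bigr\}\) yields, upon splitting the defining integral at \(t=\|\Omega\|_{L^{2}}/\|\Omega\|_{L^{6}}\),

\[\|\Omega\|_{L^{3,1}}\lesssim\int_{0}^{\infty}t^{-\frac{1}{2}}K(t,\Omega;L^{2},L^{6})\,\frac{dt}{t}\lesssim\|\Omega\|_{L^{2}}^{\frac{1}{2}}\,\|\Omega\|_{L^{6}}^{\frac{1}{2}}\lesssim\|\Omega\|_{L^{2}}^{\frac{1}{2}}\,\|\nabla\Omega\|_{L^{2}}^{\frac{1}{2}},\]

where the last step is the Sobolev embedding \(\dot{H}^{1}(\mathbb{R}^{3})\hookrightarrow L^{6}(\mathbb{R}^{3})\).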
We conclude by pointing out that the axisymmetric structure has been extensively studied in a variety of fluid models, such as the Navier-Stokes [11, 12, 33, 34, 35], Euler [15], Boussinesq [1, 16, 17, 22, 23, 25, 26] and MHD [10, 28, 29, 31, 32, 39] systems with partial or full dissipation.
### Aims and main results
We intend to show that the global well-posedness of the three dimensional system (1.1) holds whenever \(c\) is large enough. We shall prove our results uniformly with respect to \(c\) and then derive the MHD equations by studying the singular limit \(c\to\infty\). We refer to Section 2 for the definition of all functional spaces.
**Theorem 1.1** (Global well-posedness).: _Let \((u_{0}^{c},E_{0}^{c},B_{0}^{c})_{c>0}\) be a family of divergence-free axisymmetric initial data such that \(u_{0}^{c}\) and \(E_{0}^{c}\) are without swirl, whereas \(B_{0}^{c}\) has pure swirl. Assume further that_
\[(u_{0}^{c},E_{0}^{c},B_{0}^{c})\in\left(H^{1}\times H^{\frac{3}{2}}\times H^{ \frac{3}{2}}\right)(\mathbb{R}^{3}),\qquad c^{-1}(E_{0}^{c},B_{0}^{c})\in \dot{B}^{\frac{5}{2}}_{2,1}(\mathbb{R}^{3}),\]
\[\Omega_{0}^{c}\stackrel{{\rm def}}{{=}}\frac{\omega_{0}^{c}}{r} \in L^{2}(\mathbb{R}^{3}),\qquad j_{0}^{c}\stackrel{{\rm def}}{{= }}\sigma(cE_{0}^{c}+P(u_{0}^{c}\times B_{0}^{c}))\in H^{\frac{1}{2}}(\mathbb{ R}^{3}),\]
_uniformly in \(c>0\). Then, there is a constant \(c_{0}>0\), depending only on the size of the initial data, such that, for any \(c\in(c_{0},\infty)\), there is a unique global axisymmetric solution \((u^{c},E^{c},B^{c})\) of the three dimensional Navier-Stokes-Maxwell equations (1.1), with \(\nu>0\), such that \(u^{c}\) and \(E^{c}\) are without swirl and \(B^{c}\) has pure swirl. This solution enjoys the bounds_
\[u^{c}\in L^{\infty}(\mathbb{R}^{+};H^{1}),\qquad\nabla u^{c}\in L^{2}(\mathbb{R}^{+};H^{1}),\qquad\Omega^{c}\stackrel{{\rm def}}{{=}}\frac{\omega^{c}}{r}\in L^{\infty}(\mathbb{R}^{+};L^{2})\cap L^{2}(\mathbb{R}^{+};\dot{H}^{1}),\] \[(E^{c},B^{c})\in L^{\infty}(\mathbb{R}^{+};H^{\frac{3}{2}}),\qquad c^{-1}(E^{c},B^{c})\in L^{\infty}(\mathbb{R}^{+};\dot{B}^{\frac{5}{2}}_{2,1}),\] \[(E^{c},B^{c})\in L^{2}(\mathbb{R}^{+};\dot{B}^{\frac{5}{2}}_{2,1}),\quad cE^{c}\in L^{2}(\mathbb{R}^{+};H^{\frac{3}{2}}),\quad B^{c}\in L^{2}(\mathbb{R}^{+};\dot{H}^{1}),\quad j^{c}\in\bigcap_{p=2}^{\infty}L^{p}(\mathbb{R}^{+};H^{\frac{1}{2}}),\]
_uniformly in \(c\in(c_{0},\infty)\). If, moreover,_
\[(E_{0}^{c},B_{0}^{c})\in\dot{B}^{\frac{5}{2}}_{2,1}(\mathbb{R}^{3}),\]
_uniformly in \(c>0\), then the bound_
\[(E^{c},B^{c})\in L^{\infty}(\mathbb{R}^{+};\dot{B}^{\frac{5}{2}}_{2,1})\]
_holds uniformly in \(c\in(c_{0},\infty)\)._
The complete proof of Theorem 1.1 is given in Section 5.
_Remark_.: By setting \(E\equiv B\equiv 0\) in the above theorem, observe that we recover a classical global well-posedness result for the axisymmetric Navier-Stokes equations (see, for instance, Theorem 10.4 in [30]).
We also wish to understand the singular regime \(c\to\infty\) in (1.1). The next corollary establishes a well-posedness result for the limiting system (MHD) by considering the weak limit of solutions constructed in Theorem 1.1.
**Corollary 1.2** (Weak singular limit).: _Let \((u_{0},B_{0})\) be an axisymmetric divergence-free vector field, where \(u_{0}\) is without swirl and \(B_{0}\) has pure swirl. Further assume that_
\[(u_{0},B_{0})\in\left(H^{1}\times H^{\frac{3}{2}}\right)(\mathbb{R}^{3}),\quad \frac{\omega_{0}}{r}\in L^{2}(\mathbb{R}^{3}).\]
_Then, there is a unique axisymmetric solution to (MHD), with \(\nu>0\) and initial data \((u_{0},B_{0})\), enjoying the bounds_
\[(u,B)\in L^{\infty}\left(\mathbb{R}^{+};H^{1}(\mathbb{R}^{3})\times H^{\frac{ 3}{2}}(\mathbb{R}^{3})\right),\quad(\nabla u,\nabla B)\in L^{2}\left(\mathbb{R }^{+};H^{1}(\mathbb{R}^{3})\times B_{2,1}^{\frac{3}{2}}(\mathbb{R}^{3}) \right),\]
_and_
\[\frac{\omega_{\theta}}{r}\in L^{\infty}(\mathbb{R}^{+};L^{2}(\mathbb{R}^{3}) )\cap L^{2}(\mathbb{R}^{+};\dot{H}^{1}(\mathbb{R}^{3})).\]
Proof.: We build global solutions by making use of the bounds from the preceding theorem and employing standard compactness methods. To that end, let \((u_{0}^{c_{n}},E_{0}^{c_{n}},B_{0}^{c_{n}})_{n\in\mathbb{N}}\) be any family of divergence-free vector fields satisfying the assumptions of Theorem 1.1 and converging to \((u_{0},0,B_{0})\), at least in the sense of distributions. In particular, by Theorem 1.1, there exists a unique family \((u^{c_{n}},E^{c_{n}},B^{c_{n}})_{n\in\mathbb{N}}\) of solutions to (1.1), which obeys the bounds
\[(u^{c_{n}},B^{c_{n}})\in L^{\infty}\left(\mathbb{R}^{+};H^{1}(\mathbb{R}^{3}) \times H^{\frac{3}{2}}(\mathbb{R}^{3})\right),\quad(\nabla u^{c_{n}},\nabla B^ {c_{n}})\in L^{2}\left(\mathbb{R}^{+};H^{1}(\mathbb{R}^{3})\times B_{2,1}^{ \frac{3}{2}}(\mathbb{R}^{3})\right),\]
and
\[\frac{\omega_{\theta}^{c_{n}}}{r}\in L^{\infty}(\mathbb{R}^{+};L^{2}(\mathbb{ R}^{3}))\cap L^{2}(\mathbb{R}^{+};\dot{H}^{1}(\mathbb{R}^{3})),\qquad c_{n}E^{c_ {n}}\in L^{2}(\mathbb{R}^{+};H^{\frac{3}{2}}),\]
uniformly in \(n\in\mathbb{N}\).
Thus, by the Banach-Alaoglu theorem, up to extraction of a subsequence (which is not distinguished, for simplicity), it holds that
\[(u^{c_{n}},E^{c_{n}},B^{c_{n}})\stackrel{{ n\to\infty}}{{ \longrightarrow}}(u,0,B),\quad\text{ in }\,\mathcal{D}^{\prime}(\mathbb{R}^{+}\times\mathbb{R}^{3}),\]
where \((u,B)\) is in the same functional spaces as \((u^{c_{n}},B^{c_{n}})\).
In fact, one can also show that \((\partial_{t}u^{c_{n}},\partial_{t}B^{c_{n}})_{n\in\mathbb{N}}\) is uniformly bounded in \(L^{2}_{t,x,\text{loc}}\), which implies, by a classical compactness result by Aubin and Lions (see [38] for a thorough discussion of such compactness results and, in particular, Section 9 therein, for convenient results which are easily applicable to our setting), that \((u^{c_{n}},B^{c_{n}})_{n\in\mathbb{N}}\) is relatively compact in \(L^{2}_{t,x,\text{loc}}\).
Therefore, by taking the limit in (1.1) in the sense of distributions, and exploiting the strong compactness of \((u^{c_{n}},B^{c_{n}})_{n\in\mathbb{N}}\) to show the weak stability of nonlinear terms, it is readily seen that \((u,B)\) is a weak solution of (MHD), thereby completing the existence proof.
As for the uniqueness of solutions, we only need to note that it follows directly from weak-strong uniqueness principles for fluid dynamical models (for instance, see [6, Section 3.2.3] and [18]).
_Remark_.: The solution constructed in the previous corollary enjoys the refined bound
\[B\in\widetilde{L}^{\infty}\big{(}\mathbb{R}^{+};\dot{B}_{2,2}^{\frac{3}{2}}( \mathbb{R}^{3})\big{)}. \tag{1.11}\]
Indeed, by virtue of the bounds from Corollary 1.2, one can show that \(u\cdot\nabla B\) and \(B\cdot\nabla u\) belong to \(L^{2}(\mathbb{R}^{+};\dot{H}^{\frac{1}{2}}(\mathbb{R}^{3}))\). Therefore, standard parabolic regularity estimates applied directly to the heat
equation satisfied by \(B\) in (MHD) give the bound above. In particular, this bound guarantees that
\[\lim_{n\to\infty}\sup_{t\in[0,\infty)}\|\mathds{1}_{\{|D|\geq\Theta_{n}\}}B(t)\| _{\dot{H}^{\frac{3}{2}}}=0, \tag{1.12}\]
for any sequence \((\Theta_{n})_{n\geq 1}\) satisfying
\[\lim_{n\to\infty}\Theta_{n}=\infty.\]
Further note that the weaker bound
\[B\in L^{\infty}\big{(}\mathbb{R}^{+};\dot{H}^{\frac{3}{2}}(\mathbb{R}^{3}) \big{)}\]
would not be enough to establish (1.12). This will be important, later on, in the proofs of our main results.
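To make the last two assertions more transparent, note that the Chemin-Lerner bound (1.11) places the supremum in time inside a summable frequency series: in the notation of Section 2,

\[\sup_{t\in[0,\infty)}\|\mathds{1}_{\{|D|\geq\Theta_{n}\}}B(t)\|_{\dot{H}^{\frac{3}{2}}}^{2}\lesssim\sum_{2^{k}\gtrsim\Theta_{n}}2^{3k}\,\sup_{t\in[0,\infty)}\|\Delta_{k}B(t)\|_{L^{2}}^{2}\stackrel{n\to\infty}{\longrightarrow}0,\]

since the right-hand side is the tail of a convergent series, whereas the bound \(B\in L^{\infty}(\mathbb{R}^{+};\dot{H}^{\frac{3}{2}})\) only controls the full sum at each fixed time and does not prevent the high-frequency content of \(B(t)\) from moving to ever larger frequencies as \(t\) varies.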
We now state our second main theorem. It concerns the strong convergence, as \(c\to\infty\), of the solution given in Theorem 1.1 toward the solution constructed in Corollary 1.2.
**Theorem 1.3** (Strong singular limit).: _Let \((u_{0}^{c},E_{0}^{c},B_{0}^{c})_{c>0}\) be a family of initial data satisfying the assumptions in Theorem 1.1 and denote by \((u^{c},E^{c},B^{c})_{c>c_{0}}\) the corresponding unique global solution to (1.1) given by the same theorem. Further consider divergence-free vector fields \((u_{0},B_{0})\) such that_
\[(u_{0},B_{0})\in\left(H^{1}\times H^{\frac{3}{2}}\right)(\mathbb{R}^{3}), \quad\frac{\omega_{0}}{r}\in L^{2}(\mathbb{R}^{3}),\]
_and assume that_
\[\lim_{c\to\infty}\|(u_{0}^{c},B_{0}^{c})-(u_{0},B_{0})\|_{H^{1}\times L^{2}}=0.\]
_Finally, let \((u,B)\) be the unique global solution of (MHD), given in Corollary 1.2, associated to the data \((u_{0},B_{0})\)._
_Then \((u^{c},B^{c})\) converges strongly to \((u,B)\), as \(c\to\infty\). More specifically, it holds that_
\[\lim_{c\to\infty}\left(\sup_{t\in[0,\infty)}\|(u^{c}-u)(t)\|_{\dot{H}^{1}}+ \int_{0}^{\infty}\|(u^{c}-u)(t)\|_{\dot{H}^{2}}^{2}dt\right)=0, \tag{1.13}\]
_and, for all \(s\in[0,\frac{3}{2})\),_
\[\lim_{c\to\infty}\left(\sup_{t\in[0,\infty)}\|(B^{c}-B)(t)\|_{\dot{H}^{s}}+ \int_{0}^{\infty}\|(B^{c}-B)(t)\|_{\dot{H}^{s+1}}^{2}dt\right)=0. \tag{1.14}\]
_If, moreover, we assume that_
\[\lim_{c\to\infty}\|(E_{0}^{c},B_{0}^{c})-(0,B_{0})\|_{\dot{H}^{\frac{3}{2}}}=0\]
_and_
\[\lim_{c\to\infty}\left(c^{-1}\|(E_{0}^{c},B_{0}^{c})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}\right)=0, \tag{1.15}\]
_then (1.14) holds for \(s=\frac{3}{2}\), as well._
The complete proof of Theorem 1.3 is given in Section 6.
_Remark_.: It is possible to quantify (1.13) and (1.14) with a rate of convergence \(O(c^{-\alpha})\), for some \(\alpha>0\), provided that the initial data satisfy a similar algebraic rate of convergence.
_Remark_.: Observe that the initial data in Theorem 1.1 are required to enjoy the regularity \(\dot{B}^{\frac{5}{2}}_{2,1}\). However, this is not needed in the limiting system obtained in the regime \(c\to\infty\), as reflected in Corollary 1.2.
Moreover, the growth of the global solution from Theorem 1.1 in that space is of order \(c\), which is consistent with the fact that it is uniformly bounded in the space \(\dot{H}^{\frac{3}{2}}\). Indeed, heuristically, we notice from our proofs, later on, that each spatial derivative has the same dimension as the speed of light \(c\), i.e.,
\[c^{-1}\dot{B}^{\frac{5}{2}}_{2,1}\sim\dot{H}^{\frac{3}{2}},\]
which means that the solutions have a comparable size in each space. This is natural in view of the fact that waves produced by Maxwell's system have a characteristic speed \(c\).
In conclusion, we supplement our remark with a typical example of data which summarizes the preceding observations and fulfills the assumptions of Theorem 1.1 and Theorem 1.3. To that end, let \(\varphi\in C^{\infty}_{c}(\mathbb{R}^{3})\) and introduce the standard mollifier
\[\varphi_{c}(\cdot)\stackrel{{\rm def}}{{=}}c^{3}\varphi(c\, \cdot).\]
Let \((u_{0},B_{0})\) be the initial data from Corollary 1.2 and \(E_{0}\) be any divergence-free profile in \(H^{\frac{3}{2}}\). Then the sequence of data defined by
\[(u_{0}^{c},E_{0}^{c},B_{0}^{c})\stackrel{{\rm def}}{{=}} \varphi_{c}\ast(u_{0},0,B_{0}) \tag{1.16}\]
is suitable and satisfies all the assumptions in Theorem 1.1 and Theorem 1.3, as long as \(c>c_{0}\), where \(c_{0}>0\) only depends on the size of \(u_{0}\), \(E_{0}\), \(B_{0}\) and \(\varphi\).
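As a sanity check on this example, let us verify the uniform bound \(c^{-1}\|\varphi_{c}*B_{0}\|_{\dot{B}^{\frac{5}{2}}_{2,1}}\lesssim\|B_{0}\|_{\dot{H}^{\frac{3}{2}}}\), which corresponds to the assumption \(c^{-1}B_{0}^{c}\in\dot{B}^{\frac{5}{2}}_{2,1}\) in Theorem 1.1. Since \(\widehat{\varphi_{c}}(\xi)=\widehat{\varphi}(\xi/c)\) is bounded and decays rapidly for \(|\xi|\gtrsim c\), a Cauchy-Schwarz inequality in the low frequencies gives

\[\sum_{2^{k}\leq c}2^{\frac{5k}{2}}\|\Delta_{k}(\varphi_{c}*B_{0})\|_{L^{2}}\lesssim\Bigl(\sum_{2^{k}\leq c}2^{2k}\Bigr)^{\frac{1}{2}}\Bigl(\sum_{k\in\mathbb{Z}}2^{3k}\|\Delta_{k}B_{0}\|_{L^{2}}^{2}\Bigr)^{\frac{1}{2}}\lesssim c\,\|B_{0}\|_{\dot{H}^{\frac{3}{2}}},\]

while, for \(2^{k}>c\), the decay \(|\widehat{\varphi}(\xi/c)|\lesssim_{N}(c/|\xi|)^{N}\) yields \(\|\Delta_{k}(\varphi_{c}*B_{0})\|_{L^{2}}\lesssim_{N}(c2^{-k})^{N}\|\Delta_{k}B_{0}\|_{L^{2}}\), so that the high-frequency part of the sum is also \(O\bigl(c\,\|B_{0}\|_{\dot{H}^{\frac{3}{2}}}\bigr)\). This is in line with the heuristic \(c^{-1}\dot{B}^{\frac{5}{2}}_{2,1}\sim\dot{H}^{\frac{3}{2}}\) discussed above.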
### Challenges and ingredients of proof
Here, we provide the reader with a short roadmap that sheds light on our strategy and main ingredients towards proving our results. We already made it clear in the introduction that the construction of a (unique global) solution to (1.1) at the level of the energy (1.2) is an outstanding open problem. Hence, we seek solutions in higher-regularity spaces.
Notice that one cannot expect to have a better understanding of the Cauchy problem for (1.1) than for the Navier-Stokes equations (i.e., the case \(E\equiv B\equiv 0\)). In the three-dimensional case, one of the well-known settings where the Navier-Stokes equations are globally well-posed is when the initial data obey an axisymmetric geometric condition, which, roughly speaking, reduces the dimension from three to two. In this work, we therefore restrict ourselves to the case of axisymmetric data.
We first clarify the key idea leading to an estimate of the velocity field, which can be performed on (1.6)-(1.7) or (1.8)-(1.9). We argue now that it is better to work on (1.8)-(1.9). The reasons behind this choice can be summarized in two crucial points. The first one is that the terms containing \(\frac{1}{c}\partial_{t}E\) can be seen as an error that should vanish and be discarded from the system when \(c\to\infty\), at least formally. Observe that this formal limit yields exactly the equations for the vorticity in the axisymmetric case, which have been studied in several papers (for instance, see [24]). Accordingly, we believe that (1.8)-(1.9) are more suitable to study the equations in the regime \(c\to\infty\). We shall come back to this limit, later on, to comment on the formal claim
\[\lim_{c\to\infty}\frac{1}{c}\partial_{t}E=0. \tag{1.17}\]
The second reason why we choose to consider (1.8)-(1.9) is more technical and relies on the fact that the \(L^{2}\) energy estimate (1.2) provides us with weak information in dimension three, unlike the two-dimensional case. Indeed, our alternative choice would be to consider (1.6)-(1.7) and to estimate \(j\) in \(L^{2}_{t,x}\), which is the only global bound on \(j\) that can be extracted from the energy estimate. By doing so, one can only establish an estimate of \((\omega,\Omega)\) in \(L^{\infty}_{t}L^{2}_{x}\cap L^{2}_{t}\dot{H}^{1}_{x}\), which is
linear in terms of the \(L_{t}^{2}B_{2,1}^{\frac{5}{2}}\) norm of \(B\). This is comparable to the techniques from [3] where, in two dimensions, it is shown that such estimates can be closed. However, in three dimensions, if one goes into the details of low-frequency estimates, one notices that the interpolation argument used in [3] to weaken the power of \(B\) in some crucial norms will likely not work. It seems that this method would require a global control of \(j\) in \(L_{t}^{2}L_{x}^{3}\), which is not available.
Thus, our main estimate on the velocity field is based on (1.8)-(1.9) and is given in Proposition 3.1, below, where we show that
\[\|(\omega,\Omega)\|_{L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}}\] \[\qquad\lesssim\left(\|(\omega_{0},\Omega_{0})\|_{L^{2}}+\left\| \frac{1}{c}\partial_{t}E\right\|_{L_{t}^{2}\dot{H}^{\frac{1}{2}}}\|B\|_{L_{t} ^{\infty}H^{2}}+\|\Gamma\|_{L_{t}^{\infty}L^{3}}\,\|(B,\Gamma)\|_{L_{t}^{2} \dot{H}^{1}}\right)\exp\big{(}C\mathcal{E}_{0}^{2}\big{)},\]
for some \(C>0\). Now, in order to use this bound, we need the following ingredients:
1. An asymptotically vanishing estimate for \(\frac{1}{c}\partial_{t}E\) of the form \[\frac{1}{c}\,\|\partial_{t}E\|_{L_{t}^{2}\dot{H}^{\frac{1}{2}}}\lesssim c^{- \alpha}F\Big{(}\,\|(u,E,B)\|_{X}\,\Big{)},\] for some \(\alpha>\frac{1}{2}\), some (nonlinear) function \(F\) and a suitable functional space \(X\).
2. A bound of the type \[\|B\|_{L_{t}^{\infty}H^{2}}\lesssim c^{\frac{1}{2}}F\Big{(}\,\|(u,E,B)\|_{X}\, \Big{)}.\]
3. An asymptotically global estimate for \(B\) in \(L_{t}^{2}\dot{H}^{1}\) and \(\Gamma\) in \(L_{t}^{\infty}L^{3}\cap L_{t}^{2}\dot{H}^{1}\) of the form \[\|B\|_{L_{t}^{2}\dot{H}^{1}}+\|\Gamma\|_{L_{t}^{\infty}L^{3}\cap L_{t}^{2}\dot {H}^{1}}\leq C_{0}+c^{-\beta}F\Big{(}\,\|(u,E,B)\|_{X}\,\Big{)},\] for some \(\beta>0\) and \(C_{0}>0\) depending only on the initial data.
The first bound above is inspired by the results from [4, Section 3], whereas the third one is obtained in the spirit of [3, Section 3.6].
The precise proof of these claims is the subject of Section 4 where we will build on the arguments from [3, 4]. For simplicity, the reader can think of the space \(X\) in the foregoing estimates as a combination of the spaces appearing in the statement of Theorem 1.1. The precise construction of \(X\) is also detailed in Section 4.
The next step in our strategy is to study the Maxwell system
\[\left\{\begin{aligned} \frac{1}{c}\partial_{t}E-\nabla\times B+ \sigma cE&=-\sigma P(u\times B),\\ \frac{1}{c}\partial_{t}B+\nabla\times E&=0,\\ \operatorname{div}u=\operatorname{div}B=\operatorname{div}E& =0.\end{aligned}\right. \tag{1.18}\]
The relevant estimates for \((E,B)\) with a general forcing term have recently been established in [3]. They are reproduced in Lemmas 3.2 and 3.3. Nevertheless, some refinements in the analysis of (1.18) are required in order to obtain adequate bounds for the electromagnetic field which are compatible with the asymptotic behavior of \(\frac{1}{c}\partial_{t}E\), \(B\) and \(\Gamma\).
As a starting point, we will be facing the problem of estimating products of the form
\[\|P(u\times B)\|_{B_{p,q}^{*}}\,,\]
with \(s>\frac{3}{p}\), where the regularity of \(u\) is restricted to \(L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}\). Note that a similar issue was overcome in [3] by exploiting the two dimensional normal structure (1.4), which is not valid in our context. Here, we show that a similar analysis can be performed in the axisymmetric case, by using the fact that \(B\) remains orthogonal to \(E\) and \(u\). Accordingly, in Lemma 2.3, we
provide a general result refining the classical paraproduct estimates in a framework which covers the axisymmetric setting.
Proving Theorem 1.1 further requires a precise understanding of the damping phenomenon in (1.18), which is obtained by studying the parabolic-hyperbolic properties of the fields, their interaction and their behavior relative to \(c\). This analysis is laid out in detail in Section 3.2.
With the above ingredients, we are then in a position to conclude Theorem 1.1. Thus, in Section 5.3, we gather all the estimates to establish a nonlinear bound of the form
\[\left\|(u,E,B)\right\|_{X}\leq C_{0}+\mathcal{P}\left(\left\|(u,E,B)\right\|_ {X}\right),\]
where \(\mathcal{P}\) is a polynomial whose coefficients vanish as \(c\to\infty\). The conclusion of the global estimates is then a straightforward application of the abstract Lemma 5.2 which ensures that
\[\left\|(u,E,B)\right\|_{X}\leq 2C_{0},\]
as long as \(c\) is larger than some power of \(C_{0}\).
Note that the proof of Theorem 1.1 is divided into two parts. The first one deals with the case that we call _rough profiles,_ in the spirit of the example of initial data given in (1.16). In that case, the initial electromagnetic field \((E_{0}^{c},B_{0}^{c})\) can have a \(\dot{B}_{2,1}^{\frac{5}{2}}\) norm growing at most like \(c\). In the second part of the proof, in the case of _regular profiles,_ we show that if, moreover, the corresponding \(\dot{B}_{2,1}^{\frac{5}{2}}\) norm of \((E_{0}^{c},B_{0}^{c})\) does not blow up as \(c\) goes to infinity, then the \(\dot{B}_{2,1}^{\frac{5}{2}}\) norm of \((E^{c}(t),B^{c}(t))\) remains bounded, for any \(t\in[0,\infty)\). The second case will be proved by a bootstrap-type argument and can be seen as a persistence-of-regularity result.
Our second main result (Theorem 1.3) establishes the convergence toward (MHD) of the solution constructed in Theorem 1.1. It is to be emphasized that a fundamental ingredient in the proof of Theorem 1.3 hinges upon the understanding of the limit (1.17). This is crucial to obtain a convergence result in the whole domain \([0,\infty)\times\mathbb{R}^{3}\).
The proof of Theorem 1.3 will be done in several steps. Firstly, we prove the convergence in the \(L^{2}\) energy space by performing adequate stability estimates. Subsequently, by interpolation, it follows that the convergence of velocities and magnetic fields holds in \(L_{t}^{\infty}\dot{H}^{s}\cap L_{t}^{2}\dot{H}^{s+1}\) and \(L_{t}^{\infty}\dot{H}^{\frac{3s}{2}}\cap L_{t}^{2}\dot{H}^{\frac{3s}{2}+1}\), respectively, for any \(s\in[0,1)\).
Therefore, the convergence of velocity fields in the endpoint space \(L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}\) will be achieved by an energy estimate in \(\dot{H}^{1}\) and by making use of the stability results from the previous step. Here, it is important to mention that the validity of (1.17) in \(L_{t}^{2}\dot{H}^{\frac{1}{2}}\) (see Proposition 4.1) is crucial.
On the other hand, the convergence of the magnetic fields in the endpoint space \(L_{t}^{\infty}\dot{H}^{\frac{3}{2}}\cap L_{t}^{2}\dot{H}^{\frac{3}{2}+1}\) will be established by a different approach, for a standard \(\dot{H}^{\frac{3}{2}}\) energy estimate would require the validity of (1.17) in \(L_{t}^{2}\dot{H}^{\frac{3}{2}}\) which is not available from Proposition 4.1. More precisely, the convergence of magnetic fields in that space follows from extrapolation compactness techniques, introduced and utilized by the first and third authors in [4].
In summary, the proof of convergence of magnetic fields in the energy space of \(\dot{H}^{\frac{3}{2}}\) is split into two main steps. In the first step, we treat the convergence of frequencies that are localized in a ball whose radius grows as the speed of light increases. By suitably choosing that radius, the convergence of low frequencies then follows as a direct consequence of the convergence in the \(L^{2}\) energy space. In the second step, by exploiting the assumption (1.15), we take care of the remaining high frequencies by building on the refined analysis of Maxwell equations laid out in Section 3.2. The combination of these ideas eventually leads to the completion of all proofs.
### Notation
All definitions and basic properties of functional spaces utilized throughout the paper are introduced in Section 2.
Furthermore, the letter \(C\) will often denote a universal (possibly large) constant that is independent of the variables of the problem, and which is also allowed to change from one line to the next.
For simplicity, we will also be using \(A\lesssim B\) instead of \(A\leq CB.\) Moreover, when needed, in order to specify the dependence of some estimates on some parameters, we will occasionally utilize \(A\lesssim_{s}B\) to insist on the fact that the generic constant \(C\) might depend on a parameter \(s.\)
## 2. The axisymmetric structure, Hardy inequalities and paradifferential calculus
In this section, we establish several lemmas which shed light on crucial features of the axisymmetric structure, which are similar to the properties of the two-dimensional normal structure exploited in [3, 4]. In particular, this structure will be employed to obtain useful improvements on the classical paradifferential product laws and will serve in our main a priori estimates, later on.
First of all, we recall that a vector field \(F:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is axisymmetric if it has the form
\[F(x_{1},x_{2},x_{3})=F_{r}(r,z)e_{r}+F_{\theta}(r,z)e_{\theta}+F_{z}(r,z)e_{z},\]
where the triple \((r,\theta,z)\) denotes the usual cylindrical coordinates defined by the relations
\[x_{1}=r\cos\theta,\quad x_{2}=r\sin\theta,\quad x_{3}=z,\]
and \((e_{r},e_{\theta},e_{z})\) is the corresponding cylindrical orthonormal basis
\[e_{r}=\Big(\frac{x_{h}}{r},0\Big),\quad e_{\theta}=\Big(\frac{x_{h}^{\perp}}{r},0\Big),\quad e_{z}=(0,0,1).\]
Here, the index \(h\) is used to refer to the horizontal components
\[x_{h}\stackrel{{\rm def}}{{=}}(x_{1},x_{2}),\quad x_{h}^{\perp }\stackrel{{\rm def}}{{=}}(-x_{2},x_{1}).\]
Thus, axisymmetry is characterized by the property that the components \((F_{r},F_{\theta},F_{z})\) are independent of \(\theta\). In other words, the field \(F\) is axisymmetric if and only if it satisfies \(R\circ F=F\circ R\), for all rotations \(R\) around the \(z\)-axis.
Furthermore, we say that the vector field \(F:\mathbb{R}^{3}\to\mathbb{R}^{3}\) is axisymmetric without swirl if it has the form
\[F(x_{1},x_{2},x_{3})=F_{r}(r,z)e_{r}+F_{z}(r,z)e_{z},\]
and that it is axisymmetric with pure swirl if it can be represented as
\[F(x_{1},x_{2},x_{3})=F_{\theta}(r,z)e_{\theta}.\]
Observe that an axisymmetric field with pure swirl is always divergence free, which follows from a straightforward computation.
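For the reader's convenience, here is the computation: in cylindrical coordinates, the divergence of a vector field \(F=F_{r}e_{r}+F_{\theta}e_{\theta}+F_{z}e_{z}\) is given by

\[\operatorname{div}F=\frac{1}{r}\partial_{r}\bigl(rF_{r}\bigr)+\frac{1}{r}\partial_{\theta}F_{\theta}+\partial_{z}F_{z},\]

so that, when \(F=F_{\theta}(r,z)e_{\theta}\) has pure swirl, every term vanishes, the middle one because \(F_{\theta}\) is independent of \(\theta\).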
We take some time now to carefully introduce the functional spaces which we use in this article and some related notation. To that end, we first consider the Littlewood-Paley decomposition
\[\sum_{k\in\mathbb{Z}}\Delta_{k}f=f\]
of a tempered distribution modulo polynomials \(f\in\mathcal{S}^{\prime}/\mathcal{P}(\mathbb{R}^{d})\), in any dimension \(d\geq 1\), where the operator \(\Delta_{k}\) is the classical frequency truncation which restricts the support of the Fourier transform
\[\mathcal{F}f\left(\xi\right)=\hat{f}(\xi)\stackrel{{\rm def}}{{= }}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot x}f(x)dx\]
to the set \(\{2^{k-1}\leq|\xi|\leq 2^{k+1}\}\), for each \(k\in\mathbb{Z}\). Recall that the space \(\mathcal{S}^{\prime}/\mathcal{P}(\mathbb{R}^{d})\) is isomorphic to the space \(\mathcal{S}^{\prime}_{0}(\mathbb{R}^{d})\) of tempered distribution restricted to the subspace \(\mathcal{S}_{0}(\mathbb{R}^{d})\), which is made up of all Schwartz functions \(\varphi\in\mathcal{S}(\mathbb{R}^{d})\) such that
\[\int_{\mathbb{R}^{d}}x^{\alpha}\varphi(x)dx=0,\]
for every multi-index \(\alpha\in\mathbb{N}^{d}\). It is always possible to construct \(\Delta_{k}\) so that it acts on tempered distributions through a convolution with a radial smooth function in \(\mathcal{S}_{0}(\mathbb{R}^{d})\), which is appropriately dilated by a factor \(2^{k}\).
Then, for any \(s\in\mathbb{R}\) and \(1\leq p,q,r\leq\infty\), we define the homogeneous Besov space \(\dot{B}^{s}_{p,q}\left(\mathbb{R}^{d}\right)\) and the homogeneous Chemin-Lerner space \(\widetilde{L}^{r}\left([0,T);B^{s}_{p,q}\left(\mathbb{R}^{d}\right)\right)\), with \(T\in(0,\infty]\), as the subspaces of tempered distributions modulo polynomials \(\mathcal{S}^{\prime}/\mathcal{P}\) endowed with the respective norms
\[\|f\|_{\dot{B}^{s}_{p,q}(\mathbb{R}^{d})} \stackrel{{\rm def}}{{=}}\left(\sum_{k\in\mathbb{Z} }2^{ksq}\left\|\Delta_{k}f\right\|_{L^{p}(\mathbb{R}^{d})}^{q}\right)^{\frac{1 }{q}},\] \[\|f\|_{\widetilde{L}^{r}\left([0,T);B^{s}_{p,q}(\mathbb{R}^{d}) \right)} \stackrel{{\rm def}}{{=}}\left(\sum_{k\in\mathbb{Z} }2^{ksq}\left\|\Delta_{k}f\right\|_{L^{r}([0,T);L^{p}(\mathbb{R}^{d}))}^{q} \right)^{\frac{1}{q}},\]
if \(q<\infty\), and with the usual modifications if \(q=\infty\).
We will also employ the homogeneous Sobolev spaces \(\dot{W}^{s,p}\left(\mathbb{R}^{d}\right)\subset\mathcal{S}^{\prime}/\mathcal{P}\) which are defined by the semi-norms
\[\|f\|_{\dot{W}^{s,p}(\mathbb{R}^{d})}\stackrel{{\rm def}}{{=}} \left\||D|^{s}f\right\|_{L^{p}(\mathbb{R}^{d})},\]
where \(|D|^{s}\) is the Fourier multiplier corresponding to the symbol \(|\xi|^{s}\), with \(s\in\mathbb{R}\) and \(1<p<\infty\). When \(p=q=2\), note that the Besov space \(\dot{B}^{s}_{2,2}\) is equivalent to the homogeneous Sobolev space classically denoted by
\[\dot{H}^{s}\stackrel{{\rm def}}{{=}}\dot{W}^{s,2}.\]
Nonhomogeneous versions of these spaces are also defined in a similar way.
Finally, employing truncated Littlewood-Paley decompositions, one can show that the subspace \(\mathcal{S}_{0}\) is dense in \(\dot{B}^{s}_{p,q}\) and \(\dot{W}^{s,p}\), for any \(s\in\mathbb{R}\) and \(1\leq p,q<\infty\), and that a similar statement holds for Chemin-Lerner spaces with suitable modifications.
We refer to [3, Appendix A] for some more details and properties of Besov and Chemin-Lerner spaces in the same notation as in this article, and to [7, 21] for a comprehensive introduction to the subject of Littlewood-Paley decompositions and functional spaces.
We move on now to the main results of this section. Prior to presenting the paradifferential features of the axisymmetric structure, we establish the following version of Hardy's inequality which will be useful in our analysis, below.
**Lemma 2.1**.: _Fix the dimension \(d\geq 2\). Let \(f\) be a smooth function satisfying, for all \(x^{\prime}\in\mathbb{R}^{d-1}\), that_
\[f(0,x^{\prime})=0.\]
_Then, it holds that_
\[\left\|\frac{f}{x_{1}}\right\|_{\dot{W}^{s,p}(\mathbb{R}^{d})}\lesssim_{s,p} \left\|f\right\|_{\dot{W}^{s+1,p}(\mathbb{R}^{d})},\]
_for any \(p\in(1,\infty)\) and all \(s>\frac{1}{p}-1\)._
Proof.: By virtue of the vanishing assumption on \(f\), we can write, for any \((x_{1},x^{\prime})\in\mathbb{R}^{*}\times\mathbb{R}^{d-1}\), that
\[\frac{f(x_{1},x^{\prime})}{x_{1}}=\int_{0}^{1}(\partial_{1}f)(\lambda x_{1},x^{ \prime})d\lambda.\]
It then follows that
\[\left\|\frac{f}{x_{1}}\right\|_{\dot{W}^{s,p}} \leq\int_{0}^{1}\left\|\partial_{1}f(\lambda\cdot,\cdot)\right\| _{\dot{W}^{s,p}}d\lambda\] \[=\int_{0}^{1}\left\|\left|D\right|^{s}(\partial_{1}f)(\lambda \cdot,\cdot)\right\|_{L^{p}}d\lambda\] \[=\int_{0}^{1}\lambda^{-\frac{1}{p}}\big{\|}|D|^{s}m_{\lambda,s}( D)(\partial_{1}f)\big{\|}_{L^{p}}d\lambda\] \[=\int_{0}^{1}\lambda^{-\frac{1}{p}}\big{\|}m_{\lambda,s}(D) \partial_{1}f\big{\|}_{\dot{W}^{s,p}}d\lambda,\]
where \(m_{\lambda,s}(D)\) is the Fourier multiplier operator given, for any \(\lambda\in(0,1)\) and \(s\in\mathbb{R}\), by
\[m_{\lambda,s}(\xi)=\left(\frac{\sqrt{|\lambda\xi_{1}|^{2}+|\xi^{\prime}|^{2}} }{|\xi|}\right)^{s}.\]
Now, we claim that the multiplier norm of \(m_{\lambda,s}(D)\) over \(L^{p}\) is bounded by a constant multiple of \(\max\{1,\lambda^{s}\}\), which leads to
\[\left\|\frac{f}{x_{1}}\right\|_{\dot{W}^{s,p}} \lesssim\int_{0}^{1}\lambda^{-\frac{1}{p}}\max\{1,\lambda^{s}\}d \lambda\left\|\partial_{1}f\right\|_{\dot{W}^{s,p}}\] \[\lesssim\max\left\{\frac{p}{p-1},\frac{1}{s+1-\frac{1}{p}} \right\}\left\|f\right\|_{\dot{W}^{s+1,p}},\]
provided that \(s>\frac{1}{p}-1\), thereby establishing the main estimate of the lemma.
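Indeed, for \(s\geq 0\) one has \(\max\{1,\lambda^{s}\}=1\) on \((0,1)\), so the integral equals \(\int_{0}^{1}\lambda^{-\frac{1}{p}}d\lambda=\frac{p}{p-1}\), whereas for \(s\in(\frac{1}{p}-1,0)\) it equals \(\int_{0}^{1}\lambda^{s-\frac{1}{p}}d\lambda=\frac{1}{s+1-\frac{1}{p}}\), which is finite precisely because \(s>\frac{1}{p}-1\).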
In order to justify the boundedness of \(m_{\lambda,s}(D)\) over \(L^{p}\), it is sufficient, by the Marcinkiewicz-Mikhlin multiplier theorem (see [20, Theorem 6.2.4]), to show that
\[\sup_{\xi\in\mathbb{R}^{d}}\big{|}\xi^{\alpha}\partial_{\xi}^{\alpha}m_{ \lambda,s}(\xi)\big{|}\lesssim\max\{1,\lambda^{s}\}, \tag{2.1}\]
for all multi-indices \(\alpha\in\{0,1\}^{d}\). To that end, we introduce the notation
\[\ell_{i,\lambda}(\xi)\stackrel{{\rm def}}{{=}}\frac{\xi_{i}^{2}} {\lambda^{2}|\xi_{1}|^{2}+|\xi^{\prime}|^{2}}\]
and compute that
\[\xi_{1}\partial_{\xi_{1}}m_{\lambda,s}(\xi) =s\ell_{1,1}(\xi)\Big{(}\lambda^{2}m_{\lambda,s-2}(\xi)-m_{ \lambda,s}(\xi)\Big{)},\] \[\xi_{i}\partial_{\xi_{i}}m_{\lambda,s}(\xi) =s\left(\ell_{i,\lambda}(\xi)-\ell_{i,1}(\xi)\right)m_{\lambda,s} (\xi),\] \[\xi_{1}\partial_{\xi_{1}}\ell_{j,\lambda}(\xi) =2\left(\delta_{1j}-\lambda^{2}\ell_{1,\lambda}(\xi)\right)\ell_{ j,\lambda}(\xi),\] \[\xi_{i}\partial_{\xi_{i}}\ell_{j,\lambda}(\xi) =2\left(\delta_{ij}-\ell_{i,\lambda}(\xi)\right)\ell_{j,\lambda}( \xi),\]
for any integers \(i\in[2,d]\) and \(j\in[1,d]\), where \(\delta_{ij}\) denotes the usual Kronecker delta. Therefore, iterating the preceding calculations and observing that
\[|m_{\lambda,s}(\xi)|+\big{|}\lambda^{2}m_{\lambda,s-2}(\xi)\big{|}\lesssim\max \{1,\lambda^{s}\},\qquad\big{|}\lambda^{2}\ell_{1,\lambda}\big{|}+|\ell_{i, \lambda}(\xi)|\lesssim 1,\]
for all integers \(i\in[2,d]\), it is readily seen that (2.1) holds true, which establishes that the operator norm of \(m_{\lambda,s}(D)\) is controlled by \(\max\{1,\lambda^{s}\}\). This completes the proof of the lemma.
_Remark_.: Note that the assumption on the smoothness of the function \(f\) in Lemma 2.1 can in fact be relaxed. Indeed, as described, for instance, in Theorem 6.6.1 from [8], the trace operator
\[\operatorname{Tr}:\mathcal{S}_{0}(\mathbb{R}^{d})\to\mathcal{S}_{0}(\mathbb{R}^ {d-1})\]
defined by
\[(\operatorname{Tr}f)(x_{1},x^{\prime})\stackrel{{\mathrm{def}}}{{= }}f(0,x^{\prime}),\quad\text{where }(x_{1},x^{\prime})\in\mathbb{R}\times\mathbb{R}^{d-1},\]
has a well defined extension into a bounded operator from \(W^{s+1,p}(\mathbb{R}^{d})\) into \(B_{p,p}^{s+1-\frac{1}{p}}(\mathbb{R}^{d-1})\), for any \(p\in(1,\infty)\), as soon as the condition \(s>\frac{1}{p}-1\) is satisfied. Hence, for \(f\in W^{s+1,p}(\mathbb{R}^{d})\), the condition \(f(0,x^{\prime})\equiv 0\) in Lemma 2.1 can be replaced by \(\operatorname{Tr}f=0\).
The next lemma is a variant of Lemma 2.1 for vector fields with an axisymmetric structure. It will be employed to control specific quantities involving electromagnetic fields, later on. The statement of the lemma below is written in the notation introduced at the beginning of this section.
**Lemma 2.2**.: _Let \(E\) be an axisymmetric divergence-free vector field without swirl and \(B\) be an axisymmetric vector field with pure swirl._
_Then, it holds that_
\[\left\|\frac{\nabla\times E}{r}\right\|_{L^{p}}\lesssim\|E\|_{\dot{W}^{2,p}}\]
_and_
\[\left\|\frac{B_{\theta}}{r}\right\|_{\dot{W}^{s,p}}\lesssim_{s}\|B\|_{\dot{W} ^{s+1,p}}\,,\]
_for any \(p\in(1,\infty)\) and all \(s>\frac{1}{p}-1\)._
_Remark_.: For any given axisymmetric vector field \(E=E_{r}e_{r}+E_{z}e_{z}\) with no swirl, a direct computation gives that the curl \(\nabla\times E=(\partial_{z}E_{r}-\partial_{r}E_{z})e_{\theta}\) is axisymmetric with pure swirl. Similarly, given an axisymmetric vector field \(B=B_{\theta}e_{\theta}\) with pure swirl, another straightforward computation gives that the curl \(\nabla\times B=-\partial_{z}B_{\theta}e_{r}+(\partial_{r}B_{\theta}+\frac{1}{ r}B_{\theta})e_{z}\) is axisymmetric with no swirl.
Proof.: The bound on \(B_{\theta}\) is a consequence of Lemma 2.1. To see this, let us first assume that \(B\) is smooth and has pure swirl. Then, we can write that
\[(B_{1},B_{2},0)=B_{\theta}e_{\theta}=B_{\theta}\left(\frac{-x_{2}}{r},\frac{x _{1}}{r},0\right)\]
to deduce that
\[B_{1}|_{x_{2}=0}\equiv 0,\qquad B_{2}|_{x_{1}=0}\equiv 0,\]
and
\[\frac{B_{\theta}}{r}=-\frac{B_{1}}{x_{2}}=\frac{B_{2}}{x_{1}},\]
as soon as \(r\neq 0.\) Therefore, it is enough to estimate \(\frac{B_{1}}{x_{2}}\) or \(\frac{B_{2}}{x_{1}}.\)
Then, an application of Lemma 2.1 yields that
\[\left\|\frac{B_{\theta}}{r}\right\|_{\dot{W}^{s,p}}=\left\|\frac{B_{2}}{x_{1}}\right\|_{\dot{W}^{s,p}}\lesssim_{s}\|B_{2}\|_{\dot{W}^{s+1,p}}\,,\]
thereby establishing the desired bound on \(B_{\theta}\) in the case of a smooth vector field. The general nonsmooth case is then obtained by a standard approximation argument.
As for the bound on \(E\), it will follow from a remarkable identity valid for axisymmetric divergence-free vector fields with no swirl. Indeed, the fact that \(E\) is axisymmetric without swirl allows us to write that
\[\nabla\times E=\left(\partial_{z}E_{r}-\partial_{r}E_{z}\right)e_{\theta},\]
\[e_{r}\cdot\nabla\left(\nabla\times E\right)=\left(\partial_{r}\partial_{z}E_{r }-\partial_{r}^{2}E_{z}\right)e_{\theta},\]
which, when combined with the divergence-free condition for axisymmetric fields
\[\partial_{r}E_{r}+\frac{1}{r}E_{r}+\partial_{z}E_{z}=0\]
leads to
\[\frac{\nabla\times E}{r}+e_{r}\cdot\nabla\left(\nabla\times E\right) =\left(\frac{1}{r}\partial_{z}E_{r}-\frac{1}{r}\partial_{r}E_{z}+ \partial_{r}\partial_{z}E_{r}-\partial_{r}^{2}E_{z}\right)e_{\theta}\] \[=-\left(\partial_{r}^{2}E_{z}+\frac{1}{r}\partial_{r}E_{z}+ \partial_{z}^{2}E_{z}\right)e_{\theta}.\]
Then, identifying the action of the Laplacian on axisymmetric functions to deduce that
\[\Delta E_{z}=\partial_{r}^{2}E_{z}+\frac{1}{r}\partial_{r}E_{z}+\partial_{z}^ {2}E_{z},\]
we arrive at the expression
\[\frac{\nabla\times E}{r}=-e_{r}\cdot\nabla\left(\nabla\times E\right)-( \Delta E_{z})e_{\theta}.\]
The bound on \(E\) therefore follows from a direct estimate in \(L^{p}\) on the preceding identity, which completes the proof of the lemma.
We conclude this section with a lemma that extends the range of parameters in the classical paraproduct laws by exploiting a geometric condition which is satisfied by the axisymmetric structure. These extended paradifferential estimates will be employed to obtain important a priori estimates for the Navier-Stokes-Maxwell system (1.1), later on.
**Lemma 2.3**.: _Let \(F,G:[0,T)\times\mathbb{R}^{3}\to\mathbb{R}^{3}\) be such that \(\operatorname{div}F=0\) and_
\[\int_{\mathbb{R}^{3}}\varphi(x-y)\nabla\times F(t,y)dy\quad\text{and}\quad \int_{\mathbb{R}^{3}}\psi(x-y)G(t,y)dy\quad\text{are colinear,} \tag{2.2}\]
_for all \(t\in[0,T)\), \(x\in\mathbb{R}^{3}\), and any radially symmetric \(\varphi,\psi\in\mathcal{S}_{0}(\mathbb{R}^{3})\) (i.e., such that \(\varphi(x)\) and \(\psi(x)\) only depend on \(|x|\)). Further consider parameters in \([1,\infty]\) such that_
\[\frac{1}{a}=\frac{1}{a_{1}}+\frac{1}{a_{2}}\qquad\text{and}\qquad\frac{1}{c} =\frac{1}{c_{1}}+\frac{1}{c_{2}}.\]
_Then, recalling that \(P=(-\Delta)^{-1}\operatorname{curl}\operatorname{curl}\) denotes Leray's projector onto solenoidal vector fields, one has the product estimate_
\[\|P(F\times G)\|_{\widetilde{L}^{a}([0,T);\dot{B}^{s+\eta-\frac{3}{2}}_{2,c})}\lesssim\|F\|_{\widetilde{L}^{a_{1}}([0,T);\dot{B}^{s}_{2,c_{1}})}\|G\|_{\widetilde{L}^{a_{2}}([0,T);\dot{B}^{\eta}_{2,c_{2}})}, \tag{2.3}\]
_for any \(s\in(-\infty,\frac{3}{2})\) and \(\eta\in(-\infty,\frac{5}{2})\) with \(s+\eta>0\), as well as the estimate_
\[\|P(F\times G)\|_{\widetilde{L}^{a}([0,T);\dot{B}^{\eta}_{2,c})}\lesssim\|F\|_{L^{a_{1}}([0,T);L^{\infty})\cap\widetilde{L}^{a_{1}}([0,T);\dot{B}^{\frac{3}{2}}_{2,\infty})}\|G\|_{\widetilde{L}^{a_{2}}([0,T);\dot{B}^{\eta}_{2,c})}, \tag{2.4}\]
_for any \(\eta\in(-\frac{3}{2},\frac{5}{2})\). Moreover, at the endpoint regularity of the second factor, one has_
\[\|P(F\times G)\|_{\widetilde{L}^{a}([0,T);\dot{B}^{s+1}_{2,c})}\lesssim\|F\|_{\widetilde{L}^{a_{1}}([0,T);\dot{B}^{s}_{2,c})}\|G\|_{\widetilde{L}^{a_{2}}([0,T);\dot{B}^{\frac{5}{2}}_{2,1})}, \tag{2.5}\]
_as soon as \(s\in(-\frac{5}{2},\frac{3}{2})\), and the case \(s=\frac{3}{2}\) is allowed provided that \(c=1\)._
_Remark_.: The preceding result also holds for vector fields which are independent of time in classical Besov spaces (without any norm in time). Accordingly, the lemma also holds in the case of Besov-space-valued Lebesgue spaces. This means that removing the tildes in (2.3), (2.4) and (2.5) produces valid estimates.
_Remark_.: Consider an axisymmetric field \(H:\mathbb{R}^{3}\to\mathbb{R}^{3}\). For any radially symmetric \(\varphi\in\mathcal{S}_{0}\) and any rotation \(R\) around the \(z\)-axis, we find that
\[\varphi*H(Rx)=\int_{\mathbb{R}^{3}}\varphi(R(x-y))H(Ry)dy=\int_{\mathbb{R}^{3} }\varphi(x-y)RH(y)dy=R\varphi*H(x),\]
thereby showing that \(\varphi*H\) is axisymmetric, too. If, furthermore, the field \(H\) has pure swirl, then, employing that \(S_{x}(y)\stackrel{{\mathrm{def}}}{{=}}y-2(y\cdot e_{\theta}(x)) e_{\theta}(x)\), for any \(x\neq 0\), is an isometry and satisfies that
\[\frac{1}{2}\big{(}e_{\theta}(y)+e_{\theta}(S_{x}y)\big{)}=e_{\theta}(y)-\big{(} e_{\theta}(y)\cdot e_{r}(x)\big{)}e_{r}(x)=\big{(}e_{\theta}(y)\cdot e_{\theta}(x )\big{)}e_{\theta}(x),\]
we compute that
\[\int_{\mathbb{R}^{3}}\varphi(x-y)H_{\theta}(y)e_{\theta}(y)dy =\frac{1}{2}\int_{\mathbb{R}^{3}}\big{(}\varphi(x-y)H_{\theta}(y) e_{\theta}(y)+\varphi(x-S_{x}y)H_{\theta}(S_{x}y)e_{\theta}(S_{x}y)\big{)}dy\] \[=\frac{1}{2}\int_{\mathbb{R}^{3}}\varphi(x-y)H_{\theta}(y)\big{(} e_{\theta}(y)+e_{\theta}(S_{x}y)\big{)}dy\] \[=\left(\int_{\mathbb{R}^{3}}\varphi(x-y)H_{\theta}(y)\big{(}e_{ \theta}(y)\cdot e_{\theta}(x)\big{)}dy\right)e_{\theta}(x),\]
which establishes that \(\varphi*H\) has pure swirl, as well.
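Let us point out, for clarity, two elementary facts used in the computation above: the reflection \(S_{x}\) fixes the plane \(\{y:y\cdot e_{\theta}(x)=0\}\), which contains both \(x\) and the vertical axis, so that \(|x-S_{x}y|=|x-y|\) and hence \(\varphi(x-S_{x}y)=\varphi(x-y)\) for radial \(\varphi\); moreover, \(S_{x}y\) has the same cylindrical coordinates \((r,z)\) as \(y\), whence \(H_{\theta}(S_{x}y)=H_{\theta}(y)\).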
_Remark_.: The hypothesis (2.2) is satisfied by axisymmetric divergence-free vector fields such that \(F\) has no swirl and \(G\) has pure swirl. Indeed, as previously emphasized, the curl of an axisymmetric vector field with no swirl is axisymmetric with pure swirl. Therefore, in this situation, it holds that \(\nabla\times F\) and \(G\) both have pure swirls, which implies, according to the preceding remark, that their convolutions with radial functions remain axisymmetric with pure swirl and are thus colinear.
_Remark_.: Note that Lemma 2.3 above is an extension of [3, Lemma 3.4] to the three-dimensional setting. In particular, its significance lies in the fact that it allows us to cover the range of parameters \(\eta\in[\frac{3}{2},\frac{5}{2})\). Indeed, without the geometric assumption (2.2) on \(F\) and \(G\), the paradifferential estimates remain valid but may need to be restricted to parameters satisfying \(\eta<\frac{3}{2}\).
Proof.: We follow the method of proof of Lemma 3.4 from [3] and write Bony's decomposition
\[F\times G=T_{F}G-T_{G}F+R(F,G),\]
where the paraproducts are defined by
\[T_{F}G=\sum_{\begin{subarray}{c}j,k\in\mathbb{Z}\\ j-k<-2\end{subarray}}\Delta_{j}F\times\Delta_{k}G,\qquad T_{G}F=\sum_{ \begin{subarray}{c}j,k\in\mathbb{Z}\\ j-k<-2\end{subarray}}\Delta_{j}G\times\Delta_{k}F=-\sum_{\begin{subarray}{c}j, k\in\mathbb{Z}\\ j-k>2\end{subarray}}\Delta_{j}F\times\Delta_{k}G,\]
and the remainder is given by
\[R(F,G)=\sum_{\begin{subarray}{c}j,k\in\mathbb{Z}\\ |j-k|\leq 2\end{subarray}}\Delta_{j}F\times\Delta_{k}G,\]
to deduce that a direct application of classical paraproduct estimates on homogeneous Besov spaces (see [3, Appendix A] or [7], for instance), combined with the fact that \(P\) is bounded over
Besov spaces, yields the validity of (2.3) for parameters \(s\in(-\infty,\frac{3}{2})\) and \(\eta\in(-\infty,\frac{3}{2})\) with \(s+\eta>0\), and the validity of (2.4) for parameters \(\eta\in(-\frac{3}{2},\frac{3}{2})\).
It is important to emphasize here that the restriction \(\eta<\frac{3}{2}\) comes solely from the estimate of \(T_{G}F\). Thus, in order to establish the validity of (2.3) and (2.4) for the full range of parameters, we only need to show now that
\[\|P(T_{G}F)\|_{\widetilde{L}^{a}([0,T);\dot{B}^{s+\eta-\frac{3}{2}}_{2,c})} \lesssim\|F\|_{\widetilde{L}^{a_{1}}([0,T);\dot{B}^{s}_{2,c_{1}})}\,\|G\|_{ \widetilde{L}^{a_{2}}([0,T);\dot{B}^{\eta}_{2,c_{2}})}\,, \tag{2.6}\]
for any \(s\in\mathbb{R}\) and \(\eta\in(-\infty,\frac{5}{2})\), as a consequence of the divergence-free structure of \(F\) and the geometric assumption (2.2).
To that end, assuming first that \(F\) and \(G\) are smooth, we compute that
\[\nabla\times(F\times G)=\nabla(F\cdot G)-G\times(\nabla\times F)-F\times( \nabla\times G)-2F\cdot\nabla G+F\operatorname{div}G-G\operatorname{div}F.\]
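For the reader's convenience, we recall that the preceding identity follows from combining the classical vector-calculus formulas
\[\nabla\times(F\times G)=F\operatorname{div}G-G\operatorname{div}F+G\cdot\nabla F-F\cdot\nabla G\qquad\text{and}\qquad\nabla(F\cdot G)=F\times(\nabla\times G)+G\times(\nabla\times F)+F\cdot\nabla G+G\cdot\nabla F,\]
and eliminating the term \(G\cdot\nabla F\) between them.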
Thus, if \(\nabla\times F\) and \(G\) are colinear and \(F\) is divergence free, we arrive at
\[P(F\times G)=(-\Delta)^{-1}\nabla\times\big{(}-F\times(\nabla\times G)-2F \cdot\nabla G+F\operatorname{div}G\big{)}. \tag{2.7}\]
Now, applying the same reasoning to \(\Delta_{k}F\) and \(\Delta_{j}G\), instead of \(F\) and \(G\), and utilizing (2.2) to deduce that \(\nabla\times\Delta_{k}F\) and \(\Delta_{j}G\) are colinear, we obtain that
\[P(T_{G}F)=(-\Delta)^{-1}\nabla\times\Bigg{(}\sum_{\begin{subarray}{c}j,k\in \mathbb{Z}\\ j-k<-2\end{subarray}}\Big{(}\Delta_{k}F\times(\nabla\times\Delta_{j}G)+2 \Delta_{k}F\cdot\nabla\Delta_{j}G-\Delta_{k}F\operatorname{div}\Delta_{j}G \Big{)}\Bigg{)}. \tag{2.8}\]
Therefore, applying classical paraproduct estimates (see [3, Appendix A] or [7]), we conclude, for any \(s\in\mathbb{R}\) and \(\eta<\frac{5}{2}\), that
\[\|P(T_{G}F)\|_{\widetilde{L}^{a}\dot{B}^{s+\eta-\frac{3}{2}}_{2,c}}\lesssim\|F\|_{\widetilde{L}^{a_{1}}\dot{B}^{s}_{2,c_{1}}}\,\|G\|_{\widetilde{L}^{a_{2}}\dot{B}^{\eta}_{2,c_{2}}}\,,\]
which establishes (2.6), thereby completing the proof of (2.3) and (2.4).
The justification of (2.5) is similar. Indeed, the classical paradifferential estimates apply directly to the paraproduct \(T_{F}G\) and the remainder \(R(F,G)\) in the range of parameters described in (2.5). Thus, we see that (2.5) will follow from the justification of the paraproduct estimate
\[\|P(T_{G}F)\|_{\widetilde{L}^{a}([0,T);\dot{B}^{s+1}_{2,c})}\lesssim\|F\|_{\widetilde{L}^{a_{1}}([0,T);\dot{B}^{s}_{2,c})}\,\|G\|_{\widetilde{L}^{a_{2}}([0,T);\dot{B}^{\frac{5}{2}}_{2,1})}\,, \tag{2.9}\]
for any \(s\in\mathbb{R}\). As before, in order to prove (2.9), we apply classical paraproduct estimates to (2.8). This leads to
\[\begin{split}\|P(T_{G}F)\|_{\widetilde{L}^{a}\dot{B}^{s+1}_{2,c}}&\lesssim\bigg{\|}\sum_{\begin{subarray}{c}j,k\in\mathbb{Z}\\ j-k<-2\end{subarray}}\Big{(}\Delta_{k}F\times(\nabla\times\Delta_{j}G)+2\Delta_{k}F\cdot\nabla\Delta_{j}G-\Delta_{k}F\operatorname{div}\Delta_{j}G\Big{)}\bigg{\|}_{\widetilde{L}^{a}\dot{B}^{s}_{2,c}}\\&\lesssim\|F\|_{\widetilde{L}^{a_{1}}\dot{B}^{s}_{2,c}}\,\|G\|_{\widetilde{L}^{a_{2}}\dot{B}^{1}_{\infty,1}}\lesssim\|F\|_{\widetilde{L}^{a_{1}}\dot{B}^{s}_{2,c}}\,\|G\|_{\widetilde{L}^{a_{2}}\dot{B}^{\frac{5}{2}}_{2,1}}\,,\end{split}\]
for all \(s\in\mathbb{R}\), which establishes the validity of (2.5) and concludes the proof of the lemma.
_Remark_.: If, instead of (2.2), one merely assumes that \(\nabla\times F\) and \(G\) are colinear, in the sense that \(G\times(\nabla\times F)=0\), then the paradifferential estimate (2.3) remains valid for any parameters \(s\in(-\infty,\frac{3}{2})\) and \(\eta\in(-\infty,\frac{5}{2})\) in the range \(s+\eta>1\). This follows from applying Bony's decomposition to each term of (2.7) and then estimating the resulting paraproducts and remainders as in the proof above. The restriction \(s+\eta>1\) is then a consequence of the estimates of the remainders. Similarly, under the assumption that \(\nabla\times F\) and \(G\) are colinear, one can show that (2.4) and (2.5) remain valid in the respective ranges \(\eta\in(-\frac{1}{2},\frac{5}{2})\) and \(s\in(-\frac{3}{2},\frac{3}{2})\).
## 3. A priori estimates
Here, we establish a priori estimates on smooth solutions of the Navier-Stokes-Maxwell system (1.1). The ensuing bounds will be employed to prove existence of global solutions to that system. For simplicity, from now on, we take \(\nu=1\) in (1.1). However, we emphasize that all estimates below hold for any \(\nu>0\).
We recall first that the only available global bound for smooth solutions of (1.1) corresponds to the \(L^{2}\)-energy estimate
\[\|u\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}}+\|(E,B)\|_{L^{\infty}_{t} L^{2}}+\|j\|_{L^{2}_{t}L^{2}}\lesssim\mathcal{E}_{0}\stackrel{{\rm def }}{{=}}\left\|(u_{0},E_{0},B_{0})\right\|_{L^{2}}. \tag{3.1}\]
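For the reader's convenience, we recall that (3.1) stems, at least formally and up to the precise normalization of \(\sigma\) in (1.1), from the energy identity
\[\frac{1}{2}\frac{d}{dt}\Big(\|u\|_{L^{2}}^{2}+\|E\|_{L^{2}}^{2}+\|B\|_{L^{2}}^{2}\Big)+\|\nabla u\|_{L^{2}}^{2}+\frac{1}{\sigma}\|j\|_{L^{2}}^{2}=0,\]
which is obtained by testing the momentum equation with \(u\), Ampère's and Faraday's equations with \(E\) and \(B\), respectively, and then employing Ohm's law.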
As explained in the introduction, this bound does not seem to be enough to construct any kind of solutions. Therefore, we shall aim to propagate some higher regularity for \(u\), \(E\) and \(B\).
In what follows, we recall that we are using the notation
\[\Omega\stackrel{{\rm def}}{{=}}\frac{\omega_{\theta}}{r}\qquad \text{and}\qquad\Gamma\stackrel{{\rm def}}{{=}}\frac{B_{\theta}} {r},\]
where
\[\omega_{\theta}=\omega\cdot e_{\theta}=(\nabla\times u)\cdot e_{\theta}\qquad \text{and}\qquad B_{\theta}=B\cdot e_{\theta}.\]
### Controlling the velocity field
In order to control higher regularities for the velocity field, we shall exploit the axisymmetric structure and perform an energy estimate on the equations describing the evolution of \(\omega_{\theta}\) and \(\Omega\), which we derived in Section 1.1. For convenience, we recall here that the equation for \(\omega_{\theta}\) can be written as
\[\partial_{t}\omega_{\theta}+u\cdot\nabla\omega_{\theta}-\big{(}\Delta-\frac{ 1}{r^{2}}\big{)}\omega_{\theta}=\frac{u_{r}}{r}\omega_{\theta}-\partial_{z} \big{(}\Gamma B_{\theta}\big{)}-\frac{1}{c}\partial_{t}E_{r}\frac{B_{\theta}} {r}+\frac{1}{c}\partial_{t}E\cdot\nabla B_{\theta}, \tag{3.2}\]
while the equation for \(\Omega\) reads as
\[\partial_{t}\Omega+u\cdot\nabla\Omega-\big{(}\Delta+\frac{\partial_{r}}{r} \big{)}\Omega=-\partial_{z}\big{(}\Gamma^{2}\big{)}-\frac{1}{c}\partial_{t}E \cdot\nabla\Gamma. \tag{3.3}\]
The following proposition provides a control on the velocity field in terms of some suitable norms of electromagnetic fields. This will be combined with the estimates from Section 3.2 to obtain uniform global bounds in Section 5, later on.
**Proposition 3.1**.: _Let \(T\in(0,\infty]\) and \((u,E,B)\) be a smooth axisymmetric solution of (1.1) on \([0,T)\), where \(u\) and \(E\) have no swirl and \(B\) has pure swirl. Then, there is a universal constant \(C>0\) such that_
\[\|(\omega,\Omega)\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}}\] \[\qquad\lesssim\left(\|(\omega_{0},\Omega_{0})\|_{L^{2}}+\left\| \frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}}\|B\|_{L^{ \infty}_{t}H^{2}}+\|\Gamma\|_{L^{\infty}_{t}L^{3}}\left\|(B,\Gamma)\right\|_{L ^{2}_{t}\dot{H}^{1}}\right)\exp\big{(}C\mathcal{E}_{0}^{2}\big{)},\]
_where all the norms are taken over the time interval \([0,T)\)._
_Remark_.: Note that the right-hand side in the bound above does not exhibit any time growth, provided that the norms of the electromagnetic fields remain bounded on any time interval \([0,T)\). This is crucial and necessary to prove the global existence of solutions in Theorem 1.1.
Proof.: Firstly, we observe that a straightforward computation relying on the fact that \(\omega\) is axisymmetric without swirl yields that
\[e_{r}\cdot\nabla\omega=(\partial_{r}\omega_{\theta})e_{\theta},\qquad e_{ \theta}\cdot\nabla\omega=-\frac{\omega_{\theta}}{r}e_{r}=-\Omega e_{r},\qquad e _{z}\cdot\nabla\omega=(\partial_{z}\omega_{\theta})e_{\theta}.\]
In particular, this implies that
\[|\omega|=|\omega_{\theta}|\qquad\text{and}\qquad|\nabla\omega|\sim|(\partial_{ r}\omega_{\theta},\partial_{z}\omega_{\theta})|+|\Omega|\,, \tag{3.4}\]
which will allow us to estimate \((\omega_{\theta},\Omega)\) instead of \((\omega,\Omega)\).
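For clarity, we note that the formulas above can be verified directly by writing \(\omega=\omega_{\theta}(r,z)e_{\theta}\) and recalling that \(e_{\theta}\cdot\nabla=\frac{1}{r}\partial_{\theta}\) with \(\partial_{\theta}e_{\theta}=-e_{r}\), so that, for instance,
\[e_{\theta}\cdot\nabla\omega=\frac{\omega_{\theta}}{r}\partial_{\theta}e_{\theta}=-\frac{\omega_{\theta}}{r}e_{r}=-\Omega e_{r}.\]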
Now, multiplying (3.2) by \(\omega_{\theta}\), integrating in time and space, and then using the divergence-free condition on \(E\) and \(u\) yields, for all \(t\in[0,T)\), that
\[\begin{split}\frac{1}{2}\left\|\omega_{\theta}(t)\right\|_{L^{2 }}^{2}&+\left\|\nabla\omega_{\theta}\right\|_{L^{2}_{t}L^{2}}^{2 }+\left\|\frac{\omega_{\theta}}{r}\right\|_{L^{2}_{t}L^{2}}^{2}\\ &\leq\frac{1}{2}\left\|\omega_{0}\right\|_{L^{2}}^{2}+\int_{0}^{t }\left\|\frac{u_{r}(\tau)}{r}\right\|_{L^{\infty}}\left\|\omega_{\theta}(\tau )\right\|_{L^{2}}^{2}d\tau+\left|\int_{0}^{t}\int_{\mathbb{R}^{3}}\Gamma(\tau) B(\tau)\partial_{z}\omega_{\theta}(\tau)dxd\tau\right|\\ &\quad+\int_{0}^{t}\frac{1}{c}\left\|(\partial_{t}E)B(\tau) \right\|_{L^{2}}\Big{(}\left\|\nabla\omega_{\theta}(\tau)\right\|_{L^{2}}+ \left\|\frac{\omega_{\theta}}{r}(\tau)\right\|_{L^{2}}\Big{)}d\tau,\end{split} \tag{3.5}\]
where the norms in time are taken over the interval \([0,t)\).
In order to estimate the second term in the right-hand side above, we make use of (1.10) to obtain, for any \(\varepsilon>0\), that
\[\begin{split}\int_{0}^{t}\left\|\frac{u_{r}(\tau)}{r}\right\|_{L ^{\infty}}&\left\|\omega_{\theta}(\tau)\right\|_{L^{2}}^{2}d\tau \\ &\lesssim\int_{0}^{t}\left\|\Omega(\tau)\right\|_{L^{2}}^{\frac{1} {2}}\left\|\nabla\Omega(\tau)\right\|_{L^{2}}^{\frac{1}{2}}\left\|\omega_{ \theta}(\tau)\right\|_{L^{2}}^{2}d\tau\\ &\leq\varepsilon\int_{0}^{t}\left\|\Omega(\tau)\right\|_{L^{2}} \left\|\nabla\Omega(\tau)\right\|_{L^{2}}d\tau+C_{\varepsilon}\int_{0}^{t} \left\|\omega_{\theta}(\tau)\right\|_{L^{2}}^{4}d\tau\\ &\leq\varepsilon\Big{(}\left\|\frac{\omega_{\theta}}{r}\right\|_{L ^{2}_{t}L^{2}}^{2}+\left\|\Omega\right\|_{L^{2}_{t}H^{1}}^{2}\Big{)}+C_{ \varepsilon}\int_{0}^{t}\left\|u(\tau)\right\|_{H^{1}}^{2}\left\|\omega_{ \theta}(\tau)\right\|_{L^{2}}^{2}d\tau,\end{split} \tag{3.6}\]
where we utilized the celebrated Biot-Savart estimate in the last line.
On the other hand, the estimate of the third term in the right-hand side of (3.5) is obtained by employing Holder's inequality, again, followed by the Sobolev embedding \(\dot{H}^{1}\hookrightarrow L^{6}(\mathbb{R}^{3})\) to write, for any \(\varepsilon>0\), that
\[\begin{split}\left|\int_{0}^{t}\int_{\mathbb{R}^{3}}\Gamma(\tau) B(\tau)\partial_{z}\omega_{\theta}(\tau)dxd\tau\right|&\leq C_{ \varepsilon}\int_{0}^{t}\left\|\Gamma B(\tau)\right\|_{L^{2}}^{2}d\tau+ \varepsilon\left\|\partial_{z}\omega_{\theta}\right\|_{L^{2}_{t}L^{2}}^{2}\\ &\leq C_{\varepsilon}\int_{0}^{t}\left\|\Gamma(\tau)\right\|_{L^{ 3}}^{2}\left\|B(\tau)\right\|_{L^{6}}^{2}d\tau+\varepsilon\left\|\nabla\omega_ {\theta}\right\|_{L^{2}_{t}L^{2}}^{2}\\ &\leq C_{\varepsilon}\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}}^{ 2}\left\|B\right\|_{L^{2}_{t}H^{1}}^{2}+\varepsilon\left\|\nabla\omega_{ \theta}\right\|_{L^{2}_{t}L^{2}}^{2}.\end{split} \tag{3.7}\]
Finally, the last integral in (3.5) can be easily controlled by similar arguments to obtain that
\[\begin{split}\int_{0}^{t}\frac{1}{c}\,\|(\partial_{t}E)B(\tau)\|_{L ^{2}}&\Big{(}\left\|\nabla\omega_{\theta}(\tau)\right\|_{L^{2}}+ \left\|\frac{\omega_{\theta}}{r}(\tau)\right\|_{L^{2}}\Big{)}d\tau\\ &\leq\frac{C_{\varepsilon}}{c^{2}}\,\|\partial_{t}E\|_{L^{2}_{t}L ^{3}}^{2}\,\|B\|_{L^{\infty}_{t}L^{6}}^{2}+\varepsilon\Big{(}\left\|\nabla \omega_{\theta}\right\|_{L^{2}_{t}L^{2}}^{2}+\left\|\frac{\omega_{\theta}}{r} \right\|_{L^{2}_{t}L^{2}}^{2}\Big{)}\\ &\leq\frac{C_{\varepsilon}}{c^{2}}\,\|\partial_{t}E\|_{L^{2}_{t} \dot{H}^{\frac{1}{2}}}^{2}\,\|B\|_{L^{\infty}_{t}\dot{H}^{1}}^{2}+\varepsilon \Big{(}\left\|\nabla\omega_{\theta}\right\|_{L^{2}_{t}L^{2}}^{2}+\left\|\frac{ \omega_{\theta}}{r}\right\|_{L^{2}_{t}L^{2}}^{2}\Big{)}.\end{split} \tag{3.8}\]
All in all, incorporating (3.6), (3.7) and (3.8) into (3.5), and choosing \(\varepsilon\) small enough yields that
\[\begin{split}\left\|\omega(t)\right\|_{L^{2}}^{2}+\left\|\nabla \omega_{\theta}\right\|_{L^{2}_{t}L^{2}}^{2}&+\left\|\frac{ \omega_{\theta}}{r}\right\|_{L^{2}_{t}L^{2}}^{2}\\ &\leq\left\|\omega_{0}\right\|_{L^{2}}^{2}+\frac{1}{4}\left\| \Omega\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}+C\int_{0}^{t}\left\|u(\tau)\right\|_{\dot{H}^{1}}^{2}\left\|\omega(\tau)\right\|_{L^{2}}^{2}d\tau\\ &\quad+C\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}}^{2}\left\|B\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}+\frac{C}{c^{2}}\left\|\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}}^{2}\left\|B\right\|_{L^{\infty}_{t}\dot{H}^{1}}^{2},\end{split} \tag{3.9}\]
where we have utilized (3.4).
Now, we show how to estimate \(\Omega\) in the right-hand side above. To that end, we first perform an \(L^{2}\)-energy estimate on (3.3) followed by a standard application of paraproduct laws to find, for any \(t\in[0,T)\), that
\[\frac{1}{2}\left\|\Omega(t)\right\|_{L^{2}}^{2}+\left\|\Omega \right\|_{L^{2}_{t}\dot{H}^{1}}^{2} \leq\frac{1}{2}\left\|\Omega_{0}\right\|_{L^{2}}^{2}+\left\|\Gamma ^{2}\right\|_{L^{2}_{t}L^{2}}\left\|\Omega\right\|_{L^{2}_{t}\dot{H}^{1}}+\frac {1}{c}\left\|\partial_{t}E\cdot\nabla\Gamma\right\|_{L^{2}_{t}\dot{H}^{-1}} \left\|\Omega\right\|_{L^{2}_{t}\dot{H}^{1}}\] \[\leq\frac{1}{2}\left\|\Omega_{0}\right\|_{L^{2}}^{2}+C\left\| \Gamma\right\|_{L^{4}_{t}L^{4}}^{4}+\frac{C}{c^{2}}\left\|\partial_{t}E\right\| _{L^{2}_{t}\dot{H}^{\frac{1}{2}}}^{2}\left\|\Gamma\right\|_{L^{\infty}_{t} \dot{H}^{1}}^{2}+\frac{1}{2}\left\|\Omega\right\|_{L^{2}_{t}\dot{H}^{1}}^{2},\]
for some universal constant \(C>0\). Therefore, by further employing Holder's and embedding inequalities, we obtain that
\[\left\|\Omega(t)\right\|_{L^{2}}^{2}+\left\|\Omega\right\|_{L^{2}_{t}\dot{H}^{ 1}}^{2}\leq\left\|\Omega_{0}\right\|_{L^{2}}^{2}+C\left\|\Gamma\right\|_{L^{ \infty}_{t}L^{3}}^{2}\left\|\Gamma\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}+\frac{C }{c^{2}}\left\|\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}}^{2} \left\|\Gamma\right\|_{L^{\infty}_{t}\dot{H}^{1}}^{2}.\]
Hence, recalling that
\[\Gamma=\frac{B_{\theta}}{r},\]
and employing Lemma 2.2, we arrive at the bound
\[\left\|\Omega(t)\right\|_{L^{2}}^{2}+\left\|\Omega\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}\leq\left\|\Omega_{0}\right\|_{L^{2}}^{2}+C\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}}^{2}\left\|\Gamma\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}+\frac{C}{c^{2}}\left\|\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}}^{2}\left\|B\right\|_{L^{\infty}_{t}\dot{H}^{2}}^{2}. \tag{3.10}\]
Finally, combining (3.9) and (3.10), an application of the classical Gronwall inequality yields that
\[\left\|(\omega,\Omega)\right\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{ H}^{1}} \lesssim\left(\left\|(\omega_{0},\Omega_{0})\right\|_{L^{2}}+\left\| \frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}}\left\|B \right\|_{L^{\infty}_{t}H^{2}}+\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}} \left\|(B,\Gamma)\right\|_{L^{2}_{t}\dot{H}^{1}}\right)\] \[\quad\times\exp\left(C\int_{0}^{t}\left\|u(\tau)\right\|_{\dot{H}^ {1}}^{2}d\tau\right),\]
which, in view of the energy inequality (3.1), concludes the proof of the proposition.
In view of the estimates on the velocity field given in the preceding proposition, we now need to establish the following bounds on the electromagnetic field:
* An asymptotically vanishing bound for \(\frac{1}{c}\partial_{t}E\) of the form (3.11) \[\frac{1}{c}\left\|\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}} \lesssim c^{-\alpha}F\Big{(}\left\|(u,E,B)\right\|_{X}\Big{)},\] for some \(\alpha>0\), a (nonlinear) function \(F\) and a suitable functional space \(X\).
* An asymptotically global bound for \(B\) in \(L^{2}_{t}\dot{H}^{1}\) and \(\Gamma\) in \(L^{\infty}_{t}L^{3}\cap L^{2}_{t}\dot{H}^{1}\) of the form (3.12) \[\left\|B\right\|_{L^{2}_{t}\dot{H}^{1}}+\left\|\Gamma\right\|_{L^{\infty}_{t}L ^{3}\cap L^{2}_{t}\dot{H}^{1}}\leq C_{0}+c^{-\beta}F\Big{(}\left\|(u,E,B) \right\|_{X}\Big{)},\] for some \(\beta>0\) and \(C_{0}>0\) depending only on the initial data.
The complete justification of these bounds will be the subject of Section 4, later on.
### Controlling the electromagnetic field
Here, we establish several estimates combining the refined study of the dispersive properties of Maxwell's equations from [3] with the axisymmetric structure. The principal results of this part of the article are obtained in Sections 3.2.1 and 3.2.2, below.
The ensuing bounds will be further combined with the results from the previous section to arrive at a global control of solutions to (1.1) in Section 5, later on.
For the sake of clarity, we recall first the essential results from [3] on Strichartz estimates and maximal parabolic regularity for the three-dimensional damped Maxwell system which are useful in the present work.
**Lemma 3.2**.: _[_3_, Corollary 2.12]_ _Consider a solution \((E,B):[0,T)\times\mathbb{R}^{3}\to\mathbb{R}^{6}\) of the damped Maxwell system_
\[\begin{cases}\frac{1}{c}\partial_{t}E-\nabla\times B+\sigma cE& =G,\\ \frac{1}{c}\partial_{t}B+\nabla\times E&=0,\\ \operatorname{div}B&=0,\end{cases} \tag{3.13}\]
_for some initial data \((E,B)(0,x)=(E_{0},B_{0})(x)\), where \(\sigma>0\) and \(c>0\)._
_For any exponent pairs \((q,r),(\tilde{q},\tilde{r})\in[1,\infty]\times[2,\infty)\) which are admissible in the sense that_
\[\frac{1}{q}+\frac{1}{r}\geq\frac{1}{2}\qquad\text{and}\qquad\frac{1}{\tilde{q} }+\frac{1}{\tilde{r}}\geq\frac{1}{2},\]
_and such that_
\[\frac{1}{q}+\frac{1}{\tilde{q}}\leq 1,\]
_one has the high-frequency estimate_
\[2^{-j\left(1-\frac{2}{r}\right)}\left\|\Delta_{j}(PE,B)\right\|_{L^{q}([0,T);L^{r})}\lesssim c^{\frac{1}{2}-\frac{1}{r}-\frac{2}{q}}\left\|\Delta_{j}(PE_{0},B_{0})\right\|_{L^{2}}+c^{2-\frac{1}{r}-\frac{1}{\tilde{r}}-\frac{2}{q}-\frac{2}{\tilde{q}}}2^{j\left(1-\frac{2}{\tilde{r}}\right)}\left\|\Delta_{j}PG\right\|_{L^{\tilde{q}^{\prime}}([0,T);L^{\tilde{r}^{\prime}})},\]
_for all \(j\in\mathbb{Z}\) with \(2^{j}\geq\sigma c\), and the low-frequency estimates_
\[2^{-j\left(\frac{3}{2}-\frac{3}{r}\right)}\left\|\Delta_{j}PE\right\|_{L^{q}([0,T);L^{r})}\lesssim c^{-\frac{2}{q}}\left\|\Delta_{j}PE_{0}\right\|_{L^{2}}+c^{-1}2^{j\left(1-\frac{2}{q}\right)}\left\|\Delta_{j}B_{0}\right\|_{L^{2}}+c^{1-\frac{2}{q}-\frac{2}{\tilde{q}}}2^{j\left(\frac{3}{2}-\frac{3}{\tilde{r}}\right)}\left\|\Delta_{j}PG\right\|_{L^{\tilde{q}^{\prime}}([0,T);L^{\tilde{r}^{\prime}})}\]
_and_
\[2^{-j\left(\frac{3}{2}-\frac{3}{r}-\frac{2}{q}\right)}\left\|\Delta_{j}B\right\|_{L^{q}([0,T);L^{r})}\lesssim c^{-1}2^{j}\left\|\Delta_{j}PE_{0}\right\|_{L^{2}}+\left\|\Delta_{j}B_{0}\right\|_{L^{2}}+2^{j\left(\frac{5}{2}-\frac{3}{\tilde{r}}-\frac{2}{\tilde{q}}\right)}\left\|\Delta_{j}PG\right\|_{L^{\tilde{q}^{\prime}}([0,T);L^{\tilde{r}^{\prime}})},\]
_for all \(j\in\mathbb{Z}\) with \(2^{j}\leq\sigma c\)._
**Lemma 3.3**.: _[_3_, Corollary 2.14]_ _Consider a solution \((E,B):[0,T)\times\mathbb{R}^{3}\to\mathbb{R}^{6}\) of the damped Maxwell system (3.13), for some initial data \((E,B)(0,x)=(E_{0},B_{0})(x)\), where \(\sigma>0\) and \(c>0\)._
_For any \(\chi\in C_{c}^{\infty}(\mathbb{R}^{d})\) and \(s\in\mathbb{R}\), one has the low-frequency estimates_
\[\left\|\chi(c^{-1}D)PE\right\|_{L_{t}^{m}([0,T);\dot{B}_{2,q}^{s+\frac{2}{m}})}\lesssim c^{-\frac{2}{m}}\left\|PE_{0}\right\|_{\dot{B}_{2,q}^{s+\frac{2}{m}}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}_{2,m}^{s+1}}+c^{-1+\frac{2}{r}-\frac{2}{m}}\left\|PG\right\|_{L_{t}^{r}([0,T);\dot{B}_{2,q}^{s+\frac{2}{m}})},\]
_for any \(1<r\leq m<\infty\) and \(1\leq q\leq\infty\), as well as_
\[\left\|\chi(c^{-1}D)B\right\|_{L_{t}^{m}([0,T);\dot{B}_{2,1}^{s+\frac{2}{m}})}\lesssim c^{-1}\left\|PE_{0}\right\|_{\dot{B}_{2,m}^{s+1}}+\left\|B_{0}\right\|_{\dot{B}_{2,m}^{s}}+\left\|PG\right\|_{L_{t}^{r}([0,T);\dot{B}_{2,\infty}^{s-1+\frac{2}{r}})},\]
_for any \(1<r<m<\infty\), and_
\[\left\|\chi(c^{-1}D)B\right\|_{L_{t}^{m}([0,T);\dot{B}_{2,q}^{s+\frac{2}{m}})}\lesssim c^{-1}\left\|PE_{0}\right\|_{\dot{B}_{2,m}^{s+1}}+\left\|B_{0}\right\|_{\dot{B}_{2,m}^{s}}+\left\|PG\right\|_{L_{t}^{m}([0,T);\dot{B}_{2,q}^{s-1+\frac{2}{m}})},\]
_for any \(1<m<\infty\) and \(1\leq q\leq\infty\)._
Let us now be more precise about the source term \(G\) which will be used in the application of the preceding two lemmas. Specifically, we will consider the Maxwell system
\[\left\{\begin{aligned} \frac{1}{c}\partial_{t}E-\nabla\times B+ \sigma cE&=-\sigma P(u\times B),&\operatorname{div}E =0,\\ \frac{1}{c}\partial_{t}B+\nabla\times E&=0,& \operatorname{div}B=0,\\ \operatorname{div}u&=0.\end{aligned}\right. \tag{3.14}\]
Furthermore, in order to exploit the dichotomy between high and low frequencies featured in the estimates from the above lemmas, we consider the variants of Besov semi-norms
\[\left\|f\right\|_{\dot{B}_{p,q,<}^{s}}\stackrel{{\rm def}}{{=}}\left(\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ 2^{k}<\sigma c\end{subarray}}2^{ksq}\left\|\Delta_{k}f\right\|_{L^{p}}^{q}\right)^{\frac{1}{q}}\quad\text{and}\quad\left\|f\right\|_{\dot{B}_{p,q,>}^{s}}\stackrel{{\rm def}}{{=}}\left(\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ 2^{k}\geq\sigma c\end{subarray}}2^{ksq}\left\|\Delta_{k}f\right\|_{L^{p}}^{q}\right)^{\frac{1}{q}},\]
as well as the corresponding variants of Chemin-Lerner semi-norms
\[\left\|f\right\|_{\widetilde{L}_{t}^{r}\dot{B}_{p,q,<}^{s}}\stackrel{{\rm def}}{{=}}\left(\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ 2^{k}<\sigma c\end{subarray}}2^{ksq}\left\|\Delta_{k}f\right\|_{L_{t}^{r}L_{x}^{p}}^{q}\right)^{\frac{1}{q}}\quad\text{and}\quad\left\|f\right\|_{\widetilde{L}_{t}^{r}\dot{B}_{p,q,>}^{s}}\stackrel{{\rm def}}{{=}}\left(\sum_{\begin{subarray}{c}k\in\mathbb{Z}\\ 2^{k}\geq\sigma c\end{subarray}}2^{ksq}\left\|\Delta_{k}f\right\|_{L_{t}^{r}L_{x}^{p}}^{q}\right)^{\frac{1}{q}},\]
for any \(s\in\mathbb{R}\) and \(0<p,q,r\leq\infty\) (with obvious modifications if \(q\) is infinite). These families of semi-norms have been introduced in [3]. We will utilize them extensively throughout the upcoming sections of our work.
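Note that, by construction, these semi-norms merely split the corresponding full (semi-)norms at the frequency threshold \(\sigma c\); for instance, whenever \(q<\infty\), one has
\[\left\|f\right\|_{\dot{B}_{p,q}^{s}}^{q}=\left\|f\right\|_{\dot{B}_{p,q,<}^{s}}^{q}+\left\|f\right\|_{\dot{B}_{p,q,>}^{s}}^{q},\]
and similarly for the Chemin-Lerner variants.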
#### 3.2.1. Control of high-frequency electromagnetic waves
Here, we establish key bounds on high frequencies of electromagnetic fields. Lemma 3.4 below combines the damped Strichartz estimates for high electromagnetic frequencies from Lemma 3.2 with the paradifferential product laws on axisymmetric vector fields from Lemma 2.3.
The method behind the proof of this lemma is similar to the one used in [3] (see Lemma 3.8, therein). However, here, we further refine the method by introducing an additional high-low frequency decomposition of the source term \(P(u\times B)\) in (3.14). This will allow us to obtain stronger estimates (see (3.15), below).
**Lemma 3.4**.: _Let \(T\in(0,\infty]\) and \((E,B)\) be a smooth axisymmetric solution to (3.14), defined on \([0,T)\), for some initial data \((E_{0},B_{0})\) and some axisymmetric divergence-free vector field \(u\). Assume further that \(E\) and \(u\) are both without swirl and that \(B\) has pure swirl._
_Then, for any \(s\in(-\frac{3}{2},\frac{5}{2})\), \(n\in[1,\infty]\) and any \(q\in[\frac{4}{3},\infty]\), it holds that_
\[\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,n,>}^{s}}\lesssim c^{-\frac{2}{q} }\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,n,>}^{s}}+c^{\frac{1}{2}-\frac{2}{q }}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left\| B\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,n}^{s}}.\]
_Moreover, at the endpoint \(s=\frac{5}{2}\), we have that_
\[\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,1,>}^{s}}\lesssim c^{-\frac{2}{q} }\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{s}}+c^{-\frac{1}{2}+\frac{2}{p }-\frac{2}{q}}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H} ^{2}}\left\|B\right\|_{\widetilde{L}_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}},\]
_as soon as \(2\leq p\leq q\leq\infty\). Furthermore, in the case \(p=2\), it holds that_
\[\begin{split}\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,1,>}^{s} }\lesssim& c^{-\frac{2}{q}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{ 2,1,>}^{s}}\\ &+c^{\frac{1}{2}-\frac{2}{q}}\left\|u\right\|_{L_{t}^{\infty} \dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(\left\|B\right\|_{\widetilde{L}_{t }^{2}\dot{B}_{2,1,>}^{s}}+\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{s}} \right),\end{split} \tag{3.15}\]
_for all \(q\in[2,\infty]\)._
Proof.: Applying Lemma 3.2 yields that
\[\left\|\Delta_{j}(E,B)\right\|_{L_{t}^{q}L^{2}}\lesssim c^{-\frac{2}{q}}\left\| \Delta_{j}(E_{0},B_{0})\right\|_{L^{2}}+c^{-1+2(\frac{1}{p}-\frac{1}{q})}\left\| \Delta_{j}P\big{(}u\times B\big{)}\right\|_{L_{t}^{p}L^{2}},\]
for all \(j\in\mathbb{Z}\), with \(2^{j}\geq\sigma c\), and any \(1\leq p\leq q\leq\infty\). It then follows, for any \(s\in\mathbb{R}\) and any \(n\in[1,\infty]\), that
\[\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,n,>}^{s}}\lesssim c^{-\frac{2}{q}} \left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,n,>}^{s}}+c^{-1+2(\frac{1}{p}-\frac{1 }{q})}\left\|P\big{(}u\times B\big{)}\right\|_{\widetilde{L}_{t}^{p}\dot{B}_{2, n}^{s}}. \tag{3.16}\]
Thus, choosing \(p=\frac{4}{3}\) and utilizing (2.4) we find, for all \(s\in(-\frac{3}{2},\frac{5}{2})\), \(n\in[1,\infty]\) and any \(q\in[\frac{4}{3},\infty]\), that
\[\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,n,>}^{s}}\lesssim c^{-\frac{2}{q}} \left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,n,>}^{s}}+c^{\frac{1}{2}-\frac{2}{q}} \left\|u\right\|_{L_{t}^{4}L^{\infty}\cap\widetilde{L}_{t}^{4}\dot{B}_{2, \infty}^{\frac{3}{2}}}\|B\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,n}^{s}}.\]
Hence, we conclude the proof of the first claim in the lemma by employing the embedding
\[L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}\hookrightarrow L_{t}^{4} \dot{B}_{2,1}^{\frac{3}{2}}\hookrightarrow L_{t}^{4}L^{\infty}\cap\widetilde{L}_ {t}^{4}\dot{B}_{2,\infty}^{\frac{3}{2}}.\]
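The first embedding above can be justified, for instance, by applying the interpolation inequality \(\left\|f\right\|_{\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\left\|f\right\|_{\dot{H}^{1}}^{\frac{1}{2}}\left\|f\right\|_{\dot{H}^{2}}^{\frac{1}{2}}\) pointwise in time and then using Hölder's inequality in the time variable, which gives
\[\left\|u\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\Big{\|}\left\|u\right\|_{\dot{H}^{1}}^{\frac{1}{2}}\left\|u\right\|_{\dot{H}^{2}}^{\frac{1}{2}}\Big{\|}_{L_{t}^{4}}\leq\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}}^{\frac{1}{2}}\left\|u\right\|_{L_{t}^{2}\dot{H}^{2}}^{\frac{1}{2}}.\]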
We now turn to the endpoint case \((s,n)=(\frac{5}{2},1)\), which corresponds to the second bound in the statement of the lemma. The natural attempt to estimate the product \(P(u\times B)\) in that case would be by applying (2.5), which is the corresponding extension of (2.4). Doing so would lead to the control
\[\begin{split}\|(E,B)\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,1,>}^{\frac{5}{2}}}&\lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}+c^{-1+2\left(\frac{1}{4}+\frac{1}{p}-\frac{1}{q}\right)}\left\|P\big{(}u\times B\big{)}\right\|_{\widetilde{L}_{t}^{\left(\frac{1}{4}+\frac{1}{p}\right)^{-1}}\dot{B}_{2,1}^{\frac{5}{2}}}\\ &\lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}+c^{-\frac{1}{2}-\frac{2}{q}+\frac{2}{p}}\left\|u\right\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\left\|B\right\|_{\widetilde{L}_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}},\end{split} \tag{3.17}\]
as soon as
\[1\leq\frac{4p}{p+4}\leq q\leq\infty.\]
Observe then that, in order to conclude, the preceding estimate would require a stronger control on the velocity field \(u\), for even though one has that
\[\widetilde{L}_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}\hookrightarrow \widetilde{L}_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}},\]
it is unclear whether the embedding
\[L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}\hookrightarrow\widetilde{L}_ {t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}\]
holds or not. Accordingly, (3.17) does not seem to be useful.
Instead, we have to prove the desired estimate "by hand" (that is, by cooking up a suitable interpolation argument). To that end, we first introduce the decomposition
\[u=u_{\ell}+u_{h}\stackrel{{\mathrm{def}}}{{=}}\left(\mathds{1}_{|D|<\frac{\sigma c}{2}}+\mathds{1}_{|D|\geq\frac{\sigma c}{2}}\right)u.\]
Then, by splitting the source term \(P(u\times B)\) according to the latter decomposition of \(u\) and by applying (3.16) to each Maxwell system corresponding to the source terms \(P(u_{\ell}\times B)\) and \(P(u_{h}\times B)\), with different values of \(p\), one obtains that
\[\begin{split}\left\|(E,B)\right\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,1,>}^{s}}&\lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{s}}+c^{-1+\frac{2}{p}-\frac{2}{q}}\left\|P\big{(}u_{\ell}\times B\big{)}\right\|_{\widetilde{L}_{t}^{p}\dot{B}_{2,1,>}^{s}}\\ &+c^{\frac{2}{p}-\frac{2}{q}}\left\|P\big{(}u_{h}\times B\big{)}\right\|_{\widetilde{L}_{t}^{\left(\frac{1}{2}+\frac{1}{p}\right)^{-1}}\dot{B}_{2,1,>}^{s}},\end{split} \tag{3.18}\]
for all \(2\leq p\leq q\) and all \(s\in\mathbb{R}\). Therefore, applying the product law estimate (2.5), we infer that
\[\begin{split}\left\|(E,B)\right\|_{\widetilde{L}_{t}^{q}\dot{B}_ {2,1,>}^{\frac{5}{2}}}&\lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{ 0})\right\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}\\ &+c^{-1+\frac{2}{p}-\frac{2}{q}}\left\|u_{\ell}\right\|_{ \widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{\frac{3}{2}}}\left\|B\right\|_{ \widetilde{L}_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}}+c^{\frac{2}{p}-\frac{2}{q}} \left\|u_{h}\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{\frac{3}{2}}}\left\| B\right\|_{\widetilde{L}_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}}.\end{split}\]
Now, observing that
\[\left\|u_{\ell}\right\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim c^{\frac{1}{2}}\left\|u_{\ell}\right\|_{L_{t}^{\infty}\dot{B}_{2,\infty}^{1}}\lesssim c^{\frac{1}{2}}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}} \tag{3.19}\]
and
\[\left\|u_{h}\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim c ^{-\frac{1}{2}}\left\|u\right\|_{L_{t}^{2}\dot{H}^{2}}\]
leads to the desired control
\[\left\|(E,B)\right\|_{\widetilde{L}_{t}^{q}\dot{B}_{2,1,>}^{\frac{5}{2}}} \lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{\frac {5}{2}}}+c^{-\frac{1}{2}-\frac{2}{q}+\frac{2}{p}}\left\|u\right\|_{L_{t}^{ \infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left\|B\right\|_{\widetilde{L}_{t }^{p}\dot{B}_{2,1}^{\frac{5}{2}}},\]
for any \(2\leq p\leq q\). This takes care of the second bound in the statement of the lemma.
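Let us also briefly justify the two frequency-localized bounds (3.19) and its high-frequency counterpart used above, which only rely on the definition of the decomposition of \(u\) (up to the exact constants in the dyadic cut-offs). For instance, since \(\Delta_{j}u_{h}\) vanishes unless \(2^{j}\gtrsim\sigma c\), the Cauchy-Schwarz inequality yields
\[\left\|u_{h}\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\sum_{2^{j}\gtrsim\sigma c}2^{-\frac{j}{2}}\,2^{2j}\left\|\Delta_{j}u\right\|_{L_{t}^{2}L^{2}}\lesssim\Big{(}\sum_{2^{j}\gtrsim\sigma c}2^{-j}\Big{)}^{\frac{1}{2}}\left\|u\right\|_{L_{t}^{2}\dot{H}^{2}}\lesssim c^{-\frac{1}{2}}\left\|u\right\|_{L_{t}^{2}\dot{H}^{2}},\]
while (3.19) follows similarly by summing \(2^{\frac{j}{2}}\) over the low frequencies \(2^{j}\lesssim\sigma c\).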
Finally, in order for us to justify the last estimate of the lemma (which is an improvement of the preceding bound in the case \(p=2\)), we need to further decompose \(B\) in the source term \(P(u_{\ell}\times B)\). To that end, we write
\[B=B_{\ell}+B_{h}\stackrel{{\mathrm{def}}}{{=}}\left(\mathds{1}_{|D|<\frac{\sigma c}{2}}+\mathds{1}_{|D|\geq\frac{\sigma c}{2}}\right)B,\]
which allows to deduce from (3.18) that
\[\begin{split}\left\|(E,B)\right\|_{\widetilde{L}_{t}^{q}\dot{B}_ {2,1,>}^{\frac{5}{2}}}&\lesssim c^{-\frac{2}{q}}\left\|(E_{0},B_{ 0})\right\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}+c^{-\frac{2}{q}}\left\|P\big{(}u_{ \ell}\times B_{\ell}\big{)}\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{ \frac{5}{2}}}\\ &+c^{-\frac{2}{q}}\left\|P\big{(}u_{\ell}\times B_{h}\big{)} \right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{\frac{5}{2}}}+c^{1-\frac{2}{q}} \left\|P\big{(}u_{h}\times B\big{)}\right\|_{L_{t}^{1}\dot{B}_{2,1}^{\frac{5}{2} }},\end{split} \tag{3.20}\]
for all \(q\geq 2\), where we have used that
\[\widetilde{L}_{t}^{1}\dot{B}_{2,1}^{\frac{5}{2}}=L_{t}^{1}\dot{B}_{2,1}^{\frac{5}{2}}.\]
Now, observe that
\[\operatorname{supp}\left(\widehat{u_{\ell}\times B_{\ell}}\right)\subset\left\{ \xi:\left|\xi\right|<\sigma c\right\},\]
which implies that
\[\Delta_{j}\left(u_{\ell}\times B_{\ell}\right)\equiv 0,\]
whenever \(2^{j}\geq 2\sigma c\). Consequently, it follows that
\[\left\|P\big{(}u_{\ell}\times B_{\ell}\big{)}\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}=2^{\frac{5}{2}j}\left\|P\Delta_{j}\big{(}u_{\ell}\times B_{\ell}\big{)}\right\|_{L_{t}^{2}L^{2}}=\left\|P\big{(}u_{\ell}\times B_{\ell}\big{)}\right\|_{L_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}},\]
where \(j\) is the only integer value which satisfies \(\sigma c\leq 2^{j}<2\sigma c\).
Hence, by applying (2.5) for classical Besov-space-valued Lebesgue spaces, we obtain that
\[\left\|P\big{(}u_{\ell}\times B_{\ell}\big{)}\right\|_{\widetilde {L}_{t}^{2}\dot{B}_{2,1}^{\frac{5}{2}}} =\left\|P\big{(}u_{\ell}\times B_{\ell}\big{)}\right\|_{L_{t}^{2} \dot{B}_{2,1,>}^{\frac{5}{2}}}\] \[\lesssim\left\|u_{\ell}\right\|_{L_{t}^{\infty}\dot{B}_{2,1}^{ \frac{3}{2}}}\left(\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}+ \sum_{\frac{\sigma c}{4}<2^{j}<\sigma c}2^{\frac{5}{2}j}\left\|\Delta_{j}B \right\|_{L_{t}^{2}L^{2}}\right)\] \[\lesssim c^{\frac{1}{2}}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^ {1}}\left(\left\|B\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}} }+\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}\right).\]
As for the last term in (3.20), applying (2.5), again, in classical Besov-space-valued Lebesgue spaces, entails that
\[\left\|P\big{(}u_{h}\times B\big{)}\right\|_{L_{t}^{1}\dot{B}_{2, 1}^{\frac{5}{2}}} \lesssim\left\|u_{h}\right\|_{L_{t}^{2}\dot{B}_{2,1}^{\frac{3}{2}} }\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1}^{\frac{5}{2}}}\] \[\lesssim c^{-\frac{1}{2}}\left\|u\right\|_{L_{t}^{2}\dot{H}^{2}} \left(\left\|B\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}+ \left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}\right).\]
All in all, gathering the previous estimates and incorporating them into (3.20) yields the control (3.15), which completes the proof of the lemma.
#### 3.2.2. Control of low-frequency electromagnetic waves
It is clear from Lemmas 3.2 and 3.3 that solutions to the damped Maxwell system enjoy various types of bounds in different regions of low and high frequencies.
Here, we intend to shed light on the low-frequency control of electromagnetic fields solving (3.14). In broad terms, the low-frequency bounds on the electric field \(E\) are similar to the same bounds in the high-frequency regime. However, the low-frequency part of the magnetic field \(B\) enjoys parabolic-type estimates, which are consistent with the limiting system (MHD) as \(c\) goes to infinity. A more precise formulation of that principle is given in the next lemma.
**Lemma 3.5**.: _Let \(T\in(0,\infty]\) and \((E,B)\) be a smooth axisymmetric solution to (3.14) on \([0,T)\), for some initial data \((E_{0},B_{0})\) and some axisymmetric divergence-free vector field \(u\). Assume further that \(E\) and \(u\) are both without swirl and that \(B\) has pure swirl._
_Then, for any \(\alpha\in[0,1]\), \(n\in[1,\infty]\) and \((m,p)\in[2,\infty]^{2}\), with_
\[\alpha+\frac{2}{m}\leq\frac{3}{2},\]
_and for any \(s<\frac{5}{2}\), with \(s+\alpha+\frac{2}{m}>0\), one has the following low-frequency estimates_
\[\begin{split}\|E\|_{L_{t}^{q}\dot{B}_{2,n,<}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}}&\lesssim c^{-\frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}_{2,n,<}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}_{2,q,<}^{s+\alpha+\frac{2}{m}-\frac{1}{2}-\frac{2}{q}}}\\ &\qquad+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1}\left\|u\right\|_{L_{t}^{m}\dot{B}_{2,1}^{\alpha+\frac{2}{m}}}\left\|B\right\|_{L_{t}^{p}\dot{B}_{2,n}^{s}},\end{split} \tag{3.21}\]
_as soon as \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q<\infty\), as well as_
\[\begin{split}\|B\|_{L_{t}^{q}\dot{B}_{2,1,<}^{s+\alpha-\frac{1}{2}+\frac{2}{q}-\frac{2}{p}}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}_{2,q,<}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}}+\left\|B_{0}\right\|_{\dot{B}_{2,q,<}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}}\\ &\qquad+\left\|u\right\|_{L_{t}^{m}\dot{B}_{2,1}^{\alpha+\frac{2}{m}}}\left\|B\right\|_{L_{t}^{p}\dot{B}_{2,\infty}^{s}},\end{split} \tag{3.22}\]
_whenever \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}<q<\infty\)._
_Moreover, in the case where_
\[1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}=q<\infty,\]
_it holds that_
\[\begin{split}\|B\|_{L_{t}^{q}\dot{B}_{2,n,<}^{s+\alpha-\frac{1}{2}+\frac{2}{q}-\frac{2}{p}}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}_{2,q,<}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}}+\left\|B_{0}\right\|_{\dot{B}_{2,q,<}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}}+\left\|u\right\|_{L_{t}^{m}\dot{B}_{2,1}^{\alpha+\frac{2}{m}}}\left\|B\right\|_{L_{t}^{p}\dot{B}_{2,n}^{s}}.\end{split} \tag{3.23}\]
_At last, at the endpoint \(s=\frac{5}{2}\), we have_
\[\begin{split}\|E\|_{L_{t}^{q}\dot{B}_{2,n,<}^{1+\alpha+\frac{2}{m}}}&\lesssim c^{-\frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}_{2,n,<}^{1+\alpha+\frac{2}{m}}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}_{2,q,<}^{2+\alpha+\frac{2}{m}-\frac{2}{q}}}\\ &\qquad+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1}\left\|u\right\|_{L_{t}^{m}\dot{B}_{2,1}^{\alpha+\frac{2}{m}}}\left\|B\right\|_{L_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}},\\ \|B\|_{L_{t}^{q}\dot{B}_{2,n,<}^{2+\alpha+\frac{2}{m}-\frac{2}{p}}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}_{2,q,<}^{3+\alpha-\frac{2}{p}}}+\left\|B_{0}\right\|_{\dot{B}_{2,q,<}^{2+\alpha-\frac{2}{p}}}+\left\|u\right\|_{L_{t}^{m}\dot{B}_{2,1}^{\alpha+\frac{2}{m}}}\left\|B\right\|_{L_{t}^{p}\dot{B}_{2,1}^{\frac{5}{2}}},\end{split} \tag{3.24}\]
_as long as \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q<\infty\)._
Proof.: Similarly to [3, Lemma 3.9], the proof hinges upon the combination of Lemmas 2.3, 3.2 and 3.3. Thus, on the one hand, applying the low-frequency estimate from Lemma 3.2, for parameter values such that
\[r=\tilde{r}=2\]
and
\[\tilde{q}^{\prime}=\left(\frac{1}{m}+\frac{1}{p}\right)^{-1},\]
yields, as soon as \(1\leq\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q\leq\infty\), that
\[\begin{split} 2^{j\left(s+\alpha+\frac{2}{m}-\frac{3}{2} \right)}\left\|\Delta_{j}E\right\|_{L_{t}^{q}L^{2}}&\lesssim c^{- \frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}_{2,\infty,<}^{s+\alpha+\frac{2}{m}- \frac{3}{2}}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}_{2,\infty,<}^{s+\alpha+ \frac{2}{m}-\frac{1}{2}-\frac{2}{q}}}\\ &\qquad+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1} \left\|P(u\times B)\right\|_{L_{t}^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1} }}\left(\dot{B}_{2,\infty}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}\right)\end{split}\]
and
\[\begin{split} 2^{j\left(s+\alpha-\frac{1}{2}-\frac{2}{p}+\frac{2}{q} \right)}\left\|\Delta_{j}B\right\|_{L^{q}_{t}L^{2}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}^{s+\alpha+\frac{1}{2}-\frac{2}{p}} _{2,\infty,<}}+\left\|B_{0}\right\|_{\dot{B}^{s+\alpha-\frac{1}{2}-\frac{2}{p} }_{2,\infty,<}}\\ &\quad+\left\|P(u\times B)\right\|_{L^{\left(\frac{1}{m}+\frac{1} {p}\right)^{-1}}_{t}\left(\dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,\infty }\right)},\end{split} \tag{3.25}\]
for all \(j\in\mathbb{Z}\) with \(\frac{\sigma c}{2}\leq 2^{j}<\sigma c\).
On the other hand, employing the first and the second estimates from Lemma 3.3 entails
\[\begin{split}\Big{\|}\mathds{1}_{\{2^{j}<\frac{\sigma c}{2}\}}2^{j\left(s+\alpha+\frac{2}{m}-\frac{3}{2}\right)}\left\|\Delta_{j}E\right\|_{L^{q}_{t}L^{2}}\Big{\|}_{\ell^{n}_{j}}&\lesssim c^{-\frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,n,<}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}^{s+\alpha+\frac{2}{m}-\frac{1}{2}-\frac{2}{q}}_{2,q,<}}\\ &\quad+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1}\left\|P(u\times B)\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t}\left(\dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,n}\right)},\end{split}\]
for any \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q<\infty\) and \(1\leq n\leq\infty\), as well as
\[\begin{split}\Big{\|}\mathds{1}_{\{2^{j}<\frac{\sigma c}{2}\}}2^{j\left(s+\alpha-\frac{1}{2}+\frac{2}{q}-\frac{2}{p}\right)}\left\|\Delta_{j}B\right\|_{L^{q}_{t}L^{2}}\Big{\|}_{\ell^{1}_{j}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|B_{0}\right\|_{\dot{B}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}_{2,q,<}}\\ &\quad+\left\|P(u\times B)\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t}\left(\dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,\infty}\right)},\end{split}\]
for any \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}<q<\infty\).
All in all, by combining the preceding estimates, we arrive at the conclusion that
\[\begin{split}\left\|E\right\|_{L^{q}_{t}\dot{B}^{s+\alpha+\frac{ 2}{m}-\frac{3}{2}}_{2,n,<}}&\lesssim c^{-\frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}^{s+\alpha+\frac{2}{m}- \frac{3}{2}}_{2,n,<}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}^{s+\alpha+\frac{2}{m} -\frac{1}{2}-\frac{2}{q}}_{2,q,<}}\\ &\quad+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1} \left\|P(u\times B)\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t} \dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,n}},\end{split}\]
for any \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q<\infty\) and \(1\leq n\leq\infty\), as well as
\[\begin{split}\left\|B\right\|_{L^{q}_{t}\dot{B}^{s+\alpha-\frac{1 }{2}+\frac{2}{q}-\frac{2}{p}}_{2,1,<}}&\lesssim c^{-1}\left\|E_{0} \right\|_{\dot{B}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|B_{0} \right\|_{\dot{B}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|P(u\times B )\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t}\left(\dot{B}^{s+ \alpha+\frac{2}{m}-\frac{3}{2}}_{2,\infty}\right)},\end{split}\]
for any \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}<q<\infty\). Therefore, applying the product estimates from Lemma 2.3 concludes the proof of (3.21) and (3.22).
As for the case \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}=q<\infty\), we apply the third estimate from Lemma 3.3 instead of the second one to infer that
\[\begin{split}\Big{\|}\mathds{1}_{\{2^{j}<\frac{\sigma c}{2}\}}2^{j\left(s+\alpha-\frac{1}{2}+\frac{2}{q}-\frac{2}{p}\right)}\left\|\Delta_{j}B\right\|_{L^{q}_{t}L^{2}}\Big{\|}_{\ell^{n}_{j}}&\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|B_{0}\right\|_{\dot{B}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}_{2,q,<}}\\ &\quad+\left\|P(u\times B)\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t}\left(\dot{B}^{s+\alpha+\frac{2}{m}-\frac{3}{2}}_{2,n}\right)}.\end{split}\]
Hence, combining the latter estimate with (3.25), we obtain that
\[\begin{split}\left\|B\right\|_{L^{q}_{t}\dot{B}^{s+\alpha-\frac{1}{ 2}+\frac{2}{q}-\frac{2}{p}}_{2,n,<}}&\lesssim c^{-1}\left\|E_{0} \right\|_{\dot{B}^{s+\alpha+\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|B_{0} \right\|_{\dot{B}^{s+\alpha-\frac{1}{2}-\frac{2}{p}}_{2,q,<}}+\left\|P(u\times B )\right\|_{L^{\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}}_{t}\left(\dot{B}^{s+ \alpha+\frac{2}{m}-\frac{3}{2}}_{2,n}\right)}.\end{split}\]
Our claim (3.23) follows then by employing Lemma 2.3, again.
At last, for the endpoint case \(s=\frac{5}{2}\), we deduce from the previous cases above, for any \(n\in[1,\infty]\), that
\[\left\|E\right\|_{L^{q}_{t}\dot{B}^{1+\alpha+\frac{2}{m}}_{2,n,<}} \lesssim c^{-\frac{2}{q}}\left\|E_{0}\right\|_{\dot{B}^{1+\alpha+\frac{2}{ m}}_{2,n,<}}+c^{-1}\left\|B_{0}\right\|_{\dot{B}^{2+\alpha+\frac{2}{m}-\frac{2}{q}}_{2, q,<}}\] \[+c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1}\left\|P( u\times B)\right\|_{L_{t}\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\dot{B}^{1+ \alpha+\frac{2}{m}}_{2,n}},\]
and
\[\left\|B\right\|_{L^{q}_{t}\dot{B}^{2+\alpha+\frac{2}{m}-\frac{2}{p}}_{2,n,<} }\lesssim c^{-1}\left\|E_{0}\right\|_{\dot{B}^{3+\alpha-\frac{2}{p}}_{2,q,<}}+ \left\|B_{0}\right\|_{\dot{B}^{2+\alpha-\frac{2}{p}}_{2,q,<}}+\left\|P(u\times B )\right\|_{L_{t}\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\left(\dot{B}^{1+ \alpha+\frac{2}{m}}_{2,n}\right)},\]
as long as \(1<\left(\frac{1}{m}+\frac{1}{p}\right)^{-1}\leq q<\infty\). Therefore, applying Lemma 2.3, again, to estimate the source term concludes the proof.
_Remark_.: Observe that combining (3.15) and (3.24), with the choice of parameters
\[q=p=2,\quad m=4,\quad\alpha=n=1,\]
and employing the interpolation inequality
\[\left\|u\right\|_{L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1}}\lesssim\left\|u \right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}\]
produces the useful bound
\[\left\|E\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1}}\lesssim c^{-1}\left\| \left(E_{0},B_{0}\right)\right\|_{\dot{B}^{\frac{5}{2}}_{2,1}}+c^{-\frac{1}{2} }\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}\left( \left\|B\right\|_{\widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}}+\left\| B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\right). \tag{3.26}\]
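For the reader's convenience, the parameter arithmetic behind (3.26) is as follows: with \(q=p=2\), \(m=4\) and \(\alpha=n=1\), one has \(1+\alpha+\frac{2}{m}=2+\alpha+\frac{2}{m}-\frac{2}{q}=\frac{5}{2}\), \(\alpha+\frac{2}{m}=\frac{3}{2}\), \(c^{-\frac{2}{q}}=c^{-1}\) and \(c^{2\left(\frac{1}{p}+\frac{1}{m}-\frac{1}{q}\right)-1}=c^{-\frac{1}{2}}\).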
The latter control of the electric field will come in handy, later on.
#### 3.2.3. Persistence of regularity
Observe that the case \(q=\infty\) is missing in the bounds from Lemma 3.5, above. Although it is possible to extend the results from Lemma 3.5 to cover that case (at the cost of more restrictive assumptions on the third summability index of Besov norms of the initial data), it is simpler to establish these missing bounds by utilizing an elementary energy estimate for Maxwell's equations, which is the content of the next lemma.
**Lemma 3.6**.: _Let \(T\in(0,\infty]\) and \((E,B)\) be a smooth axisymmetric solution to (3.14) on \([0,T)\), for some divergence-free initial data \((E_{0},B_{0})\) and vector field \(u\). Assume further that \(E\) and \(u\) are both without swirl and that \(B\) is a vector field with pure swirl._
_Then, for any \(s\in(-\frac{3}{2},\frac{5}{2})\), \(\epsilon>0\) with \(s+\epsilon<\frac{5}{2}\) and for all \(p,q\in[2,\infty]\) with \(\frac{1}{2}=\frac{1}{p}+\frac{1}{q}\), it holds that_
\[\left\|(E,B)\right\|_{L^{\infty}_{t}\dot{H}^{s}}+c\left\|E\right\|_{L^{2}_{t} \dot{H}^{s}}\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{s}}+\left\|u\right\| _{L^{q}_{t}\dot{H}^{\frac{3}{2}-\epsilon}}\left\|B\right\|_{L^{p}_{t}\dot{H}^{s +\epsilon}},\]
_on the time interval \([0,T)\). Moreover, in the endpoint case \(\epsilon=0\), we have that_
\[\left\|(E,B)\right\|_{L^{\infty}_{t}\dot{H}^{s}}+c\left\|E\right\|_{L^{2}_{t} \dot{H}^{s}}\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{s}}+\left\|u\right\| _{L^{q}_{t}\dot{B}^{\frac{3}{2}}_{2,1}}\left\|B\right\|_{L^{p}_{t}\dot{H}^{s }},\]
_on the time interval \([0,T)\)._
Proof.: We begin with localizing (3.14) in frequencies by applying \(\Delta_{j}\), for \(j\in\mathbb{Z}\). Then, by an \(L^{2}\) energy estimate, we find, for any \(j\in\mathbb{Z}\), that
\[\frac{1}{2c}\left\|\Delta_{j}(E,B)(t)\right\|^{2}_{L^{2}}+\sigma c \left\|\Delta_{j}E\right\|^{2}_{L^{2}_{t}L^{2}} \leq\frac{1}{2c}\left\|\Delta_{j}(E_{0},B_{0})\right\|^{2}_{L^{2}}+ \sigma\left\|\Delta_{j}P(u\times B)\right\|_{L^{2}_{t}L^{2}}\left\|\Delta_{j}E \right\|_{L^{2}_{t}L^{2}}\] \[\leq\frac{1}{2c}\left\|\Delta_{j}(E_{0},B_{0})\right\|^{2}_{L^{2} }+\frac{\sigma}{2c}\left\|\Delta_{j}P(u\times B)\right\|^{2}_{L^{2}_{t}L^{2}}\] \[\quad+\frac{\sigma c}{2}\left\|\Delta_{j}E\right\|^{2}_{L^{2}_{t}L ^{2}}.\]
Therefore, it follows that
\[\|(E,B)(t)\|_{\dot{H}^{s}}^{2}+\sigma c^{2}\,\|E\|_{L^{2}_{t}\dot{H}^{s}}^{2}\leq\|(E_{0},B_{0})\|_{\dot{H}^{s}}^{2}+\sigma\,\|P(u\times B)\|_{L^{2}_{t}\dot{H}^{s}}^{2}\,,\]
for any \(s\in(-\frac{3}{2},\frac{5}{2})\).
Finally, we conclude the proof by employing Lemma 2.3 to control the source term in the right-hand side, above.
## 4. Asymptotic analysis of electromagnetic fields
In this section, we are going to make (3.11) and (3.12) precise. In particular, we are going to study the convergence
\[\frac{1}{c}\partial_{t}E\to 0,\]
as \(c\to\infty\).
For simplicity, we will drop \(\sigma\) from the equations (1.1) by fixing its value \(\sigma=1\), and we emphasize that the analysis we perform here holds for any non-negative value of that parameter.
### Asymptotic analysis of Ampere's equation
From (1.1), observe, at least formally, that \(E\) vanishes when \(c\) goes to infinity. Moreover, Ampere's equation allows us to obtain
\[j\to\nabla\times B,\]
as \(c\to\infty\), in some suitable weak sense.
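This can be seen directly at the level of Ampère's equation: together with Ohm's law, it gives the exact identity
\[j-\nabla\times B=-\frac{1}{c}\partial_{t}E,\]
so that the convergence of \(j\) towards \(\nabla\times B\) is equivalent to the vanishing of \(\frac{1}{c}\partial_{t}E\), which is quantified below.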
In the next proposition, we establish a more precise description of the preceding convergence in adequate functional spaces. This step is crucial in the proof of Theorem 1.1 and will come in handy in Section 5.
**Proposition 4.1**.: _Let \(T\in(0,\infty]\) and \((u,E,B)\) be a smooth solution to the Navier-Stokes-Maxwell equations (1.1), defined on \([0,T)\), where \(u\) and \(E\) have no swirl and \(B\) has pure swirl._
_Then, on the time interval \([0,T)\), for all \(s\in[0,\frac{1}{2}]\), \(p\in[2,\infty]\) and for any \(c>0\), it holds that_
\[\left\|\frac{1}{c}\partial_{t}E\right\|_{L^{p}_{t}\dot{H}^{s}}=\|j-\nabla \times B\|_{L^{p}_{t}\dot{H}^{s}}\lesssim c^{-\frac{2}{p}}\,\|\nabla\times B_{ 0}-j_{0}\|_{\dot{H}^{s}}+c^{-\frac{2}{p}}\,\|E_{0}\|_{\dot{H}^{s+1}}+c^{-\left( \frac{2}{p}+1\right)}\mathcal{A}_{s}(u,E,B),\]
_where_
\[\mathcal{A}_{s}(u,E,B)\stackrel{{\rm def}}{{=}} \Big{(}\,\|u\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}+ \mathcal{E}_{0}\,\|B\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}}\,\Big{)}\Big{(} \,\|B\|_{L^{\infty}_{t}\dot{H}^{s+\frac{3}{2}}}+c\,\|E\|_{L^{2}_{t}\dot{H}^{s+ \frac{3}{2}}}\,\Big{)}\] \[\quad+\|u\|_{L^{\infty}_{t}L^{2}}^{\frac{1}{2}-s}\,\|u\|_{L^{ \infty}_{t}\dot{H}^{1}}^{s+\frac{1}{2}}\,\|u\|_{L^{2}_{t}\dot{H}^{2}}\,\|B\|_{L ^{\infty}_{t}\dot{H}^{\frac{3}{2}}}\,.\]
Proof.: The proof relies on a key idea from [4, Proposition 3.3], which we adapt to the three-dimensional setting.
We begin by applying a time derivative to Ampere's equation to obtain the following damped wave equation for \(E\)
\[\frac{1}{c}\partial_{tt}E-c\Delta E+c\,\partial_{t}E=-\partial_{t}P(u\times B),\]
where we have used Faraday's equation and Ohm's law, as well.
Then, we localize in frequencies by applying \(\Delta_{j}\), for \(j\in\mathbb{Z}\), and we perform an \(L^{2}\) energy estimate followed by Holder's inequality to find, for all \(t\in[0,T)\), that
\[\frac{1}{2}\frac{d}{dt}\!\left(\frac{1}{c^{4}}\left\|\Delta_{j} \partial_{t}E(t)\right\|_{L^{2}}^{2}+\frac{1}{c^{2}}\left\|\Delta_{j}\nabla E( t)\right\|_{L^{2}}^{2}\right)+\frac{1}{c^{2}}\left\|\Delta_{j}\partial_{t}E(t) \right\|_{L^{2}}^{2}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\leq\frac{1}{c^{3}} \left|\int_{\mathbb{R}^{3}}\Delta_{j}\partial_{t}P(u\times B)\Delta_{j} \partial_{t}E(t,x)dx\right|\] \[\qquad\qquad\qquad\qquad\qquad\leq\frac{1}{c^{3}}\left\|\Delta_{j }\partial_{t}P(u\times B)(t)\right\|_{L^{2}}\left\|\Delta_{j}\partial_{t}E(t) \right\|_{L^{2}}\] \[\qquad\qquad\qquad\qquad\qquad\leq\frac{1}{2c^{4}}\left\|\Delta_ {j}\partial_{t}P(u\times B)(t)\right\|_{L^{2}}^{2}+\frac{1}{2c^{2}}\left\| \Delta_{j}\partial_{t}E(t)\right\|_{L^{2}}^{2}.\]
Therefore, we find, for any \(s\in\mathbb{R}\), that
\[\frac{d}{dt}\left(\frac{1}{c^{4}}\left\|\partial_{t}E(t)\right\|_{\dot{H}^{s}}^{2}+\frac{1}{c^{2}}\left\|E(t)\right\|_{\dot{H}^{s+1}}^{2}\right)+\frac{1}{c^{2}}\left\|\partial_{t}E(t)\right\|_{\dot{H}^{s}}^{2}\leq\frac{1}{c^{4}}\left\|\partial_{t}P(u\times B)(t)\right\|_{\dot{H}^{s}}^{2}.\]
Consequently, employing Holder's inequality followed by Young's inequality for products, we obtain that
\[\begin{split}\frac{1}{c^{2-\frac{2}{p}}}\left\|\partial_{t}E\right\|_{L^{p}_{t}\dot{H}^{s}}&\lesssim\frac{1}{c^{2}}\left\|\partial_{t}E\right\|_{L^{\infty}_{t}\dot{H}^{s}}+\frac{1}{c}\left\|\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{s}}\\ &\lesssim\frac{1}{c}\Big{(}\left\|\nabla\times B_{0}-j_{0}\right\|_{\dot{H}^{s}}+\left\|E_{0}\right\|_{\dot{H}^{s+1}}\Big{)}+\frac{1}{c^{2}}\left\|\partial_{t}P(u\times B)\right\|_{L^{2}_{t}\dot{H}^{s}},\end{split} \tag{4.1}\]
for every \(p\in[2,\infty]\), where we have used Ampere's equation to express the initial data of \(\frac{1}{c}\partial_{t}E\) in terms of \(B_{0}\) and \(j_{0}\).
Hence, we are now left to control \(\partial_{t}P(u\times B)\). To this end, we shall first transform the time derivative into spatial derivatives by using the momentum and Faraday's equations from (1.1). Accordingly, we obtain that
\[\partial_{t}P(u\times B)=P\left(\partial_{t}u\times B\right)+P\left(u\times \partial_{t}B\right)=-\sum_{i=1}^{4}\mathcal{I}_{i},\]
where
\[\mathcal{I}_{1} \overset{\mathrm{def}}{=}P\Big{(}P\left(u\cdot\nabla u\right) \times B\Big{)}, \mathcal{I}_{2} \overset{\mathrm{def}}{=}-P\Big{(}\Delta u\times B\Big{)},\] \[\mathcal{I}_{3} \overset{\mathrm{def}}{=}-P\Big{(}P\left(j\times B\right)\times B \Big{)}, \mathcal{I}_{4} \overset{\mathrm{def}}{=}cP\Big{(}u\times(\nabla\times E)\Big{)}.\]
We are now going to utilize (2.3) from Lemma 2.3 to estimate each term separately.
For \(\mathcal{I}_{1}\), we find that
\[\left\|\mathcal{I}_{1}\right\|_{L^{2}_{t}\dot{H}^{s}} \lesssim\left\|u\cdot\nabla u\right\|_{L^{2}_{t}\dot{H}^{s}}\left\| B\right\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}}\] \[\lesssim\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{s+\frac{1}{2}}} \left\|\nabla u\right\|_{L^{2}_{t}\dot{H}^{1}}\left\|B\right\|_{L^{\infty}_{t} \dot{H}^{\frac{3}{2}}}.\]
Therefore, as long as \(s\in[0,\frac{1}{2}]\), we obtain, by interpolation, that
\[\left\|\mathcal{I}_{1}\right\|_{L^{2}_{t}\dot{H}^{s}}\lesssim\left\|u\right\|_{L ^{\infty}_{t}L^{2}}^{\frac{1}{2}-s}\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}} \left\|u\right\|_{L^{2}_{t}\dot{H}^{2}}\left\|B\right\|_{L^{\infty}_{t}\dot{H} ^{\frac{3}{2}}}.\]
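For the reader's convenience, the interpolation used in the last step is simply, pointwise in time,
\[\left\|u\right\|_{\dot{H}^{s+\frac{1}{2}}}\leq\left\|u\right\|_{L^{2}}^{\frac{1}{2}-s}\left\|u\right\|_{\dot{H}^{1}}^{s+\frac{1}{2}},\]
which is valid precisely because \(s+\frac{1}{2}\in[\frac{1}{2},1]\) when \(s\in[0,\frac{1}{2}]\).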
As for \(\mathcal{I}_{2}\), we employ Lemma 2.3, again, to find, for any \(s\in[0,\frac{1}{2}]\), that
\[\left\|\mathcal{I}_{2}\right\|_{L^{2}_{t}\dot{H}^{s}} \lesssim\left\|\Delta u\right\|_{L^{2}_{t}L^{2}}\left\|B\right\|_{L ^{\infty}_{t}\dot{H}^{s+\frac{3}{2}}}\] \[\lesssim\left\|u\right\|_{L^{2}_{t}\dot{H}^{2}}\left\|B\right\|_{L ^{\infty}_{t}\dot{H}^{s+\frac{3}{2}}}.\]
In order to estimate \(\mathcal{I}_{3}\), we utilize Lemma 2.3 twice to obtain, for any \(s\in[0,\frac{1}{2}]\), that
\[\left\|\mathcal{I}_{3}\right\|_{L_{t}^{2}\dot{H}^{s}} \lesssim\left\|P\big{(}j\times B\big{)}\right\|_{L_{t}^{2}L^{2}}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{s+\frac{3}{2}}}\] \[\lesssim\left\|j\right\|_{L_{t}^{2}L^{2}}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{s+\frac{3}{2}}}\] \[\lesssim\mathcal{E}_{0}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{s+\frac{3}{2}}},\]
where we have used the energy estimate (3.1) in the last inequality. Finally, the control of the last term \(\mathcal{I}_{4}\) is achieved by classical product laws, which give that
\[\left\|\mathcal{I}_{4}\right\|_{L_{t}^{2}\dot{H}^{s}}\lesssim\left\|u\right\|_ {L_{t}^{\infty}\dot{H}^{1}}\left\|cE\right\|_{L_{t}^{2}\dot{H}^{s+\frac{3}{2} }}.\]
All in all, gathering the previous bounds yields
\[\left\|\partial_{t}P(u\times B)\right\|_{L_{t}^{2}\dot{H}^{s}}\lesssim\mathcal{A}_{s}(u,E,B), \tag{4.2}\]
where \(\mathcal{A}_{s}(u,E,B)\) is defined in the statement of the proposition, above. The proof is then concluded by incorporating (4.2) into (4.1).
### Almost-parabolic estimates on the magnetic field
When considering the limiting system (MHD), one can show, with standard energy estimates, that the quantities \(B\) and \(\Gamma=\frac{B_{\theta}}{r}\) are globally controlled in \(L_{t}^{2}\dot{H}^{1}\) and \(L_{t}^{\infty}L^{p}\cap L_{t}^{2}\dot{H}^{1}\), for all \(p\in[1,\infty]\), respectively.
Here, we establish an asymptotic version of these bounds for (1.1), thereby justifying (3.12). The key observation in the proof below consists in considering the term \(\frac{1}{c^{2}}\partial_{tt}\Gamma\) in the equation
\[\frac{1}{c^{2}}\partial_{tt}\Gamma+\partial_{t}\Gamma+u\cdot\nabla\Gamma- \big{(}\Delta+\frac{\partial_{r}}{r}\big{)}\Gamma=0 \tag{4.3}\]
as an error, for large values of \(c\). Accordingly, one should treat (4.3) as a parabolic equation with a vanishing source term, as \(c\to\infty\). A precise statement is given in the following proposition.
**Proposition 4.2**.: _Let \(T\in\mathbb{R}^{+}\cup\{\infty\}\) and \((u,E,B)\) be a smooth axisymmetric solution to (1.1), defined on \([0,T)\), where \(u\) and \(E\) have no swirl and \(B\) has pure swirl._
_Then, it holds that_
\[\left\|B\right\|_{L_{t}^{2}\dot{H}^{1}}\leq\mathcal{E}_{0}+\left\|\frac{1}{c} \partial_{t}E\right\|_{L_{t}^{2}L^{2}}. \tag{4.4}\]
_Moreover, \(\Gamma\) enjoys the bounds_
\[\left\|\Gamma\right\|_{L_{t}^{2}\dot{H}^{1}}\lesssim\left\|\Gamma_{0}\right\| _{L^{2}}+c^{-1}\left\|E_{0}\right\|_{\dot{H}^{2}}+c^{-1}\left(\left\|B\right\| _{L_{t}^{\infty}\dot{H}^{2}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{2}}\right), \tag{4.5}\]
_and, for any \(p\in[2,\infty)\),_
\[\left\|\Gamma\right\|_{L_{t}^{\infty}L^{p}}\lesssim\left\|\Gamma_{0}\right\| _{L^{p}}+c^{-1}\left\|E_{0}\right\|_{\dot{W}^{2,p}}+\left\|E\right\|_{L_{t}^{2} W^{2,p}}, \tag{4.6}\]
_where all the time norms above are taken over the whole interval \([0,T)\)._
_Remark_.: Due to Lemma 3.6 and Proposition 4.1, observe that the last terms in the right-hand side of (4.4) and (4.5) can be seen as errors for large values of \(c\).
Likewise, by virtue of (3.26) and the embedding
\[\dot{B}_{2,1}^{\frac{5}{2}}\hookrightarrow\dot{W}^{2,3}(\mathbb{R}^{3}),\]
note that (4.6) implies, for \(p=3\), that
\[\left\|\Gamma\right\|_{L_{t}^{\infty}L^{3}}\lesssim\left\|\Gamma_{0}\right\| _{L^{3}}+c^{-1}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+c^{- \frac{1}{2}}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{ 2}}\left(\left\|B\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}} }+\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}\right).\]
This bound will come in handy, later on.
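Let us also record why this embedding holds: by Bernstein's inequality, each dyadic block satisfies
\[\left\|\Delta_{j}f\right\|_{\dot{W}^{2,3}}\lesssim 2^{2j}\left\|\Delta_{j}f\right\|_{L^{3}}\lesssim 2^{2j}\,2^{\frac{j}{2}}\left\|\Delta_{j}f\right\|_{L^{2}}=2^{\frac{5j}{2}}\left\|\Delta_{j}f\right\|_{L^{2}},\]
so that summing over \(j\in\mathbb{Z}\) yields \(\left\|f\right\|_{\dot{W}^{2,3}}\lesssim\left\|f\right\|_{\dot{B}_{2,1}^{\frac{5}{2}}}\).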
Proof.: It is readily seen that, by virtue of the energy inequality (3.1), Ampère's equation from (1.1) entails that
\[\left\|B\right\|_{L^{2}_{t}\dot{H}^{1}}\leq\left\|j\right\|_{L^{2}_{t}L^{2}}+ \left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}L^{2}}\leq\mathcal{E}_{0}+ \left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}L^{2}},\]
thereby establishing (4.4).
We now focus on the proof of (4.5) and (4.6). To that end, multiplying (4.3) by \(\Gamma|\Gamma|^{p-2}\), integrating with respect to space variables and using the identities
\[-\int_{\mathbb{R}^{3}}(\Delta\Gamma)\Gamma|\Gamma|^{p-2}dx-\int_{\mathbb{R}^{3 }}\frac{\partial_{r}\Gamma}{r}\Gamma|\Gamma|^{p-2}dx=\underbrace{\frac{4(p-1) }{p^{2}}\left\|\nabla(|\Gamma(t)|^{\frac{p}{2}})\right\|_{L^{2}}^{2}+\frac{2 \pi}{p}\int_{\mathbb{R}}|\Gamma(t,r=0,z)|^{p}dz}_{\stackrel{{ \rm def}}{{=}}X_{p}(t)}\]
and
\[\int_{\mathbb{R}^{3}}(\partial_{tt}\Gamma)\Gamma|\Gamma|^{p-2}dx=\partial_{tt }\Big{(}\underbrace{\frac{1}{p}\left\|\Gamma(t)\right\|_{L^{p}}^{p}}_{ \stackrel{{\rm def}}{{=}}Y_{p}(t)}\Big{)}-\underbrace{(p-1)\int _{\mathbb{R}^{3}}|\partial_{t}\Gamma|^{2}|\Gamma|^{p-2}dx}_{\stackrel{{ \rm def}}{{=}}A_{p}(t)},\]
yields, for all \(t\in[0,T)\), that
\[\frac{1}{c^{2}}Y_{p}^{\prime\prime}(t)+Y_{p}^{\prime}(t)+X_{p}(t)=\frac{1}{c^ {2}}A_{p}(t). \tag{4.7}\]
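For clarity, the boundary term in the first identity above comes from an integration by parts in the radial variable: writing \(dx=r\,dr\,d\theta\,dz\), one has
\[-\int_{\mathbb{R}^{3}}\frac{\partial_{r}\Gamma}{r}\Gamma|\Gamma|^{p-2}dx=-2\pi\int_{\mathbb{R}}\int_{0}^{\infty}\partial_{r}\Big{(}\frac{|\Gamma|^{p}}{p}\Big{)}dr\,dz=\frac{2\pi}{p}\int_{\mathbb{R}}|\Gamma(t,r=0,z)|^{p}dz,\]
for smooth solutions decaying at infinity, while the Laplacian produces the gradient term in \(X_{p}\) by the usual integration by parts.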
Therefore, integrating in time, we obtain that
\[Y_{p}(t)+\int_{0}^{t}X_{p}(\tau)d\tau=Y_{p}(0)+\frac{1}{c^{2}}Y_{p}^{\prime}(0 )-\frac{1}{c^{2}}Y_{p}^{\prime}(t)+\frac{1}{c^{2}}\int_{0}^{t}A_{p}(\tau)d\tau. \tag{4.8}\]
Now, we need to take care of the term \(-\frac{1}{c^{2}}Y_{p}^{\prime}(t)\), above. To that end, observing that (4.7) can be rewritten as
\[\frac{1}{c^{2}}\frac{d}{dt}\Big{(}Y_{p}^{\prime}(t)e^{c^{2}t}\Big{)}=\Big{(}\frac{1}{c^{2}}A_{p}(t)-X_{p}(t)\Big{)}e^{c^{2}t},\]
which we then integrate with respect to the time variable, we find that
\[-\frac{1}{c^{2}}Y_{p}^{\prime}(t)=-\frac{1}{c^{2}}Y_{p}^{\prime}(0)e^{-c^{2}t }+\int_{0}^{t}e^{-c^{2}(t-\tau)}\Big{(}X_{p}(\tau)-\frac{1}{c^{2}}A_{p}(\tau) \Big{)}d\tau.\]
Hence, plugging the latter identity into (4.8) yields that
\[\begin{split} Y_{p}(t)+\int_{0}^{t}X_{p}(\tau)d\tau& =Y_{p}(0)+\frac{1-e^{-c^{2}t}}{c^{2}}Y_{p}^{\prime}(0)+\int_{0}^{ t}e^{-c^{2}(t-\tau)}X_{p}(\tau)d\tau\\ &\quad+\frac{1}{c^{2}}\int_{0}^{t}A_{p}(\tau)d\tau-\frac{1}{c^{2} }\int_{0}^{t}e^{-c^{2}(t-\tau)}A_{p}(\tau)d\tau.\end{split} \tag{4.9}\]
Next, utilizing Faraday's equation from (1.1), we write, for any \(t\in(0,T)\), that
\[(\partial_{t}\Gamma)=\left(\frac{\partial_{t}B}{r}\right)\cdot e_{\theta}=-c \left(\frac{\nabla\times E}{r}\right)\cdot e_{\theta}, \tag{4.10}\]
whereby
\[(\partial_{t}\Gamma)\left|{}_{t=0}\right.=-c\left(\frac{\nabla\times E_{0}}{r }\right)\cdot e_{\theta}.\]
Accordingly, it follows that
\[\frac{1}{c^{2}}|Y_{p}^{\prime}(0)|=\frac{1}{c^{2}}\left|\Big{(}\int_ {\mathbb{R}^{3}}(\partial_{t}\Gamma)\Gamma|\Gamma|^{p-2}dx\Big{)}_{\big{|}_{t=0}}\right| \leq\frac{1}{c}\int_{\mathbb{R}^{3}}\frac{|\nabla\times E_{0}|}{r} |\Gamma_{0}|^{p-1}dx\] \[\leq\frac{1}{c}\left\|\frac{\nabla\times E_{0}}{r}\right\|_{L^{p} }\left\|\Gamma_{0}\right\|_{L^{p}}^{p-1}.\]
Thus, recalling the inequality
\[ab\leq\varepsilon^{\alpha}\frac{a^{\alpha}}{\alpha}+\varepsilon^{-\alpha^{ \prime}}\frac{b^{\alpha^{\prime}}}{\alpha^{\prime}}, \tag{4.11}\]
for any \(\varepsilon>0\) and \(\alpha\in(1,\infty)\), where \(\alpha^{\prime}\) denotes the conjugate of \(\alpha\), and by virtue of Lemma 2.2, we end up with
\[\begin{split}\frac{1}{c^{2}}|Y_{p}^{\prime}(0)|& \leq\left(\left(\frac{1}{c}\left\|\frac{\nabla\times E_{0}}{r} \right\|_{L^{p}}\right)^{\frac{1}{p}}\left\|\Gamma_{0}\right\|_{L^{p}}^{\frac {1}{p^{\prime}}}\right)^{p}\\ &\leq\left(\frac{1}{pc}\left\|\frac{\nabla\times E_{0}}{r} \right\|_{L^{p}}+\frac{1}{p^{\prime}}\left\|\Gamma_{0}\right\|_{L^{p}}\right)^ {p}\\ &\lesssim\Big{(}\frac{1}{c}\left\|E_{0}\right\|_{\dot{W}^{2,p}}+ \left\|\Gamma_{0}\right\|_{L^{p}}\Big{)}^{p}.\end{split} \tag{4.12}\]
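In the second inequality above, (4.11) is applied with \(\varepsilon=1\), \(\alpha=p\), \(a=\big{(}\frac{1}{c}\|\frac{\nabla\times E_{0}}{r}\|_{L^{p}}\big{)}^{\frac{1}{p}}\) and \(b=\left\|\Gamma_{0}\right\|_{L^{p}}^{\frac{1}{p^{\prime}}}\), while the last one follows from Lemma 2.2 combined with the crude bound \(\frac{1}{p}x+\frac{1}{p^{\prime}}y\leq x+y\), for \(x,y\geq 0\).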
Consequently, for \(p=2\), we find that (4.9) entails that
\[\left\|\Gamma(t)\right\|_{L^{2}}^{2}\lesssim\left\|\Gamma_{0}\right\|_{L^{2}} ^{2}+c^{-2}\left(\left\|E_{0}\right\|_{\dot{H}^{2}}+\left\|\partial_{t}\Gamma \right\|_{L^{2}_{t}L^{2}}\right)^{2}\]
and
\[\left\|\Gamma\right\|_{L^{2}_{t}\dot{H}^{1}}^{2}\lesssim\left\|\Gamma_{0} \right\|_{L^{2}}^{2}+c^{-2}\left(\left\|E_{0}\right\|_{\dot{H}^{2}}+\left\| \Gamma\right\|_{L^{\infty}_{t}\dot{H}^{1}}+\left\|\partial_{t}\Gamma\right\|_ {L^{2}_{t}L^{2}}\right)^{2}.\]
Finally, employing (4.10) and Lemma 2.2 in the last bound leads to (4.5), and (4.6) in the case \(p=2\).
Then, in order to deal with the range \(p\in(2,\infty)\), we deduce from (4.9) that
\[\left\|\Gamma\right\|_{L^{\infty}_{t}L^{p}}^{p}\lesssim\left\|\Gamma_{0}\right\|_{L^{p}}^{p}+\frac{1}{c^{2}}|Y_{p}^{\prime}(0)|+\frac{1}{c^{2}}\int_{0}^{t}\left\|\partial_{t}\Gamma(\tau)\right\|_{L^{p}}^{2}\left\|\Gamma(\tau)\right\|_{L^{p}}^{p-2}d\tau.\]
Therefore, employing (4.12) to control \(Y_{p}^{\prime}(0)\) yields
\[\left\|\Gamma\right\|_{L^{\infty}_{t}L^{p}}^{p}\lesssim\left\|\Gamma_{0}\right\|_{L^{p}}^{p}+\left(\frac{1}{c}\left\|E_{0}\right\|_{\dot{W}^{2,p}}+\left\|\Gamma_{0}\right\|_{L^{p}}\right)^{p}+\frac{1}{c^{2}}\left\|\partial_{t}\Gamma\right\|_{L^{2}_{t}L^{p}}^{2}\left\|\Gamma\right\|_{L^{\infty}_{t}L^{p}}^{p-2}.\]
After that, we utilize (4.11) to obtain that
\[\left\|\Gamma\right\|_{L^{\infty}_{t}L^{p}}^{p}\lesssim\left\|\Gamma_{0}\right\|_{L^{p}}^{p}+\left(\frac{1}{c}\left\|E_{0}\right\|_{\dot{W}^{2,p}}+\left\|\Gamma_{0}\right\|_{L^{p}}\right)^{p}+\frac{1}{c^{p}}\left\|\partial_{t}\Gamma\right\|_{L^{2}_{t}L^{p}}^{p}.\]
At last, by using (4.10) and Lemma 2.2, again, we arrive at the desired bound (4.6). This completes the proof of the proposition.
## 5. Closing the estimates and proof of Theorem 1.1
We are now ready to establish the final global bounds which lead to the existence of global solutions of the Navier-Stokes-Maxwell equations (1.1). This is going to be done, first, by gathering all the estimates from the previous sections to produce a nonlinear energy estimate. Then, under adequate assumptions in the regime \(c\to\infty\), the nonlinear energy bound will allow us to deduce the desired global control, uniformly with respect to the speed of light.
Throughout this section, for the sake of simplicity and clarity, the subscript "\(t\)" that appears in time-Lebesgue spaces \(L^{p}_{t}\) should be understood as the endpoint of the time interval, i.e., for example, we will be using the notation
\[\left\|f\right\|_{L^{p}_{t}\dot{H}^{s}}\stackrel{{\mathrm{def}}}{{= }}\left\|f\right\|_{L^{p}\left([0,t);\dot{H}^{s}(\mathbb{R}^{3})\right)}.\]
Accordingly, all the quantities involving time-Lebesgue norms are continuous functions on the real half-line \([0,\infty)\), for all \(p\in[1,\infty]\).
### Approximation scheme and compactness
Solutions of (1.1) will be constructed by a standard compactness method based on the smooth approximation of solutions. Although this is now classical in the literature, we briefly recall here the principal ideas in that method. This will allow us to justify all the formal computations in the derivation of our global a priori bounds, below.
We follow the approach laid out in [3, Section 3.1], for instance. Thus, we begin with approximating the Navier-Stokes-Maxwell equations (1.1) by a new system of equations which has a unique smooth solution.
An admissible approximation should preserve the structures satisfied by the original system such as, in our case, the energy inequality (1.2) and the axisymmetric properties. A possible choice of that approximation is given, for \(n\in\mathbb{N}\), by a frequency truncation of (1.1), obtained by applying the cutoff operator \(S_{n}\) to its nonlinear terms, supplemented with the initial data \((u_{n},E_{n},B_{n})|_{t=0}\stackrel{{\rm def}}{{=}}S_{n}(u_{0},E_{0},B_{0})\), where \(S_{n}\) is a radial cutoff Fourier multiplier which restricts the frequencies to the set \(\{\left|\xi\right|\leq 2^{n}\}\) and converges to the identity as \(n\to\infty\).
Showing that the approximate system, for any fixed \(n\in\mathbb{N}\), has a unique global solution is a routine procedure based on standard methods. Moreover, one can show that the corresponding solutions are smooth in time and space and satisfy the energy inequality
\[\left\|\left(u_{n},E_{n},B_{n}\right)(t)\right\|_{L^{2}}^{2}+2\nu\int_{0}^{t} \left\|\nabla u_{n}(\tau)\right\|_{L^{2}}^{2}d\tau+\frac{2}{\sigma}\int_{0}^{ t}\left\|j_{n}(\tau)\right\|_{L^{2}}^{2}d\tau=\left\|S_{n}(u_{0},E_{0},B_{0}) \right\|_{L^{2}}^{2}\leq\mathcal{E}_{0}^{2},\]
where we recall that
\[\mathcal{E}_{0}=\left\|\left(u_{0},E_{0},B_{0}\right)\right\|_{L^{2}}.\]
If, furthermore, the initial data are assumed to be axisymmetric, and the cutoff operator \(S_{n}\) does not alter that structure (which is the case when \(S_{n}\) can be characterized as a convolution with a radial function), then the approximate smooth solution remains axisymmetric for all times.
Noting, once again, that the energy inequality above is not sufficient to ensure the stability, as \(n\to\infty\), of the nonlinear term
\[j_{n}\times B_{n},\]
our strategy thus consists in looking for better bounds in higher regularity spaces, uniformly with respect to the regularizing parameter \(n\). In particular, we will obtain new bounds which will allow us to establish the strong relative compactness of all the vector fields \(u_{n}\), \(E_{n}\), \(B_{n}\) and \(j_{n}\) in \(L^{2}_{\mathrm{loc},t,x}\) and, then, conclude that the approximate solutions converge, as \(n\to\infty\), to an exact solution of (1.1).
With such strong bounds, the full justification of the stability of the approximate system follows from standard compactness arguments, which we will therefore omit. We refer to [3] for some details on similar arguments applied to the construction of global solutions of the Euler-Maxwell system in two dimensions of space.
Normally, we should prove the a priori estimates on the approximate system above. However, as usual, since the approximate system enjoys the same structure as the original one, we will, from now on, assume that the solutions to (1.1) are smooth and we will perform all estimates directly on (1.1).
### Weak-strong uniqueness
The functional spaces used in Theorem 1.1 are sufficient to prove uniqueness results for (1.1). Although this is not hard to show, we choose to provide in the following proposition a weak-strong stability result with a self-contained proof which covers the uniqueness of solutions claimed in Theorem 1.1. See also [6, Proposition 3.11] for a similar weak-strong principle for the same system.
**Proposition 5.1** (\(L^{2}\) weak-strong stability).: _Let \(c>0\) and \((u_{i},E_{i},B_{i})_{i\in\{1,2\}}\) be two weak solutions of (1.1) associated with the same initial data and satisfying the energy inequality (1.2). Assume moreover that_
\[u_{2}\in L^{2}_{\rm loc}(\mathbb{R}^{+};L^{\infty}),\qquad E_{2}\in L^{2}_{ \rm loc}(\mathbb{R}^{+};L^{3}),\qquad B_{2}\in L^{\infty}_{\rm loc}(\mathbb{R }^{+};L^{3}).\]
_Then, the two solutions are equal._
Proof.: We define the difference of the two solutions by
\[\widetilde{u}\stackrel{{\rm def}}{{=}}u_{1}-u_{2},\qquad \widetilde{E}\stackrel{{\rm def}}{{=}}E_{1}-E_{2},\qquad \widetilde{B}\stackrel{{\rm def}}{{=}}B_{1}-B_{2},\qquad \widetilde{j}\stackrel{{\rm def}}{{=}}j_{1}-j_{2},\]
and we compute, for any \(t>0\), that
\[\int_{\mathbb{R}^{3}} \left(u_{1}\cdot u_{2}+E_{1}\cdot E_{2}+B_{1}\cdot B_{2}\right) \left(t\right)dx+\frac{2}{\sigma}\int_{0}^{t}\int_{\mathbb{R}^{3}}j_{1}\cdot j _{2}dxd\tau+2\nu\int_{0}^{t}\int_{\mathbb{R}^{3}}\nabla u_{1}\cdot\nabla u_{2} dxd\tau\] \[=-\int_{0}^{t}\int_{\mathbb{R}^{3}}(j_{2}\times\widetilde{B}) \cdot\widetilde{u}dxd\tau+\int_{0}^{t}\int_{\mathbb{R}^{3}}(\widetilde{j} \times\widetilde{B})\cdot u_{2}dxd\tau-\int_{0}^{t}\int_{\mathbb{R}^{3}}\left( u_{2}\otimes\widetilde{u}\right):\nabla\widetilde{u}dxd\tau.\]
Note that the computations above can be rigorously justified by smoothing out the two solutions and following the proof of [18, Lemma 2.1], for instance.
Therefore, setting
\[F(t)\stackrel{{\rm def}}{{=}}\frac{1}{2}\left(\|\widetilde{u}(t) \|_{L^{2}}^{2}+\|\widetilde{E}(t)\|_{L^{2}}^{2}+\|\widetilde{B}(t)\|_{L^{2}}^ {2}\right),\]
and making use of the energy inequality (1.2), which is assumed to be satisfied by both solutions, we obtain, for any \(\varepsilon>0\), that
\[F(t)+\nu\left\|\nabla\widetilde{u}\right\|_{L^{2}_{t,x}}^{2}+\frac{1}{\sigma}\|\widetilde{j}\|_{L^{2}_{t,x}}^{2} \leq\int_{0}^{t}\left\|j_{2}(\tau)\right\|_{L^{3}}\|\widetilde{B}(\tau)\|_{L^{2}}\left\|\widetilde{u}(\tau)\right\|_{L^{6}}d\tau\] \[\quad+\int_{0}^{t}\left\|u_{2}(\tau)\right\|_{L^{\infty}}\left(\|\widetilde{j}(\tau)\|_{L^{2}}\|\widetilde{B}(\tau)\|_{L^{2}}+\|\widetilde{u}(\tau)\|_{L^{2}}\left\|\nabla\widetilde{u}(\tau)\right\|_{L^{2}}\right)d\tau\] \[\leq\varepsilon+\frac{\nu}{4}\left\|\nabla\widetilde{u}\right\|_{L^{2}_{t,x}}^{2}+\frac{C}{\nu}\int_{0}^{t}\left\|j_{2}(\tau)\right\|_{L^{3}}^{2}\|\widetilde{B}(\tau)\|_{L^{2}}^{2}d\tau\] \[\quad+\frac{\nu}{4}\left\|\nabla\widetilde{u}\right\|_{L^{2}_{t,x}}^{2}+\frac{1}{\nu}\int_{0}^{t}\left\|u_{2}(\tau)\right\|_{L^{\infty}}^{2}\|\widetilde{u}(\tau)\|_{L^{2}}^{2}d\tau\] \[\quad+\frac{1}{2\sigma}\|\widetilde{j}\|_{L^{2}_{t,x}}^{2}+\frac{\sigma}{2}\int_{0}^{t}\left\|u_{2}(\tau)\right\|_{L^{\infty}}^{2}\|\widetilde{B}(\tau)\|_{L^{2}}^{2}d\tau,\]
where \(C>0\) is the constant from the embedding \(\dot{H}^{1}\hookrightarrow L^{6}(\mathbb{R}^{3})\).
Hence, by further simplifying the preceding bound and applying Gronwall's lemma, we arrive at the conclusion that
\[F(t)+\frac{\nu}{2}\left\|\nabla\widetilde{u}\right\|_{L^{2}_{t,x}}^{2}+\frac{1} {2\sigma}\|\widetilde{j}\|_{L^{2}_{t,x}}^{2}\leq\varepsilon\exp\left(C_{\nu, \sigma}\int_{0}^{t}\left(\|j_{2}(\tau)\|_{L^{3}}^{2}+\|u_{2}(\tau)\|_{L^{ \infty}}^{2}\right)d\tau\right), \tag{5.1}\]
for any \(\varepsilon>0\) and some constant \(C_{\nu,\sigma}>0\).
Accordingly, by further exploiting Ohm's law
\[j_{2}=cE_{2}+P(u_{2}\times B_{2}),\]
and employing the additional bounds on the second solution \((u_{2},E_{2},B_{2})\), it is readily seen then that the right-hand side in (5.1) is finite and vanishes as \(\varepsilon\to 0\). This concludes the proof of the weak-strong uniqueness.
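Let us point out explicitly that, by the boundedness of the Leray projector \(P\) on \(L^{3}\), Ohm's law gives
\[\int_{0}^{t}\left\|j_{2}(\tau)\right\|_{L^{3}}^{2}d\tau\lesssim c^{2}\int_{0}^{t}\left\|E_{2}(\tau)\right\|_{L^{3}}^{2}d\tau+\int_{0}^{t}\left\|u_{2}(\tau)\right\|_{L^{\infty}}^{2}\left\|B_{2}(\tau)\right\|_{L^{3}}^{2}d\tau,\]
which is finite, for every finite \(t>0\), under the assumptions of the proposition, so that letting \(\varepsilon\to 0\) in (5.1) indeed yields \(F\equiv 0\).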
### Existence of global solutions
Here, we prove existence of global solutions of (1.1), as it is claimed in Theorem 1.1. In view of the arguments laid out in Section 5.1, above, this proof is reduced to establishing adequate a priori global estimates for (1.1).
The proof is split into two parts. The first part is devoted to the case of rough profiles
\[c^{-1}(E_{0}^{c},B_{0}^{c})\in\dot{B}_{2,1}^{\frac{5}{2}},\]
uniformly in \(c>0\). This means that the \(\dot{B}_{2,1}^{\frac{5}{2}}\) norm of the initial data is allowed to blow up, as \(c\to\infty\), with a rate which is at most of order \(c\).
In the second part of the proof below, we deal with the case of regular profiles, i.e.,
\[(E_{0}^{c},B_{0}^{c})\in\dot{B}_{2,1}^{\frac{5}{2}},\]
uniformly with respect to \(c>0\).
#### 5.3.1. Rough profiles
Here, we assume that
\[c^{-1}(E_{0}^{c},B_{0}^{c})\in\dot{B}_{2,1}^{\frac{5}{2}},\]
uniformly with respect to \(c\in(0,\infty)\), meaning that \((E_{0}^{c},B_{0}^{c})\) can be unbounded in \(\dot{B}_{2,1}^{\frac{5}{2}}\) as \(c\to\infty\).
_Control of the velocity field._ The principal control of velocity fields is obtained in Proposition 4.2. For convenience, we rewrite here its main estimate:
\[\begin{split}&\left\|(\omega,\Omega)\right\|_{L^{\infty}_{t}L^{2} \cap L^{2}_{t}\dot{H}^{1}}\\ &\qquad\lesssim\left(\left\|(\omega_{0},\Omega_{0})\right\|_{L^{ 2}}+\left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}} \left\|B\right\|_{L^{\infty}_{t}H^{2}}+\left\|\Gamma\right\|_{L^{\infty}_{t}L^ {3}}\left\|(B,\Gamma)\right\|_{L^{2}_{t}\dot{H}^{1}}\right)e^{C\mathcal{E}_{0} ^{2}}.\end{split} \tag{5.2}\]
We also notice, due to the energy inequality (1.2), that
\[\mathcal{E}_{0}\leq\mathcal{E}_{t}\stackrel{{\rm def}}{{=}}\left\| (u,E,B)\right\|_{L^{\infty}_{t}L^{2}}+\left\|u\right\|_{L^{2}_{t}\dot{H}^{1}} +\left\|j\right\|_{L^{2}_{t}L^{2}}\lesssim\mathcal{E}_{0}. \tag{5.3}\]
Moreover, the standard embeddings
\[\left\|u\right\|_{L^{4}_{t}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\left\|u\right\| _{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}\lesssim\left\|\omega \right\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}}\]
will be used regularly without explicit reference. The justification of the first inequality above is done by applying an abstract interpolation argument, whereas the second one straightforwardly follows from the Biot-Savart law.
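More precisely, one can use the pointwise-in-time interpolation inequality
\[\left\|u\right\|_{\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\left\|u\right\|_{\dot{H}^{1}}^{\frac{1}{2}}\left\|u\right\|_{\dot{H}^{2}}^{\frac{1}{2}},\]
obtained by splitting the Littlewood--Paley sum at an optimized frequency, and then apply Hölder's inequality in time to deduce that \(\left\|u\right\|_{L^{4}_{t}\dot{B}_{2,1}^{\frac{3}{2}}}\lesssim\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}}^{\frac{1}{2}}\left\|u\right\|_{L^{2}_{t}\dot{H}^{2}}^{\frac{1}{2}}\).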
_Control of high electromagnetic frequencies._ The control of high electromagnetic frequencies is given by Lemma 3.4. More precisely, we obtain from (3.15), with the specific values \(q\in\{2,\infty\}\), that
\[\begin{split} c^{-1}\left\|(E,B)\right\|_{\widetilde{L}_{t}^{ \infty}\dot{B}_{2,1,>}^{\frac{5}{2}}}&+\left\|(E,B)\right\|_{ \widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}\\ &\lesssim c^{-1}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{ \frac{5}{2}}}\\ &+c^{-\frac{1}{2}}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1} \cap L_{t}^{2}\dot{H}^{2}}\left(\left\|B\right\|_{\widetilde{L}_{t}^{2}\dot{B} _{2,1,>}^{\frac{5}{2}}}+\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{ 2}}}\right).\end{split} \tag{5.4}\]
_Control of low electromagnetic frequencies--Part A._ The control of low electromagnetic frequencies relies on Lemma 3.5, above. Specifically, applying (3.23) with the values \(q=2\), \(p=m=4\), \(\alpha=1\) and \(s=\frac{3}{2}\) yields that
\[\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim c^{-1}\left\| E_{0}\right\|_{\dot{B}_{2,2,<}^{\frac{5}{2}}}+\left\|B_{0}\right\|_{\dot{B}_{2,2, <}^{\frac{3}{2}}}+\left\|u\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\left\| B\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}.\]
Likewise, employing (3.23), again, with the values \(p=2\) and \(s=2\) entails that
\[\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim c^{-1}\left\| E_{0}\right\|_{\dot{B}_{2,2,<}^{\frac{5}{2}}}+\left\|B_{0}\right\|_{\dot{B}_{2,2, <}^{\frac{3}{2}}}+\left\|u\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\left\| B\right\|_{L_{t}^{2}\dot{B}_{2,1}^{2}}.\]
Actually, since (3.14) is a linear system, by splitting high and low frequencies of \(B\) in the source term \(P(u\times B)\), one can straightforwardly adapt the proofs of the preceding estimates to obtain the more useful control
\[\begin{split}\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}&\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,2,<}^{\frac{3}{2}}}\\ &\quad+\left\|u\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\left(\left\|1_{\{|D|<\frac{\sigma c}{2}\}}B\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}+\left\|1_{\{|D|\geq\frac{\sigma c}{2}\}}B\right\|_{L_{t}^{2}\dot{B}_{2,1}^{2}}\right)\\ &\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}\\ &\quad+\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(\left\|B\right\|_{L_{t}^{4}\dot{B}_{2,1,<}^{\frac{3}{2}}}+c^{-\frac{1}{2}}\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}+c^{-\frac{1}{2}}\left\|B\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}\right).\end{split} \tag{5.5}\]
Before we proceed with the proof, let us establish the energy estimate for \((E,B)\) in Sobolev spaces, which will then be combined with the preceding estimate to complete the proof of the low-frequency bounds on electromagnetic fields.
_Energy estimate for electromagnetic fields._ Applying Lemma 3.6 with the values \(s=\frac{3}{2}\) and \((p,q,\varepsilon)=(2,\infty,\frac{1}{2})\), and then with the values \((p,q)=(4,4)\) at the endpoint \(\varepsilon=0\), we find that
\[\left\|(E,B)\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}+c\left\|E\right\|_{ L_{t}^{2}\dot{H}^{\frac{3}{2}}}\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{ \frac{3}{2}}}+\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}}\left\|B\right\|_{L_ {t}^{2}\dot{H}^{2}},\]
and
\[\left\|(E,B)\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}+c\left\|E\right\|_{ L_{t}^{2}\dot{H}^{\frac{3}{2}}}\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{ \frac{3}{2}}}+\left\|u\right\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\left\|B \right\|_{L_{t}^{4}\dot{H}^{\frac{3}{2}}}.\]
In fact, by splitting high and low frequencies of \(B\) in the right-hand side of the preceding estimates, it is possible to adapt their proofs to obtain the following more useful bound
\[\left\|(E,B)\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}} \lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}\\ +\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(\left\|1_{\{|D|<\frac{\sigma c}{2}\}}B\right\|_{L_{t}^{4}\dot{H}^{\frac{3}{2}}}+\left\|1_{\{|D|\geq\frac{\sigma c}{2}\}}B\right\|_{L_{t}^{2}\dot{H}^{2}}\right).\]
Therefore, similarly to (5.5), we infer that
\[\begin{split}\left\|(E,B)\right\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2 }}}+c\left\|E\right\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}\\ \lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}\\ +\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^ {2}}\left(\left\|B\right\|_{L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1,<}}+c^{-\frac{ 1}{2}}\left\|B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}+c^{-\frac{1}{2 }}\left\|B\right\|_{\widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}}\right). \end{split} \tag{5.6}\]
_Control of low electromagnetic frequencies--Part B._ Now, we carry on with the estimates of low frequencies of electromagnetic fields by first combining (5.5) and (5.6) to find that
\[\begin{split}\left\|(E,B)\right\|_{L^{\infty}_{t}\dot{H}^{\frac{3 }{2}}}&+c\left\|E\right\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}+ \left\|B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\\ &\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}+c^{ -\frac{1}{2}}\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^ {2}}\left(\left\|B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}+\left\|B \right\|_{\widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}}\right)\\ &+\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H }^{2}}\left\|B\right\|_{L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1,<}}.\end{split} \tag{5.7}\]
Note that one cannot expect to observe any decay, with respect to \(c\), in the nonlinear term
\[\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}\left\|B \right\|_{L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1,<}}.\]
Accordingly, at this stage, the bound (5.7) does not seem to be helpful in obtaining a global control without additional conditions on the size of the initial data.
Nevertheless, our key observation here is that the space \(L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1}\) can be obtained by interpolating \(L^{\infty}_{t}\dot{H}^{\frac{3}{2}}\), \(L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1}\) and \(L^{2}_{t}\dot{H}^{1}\). Note that \(L^{\infty}_{t}\dot{H}^{\frac{3}{2}}\) and \(L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}\) appear in the left-hand side of (5.7), whereas \(L^{2}_{t}\dot{H}^{1}\) is a good space for \(B\), because, in view of (4.4), the magnetic field is asymptotically globally bounded in that space. Based on this remark, we write, by interpolation, that
\[\begin{split}\left\|B\right\|_{L^{4}_{t}\dot{B}^{\frac{3}{2}}_{2,1,<}}&\lesssim\left(\int_{0}^{t}\left\|B(\tau)\right\|^{\frac{4}{3}}_{\dot{H}^{1}}\left\|B(\tau)\right\|^{\frac{8}{3}}_{\dot{B}^{\frac{7}{4}}_{2,1,<}}d\tau\right)^{\frac{1}{4}}\\ &\lesssim\left\|B\right\|^{\frac{1}{3}}_{L^{2}_{t}\dot{H}^{1}}\left\|B\right\|^{\frac{2}{3}}_{L^{8}_{t}\dot{B}^{\frac{7}{4}}_{2,1,<}}\\ &\lesssim\left\|B\right\|^{\frac{1}{3}}_{L^{2}_{t}\dot{H}^{1}}\left(\left\|B\right\|_{L^{\infty}_{t}\dot{B}^{\frac{3}{2}}_{2,2,<}}+\left\|B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\right)^{\frac{2}{3}},\end{split}\]
which leads to
\[\begin{split}\left\|(E,B)\right\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{ 2}}}&+c\left\|E\right\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}+\left\|B \right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\\ &\lesssim\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}+c^{ -\frac{1}{2}}\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^ {2}}\left(\left\|B\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}+\left\|B \right\|_{\widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}}\right)\\ &\quad+\left\|u\right\|_{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t} \dot{H}^{2}}\left\|B\right\|^{\frac{1}{3}}_{L^{2}_{t}\dot{H}^{1}}\left(\left\|B \right\|_{L^{\infty}_{t}\dot{B}^{\frac{3}{2}}_{2,2,<}}+\left\|B\right\|_{L^{2 }_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\right)^{\frac{2}{3}}.\end{split} \tag{5.8}\]
In order to complete our summary of all relevant low-frequency bounds on electromagnetic fields, we recall the estimate
\[\left\|E\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1}}\lesssim c^{-1}\left\|(E_ {0},B_{0})\right\|_{\dot{B}^{\frac{5}{2}}_{2,1}}+c^{-\frac{1}{2}}\left\|u\right\| _{L^{\infty}_{t}\dot{H}^{1}\cap L^{2}_{t}\dot{H}^{2}}\left(\left\|B\right\|_{ \widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}}+\left\|B\right\|_{L^{2}_{t} \dot{B}^{\frac{5}{2}}_{2,1,<}}\right), \tag{5.9}\]
which is established in (3.26).
_Decay of electric fields and almost-parabolic bounds on magnetic fields._ For later use, we recall and add some precision to the bounds proved in Section 4.2, above.
More specifically, observe first that Proposition 4.1, with the values \(s\in\{0,\frac{1}{2}\}\), provides us with the decay estimates
\[\begin{split}\left\|\frac{1}{c}\partial_{t}E\right\|_{L_{t}^{2}L^{ 2}}&\lesssim c^{-1}\left\|\nabla\times B_{0}-j_{0}\right\|_{L^{2} }+c^{-1}\left\|E_{0}\right\|_{\dot{H}^{1}}+c^{-2}\left\|u\right\|_{L_{t}^{ \infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{2}\left\|B\right\|_{L_{t}^{ \infty}\dot{H}^{\frac{3}{2}}}\\ &+c^{-2}(1+\mathcal{E}_{0})\Big{(}\left\|u\right\|_{L_{t}^{ \infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}+\left\|B\right\|_{L_{t}^{\infty} \dot{H}^{\frac{3}{2}}}\Big{)}\\ &\times\Big{(}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2 }}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}}\Big{)}\end{split} \tag{5.10}\]
and
\[\begin{split}\left\|\frac{1}{c}\partial_{t}E\right\|_{L_{t}^{2} \dot{H}^{\frac{1}{2}}}&\lesssim c^{-1}\left\|\nabla\times B_{0}-j_{0} \right\|_{\dot{H}^{\frac{1}{2}}}+c^{-1}\left\|E_{0}\right\|_{\dot{H}^{\frac{3} {2}}}+c^{-2}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{ 2}}^{2}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\\ &+c^{-2}(1+\mathcal{E}_{0})\Big{(}\left\|u\right\|_{L_{t}^{ \infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}+\left\|B\right\|_{L_{t}^{\infty} \dot{H}^{\frac{3}{2}}}\Big{)}\\ &\times\Big{(}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{2}}+c \left\|E\right\|_{L_{t}^{2}\dot{H}^{2}}\Big{)}.\end{split} \tag{5.11}\]
As for the almost-parabolic estimates of \(B\), we first utilize Lemma 2.3 to write that
\[\begin{split} c^{-1}\left\|\nabla\times B_{0}-j_{0}\right\|_{L^{ 2}}&=c^{-1}\left\|\nabla\times B_{0}-cE_{0}-P(u_{0}\times B_{0}) \right\|_{L^{2}}\\ &\lesssim c^{-1}\left(\left\|B_{0}\right\|_{\dot{H}^{1}}+\left\|u _{0}\right\|_{L^{2}}\left\|B_{0}\right\|_{\dot{H}^{\frac{3}{2}}}\right)+\left\| E_{0}\right\|_{L^{2}}\\ &\lesssim c^{-1}\left(\left\|B_{0}\right\|_{\dot{H}^{1}}+\mathcal{ E}_{0}\left\|B_{0}\right\|_{\dot{H}^{\frac{3}{2}}}\right)+\mathcal{E}_{0}.\end{split}\]
Then, by substituting the preceding control in (5.10), and by further incorporating the resulting bound in the estimate from Proposition 4.2, we find that
\[\begin{split}\left\|B\right\|_{L_{t}^{2}\dot{H}^{1}}&\lesssim\mathcal{E}_{0}+c^{-1}\big{(}\left\|(E_{0},B_{0})\right\|_{\dot{H}^{1}}+\mathcal{E}_{0}\left\|B_{0}\right\|_{\dot{H}^{\frac{3}{2}}}\big{)}+c^{-2}\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{2}\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\\ &\quad+c^{-2}\left(1+\mathcal{E}_{0}\right)\left(\left\|u\right\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}+\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\right)\left(\left\|B\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}}\right),\end{split} \tag{5.12}\]
which provides an asymptotic parabolic regularity estimate on \(B\).
Finally, we emphasize that we will also make use of the similar almost-parabolic estimates on \(\Gamma\) obtained in Proposition 4.2. More precisely, the relevant estimates on \(\Gamma\) are
\[\left\|\Gamma\right\|_{L_{t}^{2}\dot{H}^{1}}\lesssim\left\|\Gamma_{0}\right\|_{ L^{2}}+c^{-1}\left\|E_{0}\right\|_{\dot{H}^{2}}+c^{-1}\left(\left\|B\right\|_{L_{t}^{ \infty}\dot{H}^{2}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{2}}\right) \tag{5.13}\]
and
\[\left\|\Gamma\right\|_{L_{t}^{\infty}L^{3}}\lesssim\left\|\Gamma_{0}\right\|_{ L^{3}}+c^{-1}\left\|E_{0}\right\|_{\dot{H}^{\frac{5}{2}}}+\left\|E\right\|_{L_{t}^{2} \dot{H}^{\frac{5}{2}}}. \tag{5.14}\]
_Nonlinear energy estimate._ Here, we gather all the bounds above to produce a nonlinear energy estimate. To that end, let us first introduce, for any \(t\geq 0\), the functional \(\mathcal{H}(t)\) given by
\[\begin{split}\mathcal{H}(t)&\stackrel{{\mathrm{def}}}{{=}}\left\|(\omega,\Omega)\right\|_{L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}}+\mathcal{E}_{t}+\left\|\Gamma\right\|_{L_{t}^{\infty}L^{3}}\\ &\quad+\left\|(E,B)\right\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}+c^{-1}\left\|(E,B)\right\|_{L_{t}^{\infty}\dot{B}_{2,1}^{\frac{5}{2}}}+c\left\|E\right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}}\\ &\quad+\left\|(E,B)\right\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}+\left\|B\right\|_{L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}}+\left\|E\right\|_{L_{t}^{2}\dot{B}_{2,1}^{\frac{5}{2}}}+\left\|B\right\|_{L_{t}^{2}\dot{H}^{1}}\\ &\quad+\left\|\nabla\times B-j\right\|_{L_{t}^{\infty}\dot{H}^{\frac{1}{2}}},\end{split}\]
where \(\mathcal{E}_{t}\) is given in (5.3) and all the time-norms are taken over the whole interval \([0,t)\). Accordingly, we conventionally define
\[\mathcal{H}(0) \stackrel{{\rm def}}{{=}}\left\|(\omega_{0},\Omega_{0} )\right\|_{L^{2}}+\mathcal{E}_{0}+\left\|\Gamma_{0}\right\|_{L^{3}}\] \[\quad+\left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}+c^{-1} \left\|(E_{0},B_{0})\right\|_{\dot{B}^{\frac{5}{2}}_{2,1}}+\left\|\nabla\times B _{0}-j_{0}\right\|_{\dot{H}^{\frac{1}{2}}}.\]
In particular, note, for all \(t\geq 0,\) that
\[\mathcal{H}(0)\leq\mathcal{H}(t). \tag{5.15}\]
Further observe that \(\left\|\Gamma_{0}\right\|_{L^{3}}\lesssim\left\|B_{0}\right\|_{\dot{H}^{\frac {3}{2}}}\), by virtue of Lemma 2.2.
Now, we claim an estimate of the form
\[\mathcal{H}(t)\leq C_{0}+P(\mathcal{H}(t)),\]
for \(t\geq 0,\) some constant \(C_{0}>0\) depending only on the initial data, uniformly with respect to \(c,\) and some polynomial \(P\in\mathbb{R}^{+}[X]\) whose coefficients vanish asymptotically as \(c\to\infty.\) Owing to the latter bound above, Lemma 5.2 below will eventually allow us to deduce the desired global estimates, under adequate conditions on the data.
In order to reach such a bound, we first proceed with the control of \(\frac{1}{c}\partial_{t}E\) by observing that (5.11) yields that
\[\left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}} \lesssim c^{-1}\mathcal{H}(0)+c^{-2}\mathcal{H}^{3}(t)+c^{-2}(1+\mathcal{E}_{ 0})\mathcal{H}(t)\Big{(}\left\|B\right\|_{L^{\infty}_{t}\dot{H}^{2}}+c\left\| E\right\|_{L^{2}_{t}\dot{H}^{2}}\Big{)}.\]
Therefore, noticing, by an interpolation argument, that
\[\left\|B\right\|_{L^{\infty}_{t}\dot{H}^{2}}+c\left\|E\right\|_{L^{2}_{t}\dot{ H}^{2}}\lesssim c^{\frac{1}{2}}\mathcal{H}(t), \tag{5.16}\]
it then follows that
\[\left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}} \lesssim c^{-1}\mathcal{H}(t)+c^{-\frac{3}{2}}(1+\mathcal{E}_{0})\mathcal{H}^ {2}(t)+c^{-2}\mathcal{H}^{3}(t). \tag{5.17}\]
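To be precise, the interpolation behind (5.16) can be spelled out as follows: pointwise in time,
\[\left\|B\right\|_{\dot{H}^{2}}\leq\left\|B\right\|_{\dot{H}^{\frac{3}{2}}}^{\frac{1}{2}}\left\|B\right\|_{\dot{H}^{\frac{5}{2}}}^{\frac{1}{2}}\leq\left\|B\right\|_{\dot{H}^{\frac{3}{2}}}^{\frac{1}{2}}\left\|B\right\|_{\dot{B}_{2,1}^{\frac{5}{2}}}^{\frac{1}{2}},\]
whence \(\left\|B\right\|_{L^{\infty}_{t}\dot{H}^{2}}\lesssim\mathcal{H}^{\frac{1}{2}}(t)\big{(}c\,\mathcal{H}(t)\big{)}^{\frac{1}{2}}=c^{\frac{1}{2}}\mathcal{H}(t)\), and, similarly, \(c\left\|E\right\|_{L^{2}_{t}\dot{H}^{2}}\leq\big{(}c\left\|E\right\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}\big{)}^{\frac{1}{2}}\big{(}c\left\|E\right\|_{L^{2}_{t}\dot{B}_{2,1}^{\frac{5}{2}}}\big{)}^{\frac{1}{2}}\lesssim c^{\frac{1}{2}}\mathcal{H}(t)\).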
Next, we exploit the almost-parabolic estimate on \(B\) given in (5.12). To that end, note first, by splitting the frequencies of \(B_{0},\) that
\[c^{-1}\left\|(E_{0},B_{0})\right\|_{\dot{H}^{1}} \leq c^{-1}\left(\left\|(E_{0},B_{0})\right\|_{\dot{B}^{1}_{2,2, <}}+\left\|(E_{0},B_{0})\right\|_{\dot{B}^{1}_{2,2,>}}\right)\] \[\lesssim\left\|(E_{0},B_{0})\right\|_{L^{2}}+c^{-\frac{3}{2}} \left\|(E_{0},B_{0})\right\|_{\dot{H}^{\frac{3}{2}}}\] \[\lesssim\mathcal{E}_{0}+c^{-\frac{3}{2}}\mathcal{H}(0).\]
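Here, recalling that the subscripts \(<\) and \(>\) refer to frequencies below and above a threshold of order \(c\), the two elementary bounds used above read
\[\left\|f\right\|_{\dot{B}^{1}_{2,2,<}}^{2}=\sum_{2^{j}\lesssim c}2^{2j}\left\|\Delta_{j}f\right\|_{L^{2}}^{2}\lesssim c^{2}\left\|f\right\|_{L^{2}}^{2}\qquad\text{and}\qquad\left\|f\right\|_{\dot{B}^{1}_{2,2,>}}^{2}=\sum_{2^{j}\gtrsim c}2^{-j}\,2^{3j}\left\|\Delta_{j}f\right\|_{L^{2}}^{2}\lesssim c^{-1}\left\|f\right\|_{\dot{H}^{\frac{3}{2}}}^{2}.\]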
In addition to that, using the fact that \(\mathcal{H}(0)\leq\mathcal{H}(t),\) one sees that
\[c^{-1}\mathcal{E}_{0}\left\|B_{0}\right\|_{\dot{H}^{\frac{3}{2}}} \leq\mathcal{E}_{0}+c^{-2}\mathcal{E}_{0}\left\|B_{0}\right\|_{ \dot{H}^{\frac{3}{2}}}^{2}\] \[\leq\mathcal{E}_{0}+c^{-2}(1+\mathcal{E}_{0})\mathcal{H}^{2}(t).\]
Accordingly, (5.12) yields, for \(c\geq 1,\) that
\[\left\|B\right\|_{L^{2}_{t}\dot{H}^{1}}\lesssim\ \ \mathcal{H}(0)+c^{-2}(1+ \mathcal{E}_{0})\mathcal{H}^{2}(t)+c^{-2}\mathcal{H}^{3}(t) \tag{5.18}\]
and, since \(\mathcal{H}(0)\leq\mathcal{H}(t),\) that
\[\left\|B\right\|_{L^{2}_{t}\dot{H}^{1}}\lesssim\ \ \mathcal{E}_{0}+c^{-\frac{3}{2}} \mathcal{H}(t)+c^{-2}(1+\mathcal{E}_{0})\mathcal{H}^{2}(t)+c^{-2}\mathcal{H}^ {3}(t). \tag{5.19}\]
We now turn our attention to the bounds on \(\Gamma\) given by (5.13) and (5.14). Observing, by a simple interpolation argument, that
\[c^{-1}\left\|E_{0}\right\|_{\dot{H}^{2}}\leq c^{-\frac{1}{2}}\left(c^{-1} \left\|E_{0}\right\|_{\dot{H}^{\frac{5}{2}}}\right)^{\frac{1}{2}}\left\|E_{0} \right\|_{\dot{H}^{\frac{3}{2}}}^{\frac{1}{2}}\leq c^{-\frac{1}{2}}\mathcal{H}(0 )\leq c^{-\frac{1}{2}}\mathcal{H}(t),\]
it follows, by incorporating (5.16) into (5.13), that
\[\left\|\Gamma\right\|_{L^{2}_{t}\dot{H}^{1}}\lesssim\left\|\Gamma_{0}\right\|_{L ^{2}}+c^{-\frac{1}{2}}\mathcal{H}(t). \tag{5.20}\]
As for the \(L^{\infty}_{t}L^{3}\) bound on \(\Gamma\), we begin by deducing from (5.9) that
\[\left\|E\right\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1}}\lesssim\mathcal{H}(0)+ c^{-\frac{1}{2}}\mathcal{H}^{2}(t), \tag{5.21}\]
which, in view of (5.14), yields that
\[\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}}\lesssim\mathcal{H}(0)+c^{-\frac{1 }{2}}\mathcal{H}^{2}(t). \tag{5.22}\]
Now, we establish a control of velocity fields in terms of the functional \(\mathcal{H}(t)\). To that end, employing (5.16) and the simple fact that
\[\mathcal{E}_{0}\lesssim\mathcal{E}_{t}\leq c^{\frac{1}{2}}\mathcal{H}(t), \quad\text{for all }c\geq 1,\]
one deduces that
\[\left\|B\right\|_{L^{\infty}_{t}H^{2}}\leq\mathcal{E}_{0}+c^{\frac{1}{2}} \mathcal{H}(t)\leq 2c^{\frac{1}{2}}\mathcal{H}(t).\]
Hence, (5.17) provides us with the bound
\[\left\|\frac{1}{c}\partial_{t}E\right\|_{L^{2}_{t}\dot{H}^{\frac{1}{2}}} \left\|B\right\|_{L^{\infty}_{t}H^{2}}\lesssim\left(c^{-\frac{1}{2}}\mathcal{ H}^{2}(t)+c^{-1}\mathcal{H}^{3}(t)+c^{-\frac{3}{2}}\mathcal{H}^{4}(t)\right)e^{ \mathcal{E}_{0}^{2}}. \tag{5.23}\]
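Indeed, the powers of \(c\) and \(\mathcal{H}(t)\) in (5.23) come from multiplying the right-hand side of (5.17) by \(c^{\frac{1}{2}}\mathcal{H}(t)\) (up to constants):
\[\Big{(}c^{-1}\mathcal{H}(t)+c^{-\frac{3}{2}}(1+\mathcal{E}_{0})\mathcal{H}^{2}(t)+c^{-2}\mathcal{H}^{3}(t)\Big{)}c^{\frac{1}{2}}\mathcal{H}(t)=c^{-\frac{1}{2}}\mathcal{H}^{2}(t)+c^{-1}(1+\mathcal{E}_{0})\mathcal{H}^{3}(t)+c^{-\frac{3}{2}}\mathcal{H}^{4}(t),\]
where the harmless factor \(1+\mathcal{E}_{0}\) is absorbed into \(e^{\mathcal{E}_{0}^{2}}\).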
On the other hand, combining (5.19), (5.20) and (5.22) yields, for any \(c\geq 1\), that
\[\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}} \left\|(B,\Gamma)\right\|_{L^{2}_{t}\dot{H}^{1}}\] \[\lesssim\left(\mathcal{H}(0)+c^{-\frac{1}{2}}\mathcal{H}^{2}(t) \right)\left(\left\|\Gamma_{0}\right\|_{L^{2}}+\mathcal{E}_{0}+c^{-\frac{1}{2} }\mathcal{H}(t)+c^{-2}(1+\mathcal{E}_{0})\mathcal{H}^{2}(t)+c^{-2}\mathcal{H} ^{3}(t)\right)\] \[\lesssim\mathcal{H}(0)\left(\left\|\Gamma_{0}\right\|_{L^{2}}+ \mathcal{E}_{0}\right)+c^{-\frac{1}{2}}\mathcal{H}^{2}(t)\left(\left\|\Gamma_ {0}\right\|_{L^{2}}+\mathcal{E}_{0}\right)\] \[\quad+\mathcal{H}(0)(1+\mathcal{E}_{0})\Bigg{(}c^{-\frac{1}{2}} \mathcal{H}(t)+c^{-2}\mathcal{H}^{2}(t)+c^{-2}\mathcal{H}^{3}(t)\Bigg{)}\] \[\quad+(1+\mathcal{E}_{0})\left(c^{-1}\mathcal{H}^{3}(t)+c^{-\frac {5}{2}}\mathcal{H}^{4}(t)+c^{-\frac{5}{2}}\mathcal{H}^{5}(t)\right).\]
Accordingly, by employing (5.15) and \(c\geq 1\), again, we find that
\[\begin{split}\left\|\Gamma\right\|_{L^{\infty}_{t}L^{3}}& \left\|(B,\Gamma)\right\|_{L^{2}_{t}\dot{H}^{1}}\\ &\lesssim\mathcal{H}(0)\left(\left\|\Gamma_{0}\right\|_{L^{2}}+1 \right)e^{\mathcal{E}_{0}^{2}}\\ &\quad+\underbrace{\left(\left\|\Gamma_{0}\right\|_{L^{2}}+1 \right)\left(c^{-\frac{1}{2}}\mathcal{H}^{2}(t)+c^{-1}\mathcal{H}^{3}(t)+c^{- \frac{3}{2}}\mathcal{H}^{4}(t)+c^{-\frac{5}{2}}\mathcal{H}^{5}(t)\right)e^{C \mathcal{E}_{0}^{2}}}_{\stackrel{{\text{def}}}{{=}}P_{*}( \mathcal{H}(t))}.\end{split} \tag{5.24}\]
Consequently, by incorporating (5.23) and (5.24) into (5.2), we end up with
\[\left\|(\omega,\Omega)\right\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}} \lesssim\mathcal{H}(0)\left(\left\|\Gamma_{0}\right\|_{L^{2}}+1\right)e^{C \mathcal{E}_{0}^{2}}+P_{*}(\mathcal{H}(t)), \tag{5.25}\]
where the value of the constant \(C>0\) can be adapted to guarantee the validity of the estimate.
In the next step, we gather the high and low-frequency estimates of electromagnetic fields. To that end, on the one hand, note that (5.4) yields
\[c^{-1}\left\|(E,B)\right\|_{\widetilde{L}^{\infty}_{t}\dot{B}^{\frac{5}{2}}_{2, 1,>}}+\left\|(E,B)\right\|_{\widetilde{L}^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,>}} \lesssim\mathcal{H}(0)+P_{*}(\mathcal{H}(t)). \tag{5.26}\]
On the other hand, (5.8) entails that
\[\|(E,B)\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}}+c\,\|E\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}} +\|B\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\] \[\lesssim\mathcal{H}(0)+c^{-\frac{1}{2}}\mathcal{H}^{2}(t)+\|\omega\|_{L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}}\,\|B\|_{L^{2}_{t}\dot{H}^{1}}^{\frac{1}{3}}\,\mathcal{H}^{\frac{2}{3}}(t).\]
Therefore, we employ (5.25) to obtain that
\[\|(E,B)\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}} +c\,\|E\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}+\|B\|_{L^{2}_{t}\dot{B} ^{\frac{5}{2}}_{2,1,<}}\] \[\lesssim\mathcal{H}(0)+P_{*}(\mathcal{H}(t))+\left(\mathcal{H}( 0)\left(\|\Gamma_{0}\|_{L^{2}}+1\right)e^{C\mathcal{E}_{0}^{2}}+P_{*}( \mathcal{H}(t))\right)\|B\|_{L^{2}_{t}\dot{H}^{1}}^{\frac{1}{3}}\,\mathcal{H}^ {\frac{2}{3}}(t)\] \[\lesssim\mathcal{H}(0)+(\mathcal{H}(t)+1)P_{*}(\mathcal{H}(t))+ \mathcal{H}(0)\left(\|\Gamma_{0}\|_{L^{2}}+1\right)e^{C\mathcal{E}_{0}^{2}}\, \|B\|_{L^{2}_{t}\dot{H}^{1}}^{\frac{1}{3}}\,\mathcal{H}^{\frac{2}{3}}(t),\]
which implies, for any \(\lambda>0\), that
\[\|(E,B)\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}} +c\,\|E\|_{L^{2}_{t}\dot{H}^{\frac{3}{2}}}+\|B\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\] \[\lesssim\mathcal{H}(0)+(\mathcal{H}(t)+1)P_{*}(\mathcal{H}(t))+\lambda^{-2}\mathcal{H}^{3}(0)\left(\|\Gamma_{0}\|_{L^{2}}+1\right)^{3}e^{C\mathcal{E}_{0}^{2}}\,\|B\|_{L^{2}_{t}\dot{H}^{1}}+\lambda\mathcal{H}(t).\]
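The last step above is an instance of Young's inequality: for any \(\lambda>0\),
\[XY\leq\frac{\lambda^{-2}}{3}X^{3}+\frac{2\lambda}{3}Y^{\frac{3}{2}},\]
applied with \(X=\mathcal{H}(0)\left(\|\Gamma_{0}\|_{L^{2}}+1\right)e^{C\mathcal{E}_{0}^{2}}\|B\|_{L^{2}_{t}\dot{H}^{1}}^{\frac{1}{3}}\) and \(Y=\mathcal{H}^{\frac{2}{3}}(t)\), up to harmlessly enlarging the constant \(C\) in the exponential.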
After that, we employ (5.19) which leads to the control
\[\|(E,B)\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}}+c\,\|E\|_{L^{2}_ {t}\dot{H}^{\frac{3}{2}}}+\|B\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\] \[\lesssim\mathcal{H}(0)+(\mathcal{H}(t)+1)P_{*}(\mathcal{H}(t))\] \[\quad+\lambda^{-2}\mathcal{H}^{3}(0)\left(\|\Gamma_{0}\|_{L^{2}}+ 1\right)^{3}\left(1+c^{-\frac{3}{2}}\mathcal{H}(t)+c^{-2}\mathcal{H}^{2}(t)+c ^{-2}\mathcal{H}^{3}(t)\right)e^{C\mathcal{E}_{0}^{2}}+\lambda\mathcal{H}(t).\]
At last, in view of (5.15), it follows that
\[\|(E,B)\|_{L^{\infty}_{t}\dot{H}^{\frac{3}{2}}}+c\,\|E\|_{L^{2}_{ t}\dot{H}^{\frac{3}{2}}}+\|B\|_{L^{2}_{t}\dot{B}^{\frac{5}{2}}_{2,1,<}}\] \[\lesssim\mathcal{H}(0)+(\mathcal{H}(t)+1)P_{*}(\mathcal{H}(t))+ \lambda^{-2}\mathcal{H}^{3}(0)\left(\|\Gamma_{0}\|_{L^{2}}+1\right)^{3}e^{C \mathcal{E}_{0}^{2}}\] \[\qquad+\lambda^{-2}\left(\|\Gamma_{0}\|_{L^{2}}+1\right)^{3} \left(c^{-\frac{3}{2}}\mathcal{H}^{4}(t)+c^{-2}\mathcal{H}^{5}(t)+c^{-2} \mathcal{H}^{6}(t)\right)e^{C\mathcal{E}_{0}^{2}}+\lambda\mathcal{H}(t). \tag{5.27}\]
As for the remaining piece involving \(\nabla\times B-j\) which is required to construct the functional \(\mathcal{H}(t)\), it is dealt with by recasting the bound from Proposition 4.1, for \(p=\infty\) and \(s=\frac{1}{2}\), combined with (5.16), to find that
\[\|\nabla\times B-j\|_{L^{\infty}_{t}\dot{H}^{\frac{1}{2}}}\lesssim\mathcal{H} (0)+c^{-\frac{1}{2}}\mathcal{H}^{2}(t)+c^{-1}\mathcal{H}^{3}(t),\]
whereby we obtain that
\[\|\nabla\times B-j\|_{L^{\infty}_{t}\dot{H}^{\frac{1}{2}}}\lesssim\mathcal{H} (0)+P_{*}(\mathcal{H}(t)). \tag{5.28}\]
All in all, gathering the bounds (5.18), (5.21), (5.22), (5.25), (5.26), (5.27) and (5.28) to construct the functional \(\mathcal{H}(t)\) and choosing \(\lambda\) small enough in such a way that the term \(\lambda\mathcal{H}(t)\) can be absorbed by the left-hand side of the final estimate, we end up with the bound
\[\mathcal{H}(t)\leq C_{0}+\mathcal{P}(\mathcal{H}(t)),\]
for all \(t\geq 0\), where we set
\[C_{0}\stackrel{{\rm def}}{{=}}C\mathcal{H}(0)\left(1+\mathcal{H} (0)\right)^{2}\left(1+\|\Gamma_{0}\|_{L^{2}}\right)^{3}e^{C\mathcal{E}_{0}^{2}}, \tag{5.29}\]
and
\[\mathcal{P}(\mathcal{H}(t))\stackrel{{\rm def}}{{=}}C\left(\| \Gamma_{0}\|_{L^{2}}+1\right)^{3}\left(c^{-\frac{1}{2}}\mathcal{H}^{2}(t)+c^{- \frac{1}{2}}\mathcal{H}^{3}(t)+c^{-1}\mathcal{H}^{4}(t)+c^{-\frac{3}{2}} \mathcal{H}^{5}(t)+c^{-2}\mathcal{H}^{6}(t)\right)e^{C\mathcal{E}_{0}^{2}}.\]
We recall that the (possibly large) constant \(C>0\) is universal. From now on, it is fixed.
The completion of the proof hinges now on a direct application of the following simple lemma.
**Lemma 5.2**.: _Let \(t\mapsto x(t)\) be a non-negative continuous function defined for all \(t\geq 0\). Consider another function \(t\mapsto F(t)\) which is assumed to be non-negative and increasing. Further suppose that there is \(x_{0}>0\) such that_
\[x(0)\leq x_{0},\]
_and, for any \(t\geq 0\), that_
\[x(t)\leq x_{0}+F(x(t)).\]
_If, moreover, \(F\) satisfies the condition that_
\[F(2x_{0})<x_{0},\]
_then, \(x(t)\) enjoys the bound_
\[x(t)<2x_{0},\]
_for all \(t\geq 0\)._
Proof.: Define the set
\[\mathcal{I}\stackrel{{\mathrm{def}}}{{=}}\left\{t\geq 0:x(t)<2x_{0} \right\}.\]
This set is non-empty and open in \([0,\infty)\). In order to prove the desired global bound, we only have to show that \(\mathcal{I}\) is closed, as well.
To that end, let \((t_{n})_{n\in\mathbb{N}}\) be a sequence of elements in \(\mathcal{I}\), converging to some limit point \(t\in[0,\infty)\). Since \(t_{n}\in\mathcal{I}\), we deduce, for all \(n\in\mathbb{N}\), that
\[x(t_{n})\leq x_{0}+F(x(t_{n}))\leq x_{0}+F(2x_{0})<2x_{0}.\]
By continuity of \(t\mapsto x(t)\), taking \(n\to\infty\) in the foregoing inequalities yields that \(t\in\mathcal{I}\), thereby showing that \(\mathcal{I}\) is closed and completing the proof of the lemma.
We are now back to the proof of Theorem 1.1. By applying Lemma 5.2 with \(x(t)=\mathcal{H}(t)\), \(x_{0}=C_{0}\) (note that \(\mathcal{H}(0)\leq C_{0}\), by (5.29)) and \(F=\mathcal{P}\), we arrive at the conclusion, for all \(t\geq 0\), that
\[\mathcal{H}(t)\leq 2C_{0},\]
as soon as
\[\mathcal{P}(2C_{0})<C_{0}. \tag{5.30}\]
In particular, recall that
\[\lim_{c\to\infty}\mathcal{P}(2C_{0})=0,\]
which implies the existence of another constant \(c_{0}>0\), which only depends on the initial data, such that (5.30) is satisfied for all \(c>c_{0}\). It then follows that all the norms involved in the construction of \(\mathcal{H}(t)\) are bounded, uniformly with respect to \(c\in(c_{0},\infty)\).
Finally, in order to complete the justification of all uniform bounds claimed in the statement of Theorem 1.1, we only need to observe that the control of \(j\) in \(L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{\frac{1}{2}}\) follows directly from an application of Proposition 4.1. This completes the proof of Theorem 1.1 in the case of rough profiles.
#### 5.3.2. Regular profiles
We proceed with the completion of the proof of Theorem 1.1 in the case of regular profiles. Specifically, our task now consists in showing that if, furthermore, we assume initially that
\[(E_{0}^{c},B_{0}^{c})\in\dot{B}_{2,1}^{\frac{5}{2}},\]
uniformly in \(c>0\), then the regularity of \((E,B)\) in \(\dot{B}_{2,1}^{\frac{5}{2}}\) is propagated for all times \(t>0\), uniformly in \(c\in(c_{0},\infty)\). Note that this cannot be done by a direct application of the energy estimates from Lemma 3.6. Instead, we need to exploit the techniques from Lemmas 3.4 and 3.5.
Again, for simplicity, we will henceforth omit the index "\(c\)" referring to the dependence of the solution on the speed of light. Also, let us point out that all the Lebesgue spaces in time are now taken over the whole positive real-line \(\mathbb{R}^{+}\). Moreover, the constant \(C_{0}\) defined in (5.29) will be allowed to change from one line to another, as long as that change only involves norms of the initial data which remain uniformly bounded in \(c\).
_Preliminary bounds on magnetic fields._ The control of high frequencies is deduced, again, from Lemma 3.4 with a suitable choice of parameters. To see that, we proceed with a bootstrap argument by first recasting the bound
\[\|(E,B)\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,>}^{\frac{5}{2}}}\lesssim c^{- \frac{1}{2}}\left\|(E_{0},B_{0})\right\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}+\|u \|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(\|B\|_{\widetilde {L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}}+\|B\|_{L_{t}^{2}\dot{B}_{2,1,<}^{ \frac{5}{2}}}\right),\]
from (3.15), by setting \(q=4\) therein. Therefore, recalling that we have already established the uniform estimates
\[\begin{split} u&\in L_{t}^{\infty}\dot{H}^{1}\cap L _{t}^{2}\dot{H}^{2},\\ B&\in\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5} {2}}\cap L_{t}^{2}\dot{B}_{2,1,<}^{\frac{5}{2}}\cap L_{t}^{\infty}L^{2}\cap L_ {t}^{2}\dot{H}^{1},\\ c^{-1}B&\in L_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5} {2}},\end{split} \tag{5.31}\]
one directly deduces that
\[B\in\widetilde{L}_{t}^{4}\dot{B}_{2,1,>}^{\frac{5}{2}},\]
uniformly with respect to \(c\in(c_{0},\infty)\).
Now, we take care of the low frequencies by first applying the last estimate from Lemma 3.3 with values
\[s=2,\quad m=4\quad\text{and}\quad q=1,\]
to find that
\[\|B\|_{L_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim\|(E_{0},B_{0})\|_{\dot{ B}_{2,1}^{2}}+\|P(u\times B)\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}.\]
Therefore, by further employing classical product laws, which are contained in Lemma 2.3, and interpolation inequalities, we obtain that
\[\|B\|_{L_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}} \lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{4}\dot {B}_{2,1}^{\frac{3}{2}}}\,\|B\|_{L_{t}^{\infty}\dot{B}_{2,1}^{\frac{3}{2}}}\] \[\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{\infty }\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\,\|B\|_{L_{t}^{\infty}\dot{B}_{2,1}^{ \frac{3}{2}}}\] \[\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{\infty }\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(\,\|B\|_{L_{t}^{\infty}\dot{B}_{2, 1,>}^{\frac{3}{2}}}+\|B\|_{L_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}}\,\right)\] \[\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{\infty }\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\left(c^{-1}\,\|B\|_{L_{t}^{\infty}\dot{B} _{2,1,>}^{\frac{5}{2}}}+\|B\|_{L_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}} \right).\]
Thus, in view of the uniform bounds recalled in (5.31), in order to control low frequencies, it only remains to show that
\[B\in L_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}.\]
Instead, we are going to prove the slightly better bound
\[B\in\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}. \tag{5.32}\]
To that end, we employ the low-frequency estimates from Lemma 3.2 with the values
\[q=\infty\quad\text{and}\quad r=\widetilde{r}=\widetilde{q}=2\]
to deduce that
\[\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}}\lesssim\|(E_{0 },B_{0})\|_{\dot{B}_{2,1}^{\frac{3}{2}}}+\|P(u\times B)\|_{\widetilde{L}_{t}^{ 2}\dot{B}_{2,1}^{\frac{3}{2}}}\,.\]
Therefore, by further applying the paraproduct estimate (2.3), we obtain that
\[\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{3}{2}}}\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{3}{2}}}+\|u\|_{L_{t}^{\infty}\dot{B}_{2,\infty}^{1}}\,\|B\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{2}}\,.\]
Accordingly, we conclude that the bound (5.32) holds uniformly, by an application of the embeddings
\[L_{t}^{2}\dot{B}_{2,\infty}^{1}\cap L_{t}^{2}\dot{B}_{2,\infty}^{\frac{5}{2}} \hookrightarrow\widetilde{L}_{t}^{2}\dot{B}_{2,\infty}^{1}\cap\widetilde{L}_{ t}^{2}\dot{B}_{2,\infty}^{\frac{5}{2}}\hookrightarrow\widetilde{L}_{t}^{2}\dot{B}_{2,1}^{2}\]
combined with the bounds (5.31).
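The second embedding above amounts to the interpolation inequality
\[\left\|f\right\|_{\dot{B}_{2,1}^{2}}\lesssim\left\|f\right\|_{\dot{B}_{2,\infty}^{1}}^{\frac{1}{3}}\left\|f\right\|_{\dot{B}_{2,\infty}^{\frac{5}{2}}}^{\frac{2}{3}},\]
since \(2=\frac{1}{3}\cdot 1+\frac{2}{3}\cdot\frac{5}{2}\), applied to the sequence \(\big{(}\|\Delta_{j}B\|_{L^{2}_{t}L^{2}}\big{)}_{j\in\mathbb{Z}}\) in the \(\widetilde{L}^{2}_{t}\) framework, while the first embedding is a direct consequence of Minkowski's inequality.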
All in all, gathering the preceding high and low-frequency estimates, we have shown that
\[B\in L_{t}^{4}\dot{B}_{2,1}^{\frac{5}{2}}, \tag{5.33}\]
uniformly in \(c\).
Next, we improve the previous bound to obtain
\[B\in\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{\frac{5}{2}}. \tag{5.34}\]
Note that the same bound on high frequencies is already established at the start of this step and, thus, we only need to take care of the corresponding estimate on the remaining frequencies. To that end, we first apply the low-frequency estimates from Lemma 3.2 with values
\[q=4,\quad\widetilde{q}=\frac{4}{3}\quad\text{and}\quad r=\widetilde{r}=2\]
to obtain that
\[\|B\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim\|(E_{0},B _{0})\|_{\dot{B}_{2,1}^{2}}+\|P(u\times B)\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\,.\]
Therefore, by further employing (2.3) to estimate the product above, we find that
\[\|B\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim\|(E_{0},B _{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{\infty}\dot{B}_{2,\infty}^{1}}\,\|B \|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{2}}\,.\]
Thus, one deduces, thanks to the embeddings
\[L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}\cap L_{t}^{4}\dot{B}_{2,1}^{\frac{5}{2}} \hookrightarrow\widetilde{L}_{t}^{4}\dot{B}_{2,\infty}^{\frac{3}{2}}\cap \widetilde{L}_{t}^{4}\dot{B}_{2,\infty}^{\frac{5}{2}}\hookrightarrow\widetilde{L }_{t}^{4}\dot{B}_{2,1}^{2},\]
that
\[\|B\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim\|(E_{0},B _{0})\|_{\dot{B}_{2,1}^{2}}+\|u\|_{L_{t}^{\infty}\dot{H}^{1}}\,\|B\|_{L_{t}^{4} \dot{B}_{2,1}^{\frac{3}{2}}\cap L_{t}^{4}\dot{B}_{2,1}^{\frac{5}{2}}}\,.\]
Hence, due to the bounds (5.31) and (5.33), we see that the right-hand side in the preceding estimate is finite as soon as \(B\in L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}\). This bound turns out to be a consequence of the embeddings
\[L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}\cap L_{t}^{4}\dot{B}_{2,1}^{\frac{ 5}{2}}\hookrightarrow L_{t}^{4}\dot{B}_{2,1}^{\frac{1}{2}}\cap L_{t}^{4}\dot{B }_{2,1}^{\frac{5}{2}}\hookrightarrow L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}},\]
whereby \(B\in\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}.\) In conclusion, we have shown that (5.34) holds uniformly with respect to the speed of light \(c\in(c_{0},\infty)\).
_Propagation of initial regularity._ We are now in a position to show the propagation of the \(\dot{B}_{2,1}^{\frac{5}{2}}\) regularity of the electromagnetic field, uniformly with respect to the speed of light.
As before, we deal first with high frequencies by recasting the second estimate from Lemma 3.4, with the values
\[q=\infty,\quad p=4,\]
to find that
\[\|(E,B)\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,>}^{\frac{5}{2}}}\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1,>}^{\frac{5}{2}}}+\|u\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}\,\|B\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{\frac{5}{2}}}\,.\]
Thus, it follows that
\[(E,B)\in\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,>}^{\frac{5}{2}},\]
uniformly in \(c\in(c_{0},\infty)\), by virtue of the bounds (5.31) and (5.34).
As for low frequencies, we proceed by applying the corresponding estimate from Lemma 3.2 with the values
\[r=\tilde{r}=2,\quad q=\infty,\quad\tilde{q}=\frac{4}{3},\]
to find that
\[\|E\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2}}} \lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+c^{-\frac {1}{2}}\,\|P(u\times B)\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{\frac{5}{2}}}\] \[\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+\|P(u \times B)\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1,<}^{2}}\,,\]
and
\[\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2}}} \lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+\|P(u \times B)\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,1}^{2}}\,.\]
Therefore, by employing (2.3) to estimate the products above, we obtain, for any \(\varepsilon\in(0,\frac{1}{2})\), that
\[\|(E,B)\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2}}} \lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+\|u\|_{ \widetilde{L}_{t}^{4}\dot{B}_{2,\infty}^{\frac{3}{2}-\varepsilon}}\,\|B\|_{ \widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{2+\varepsilon}}\,.\]
Hence, utilizing the interpolation inequalities
\[\|u\|_{\widetilde{L}_{t}^{4}\dot{B}_{2,\infty}^{\frac{3}{2}-\varepsilon}}\lesssim\|u\|_{L_{t}^{4}\dot{B}_{2,\infty}^{\frac{3}{2}-\varepsilon}}\lesssim\|u\|_{L_{t}^{4}\dot{H}^{\frac{1}{2}}}^{\varepsilon}\,\|u\|_{L_{t}^{4}\dot{H}^{\frac{3}{2}}}^{1-\varepsilon}\lesssim\|u\|_{L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}}^{\varepsilon}\,\|u\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{1-\varepsilon}\]
and
\[\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{2+\varepsilon}} \lesssim\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{2}}^{ \frac{1}{2}+\varepsilon}\,\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,\infty }^{\frac{5}{2}}}^{\frac{1}{2}-\varepsilon}\] \[\lesssim\|B\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1}^{2}}^{ \frac{1}{2}+\varepsilon}\,\|B\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H} ^{2}}^{\frac{1}{2}-\varepsilon}\,,\]
we deduce, for any \(\lambda\in(0,1)\), that
\[\|(E,B)\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2} }} \lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+\lambda\,\|B\|_{ \widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2}}}\] \[\quad+C_{\lambda}\left(\|u\|_{L_{t}^{\infty}L^{2}\cap L_{t}^{2} \dot{H}^{1}}\,\|u\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{1- \varepsilon}\right)^{\frac{1}{2}-\varepsilon}\|B\|_{L_{t}^{\infty}\dot{H}^{ \frac{3}{2}}}\,.\]
Finally, choosing \(\lambda\) small enough, we arrive at the conclusion that
\[\|(E,B)\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,<}^{\frac{5}{2}}}\lesssim\|(E_{0},B_{0})\|_{\dot{B}_{2,1}^{\frac{5}{2}}}+\left(\|u\|_{L_{t}^{\infty}L^{2}\cap L_{t}^{2}\dot{H}^{1}}^{\varepsilon}\,\|u\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{1-\varepsilon}\right)^{\frac{1}{2}-\varepsilon}\|B\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}}\,.\]
Again, due to the bounds (5.31), the right-hand side above is finite, uniformly with respect to \(c\in(c_{0},\infty)\), thereby yielding the desired control for the low frequencies of \(E\) and \(B\). This completes the proof of Theorem 1.1.
## 6. Convergence and proof of Theorem 1.3
Let \((u^{c},E^{c},B^{c})_{c>c_{0}}\) and \((u,B)\) be the solutions of (1.1) and (MHD), given by Theorem 1.1 and Corollary 1.2, respectively. We further introduce the fluctuations
\[\widetilde{u}\stackrel{{\rm def}}{{=}}u^{c}-u,\qquad\widetilde{ B}\stackrel{{\rm def}}{{=}}B^{c}-B,\]
with a corresponding similar notation \(\widetilde{u}_{0}\), \(\widetilde{B}_{0}\) for their initial data, and the time-dependent function
\[f(t)\stackrel{{\rm def}}{{=}}\left\|\nabla\Big{(}u(t),u^{c}(t), B(t),B^{c}(t)\Big{)}\right\|_{L^{3}},\]
for all \(t>0\). In view of Theorem 1.1 and Corollary 1.2, observe that \(f\in L^{2}(\mathbb{R}^{+})\) with
\[\int_{0}^{\infty}f^{2}(\tau)d\tau\leq C_{0}, \tag{6.1}\]
uniformly in \(c\in(c_{0},\infty)\), for some constant \(C_{0}>0\) depending only on the initial data.
We proceed now in four steps:
1. The \(L^{2}\) energy estimate.
2. An interpolation argument.
3. Convergence of the velocity in the endpoint space \(\dot{H}^{1}\).
4. Convergence of the magnetic field in the endpoint space \(\dot{H}^{\frac{3}{2}}\).
### The \(L^{2}\) energy estimate
First, it is readily seen that the fluctuations \(\widetilde{u}\), \(\widetilde{B}\) are solutions of the perturbed MHD equations
\[\left\{\begin{aligned} &\partial_{t}\widetilde{u}+u\cdot\nabla \widetilde{u}-\nu\Delta\widetilde{u}+\nabla\widetilde{p}=-\widetilde{u}\cdot \nabla u^{c}+\widetilde{B}\cdot\nabla B+B^{c}\cdot\nabla\widetilde{B}+\frac{1}{ c}\partial_{t}E^{c}\times B^{c},\\ &\partial_{t}\widetilde{B}+u\cdot\nabla\widetilde{B}-\frac{1}{ \sigma}\Delta\widetilde{B}=-\widetilde{u}\cdot\nabla B^{c}+\widetilde{B} \cdot\nabla u+B^{c}\cdot\nabla\widetilde{u}+\nabla\times\left(\frac{1}{c} \partial_{t}E^{c}\right).\end{aligned}\right. \tag{6.2}\]
Therefore, by virtue of the identities
\[\int_{\mathbb{R}^{3}}(u\cdot\nabla\widetilde{u})\cdot\widetilde{u}\;dx=\int_{ \mathbb{R}^{3}}(u\cdot\nabla\widetilde{B})\cdot\widetilde{B}\;dx=0\]
and
\[\int_{\mathbb{R}^{3}}(B^{c}\cdot\nabla\widetilde{B})\cdot\widetilde{u}\;dx+ \int_{\mathbb{R}^{3}}(B^{c}\cdot\nabla\widetilde{u})\cdot\widetilde{B}\;dx=0,\]
performing an \(L^{2}\) energy estimate yields, for all \(t>0\), that
\[\|(\widetilde{u},\widetilde{B})(t)\|_{L^{2}}^{2}+\int_{0}^{t}\|( \widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{1}}^{2}d\tau \lesssim\|(\widetilde{u}_{0},\widetilde{B}_{0})\|_{L^{2}}^{2}+ \int_{0}^{t}\|(\widetilde{u},\widetilde{B})(\tau)\|_{L^{3}}^{2}f(\tau)d\tau\] \[\quad+\left\|\frac{1}{c}\partial_{t}E^{c}\times B^{c}\right\|_{L^ {2}_{t}\dot{H}^{-1}}^{2}+\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L^{2}_{ t}L^{2}}^{2}\] \[\lesssim\|(\widetilde{u}_{0},\widetilde{B}_{0})\|_{L^{2}}^{2}+ \int_{0}^{t}\|(\widetilde{u},\widetilde{B})(\tau)\|_{L^{2}}\|(\widetilde{u}, \widetilde{B})(\tau)\|_{\dot{H}^{1}}f(\tau)d\tau\] \[\quad+\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L^{2}_{t}\dot{ H}^{\frac{1}{2}}}^{2}\|B^{c}\|_{L^{2}_{t}L^{2}}^{2}+\left\|\frac{1}{c} \partial_{t}E^{c}\right\|_{L^{2}_{t}L^{2}}^{2}.\]
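For the reader's convenience, we note that both cancellations invoked above follow from an integration by parts together with the divergence-free conditions \(\operatorname{div}u=\operatorname{div}B^{c}=0\): for any divergence-free vector field \(v\) and any sufficiently decaying vector field \(f\),
\[\int_{\mathbb{R}^{3}}(v\cdot\nabla f)\cdot f\;dx=\frac{1}{2}\int_{\mathbb{R}^{3}}v\cdot\nabla|f|^{2}\;dx=-\frac{1}{2}\int_{\mathbb{R}^{3}}(\operatorname{div}v)\,|f|^{2}\;dx=0,\]
while
\[\int_{\mathbb{R}^{3}}(B^{c}\cdot\nabla\widetilde{B})\cdot\widetilde{u}\;dx+\int_{\mathbb{R}^{3}}(B^{c}\cdot\nabla\widetilde{u})\cdot\widetilde{B}\;dx=\int_{\mathbb{R}^{3}}B^{c}\cdot\nabla\big{(}\widetilde{u}\cdot\widetilde{B}\big{)}\;dx=0.\]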
Thus, we find, for any \(\lambda>0\), that
\[\|(\widetilde{u},\widetilde{B})(t)\|_{L^{2}}^{2} +\int_{0}^{t}\|(\widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{1}}^{ 2}d\tau\] \[\lesssim\|(\widetilde{u}_{0},\widetilde{B}_{0})\|_{L^{2}}^{2}+ \lambda^{-1}\int_{0}^{t}\|(\widetilde{u},\widetilde{B})(\tau)\|_{L^{2}}^{2}f^{ 2}(\tau)d\tau+\lambda\int_{0}^{t}\|(\widetilde{u},\widetilde{B})(\tau)\|_{ \dot{H}^{1}}^{2}d\tau\] \[\quad+\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L^{2}_{t}H^{ \frac{1}{2}}}^{2}\|B^{c}\|_{L^{\infty}_{t}L^{2}}^{2}+\left\|\frac{1}{c} \partial_{t}E^{c}\right\|_{L^{2}_{t}L^{2}}^{2}.\]
Hence, choosing \(\lambda\) small enough, utilizing the energy inequality (1.2) and applying Gronwall's lemma, we deduce, for some universal constant \(C>0\), that
\[\|(\widetilde{u},\widetilde{B})(t)\|_{L^{2}}^{2}+\int_{0}^{t}\|( \widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{1}}^{2}d\tau\] \[\lesssim\left(\|(\widetilde{u}_{0},\widetilde{B}_{0})\|_{L^{2}}^{ 2}+(\mathcal{E}_{0}^{2}+1)\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L^{2}_ {t}H^{\frac{1}{2}}}^{2}\right)\exp\left(C\int_{0}^{t}f^{2}(\tau)d\tau\right).\]
Consequently, by virtue of (5.10), (5.11) and (6.1), we end up with
\[\sup_{\tau\in[0,\infty)}\|(\widetilde{u},\widetilde{B})(\tau)\|_{L^{2}}^{2}+ \int_{0}^{\infty}\|(\widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{1}}^{2}d \tau\leq C_{0}\underbrace{\left(\|(\widetilde{u}_{0},\widetilde{B}_{0})\|_{L^ {2}}^{2}+c^{-2}\right)}_{\stackrel{{\text{def}}}{{=}}\Theta_{c}}, \tag{6.3}\]
where \(C_{0}>0\) is another constant which depends only on the size of the initial data. In the sequel, this constant will be allowed to change value as long as it remains independent of \(c\). This establishes the convergence of \(\widetilde{u}\) and \(\widetilde{B}\) to zero in the energy space \(L^{\infty}_{t}L^{2}\cap L^{2}_{t}\dot{H}^{1}\).
### An interpolation argument
The results in this step are standard. Indeed, since both solutions \((u^{c},B^{c})\) and \((u,B)\) belong to the space
\[L^{\infty}(\mathbb{R}^{+};\dot{H}^{1}\times\dot{H}^{\frac{3}{2}})\cap L^{2}( \mathbb{R}^{+};\dot{H}^{2}\times\dot{H}^{\frac{5}{2}}),\]
uniformly in \(c\), it follows, by interpolation with (6.3), for any \(s\in[0,1]\), that
\[\sup_{\tau\in[0,\infty)}\|(\widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{s} \times\dot{H}^{\frac{3s}{2}}}^{2}+\int_{0}^{\infty}\|(\widetilde{u},\widetilde {B})(\tau)\|_{\dot{H}^{s+1}\times\dot{H}^{\frac{3s}{2}+1}}^{2}d\tau\leq C_{0} \Theta_{c}^{1-s},\]
whereby we deduce, for all \(s\in[0,1)\), that
\[\lim_{c\to\infty}\left(\sup_{\tau\in[0,\infty)}\|(\widetilde{u},\widetilde{B} )(\tau)\|_{\dot{H}^{s}\times\dot{H}^{\frac{3s}{2}}}^{2}+\int_{0}^{\infty}\|( \widetilde{u},\widetilde{B})(\tau)\|_{\dot{H}^{s+1}\times\dot{H}^{\frac{3s}{2} +1}}^{2}d\tau\right)=0.\]
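More precisely, the interpolation leading to the bound \(C_{0}\Theta_{c}^{1-s}\) above relies only on the elementary inequalities
\[\|\widetilde{u}(\tau)\|_{\dot{H}^{s}}\leq\|\widetilde{u}(\tau)\|_{L^{2}}^{1-s}\,\|\widetilde{u}(\tau)\|_{\dot{H}^{1}}^{s}\qquad\text{and}\qquad\|\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3s}{2}}}\leq\|\widetilde{B}(\tau)\|_{L^{2}}^{1-s}\,\|\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3}{2}}}^{s},\]
combined with (6.3) and the uniform bounds recalled above; the time-integrated terms are treated in the same way, by interpolating \(\dot{H}^{s+1}\) between \(\dot{H}^{1}\) and \(\dot{H}^{2}\) (and \(\dot{H}^{\frac{3s}{2}+1}\) between \(\dot{H}^{1}\) and \(\dot{H}^{\frac{5}{2}}\)) and applying Hölder's inequality in time.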
We are now left with proving the convergence of solutions in the spaces corresponding to the endpoint case \(s=1\), above.
### Convergence of the velocity in the endpoint space \(\dot{H}^{1}\)
With the bounds established in the previous step, we are now going to show that the convergence of the velocity in the endpoint case \(s=1\) is a straightforward consequence of the fact that the source term \(\frac{1}{c}\partial_{t}E^{c}\times B^{c}\) in the momentum equation vanishes in \(L^{2}_{t}L^{2}\), as \(c\to\infty\).
To that end, performing an \(\dot{H}^{1}\) energy estimate for the first equation in (6.2), one sees that
\[\sup_{\tau\in[0,\infty)}\|\widetilde{u}(\tau)\|_{\dot{H}^{1}}^{2}+\int_{0}^{ \infty}\|\widetilde{u}(\tau)\|_{\dot{H}^{2}}^{2}d\tau\lesssim\|\widetilde{u}_{0 }\|_{\dot{H}^{1}}^{2}+\|F\|_{L^{2}_{t}L^{2}}^{2}+\left\|\frac{1}{c}\partial_{t }E^{c}\times B^{c}\right\|_{L^{2}_{t}L^{2}}^{2},\]
where we denote
\[F\stackrel{{\rm def}}{{=}}-u\cdot\nabla\widetilde{u}-\widetilde{u} \cdot\nabla u^{c}+\widetilde{B}\cdot\nabla B+B^{c}\cdot\nabla\widetilde{B}.\]
Moreover, by a direct application of standard paraproduct laws and interpolation inequalities, we infer that
\[\|F\|_{L_{t}^{2}L^{2}} \lesssim\|(u,u^{c})\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\| \widetilde{u}\|_{L_{t}^{4}\dot{H}^{1}}+\|(B,B^{c})\|_{L_{t}^{4}\dot{H}^{1}}\| \widetilde{B}\|_{L_{t}^{4}\dot{B}_{2,1}^{\frac{3}{2}}}\] \[\lesssim\|(u,u^{c})\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2} \dot{H}^{2}}\left\|\widetilde{u}\right\|_{L_{t}^{\infty}\dot{H}^{\frac{1}{2}} \cap L_{t}^{2}\dot{H}^{\frac{3}{2}}}+\|(B,B^{c})\|_{L_{t}^{\infty}\dot{H}^{ \frac{1}{2}}\cap L_{t}^{2}\dot{H}^{\frac{3}{2}}}\left\|\widetilde{B}\right\| _{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}},\]
and
\[\left\|\frac{1}{c}\partial_{t}E^{c}\times B^{c}\right\|_{L_{t}^{ 2}L^{2}} \lesssim\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L_{t}^{2} \dot{H}^{\frac{1}{2}}}\left\|B^{c}\right\|_{L_{t}^{\infty}\dot{H}^{1}}\] \[\lesssim\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L_{t}^{2} \dot{H}^{\frac{1}{2}}}\left\|B^{c}\right\|_{L_{t}^{\infty}H^{\frac{3}{2}}}\] \[\leq C_{0}\left\|\frac{1}{c}\partial_{t}E^{c}\right\|_{L_{t}^{2} \dot{H}^{\frac{1}{2}}}.\]
Therefore, by virtue of (5.11) and the bounds in Theorem 1.1 and Corollary 1.2, we end up with
\[\sup_{\tau\in[0,\infty)}\|\widetilde{u}(\tau)\|_{\dot{H}^{1}}^{2}+\int_{0}^{\infty}\|\widetilde{u}(\tau)\|_{\dot{H}^{2}}^{2}d\tau\lesssim\|\widetilde{u}_{0}\|_{\dot{H}^{1}}^{2}+C_{0}\left(\|\widetilde{u}\|_{L_{t}^{\infty}\dot{H}^{\frac{1}{2}}\cap L_{t}^{2}\dot{H}^{\frac{3}{2}}}^{2}+\|\widetilde{B}\|_{L_{t}^{\infty}\dot{H}^{1}\cap L_{t}^{2}\dot{H}^{2}}^{2}+c^{-2}\right).\]
Consequently, owing to the convergence results from the previous steps, we arrive at the conclusion that
\[\lim_{c\to\infty}\left(\sup_{\tau\in[0,\infty)}\|\widetilde{u}(\tau)\|_{\dot {H}^{1}}^{2}+\int_{0}^{\infty}\|\widetilde{u}(\tau)\|_{\dot{H}^{2}}^{2}d\tau \right)=0,\]
for we are assuming that \(\widetilde{u}_{0}\) vanishes in \(\dot{H}^{1}\).
### Convergence of the magnetic field in the endpoint space \(\dot{H}^{\frac{3}{2}}\)
One could try to mimic the proof from the previous step and perform a \(\dot{H}^{\frac{3}{2}}\) energy estimate for \(\widetilde{B}\). This method would require a decay of \(\frac{1}{c}\partial_{t}E^{c}\) in \(L_{t}^{2}\dot{H}^{\frac{3}{2}}\), which is not an available information here.
Instead, our proof below is inspired from a Compactness Extrapolation Lemma (see [4, Lemma 1.4]), which reduces the justification of the convergence in an endpoint setting to the analysis of the evanescence of some high frequencies. To see that, we first fix \(\varepsilon\in(0,1)\) and pick any real number \(s\in[0,1-\varepsilon)\). Then, by utilizing the results from Section 6.2, we obtain that
\[\sup_{\tau\in[0,\infty)}\int_{|\xi|<(\Theta_{c})^{\frac{\varepsilon +s-1}{3(1-s)}}}\left|\xi\right|^{3}\left|\mathcal{F}(\widetilde{B})(\tau,\xi) \right|^{2}d\xi+ \int_{0}^{\infty}\int_{|\xi|<(\Theta_{c})^{\frac{\varepsilon+s-1}{ 3(1-s)}}}\left|\xi\right|^{5}\left|\mathcal{F}(\widetilde{B})(\tau,\xi) \right|^{2}d\xi d\tau\] \[\leq\Theta_{c}^{\varepsilon+s-1}\left(\|\widetilde{B}\|_{L_{t}^ {\infty}\dot{H}^{\frac{3s}{2}}}^{2}+\|\widetilde{B}\|_{L_{t}^{2}\dot{H}^{ \frac{3s}{2}+1}}^{2}\right)\] \[\leq C_{0}\Theta_{c}^{\varepsilon}.\]
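In the first inequality above, we have simply written \(|\xi|^{3}=|\xi|^{3(1-s)}\,|\xi|^{3s}\) and used the localization \(|\xi|<R\), with the shorthand \(R\stackrel{{\rm def}}{{=}}\Theta_{c}^{\frac{\varepsilon+s-1}{3(1-s)}}\) introduced only for this remark, so that
\[\int_{|\xi|<R}|\xi|^{3}\left|\mathcal{F}(\widetilde{B})(\tau,\xi)\right|^{2}d\xi\leq R^{3(1-s)}\,\|\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3s}{2}}}^{2}=\Theta_{c}^{\varepsilon+s-1}\,\|\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3s}{2}}}^{2},\]
and similarly for the term involving \(|\xi|^{5}\), with \(\dot{H}^{\frac{3s}{2}+1}\) in place of \(\dot{H}^{\frac{3s}{2}}\).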
For simplicity, we are now going to take \(s=0\) and \(\varepsilon=\frac{1}{4}\), thereby establishing that
\[\lim_{c\to\infty}\left(\sup_{\tau\in[0,\infty)}\|\mathds{1}_{\{|D|<\Theta_{c} ^{-\frac{1}{4}}\}}\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3}{2}}}^{2}+\int_{0}^{ \infty}\|\mathds{1}_{\{|D|<\Theta_{c}^{-\frac{1}{4}}\}}\widetilde{B}(\tau)\|_ {\dot{H}^{\frac{3}{2}}}^{2}d\tau\right)=0,\]
which takes care of frequencies in \(\{|\xi|<\Theta_{c}^{-\frac{1}{4}}\}\).
We now deal with the high frequencies in \(\{|\xi|\geq\Theta_{c}^{-\frac{1}{4}}\}\). To that end, we first recall, by Corollary 1.2 and (1.11), that the bound
\[B\in\widetilde{L}^{\infty}(\mathbb{R}^{+};\dot{B}_{2,2}^{\frac{3}{2}})\cap L^{2} (\mathbb{R}^{+};\dot{H}^{\frac{5}{2}})\]
holds uniformly with respect to \(c\in(c_{0},\infty)\). Therefore, we find that
\[\lim_{c\to\infty} \Big{(}\sup_{\tau\in[0,\infty)}\|\mathds{1}_{\{|D|\geq\Theta_{c}^ {-\frac{1}{4}}\}}\widetilde{B}(\tau)\|_{\dot{H}^{\frac{3}{2}}}^{2}+\int_{0}^{ \infty}\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}\widetilde{B}(\tau) \|_{\dot{H}^{\frac{5}{2}}}^{2}d\tau\Big{)}\] \[\leq\lim_{c\to\infty}\Big{(}\sup_{\tau\in[0,\infty)}\|\mathds{1}_ {\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}B^{c}(\tau)\|_{\dot{H}^{\frac{3}{2}}}^{2 }+\int_{0}^{\infty}\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}B^{c}( \tau)\|_{\dot{H}^{\frac{5}{2}}}^{2}d\tau\Big{)}\] \[\quad+\underbrace{\lim_{c\to\infty}\Big{(}\sup_{\tau\in[0,\infty) }\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}B(\tau)\|_{\dot{H}^{\frac{ 3}{2}}}^{2}+\int_{0}^{\infty}\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}} \}}B(\tau)\|_{\dot{H}^{\frac{5}{2}}}^{2}d\tau\Big{)}}_{=0},\]
where we exploited the principle described in (1.12).
The treatment of the terms involving \(B^{c}\) in the right-hand side above requires yet another application of Lemma 3.2. To that end, first observing that
\[\|B^{c}\|_{L_{t}^{\infty}\dot{B}_{2,2,>}^{\frac{3}{2}}} \leq c^{-1}\,\|B^{c}\|_{\widetilde{L}_{t}^{\infty}\dot{B}_{2,1,>}^ {\frac{5}{2}}},\] \[\|B^{c}\|_{L_{t}^{2}\dot{B}_{2,2,>}^{\frac{5}{2}}} \leq\|B^{c}\|_{\widetilde{L}_{t}^{2}\dot{B}_{2,1,>}^{\frac{5}{2}}},\]
it is readily seen that the decay in the hyperbolic region, i.e., when frequencies are localized in \(\{|\xi|\gtrsim\sigma c\}\), follows from (5.4) as soon as it is assumed initially that
\[\lim_{c\to\infty}\left(c^{-1}\|(E_{0}^{c},B_{0}^{c})\|_{\dot{B}_{2,1,>}^{\frac {5}{2}}}\right)=0.\]
Now, in order to study the vanishing of the remaining frequencies of \(B^{c}\), i.e., frequencies localized in \(\{\Theta_{c}^{-\frac{1}{4}}\leq|\xi|\lesssim\sigma c\}\), we employ the low-frequency estimate from Lemma 3.2 with the values
\[r=\tilde{r}=\tilde{q}=2\quad\text{and}\quad q\in\{2,\infty\},\]
which leads to
\[\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}B^{c}\|_{L_{t} ^{\infty}\dot{B}_{2,2,<}^{\frac{3}{2}}} +\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}B^{c}\|_{L_{t} ^{2}\dot{B}_{2,2,<}^{\frac{5}{2}}}\] \[\lesssim\left\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}} (E_{0}^{c},B_{0}^{c})\right\|_{\dot{H}^{\frac{3}{2}}}+\left\|\mathds{1}_{\{|D| \geq\Theta_{c}^{-\frac{1}{4}}\}}P(u^{c}\times B^{c})\right\|_{L_{t}^{2}\dot{H} ^{\frac{3}{2}}}.\]
Accordingly, due to the strong convergence of \((E_{0}^{c},B_{0}^{c})_{c>0}\) in \(\dot{H}^{\frac{3}{2}}\), it is then readily seen that the first term in the right-hand side above vanishes as \(c\to\infty\).
As for the second term in the right-hand side, it is controlled first by exploiting the localization in high-frequencies to write that
\[\left\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}P(u^{c}\times B^{c}) \right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}}\leq\Theta_{c}^{\frac{1}{8}}\,\|P(u^ {c}\times B^{c})\|_{L_{t}^{2}\dot{H}^{2}}\,.\]
Therefore, by utilizing the product law (2.3), we arrive at the conclusion that
\[\left\|\mathds{1}_{\{|D|\geq\Theta_{c}^{-\frac{1}{4}}\}}P(u^{c} \times B^{c})\right\|_{L_{t}^{2}\dot{H}^{\frac{3}{2}}} \lesssim\Theta_{c}^{\frac{1}{8}}\,\|u^{c}\|_{L_{t}^{4}\dot{B}_{2, 1}^{\frac{3}{2}}}\,\|B^{c}\|_{L_{t}^{4}\dot{H}^{2}}\] \[\lesssim\Theta_{c}^{\frac{1}{8}}\,\|u^{c}\|_{L_{t}^{4}\dot{B}_{2, 1}^{\frac{3}{2}}}\,\|B^{c}\|_{L_{t}^{\infty}\dot{H}^{\frac{3}{2}}\cap L_{t}^{2} \dot{H}^{\frac{5}{2}}}\,.\]
Hence, due to the bounds from Theorem 1.1 and the fact that
\[\lim_{c\to\infty}\Theta_{c}=0,\]
we deduce that the right-hand side above vanishes in the limit \(c\to\infty\), thereby concluding the proof of Theorem 1.3.
|
2310.00507 | Precision Rydberg State Spectroscopy with Slow Electrons and Proton
Radius Puzzle | The so-called proton radius puzzle (the current discrepancy of proton radii
determined from spectroscopic measurements in ordinary versus muonic hydrogen)
could be addressed via an accurate measurement of the Rydberg constant, because
the proton radius and the Rydberg constant values are linked through
high-precision optical spectroscopy. We argue that, with manageable additional
experimental effort, it might be possible to improve circular Rydberg state
spectroscopy, potentially leading to an important contribution to the
clarification of the puzzle. Our proposal involves circular and near-circular
Rydberg states of hydrogen with principal quantum number around $n=18$, whose
classical velocity on a Bohr orbit is slower than that of the fastest
macroscopic man-made object, the Parker Solar Probe. We obtain improved
estimates for the quality factor of pertinent transitions, and illustrate a few
recent improvements in instrumentation which facilitate pertinent experiments. | Ulrich D. Jentschura, Dylan C. Yost | 2023-09-30T22:06:01Z | http://arxiv.org/abs/2310.00507v1 | # Precision Rydberg State Spectroscopy with Slow Electrons and Proton Radius Puzzle
###### Abstract
The so-called proton radius puzzle (the current discrepancy of proton radii determined from spectroscopic measurements in ordinary versus muonic hydrogen) could be addressed via an accurate measurement of the Rydberg constant, because the proton radius and the Rydberg constant values are linked through high-precision optical spectroscopy. We argue that, with manageable additional experimental effort, it might be possible to improve circular Rydberg state spectroscopy, potentially leading to an important contribution to the clarification of the puzzle. Our proposal involves circular and near-circular Rydberg states of hydrogen with principal quantum number around \(n=18\), whose classical velocity on a Bohr orbit is slower than that of the fastest macroscopic man-made object, the Parker Solar Probe. We obtain improved estimates for the quality factor of pertinent transitions, and illustrate a few recent improvements in instrumentation which facilitate pertinent experiments.
## I Introduction
The Rydberg constant is of consummate importance for our understanding of fundamental physics. Notably, this constant is an important input datum for the calculation of transition frequencies in hydrogen and deuterium (see Table II of Ref. [1] and Refs. [2; 3]). In addition to the Rydberg constant, accurate values of the proton and deuteron radii are also required in order to calculate transition frequencies in hydrogen and deuterium. Conversely, one can infer proton and deuteron radii from precise values of hydrogen and deuterium frequencies (see Refs. [1; 3] and Table 45 of Ref. [2]).
With the advent of muonic hydrogen spectroscopic measurements [4; 5], the CODATA value of the proton radius has shifted from a 2006 value of about \(R_{p}\approx 0.88\,\mathrm{fm}\) to a 2018 value of about \(R_{p}\approx 0.84\,\mathrm{fm}\), entailing a concomitant change in the Rydberg constant [2; 3]. From the 2006 to the 2018 CODATA adjustments [2; 3], the Rydberg constant has shifted by much more than the uncertainty associated with the 2006 value (see Fig. 1).
One of the most attractive experimental pathways to the determination of the Rydberg constant involves highly excited Rydberg states in atomic hydrogen, as described in Ref. [6] by a research group working at the Massachusetts Institute of Technology (MIT). Within the same group, a value for the Rydberg constant was obtained in an unpublished thesis by de Vries [7] (labelled as "Rydberg state" in Fig. 1),
\[cR_{\infty}|_{\mathrm{deVries}}=3\,289\,841\,960\,306(69)\,\mathrm{kHz}\,, \tag{1}\]
which is consistent with the CODATA 2006 value, and barely consistent with the 2018 CODATA value from Ref. [3]:
\[cR_{\infty}|_{\mathrm{CODATA,2018}}=3\,289\,841\,960\,250(7)\,\mathrm{kHz}\,. \tag{2}\]
The 2006 CODATA value is discrepant,
\[cR_{\infty}|_{\mathrm{CODATA,2006}}=3\,289\,841\,960\,360(21)\,\mathrm{kHz}\,. \tag{3}\]
A comparison of the three values of the Rydberg constant is made in Fig. 1, where we use as the reference value
\[R_{0}=\left.R_{\infty}\right|_{\mathrm{CODATA,2018}}\,. \tag{4}\]
The situation is interesting because, before the advent of muonic hydrogen spectroscopy, values of the Rydberg constant and of the proton radius inferred from hydrogen and deuterium spectroscopy _alone_ (without any additional input from scattering experiments) were consistent with the 2006 CODATA values of both the Rydberg constant and the proton and deuteron radii. This is discussed in detail in the text surrounding Table 45 of Ref. [2], where it is pointed out that the proton radius \(R_{p}\), the deuteron radius \(R_{d}\), and the Rydberg constant can all be deduced using input data exclusively from hydrogen and deuterium spectroscopy.
Traditionally, the Rydberg constant has been determined on the basis of Rydberg-state spectroscopy of atomic hydrogen [8; 9; 10; 11; 12; 13; 14]. An improved measurement of the Rydberg constant would thus constitute an important contribution to a resolution of the proton radius puzzle [15]. In a remarkable investigation dating about 20 years back, circular Rydberg states around quantum numbers \(n\approx 30\) have been investigated with the ultimate aim of an improved measurement of the Rydberg constant [7]. Inspired by the importance of Rydberg states, it has been pointed out in Refs. [16; 17; 18] that Rydberg-state measurements in hydrogenlike ions of medium charge numbers could potentially offer an alternative route to a determination of the Rydberg constant.
Figure 1: We examine the values for the Rydberg constant, converted to frequency units, from CODATA adjustments and from the (unpublished, grayed) result communicated in Ref. [7]. The CODATA (2006) value was reported in Ref. [2], and the CODATA (2018) value is from Ref. [3]. The reference value \(R_{0}\) is from the 2018 adjustment.
The purpose of this paper is threefold. First, we update the calculation of the quality factors for transitions among circular Rydberg states, in comparison to the estimate provided in Eq. (6) of Ref. [16]. Second, we discuss the status of quantum electrodynamic theory of Rydberg states, demonstrate that the theory is very well under control on the level of accuracy required for a determination of the Rydberg constant on the level of precision required for a resolution of the proton radius puzzle, and discuss the relative suppression of a number of notoriously problematic quantum electrodynamic corrections for circular and near-circular Rydberg states. Calculated values for relativistic Bethe logarithms for circular and near-circular Rydberg states with principal quantum numbers \(16\leq n\leq 20\) are also provided. Third, we provide an overview of recent advances in laser technology and other experimental techniques, which facilitate an improvement of measurements of the Rydberg constant on the basis of Rydberg state measurements. SI mksA units are employed throughout this paper.
## II Quality factors
Of crucial importance for the feasibility of high-precision spectroscopy experiments are so-called quality factors of transitions. The quality factor is the dimensionless ratio of the transition energy to the natural line width of the transition (measured in radians per second), where the latter is converted to an energy via multiplication by the reduced Planck constant \(\hbar\). Here, we present the general formula for the one-photon decay rate of a circular Rydberg state, with principal quantum number \(n\) and maximum orbital angular momentum \(\ell=n-1\). This reference state can decay, via dipole transitions, to states with principal quantum number \(n-1\) and angular momentum quantum number \(\ell=n-2\). For the decay rate of the state with principal quantum number \(n\) and maximum orbital angular momentum \(\ell=n-1\), as parameterized by the imaginary part of the self energy, \(E=\text{Re}\,E-\text{i}\Gamma_{n}/2\), we find the result
\[\Gamma_{n}^{\ell=n-1}=\frac{4^{2n}(n-1)^{2n-1}\,n^{2n-4}}{(2n-1)^{4 n-1}(2n-3)}\,\frac{\alpha(Z\alpha)^{4}m\,c^{2}}{3n^{5}}\,\left(\frac{\mu}{m} \right)^{3} \tag{5}\]
which can be expanded for large \(n\) as follows,
\[\Gamma_{n}^{\ell=n-1}=\alpha\frac{(Z\alpha)^{4}mc^{2}}{3n^{5}}\, \left(\frac{\mu}{m}\right)^{3}\\ \times\left[1+\frac{3}{2n}+\frac{17}{8n^{2}}+\mathcal{O}\left( \frac{1}{n^{3}}\right)\right]\,, \tag{6}\]
where \(m\) is the electron mass, \(\mu\) is the reduced mass of the two-body system, \(\alpha\) is the fine-structure constant, \(Z\) is the nuclear charge number, and the expansion for large \(n\) illustrates that the lifetimes of circular Rydberg states scale as \(n^{5}\). The energy difference for transitions among circular Rydberg states is
\[E_{n}-E_{n-1}=\frac{(Z\alpha)^{2}\mu}{2}\,\left(\frac{1}{(n-1)^{2}}-\frac{1}{ n^{2}}\right)\,, \tag{7}\]
which scales as \(1/n^{3}\) for large \(n\). Due to the \(1/n^{5}\) asymptotics of the decay rate and the \(1/n^{3}\) asymptotics of the transition energy, the quality factor increases, for large \(n\), with the square of the principal quantum number \(n\),
\[Q=\frac{E_{n}-E_{n-1}}{\Gamma_{n}^{\ell=n-1}+\Gamma_{n-1}^{\ell =n-2}}\\ =\frac{3n^{2}}{2\alpha\,(Z\alpha)^{2}}\,\left(\frac{m}{\mu}\right) ^{2}\left[1-\frac{5}{2n}-\frac{17}{8n^{2}}+\mathcal{O}\left(\frac{1}{n^{3}} \right)\right]\,. \tag{8}\]
This formula constitutes an update of the estimate given in Eq. (6) of Ref. [16] (the quality factor obtained here is larger by a factor two as compared to Ref. [16]). The estimate in Eq. (8) illustrates the enormous advantages of Rydberg states for the measurement of the Rydberg constant. The dramatic increase of the quality factor with the square of the principal quantum number makes Rydberg state transitions very attractive. Also, we observe that the quality factor is inversely proportional to the second power of the nuclear charge number \(Z\). This means that \(Z=1\) (atomic hydrogen) offers the best quality factor, for given principal quantum number \(n\).
Let us also evaluate the quality factor for the transition among near-circular Rydberg states, where the upper level has orbital angular momentum \(\ell=n-2\) and the lower level has orbital angular momentum \(\ell=n-3\) (see also Fig. 2). The calculation of the quality factor proceeds in a similar way, but one needs to consider two available dipole decay channels, namely, from the reference state with principal quantum number \(n\) and orbital angular momentum quantum number \(\ell=n-2\), to lower states with \(n^{\prime}=n-1\) and \(\ell=n-3\), and \(n^{\prime}=n-2\) and \(\ell=n-3\). The decay width evaluates to
\[\Gamma_{n}^{\ell=n-2}=\alpha\frac{(Z\alpha)^{4}mc^{2}}{3n^{5}}\, \left(\frac{\mu}{m}\right)^{3}\\ \times\left[1-\frac{1}{2n}-\frac{1}{8n^{2}}+\mathcal{O}\left(\frac {1}{n^{3}}\right)\right]\\ +\alpha\frac{4(Z\alpha)^{4}mc^{2}}{3n^{6}}\,\left(\frac{\mu}{m} \right)^{3}\\ \times\left[1+\frac{5}{2n}+\frac{25}{4n^{2}}+\mathcal{O}\left( \frac{1}{n^{3}}\right)\right]\,, \tag{9}\]
where the two terms on the right-hand side correspond to the lower states with \(n^{\prime}=n-1\) and \(n^{\prime}=n-2\), respectively. The quality factor evaluates to
\[Q^{\prime}=\frac{E_{n}-E_{n-1}}{\Gamma_{n}^{\ell=n-2}+\Gamma_{n-1} ^{\ell=n-3}}\\ =\frac{3n^{2}}{2\alpha\left(Z\alpha\right)^{2}}\left(\frac{m}{\mu} \right)^{2}\,\left[1-\frac{9}{2n}+\frac{9}{8n^{2}}+\mathcal{O}\left(\frac{1}{ n^{3}}\right)\right]\,, \tag{10}\]
which is commensurate with \(Q\) given in Eq. (8) and illustrates that no significant accuracy loss occurs if one measures near-circular as opposed to circular Rydberg states.
A quick look at Eqs. (1), (2) and (3), and Fig. 1, illustrates that one needs to resolve the Rydberg constant to roughly one part in \(10^{11}\) or better in order to meaningfully distinguish between the 2006 and 2018 CODATA values of the Rydberg constant. One can convert this resolution to a splitting factor \(\mathcal{S}\), which measures the fraction to which one needs to split the resonance line in order to achieve a resolution of one part in \(10^{11}\). The splitting factor \(\mathcal{S}\) is given by the formula
\[\mathcal{S}=10^{11}/Q\,. \tag{11}\]
For \(Z=1\), one obtains for \(\mathcal{S}\) the perfectly reasonable figure \(\mathcal{S}=93\) for \(n=18\); expressed differently, one only needs to split the resonance lines near \(n=18\) to one part in \(93\) in order to achieve a resolution which meaningfully contributes to resolving the proton radius puzzle.
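These figures are straightforward to reproduce numerically. The following short Python snippet is an illustrative check of Eqs. (8) and (11), not part of any analysis pipeline; the CODATA value of \(\alpha\) and the hydrogen reduced-mass ratio used below are inputs we supply for the estimate:

```python
# Illustrative check of the quality factor Q, Eq. (8), and the splitting
# factor S = 10^11 / Q, Eq. (11), for hydrogen (Z = 1) and n = 18.
alpha = 7.2973525693e-3                    # fine-structure constant (CODATA)
m_over_mu = 1.0 + 1.0 / 1836.15267343      # m/mu for hydrogen (electron-to-proton mass ratio)
n, Z = 18, 1

Q = (3 * n**2) / (2 * alpha * (Z * alpha)**2) * m_over_mu**2 \
    * (1 - 5 / (2 * n) - 17 / (8 * n**2))
S = 1e11 / Q
print(f"Q = {Q:.3e},  S = {S:.0f}")        # Q ~ 1.1e9, S ~ 93
```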
Cross-damping terms (non-resonant corrections) can be generated by virtual levels displaced by a fine-structure interval [19]. A rough estimate of the corresponding energy (frequency) shift \(\delta E_{\text{CD}}\) (we set \(\hbar=1\)) is given by the expression [19]
\[\delta E_{\text{CD}}\sim\frac{\Gamma_{n}^{2}}{\delta E}\,, \tag{12}\]
Here, \(\delta E\) is the displacement of the virtual state responsible for the cross-damping energy shift. As pointed out in Ref. [19], the nearest virtual states which can contribute to differential cross sections are states displaced from the upper state of the Rydberg transition by a fine-structure interval. The maximum angular momentum is \(\ell_{\text{max}}=n-1\). The total angular momenta for the circular Rydberg states are \(\ell_{\text{max}}\pm 1/2\). The two possible values for the total angular momentum quantum numbers of the upper level are thus \(j_{+}=n-1/2\) and \(j_{-}=n-3/2\), one of these being the reference level, the other being the virtual level which contributes to the cross damping. So, we have potential nonresonant contributions from virtual levels with an energy displacement
\[\delta E=E_{n,j_{+}}-E_{n,j_{-}}=\frac{(Z\alpha)^{4}m}{2n^{4}(n-1)}\approx \frac{(Z\alpha)^{4}m}{2n^{5}}\,. \tag{13}\]
The ratio of the cross-damping energy shifts relative to transition frequency is thus estimated by the expression
\[\chi\equiv\frac{\delta E_{\text{CD}}}{E_{n}-E_{n-1}}\sim\frac{2}{9}\frac{ \alpha^{2}(Z\alpha)^{2}}{n^{2}}\,. \tag{14}\]
Figure 2: The level diagram for Rydberg states illustrates the dipole-allowed transitions among circular [panel (a)] and near-circular [panel (b)] states. Transitions driven for high-precision spectroscopy are indicated with two-sided arrows. Transitions relevant for the calculation of decay rates (quality factors) are indicated by dashed lines. Circular Rydberg levels with \(\ell=n-1\) are marked in green color, while near-circular Rydberg levels with \(\ell=n-2\) are marked in red color.
For \(Z=1\) and \(n=18\), this evaluates to \(1.9\times 10^{-12}\), which is less than the accuracy required in order to distinguish between the 2006 and 2018 CODATA values of the Rydberg constant. This estimate suggests that cross-damping effects are suppressed for Rydberg states and do not represent an obstacle for the determination of the Rydberg constant from highly excited, circular Rydberg states.
The above estimates given in Eqs. (12)–(14) are valid for the differential cross section [19]. For the total cross section, these estimates improve even further, consistent with pertinent considerations reported in Refs. [19; 20; 21].
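As a quick numerical illustration of the estimate (14), the following one-liner (merely an order-of-magnitude check supplied here, with the CODATA value of \(\alpha\) as input) reproduces the figure quoted above:

```python
# Order-of-magnitude check of the cross-damping ratio, Eq. (14), for Z = 1 and n = 18.
alpha = 7.2973525693e-3
n, Z = 18, 1
chi = (2 / 9) * alpha**2 * (Z * alpha)**2 / n**2
print(f"chi = {chi:.2e}")    # ~1.9e-12, well below the 1e-11 resolution needed here
```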
## III Quantum Electrodynamic Effects
One might ask if the theory of Rydberg-state transition is well enough under control in order to facilitate the interpretation of a measurement of transitions among Rydberg states. As outlined in Ref. [2], the theoretical contributions to the Lamb shift of Rydberg states, on the level necessary for a determination of the Rydberg constant, can be summarized into just four terms: _(i)_ the Dirac energy (in the nonrecoil limit) which is summarized in Eq. (1) of Ref. [16], _(ii)_ the recoil corrections from the Breit Hamiltonian, which are summarized in Eq. (2) of Ref. [16], _(iii)_ the relativistic-recoil corrections summarized in Eq. (3) of Ref. [16], and _(iv)_ the self-energy effect summarized in Eq. (4) of Ref. [16]. Calculated values of nonrelativistic Bethe logarithms, which enter the expression for the relativistic recoil correction, have been tabulated for all states with principal quantum numbers \(n\leq 200\) in Ref. [22]. This favorable situation illustrates the tremendous simplifications possible for Rydberg states. Notably, vacuum-polarization, nuclear-size, and nuclear-structure corrections can be completely ignored for circular Rydberg states whose probability density at the nucleus vanishes.
Among the four effects listed above, the most interesting contribution concerns the bound-state self-energy \(E_{\rm SE}\), which is described by the formula
\[E_{\rm SE}=\frac{\alpha}{\pi}\frac{(Z\alpha)^{4}\,m}{n^{3}}\left( A_{40}+(Z\alpha)^{2}\right.\\ \times\left\{A_{61}\,\ln\left[\frac{m}{\mu}(Z\alpha)^{-2}\right] +A_{60}\right\}\right). \tag{15}\]
The first subscript of the \(A\) coefficients counts the number of \(Z\alpha\), while the second counts the number of logarithms \(\ln[\frac{m}{\mu}(Z\alpha)^{-2}]\).
The general result for the \(A_{40}\) coefficient for circular Rydberg states with orbital angular momentum \(\ell\neq 0\) and principal quantum number \(n\geq 2\) is well known,
\[A_{40}=-\left(\frac{\mu}{m}\right)^{2}\frac{1}{2\kappa(2\ell+1)}\,-\frac{4}{3 }\,\left(\frac{\mu}{m}\right)^{3}\ln k_{0}(n,\ell)\,, \tag{16}\]
where \(\kappa=(-1)^{j+\ell+1/2}\,(j+1/2)\) is the Dirac angular quantum number and \(\ln k_{0}(n,\ell)\) is the Bethe logarithm. (For values of \(\ln k_{0}(n,\ell)\), one consults Ref. [22].) The functional dependence on the reduced mass is a consequence of the proton's convection current; an explanation is given in Chap. 12 of Ref. [23]. Here, we will place special emphasis on circular and near-circular Rydberg states with \(\ell=n-1\) and \(\ell=n-2\), with \(n\geq 13\), and refer to them as the following series of states,
* series \({\cal A}\): \(\ell=n-1\), \(j=\ell+1/2\), \(\kappa=-(j+1/2)\),
* series \({\cal B}\): \(\ell=n-1\), \(j=\ell-1/2\), \(\kappa=(j+1/2)\),
* series \({\cal C}\): \(\ell=n-2\), \(j=\ell+1/2\), \(\kappa=-(j+1/2)\),
* series \({\cal D}\): \(\ell=n-2\), \(j=\ell-1/2\), \(\kappa=(j+1/2)\).
The \({\cal A}\) series has the highest \(\ell\) and \(j\) for given \(n\). The \(A_{40}\) coefficients evaluate to the following expressions for the four series of states,
\[\frac{A_{40}({\cal A},n)}{(\mu/m)^{2}} = \frac{1}{2n(2n-1)}-\frac{4}{3}\,\frac{\mu}{m}\,\ln k_{0}(n,n-1) \tag{17}\] \[\frac{A_{40}({\cal B},n)}{(\mu/m)^{2}} = \,-\,\frac{1}{2(n-1)(2n-1)}-\frac{4}{3}\,\frac{\mu}{m}\,\ln k_{0} (n,n-1)\,,\] (18) \[\frac{A_{40}({\cal C},n)}{(\mu/m)^{2}} = \frac{1}{2(n-1)(2n-3)}-\frac{4}{3}\,\frac{\mu}{m}\,\ln k_{0}(n,n- 2)\,,\] (19) \[\frac{A_{40}({\cal D},n)}{(\mu/m)^{2}} = \,-\,\frac{1}{2(n-2)(2n-3)}-\frac{4}{3}\frac{\mu}{m}\ln k_{0}(n,n- 2)\,. \tag{20}\]
As a function of the principal quantum number, the Bethe logarithms \(\ln k_{0}(n,n-1)\) and \(\ln k_{0}(n,n-2)\) decrease with \(n\) for large \(n\) as \(n^{-3}\). In the nonrecoil limit \(\mu\to m\), and the limit of large \(n\), one has
\[A_{40}({\cal A},n) \approx -A_{40}({\cal B},n)\approx A_{40}({\cal C},n) \tag{21}\] \[\approx -A_{40}({\cal D},n)\approx\frac{1}{4n^{2}}\,,\qquad n\to\infty\,.\]
The leading quantum electrodynamic corrections for circular and near-circular Rydberg states are parameterized by the \(A_{40}\) coefficient. The quantum electrodynamic effects are seen to be suppressed, for large \(n\), by a factor \(n^{-2}\) which appears in addition to the overall scaling factor \(n^{-3}\) in Eq. (15).
Higher-loop contributions to the anomalous magnetic moment can be taken into account by the replacement
\[-\left(\frac{\mu}{m}\right)^{2}\frac{1}{2\kappa(2\ell+1)}\to-\left(\frac{\mu}{ m}\right)^{2}\frac{1}{2\kappa(2\ell+1)}\frac{a_{e}}{\alpha/(2\pi)} \tag{22}\]
where \(a_{e}\) contains the higher-loop contributions to the electron anomalous magnetic moment, which determines the \(g\) factor of the electron according to \(g=2(1+a_{e})\). The term \(\alpha/(2\pi)\) is the one-loop Schwinger value [24]. The quantity \(a_{e}\) can either be taken as the most recent experimental value of the electron anomalous magnetic
moment [25], which results in \(a_{e}=1.159\,652\,180\,59(13)\times 10^{-3}\), or as a purely theoretical prediction including higher-order effects [26].
The suppression of the quantum electrodynamic effects for circular and near-circular Rydberg states has a physical reason: Namely, the velocity of a classical electron orbiting the nucleus in a Bohr orbit corresponding to the principal quantum number \(n\) is
\[v_{\rm cl}=\frac{Z\alpha c}{n}\,, \tag{23}\]
which evaluates, for \(Z=1\) and \(n=18\) (this choice of \(n\) is motivated in Sec. IV), to a velocity of \(1.21\times 10^{5}\)m/s. This is slower than the velocity of the fastest macroscopic man-made object, namely, the Parker Solar Probe which recently reached a velocity of \(1.48\times 10^{5}\)m/s on its orbit around the Sun [27; 28]. Effects originating from relativity and quantum electrodynamics are thus highly suppressed for circular Rydberg states.
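The figure quoted above follows directly from Eq. (23); a one-line check (with the CODATA value of \(\alpha\) and the defining value of \(c\) as inputs) reads:

```python
# Classical Bohr-orbit velocity, Eq. (23), for Z = 1 and n = 18.
alpha, c = 7.2973525693e-3, 2.99792458e8   # fine-structure constant, speed of light [m/s]
n, Z = 18, 1
v = Z * alpha * c / n
print(f"v = {v:.3e} m/s")    # ~1.2e5 m/s, below the Parker Solar Probe's ~1.48e5 m/s
```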
The general result for the \(A_{61}\) coefficient, valid for Rydberg states with \(n\geq 13\) and \(\ell=n-1\) and \(\ell=n-2\), has been given in Eq. (6) of Ref. [29] and Eq. (4) of Ref. [16] and reads
\[A_{61}=\left(\frac{\mu}{m}\right)^{3}\,\frac{3n^{2}-\ell(\ell+1)}{3n^{2}( \ell+3/2)(\ell+1)(\ell+1/2)\ell(\ell-1/2)}\,, \tag{24}\]
a result which is independent of the spin orientation. This expression evaluates to
\[\frac{A_{61}({\cal A},n)}{(\mu/m)^{3}}=\frac{A_{61}({\cal B},n)}{(\mu/m)^{3}}= \frac{8}{3n^{2}(n-1)(2n-1)(2n-3)}\,, \tag{25}\]
\[\frac{A_{61}({\cal C},n)}{(\mu/m)^{3}}=\frac{A_{61}({\cal D},n)}{(\mu/m)^{3}}= \frac{32(n+2)}{3n^{2}\prod_{i=2}^{5}(2n-i)}\,. \tag{26}\]
In the large-\(n\) limit, one has
\[A_{61}({\cal A},n)\approx A_{61}({\cal B},n)\] \[\quad\approx A_{61}({\cal C},n)\approx A_{61}({\cal D},n)\approx \frac{2}{3n^{5}}\,,\qquad n\to\infty\,. \tag{27}\]
The suppression with \(n^{-5}\), in addition to the overall scaling factor \(n^{-3}\) from Eq. (15), again illustrates the smallness of relativistic and quantum electrodynamic effects for circular Rydberg states.
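To illustrate this scaling quantitatively, one may compare the exact expression (25) with the asymptote (27); the following minimal numerical check, in the non-recoil limit \(\mu/m\to 1\), is our own illustration and not taken from the references:

```python
# Exact A_61 for the circular (A/B) series, Eq. (25), versus the large-n
# asymptote 2/(3 n^5), Eq. (27), in the non-recoil limit mu/m -> 1.
def a61_circular(n):
    return 8 / (3 * n**2 * (n - 1) * (2 * n - 1) * (2 * n - 3))

for n in (16, 18, 20):
    print(n, f"exact = {a61_circular(n):.3e}", f"asymptote = {2 / (3 * n**5):.3e}")
```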
The next-higher coefficient is \(A_{60}\), which is called the relativistic Bethe logarithm [30; 31]. Its absolute magnitude is highly suppressed for circular Rydberg states. Specifically, according to Refs. [16; 17; 18] and Table 7.2 of Ref. [32], one has
\[\max\{|A_{60}({\cal A},n)|,|A_{60}({\cal B},n)|,\] \[|A_{60}({\cal C},n)|,|A_{60}({\cal D},n)|\}<10^{-4}\,,\qquad n>13\,. \tag{28}\]
Furthermore, according to the calculations reported in Refs. [18; 33], the approximation \(G_{\rm SE}\approx A_{60}\) for the nonperturbative self-energy remainder function remains valid to excellent approximation for circular Rydberg states, for low and medium nuclear charge numbers (see Table 1 of Ref. [33] and Tables 1 and 2 of Ref. [18]). The relation (28) implies that the correction to the transition frequency among circular Rydberg states induced by the relativistic Bethe logarithm \(A_{60}\), for \(Z=1\), is smaller than one part in \(10^{15}\) for \(n\geq 13\). Nevertheless, it is useful to calculate numerical values of relativistic Bethe logarithms for the states under investigation here (see Table I). We follow the calculational procedure outlined in Ref. [29]. For calculated values of \(A_{60}\) for circular and near-circular Rydberg states with \(13\leq n\leq 16\), we refer to Table 1 of Ref. [16] and Table 1 of Ref. [18].
## IV Experimental considerations
Let us also include a few considerations relevant to the experimental realization of a high-precision measurement of the Rydberg constant based on circular Rydberg states. One might assume that the ultimate experimental success could be bolstered by choosing transitions with as high a quality factor \(Q\) as possible. As discussed around Eq. (8), since \(Q\propto n^{2}\), high \(n\) is desirable.
However, it is also important to consider the sensitivity of a given measurement to systematic effects. Many systematic effects increase with powers of \(n\). For instance, shifts and distortions of resonances due to the Stark effect scale as \(n^{5}\)[7; 34; 35], which produces challenges to measuring transitions between circular Rydberg states
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{\({\cal A}\) Series} & \multicolumn{2}{c}{\({\cal B}\) Series} \\ \(n\) & \(\ell\) & \(j\) & \(A_{60}(n\ell_{j})\) & \(j\) & \(A_{60}(n\ell_{j})\) \\ \hline
16 & 15 & \(\frac{31}{2}\) & \(1.059\,675(5)\times 10^{-5}\) & \(\frac{29}{2}\) & \(0.121\,748(5)\times 10^{-5}\) \\
17 & 16 & \(\frac{33}{2}\) & \(0.805\,212(5)\times 10^{-5}\) & \(\frac{31}{2}\) & \(0.078\,287(5)\times 10^{-5}\) \\
18 & 17 & \(\frac{35}{2}\) & \(0.621\,952(5)\times 10^{-5}\) & \(\frac{33}{2}\) & \(0.049\,885(5)\times 10^{-5}\) \\
19 & 18 & \(\frac{37}{2}\) & \(0.487\,434(5)\times 10^{-5}\) & \(\frac{35}{2}\) & \(0.031\,113(5)\times 10^{-5}\) \\
20 & 19 & \(\frac{39}{2}\) & \(0.387\,025(5)\times 10^{-5}\) & \(\frac{37}{2}\) & \(0.018\,584(5)\times 10^{-5}\) \\ \hline \hline & & \multicolumn{2}{c}{\({\cal C}\) Series} & \multicolumn{2}{c}{\({\cal D}\) Series} \\ \(n\) & \(\ell\) & \(j\) & \(A_{60}(n\ell_{j})\) & \(j\) & \(A_{60}(n\ell_{j})\) \\ \hline
16 & 14 & \(\frac{29}{2}\) & \(1.540\,182(5)\times 10^{-5}\) & \(\frac{27}{2}\) & \(0.155\,784(5)\times 10^{-5}\) \\
17 & 15 & \(\frac{31}{2}\) & \(1.145\,325(5)\times 10^{-5}\) & \(\frac{29}{2}\) & \(0.096\,026(5)\times 10^{-5}\) \\
18 & 16 & \(\frac{33}{2}\) & \(0.867\,820(5)\times 10^{-5}\) & \(\frac{31}{2}\) & \(0.058\,328(5)\times 10^{-5}\) \\
19 & 17 & \(\frac{35}{2}\) & \(0.668\,553(5)\times 10^{-5}\) & \(\frac{33}{2}\) & \(0.034\,217(5)\times 10^{-5}\) \\
20 & 18 & \(\frac{37}{2}\) & \(0.522\,676(5)\times 10^{-5}\) & \(\frac{35}{2}\) & \(0.018\,690(5)\times 10^{-5}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Calculated values for the \(A_{60}\) coefficients for highly excited Rydberg states are given for the \({\cal A}\), \({\cal B}\), \({\cal C}\) and \({\cal D}\) series of states, for principal quantum numbers \(16\leq n\leq 20\).
with very high \(n\). However, the previous measurement between circular Rydberg states of hydrogen (Ref. [7]), between \(n=27\) and \(n=28\), and \(n=29\) and \(n=30\), had negligible contributions from uncertainties in the Stark shifts [7]. The experimental accuracy was instead limited by dipole-dipole interactions. Since the dipole moment for an atom in a superposition of adjacent circular Rydberg states scales as \(n^{2}\), and the systematic effect is related to the interaction energy of two dipoles, this effect scaled as \(n^{4}\).
Therefore, in order to mitigate the dipole-dipole interactions, it may be interesting to consider transitions between circular Rydberg states with somewhat lower \(n\). For instance, with all other experimental parameters being similar, a transition between \(n=18\) and \(n=19\) would reduce the effects of the dipole-dipole interactions by a factor of \(\sim 6\) as compared to the previous measurement [7]. Another experimental benefit of reducing \(n\) below that demonstrated in [7] is that blackbody-radiation-induced transitions would be mitigated, because the thermal radiation spectral density for temperatures \(\leq 300\) K is reduced for the more energetic transitions occurring between lower-lying states. This may allow the experiment to be performed at liquid nitrogen as opposed to liquid helium temperatures.
The MIT measurement used pulsed lasers at a repetition rate of \(61\,\)Hz to produce circular Rydberg states. Therefore, another option to mitigate dipole-dipole interactions could be to produce a near-continuous source of circular Rydberg states using continuous-wave (cw) lasers. Since the dipole-dipole interaction is related to the peak density of circular Rydberg states, a near-continuous source of circular Rydberg states could allow for a large reduction in the peak density while maintaining sufficient statistics. This could be accomplished by first using the \(1S\)-\(2S\) two-photon transition to populate the \(2S\) metastable state as in Refs. [36; 37], followed by excitation to Rydberg levels using a \(365\,\)nm cw laser. Then circularization would be performed using the methods outlined in Ref. [6].
To perform spectroscopy of the \(n=18\) to \(n=19\) circular Rydberg states, a millimeter-wave Ramsey apparatus akin to the one employed in Ref. [7] could be used. To excite the transition, a radiation source at \(1.04\) THz is needed. While the millimeter wave source in [7] operated at \(256\) or \(316\) GHz, a similar source operating at frequencies above \(1\) THz is possible using a planar GaAs Schottky diode frequency multiplier [38]. The output power of such THz sources is relatively low. However, due to the large transition matrix element between circular Rydberg states, the transition can be saturated with \(<1\) nW and a \(3\,\)mm beam waist. Therefore, commercially available THz sources would likely be sufficient [39].
## V Conclusions
The main conclusions of this paper are as follows. In Sec. II, we have shown that the quality factors of transitions among circular Rydberg states are sufficient to comfortably allow for a distinction between the 2006 and 2018 CODATA values of the Rydberg constant [see Eqs. (2) and (3), and Refs. [2; 3]]. Furthermore, according to the considerations reported in Sec. II, cross-damping terms do not present an obstacle to such a measurement. In Sec. III, we showed that the theory of bound states is sufficiently under control to allow for a determination of the Rydberg constant from transitions among circular Rydberg states in atomic hydrogen. Experimental considerations (Sec. IV) corroborate the advances in technology which make such a measurement more feasible than reported in Ref. [7], in part by reducing several systematic effects through a less dense atomic beam which can be realized in a continuous-wave excitation scheme into the circular states.
A few concluding remarks on the proton radius puzzle are in order. We recall that the proton radius puzzle refers to the difference between the "smaller" proton radius of \(R_{p}\approx 0.84\,\)fm obtained in Ref. [4] and the larger value of \(R_{p}\approx 0.88\,\)fm from the 2006 CODATA adjustment (see Refs. [1; 2; 12; 13] and references therein). Various recent scattering experiments [40; 41] and spectroscopic experiments [36; 37; 42; 43; 44] come to conflicting conclusions on the proton radius. A recent measurement described in Ref. [37] has led to a value of \(R_{p}\approx 0.86\,\)fm. It has very recently been pointed out in Ref. [15] that two older scattering experiments, carried out in 1969 at Brookhaven (see Refs. [45; 46]), are consistent with an 8% discrepancy in the cross sections between muon-proton and electron-proton scattering, which translates into 4% for the form factor slope, which in turn amounts to 2% for the radius. This is precisely the difference between the "smaller" proton radius of \(R_{p}\approx 0.84\,\)fm and the recently obtained (Ref. [37]) value of \(R_{p}\approx 0.86\,\)fm. The MUSE experiment [47; 48; 49] at the Paul Scherrer Institute aims to remeasure the muon-proton cross sections in the near future.
In conclusion, we have shown that the idea formulated in Refs. [6; 7; 34; 35] and Refs. [16; 17; 18] could lead to a feasible pathway toward a determination of the Rydberg constant. This could be interesting because most recent spectroscopic experiments [36; 37; 42; 43; 44] focus on transitions in atomic hydrogen which depend on both constants in question, namely, the proton radius and the Rydberg constant. Focusing on Rydberg states, as proposed here, means that one isolates one of these constants, thereby potentially obtaining a clear and distinct picture of the proton radius puzzle. The current situation provides not only motivation to carry out the MUSE experiment at PSI [47; 48; 49], but also, to re-double efforts to measure the Rydberg constant.
## Acknowledgements
The authors acknowledge extensive insightful conversations with B. J. Wundt, and helpful conversations with S. M. Brewer. Support from the National Science Foundation (grant PHY-2110294) is gratefully acknowledged. Furthermore, U.D.J. and D.C.Y. gratefully acknowledge support from the Templeton Foundation (Fundamental Physics Black Grant, Subawards 60049570 MST and 60049570 CSU of Grant ID #61039).
|
2309.09326 | Experiential-Informed Data Reconstruction for Fishery Sustainability and
Policies in the Azores | Fishery analysis is critical in maintaining the long-term sustainability of
species and the livelihoods of millions of people who depend on fishing for
food and income. The fishing gear, or metier, is a key factor significantly
impacting marine habitats, selectively targeting species and fish sizes.
Analysis of commercial catches or landings by metier in fishery stock
assessment and management is crucial, providing robust estimates of fishing
efforts and their impact on marine ecosystems. In this paper, we focus on a
unique data set from the Azores' fishing data collection programs between 2010
and 2017, where little information on metiers is available and sparse
throughout our timeline. Our main objective is to tackle the task of data set
reconstruction, leveraging domain knowledge and machine learning methods to
retrieve or associate metier-related information to each fish landing. We
empirically validate the feasibility of this task using a diverse set of
modeling approaches and demonstrate how it provides new insights into different
fisheries' behavior and the impact of metiers over time, which are essential
for future fish population assessments, management, and conservation efforts. | Brenda Nogueira, Gui M. Menezes, Nuno Moniz | 2023-09-17T17:17:38Z | http://arxiv.org/abs/2309.09326v1 | # Experiential-Informed Data Reconstruction for Fishery Sustainability and Policies in the Azores
###### Abstract
Fishery analysis is critical in maintaining the long-term sustainability of species and the livelihoods of millions of people who depend on fishing for food and income. The fishing gear, or metier, is a key factor significantly impacting marine habitats, selectively targeting species and fish sizes. Analysis of commercial catches or landings by metier in fishery stock assessment and management is crucial, providing robust estimates of fishing efforts and their impact on marine ecosystems. In this paper, we focus on a unique data set from the Azores' fishing data collection programs between 2010 and 2017, where little information on metiers is available and sparse throughout our timeline. Our main objective is to tackle the task of data set reconstruction, leveraging domain knowledge and machine learning methods to retrieve or associate metier-related information to each fish landing. We empirically validate the feasibility of this task using a diverse set of modeling approaches and demonstrate how it provides new insights into different fisheries' behavior and the impact of metiers over time, which are essential for future fish population assessments, management, and conservation efforts.
sustainability, fishery data, data set reconstruction, machine learning, evidence-based policy
## I Introduction
The global demand for seafood has increased substantially, with an estimated 179 million tons of fish production worldwide in 2018, of which 156 million tons were used for human consumption - an annual supply of 20.5 kg per capita [1]. However, this high demand has also had an unprecedented impact on aquatic ecosystems and is believed to have caused a reduction in ocean biomass content of up to 80% [2]. The proportion of fish stocks within biologically sustainable levels decreased from 90% in 1974 to 65.8% in 2017 [1]. Furthermore, recent assessments have presented worrisome statistics regarding fish and shellfish stocks. Out of the 397 commercially exploited stocks, a staggering 69% were classified as overfished, reflecting the consequences of excessive fishing activities. Additionally, 51% of the stocks were found to be operating beyond safe biological limits, putting their long-term sustainability at risk. These findings become even more concerning when considering that only 12% of the stocks met the established guidelines outlined by the Common Fisheries Policy (CFP), which governs fisheries in the European Union seas [3]. Such data emphasizes the pressing need for immediate measures to address overfishing and ensure the implementation of sustainable practices for the preservation and future of these vital marine resources.
Portugal has Europe's 5th largest exclusive economic zone (EEZ), the 3rd largest of the EU, and the 20th largest EEZ globally, at 1,727,408 km². The Autonomous Region of Azores (Portugal), a group of 9 islands spread over 600 km, is a major contributor to the size of the country's EEZ. In the Azores, fisheries are considered artisanal and small-scale in nature, with a multisegmented fleet, targeting multiple species with a wide range of fishing gears or metiers [4], managed under the CFP and by national and regional management policies [5]. Its annual landings in the last ten years are about 11,000 tons and valued at about 33M€.
The Region implemented many technical and spatial measures; monitoring scientific surveys and data on the fishing effort, biological data, or the size composition of the species landings are regularly collected under the national DCF (the PNRD). Despite these efforts, information is usually deficient or inadequate for most species. Among the 138 species that landed in the region between 2009 and 2019, twenty-two (18 fishes, two molluscs, and two crustaceans) were selected as priority stocks according to the FAO1 and ICES2 criteria [6]. Most of these showed a decreasing trend in their abundances, and only four stocks are currently assessed using data-limited approaches [6]. However, the uncertainty inherent to data-limited stock assessments, such as the existing ones, can compromise the ability to inform management [7, 8, 9]. In addition, the more robust population assessment methods require considerable data, which are virtually impossible to obtain for all species under limited fisheries data conditions [10], such as in our study case.
Footnote 1: FAO – Food and Agriculture Organization of the United Nations
Footnote 2: ICES – International Council for the Exploration of the Sea
In this paper, we leverage domain knowledge and machine learning tools to improve the quality of the fisheries database in the Azores, fill the information gaps, improve the fish stock assessment modeling, and decrease its uncertainty, with the ultimate goal of improving advice and sustainable management of the wild fish populations. Specifically, we explore combinations of data pre-processing strategies and feature engineering methods to cope with two main challenges: multi-class imbalance and lack of contextual information in the
original data.
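As a minimal illustration of the overall idea (not the actual pipeline evaluated later in the paper), the reconstruction task can be phrased as supervised multi-class classification on the landings that do carry inquiry-derived metier labels; the column names below (`species`, `vessel_id`, `month`, `metier`) and the file name are hypothetical placeholders:

```python
# Minimal sketch: train a class-weighted multi-class model on labelled landings
# and use it to assign a metier to the unlabelled ones. All column/file names
# are hypothetical placeholders, not the actual dataset schema.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

landings = pd.read_csv("landings.csv")                      # hypothetical file
labelled = landings.dropna(subset=["metier"])               # rows with a known metier
X = pd.get_dummies(labelled[["species", "vessel_id", "month"]],
                   columns=["species", "vessel_id"])
y = labelled["metier"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```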
The remainder of this paper is organized as follows. Section II discusses related works, accompanied by motivating examples. In Section III, the data is introduced, along with a description of the preparation and preprocessing steps undertaken, as well as a concise analysis that serves as a motivation for the study. Section IV outlines the evaluation metrics employed for the time series data set and presents the machine learning models and strategies utilized, along with the corresponding results obtained. Section V consists of a comprehensive discussion of the results, followed by the conclusions presented in Section VI.
## II Background
Incomplete datasets, where some samples lack attribute values, have long been a challenge in research. Traditional imputation techniques like mean imputation or last observation carried forward have limitations as they oversimplify the data, leading to biased estimates and distorted conclusions. Researchers have turned to machine learning algorithms for missing data reconstruction to address this.
The utility of machine learning in filling in missing information has been demonstrated in various domains. For instance, in a study focused on gene network reconstruction [11], microarray experiments were used to identify gene interrelations. A supervised learning approach utilizing decision-tree-related classifiers was employed to predict gene expression based on other gene expression data. In another study [12], the authors focus on compensating for missing data in electric power datasets for improved energy management systems. It compares the performance of statistical methods (ARIMA and LI) and machine learning methods (K-NN, MLP, and SVR) for data imputation. Using a two-year dataset from Taiwan, the researchers find that machine learning methods generally outperform statistical methods.
Machine learning offers advantages, particularly for temporal data. Traditional methods struggle with the dynamic patterns and dependencies found in temporal data. In [13], the Engdahl-van der Hilst-Buland (EHB) Bulletin of Hypocentres and associated travel time residuals were reconstructed using updated procedures. This resulted in improved maps of seismicity, benefiting global seismicity studies and tomographic inversions.
Marine ecosystem research and conservation present unique challenges due to their complex nature and data collection across various scales [14]. Machine learning automates routine tasks in marine science, enabling data-driven decision-making and uncovering hidden patterns in large datasets [15].
Our work holds particular significance in the fisheries domain as it showcases the effectiveness of utilizing machine learning to address the issue of missing data in fishery landings on a temporal relevant scale. This research serves as a valuable contribution to the field, highlighting the efficacy of machine learning in overcoming missing data challenges in a dataset of critical importance to fisheries.
## III Fisheries Data (2010-2017)
This section provides an overview of the data utilized in our study. Specifically, we employed the LOTACOR/OKEANOS-UAc daily landings dataset and the PNRD/OKEANOS-UAc inquiries database. Samplers randomly collect these inquiries during the process of fishery landings in the main fishing harbors in the Azores, including Sao Mateus, Praia da Vitoria, Rabo de Peixe, Santa Cruz (Faial), Ponta Delgada (Sao Miguel), Povoação, Madalena, Vila do Porto, and Angra do Heroismo. These inquiries provide information on fishing activities, mainly the fishing gears used, their characteristics, the fishing effort, the baits, and the metiers.
The data used in this paper covers the period between 2010 to 2017 and comprises 13 categories of fishing gears - referred to as **metiers**, detailed in Table I.
The data cleaning process involved several steps, divided into two stages: preparation and pre-processing. The first stage aims at ensuring our data's accuracy and integrity. The second stage ensures that each vessel's daily fish landings are considered an independent sample, essential for the ensuing model development processes. They are detailed as follows.
* **Preparation**: The resulting data set contains 11 features (described in Table II) and 33,895 samples.
* Feature selection to remove variables with a high percentage of missing data, ensuring that our data was complete and reliable;
* Data set analysis for sensitive information - features containing information that could be used for re-identification; these were also removed;
* Data transformation step to replace the name of metiers' classes, LHM-CEF and LHM-PB, with LHP-CEF and LHP-PB, respectively, due to a change in designation at the beginning of 2017;
* Concerning the boat registration code feature, we only retained each boat's "C" or "L" classification code. "C" stands for fishing vessels with cabins, and "L" for smaller, open-deck fishing vessels.
* **Pre-Processing**: The final data set contains 13,955 landings (cases) and 138 features.
* Information concerning landings was consolidated to create one row per landing, i.e., from a row per landing/species to a row per landing;
* One-hot encoding applied to each categorical feature, except for the species classification feature (a minimal sketch of these preparation and pre-processing steps follows this list).
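As a rough illustration of the steps above, the following pandas sketch shows the metier renaming, the reduction of the boat registration code, the consolidation from one row per landing/species to one row per landing, and the one-hot encoding. All column names (e.g., `landing_id`, `boat_code`, `species_group`, `weight_kg`) are illustrative placeholders, not the actual field names of the LOTACOR/OKEANOS-UAc database.

```python
import pandas as pd

# Illustrative column names; the real database schema differs.
landings = pd.read_csv("landings_2010_2017.csv")

# Harmonise the metier designations that changed at the beginning of 2017.
landings["metier"] = landings["metier"].replace(
    {"LHM-CEF": "LHP-CEF", "LHM-PB": "LHP-PB"})

# Keep only the "C"/"L" prefix of the boat registration code.
landings["boat_class"] = landings["boat_code"].str[0]

# Consolidate: from one row per landing/species to one row per landing,
# with one weight column per species group.
wide = (landings
        .pivot_table(index=["landing_id", "date", "metier", "boat_class"],
                     columns="species_group", values="weight_kg",
                     aggfunc="sum", fill_value=0)
        .reset_index())

# One-hot encode the remaining categorical features,
# leaving the species weight columns untouched.
wide = pd.get_dummies(wide, columns=["boat_class"])
```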
### _Data Analysis_
To conduct a comprehensive analysis of fish population dynamics and the sustainability of each species, it is crucial to know the biology and ecology of the species (e.g., growth, mortality, and reproduction features), to study the fishing operation regimes and the metiers involved, and to collect data on fishing catches (or landings) per species, the geographic distribution of fishing, the fishing effort, and how these patterns evolve. This information provides insights into the dynamics and exploitation rates of each species or species group over time.
One important question in the study of the fisheries industry is which species are caught across the different metiers over time. Figure 1 shows the proportion, by weight, of major fish groups caught by each metier over the years in the Azores; a trend has emerged where significant species such as tunas (_tunideos_) have declined in relevance, allowing other species to become more prominent. Tunas are migratory fish species, so catches may fluctuate strongly from one year to another, and the occurrence of different tuna species also varies, mainly due to environmental causes.
Figure 1 also shows that the "Others" section groups the metiers that have a relatively small proportion in the overall fishery. To better understand the impact of these metiers on the less significant fish species, we zoomed in on this section and present the data in Figure 2. Analyzing the figure depicting these lesser-known metiers, we observe a notable shift in representation: metiers that are barely visible in Figure 1 become prominent in Figure 2. This observation suggests that, among the minor metiers, fishing activity is spread more evenly across species, indicating a more balanced and diversified utilization of fishing techniques.
To gain a comprehensive understanding of each species' dynamics and the fishery industry's overall sustainability, it is also crucial to know the species' total catches (total weight) and how it changes over time. This information is presented in Figure 3. It is evident in this case that the total weight of fish caught has decreased over the years, indicative of a potential intensive fishing or overfishing issue. The variations in catches are caused by various management practices, such as implementing Total Allowable Annual Catches (TACs), reducing fishing efforts, or changing the number of fishing vessels, among other factors. Therefore, it would be interesting to calculate catch-per-unit-effort (CPUE) using the effort hours. CPUE provides more reliable indicators of population abundance and is commonly employed in fish stock assessment models.
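For completeness, if effort hours were consistently recorded for every landing, a per-species CPUE series could be derived with a few lines of pandas, reusing the `landings` frame from the earlier sketch. The column names (`year`, `species_group`, `weight_kg`, `effort_hours`) are illustrative assumptions rather than fields of the actual database.

```python
# Catch-per-unit-effort per species group and year (illustrative column names).
cpue = (landings
        .groupby(["year", "species_group"])
        .apply(lambda g: g["weight_kg"].sum() / g["effort_hours"].sum())
        .rename("cpue_kg_per_hour")
        .reset_index())
```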
However, obtaining robust CPUE data for commercial or recreational fisheries is difficult and error-prone. Information is mainly obtained from inquiry sampling programs and raised to totals following statistical procedures that assume the samples are representative of the variables of interest (e.g., species landings, fishing vessels, fishing metiers, fish ports, etc.). For example, the ability to associate a specific fishing gear with a specific fish landing, needed to obtain a good estimate of the fishing effort (e.g., number of hooks, hours fishing, trawl area, etc.), is quite limited. Fishing survey programs have significant gaps in information and coverage, particularly regarding the metiers or fishing effort used. Frequently, this lack of detailed data hinders more comprehensive studies and robust fisheries assessments, leading to uncertain results.
Our work addresses this issue by reconstructing the missing data in existing Azores fisheries surveys. The reconstruction of the missing data of the LOTACOR/OKEANOS-UAc fish landings database will fill the gaps and provide more accurate information for future stock assessment modeling processes resulting in better and less uncertain advice for fishing management.
## IV Experimental Analysis
Accurately predicting missing information in data analysis is a challenging and complex task with applications
in various fields. We explored six questions regarding the optimization and improvement of machine learning models on a dataset:
1. How does the performance of different machine learning methods vary on our dataset, and how can we identify the ideal time frame for making accurate predictions on our time series data? What impact does this have on the precision and stability of our models?
2. To what extent can contextual variables enhance the accuracy and robustness of our predictions, and how do we select the most relevant features to include?
3. What insights can be gained by selectively removing features from our dataset, and how can we leverage this information to improve our models?
4. How does rebalancing an imbalanced dataset affect the quality and representativeness of our training data, and what are the trade-offs involved in different rebalancing approaches?
5. How can ensemble learning techniques be used to enhance the accuracy and robustness of our models, and what are the best practices for incorporating these methods into our workflow?
6. What alternative metrics, apart from accuracy, can effectively reflect the performance of an imbalanced classification task?
First, we explore the effectiveness of several supervised machine learning methods in predicting the metiers of fish landings. We evaluate different train and test evaluation periods to determine the optimal timeframe for prediction. Our analysis aims to identify the optimal balance between the size of the training set and the prediction horizon to ensure that our model is accurate and timely.
In addition, we examine the impact of adding context-related variables on the machine learning model's performance. We hypothesize that incorporating such variables can improve our model's accuracy and enable it to capture more complex relationships between the variables.
To further improve our model's performance, we investigate the effect of removing redundant or noisy features from the dataset. We aim to identify the features that have a low contribution to the model's accuracy and remove them from the dataset. Additionally, we consider reconstructing the dataset in a balanced manner to ensure that the reconstructed dataset represents the majority and minority classes more accurately.
Also, we investigate whether combining the resulting models from our best previous solutions can lead to even better performance. We explore the possibilities of ensemble learning, where we combine different models to obtain a more accurate prediction.
Finally, we evaluate the strategies using various metrics to comprehensively understand their performance and determine the most suitable strategy for an imbalanced dataset. By analyzing different metrics, we aim to identify the strengths and limitations of each strategy and gain insights into what each metric reflects. This evaluation will enable us to make informed decisions and select the optimal strategy for dealing with imbalanced data.
Our study aimed to develop a reliable and accurate model to predict missing information about the type of metiers that can be linked to each landing and gain insights into the overall dataset.
### _Methods_
This section describes the methods used to address the research questions, including pre-processing, predictive solutions, and evaluation and estimation methods.
#### IV-A1 Pre-processing Solutions
Pre-processing strategies are essential to improve model performance. Here we detail the ones used in this work.
Fig. 3: Total weight of each fish classification over the years

* **Context-related variables:** To investigate whether adding context-related variables can improve the performance of machine learning models, we added information on the weight of each fish group classification over the past six months. This information is relevant since the weight of fish is likely to vary over time, and including it could help improve the accuracy of the models. To calculate the new variables, we first identified the date of each observation and then computed the maximum, minimum, and mean weight of each class over the past six months (a minimal sketch of this computation follows this list). We then added these new variables to our dataset, which increased the number of features available for modeling.
* **Feature Selection:** Feature selection is a technique used to improve the performance of machine learning models by removing noisy or irrelevant features from the dataset. In this study, the Boruta algorithm was employed, an all-relevant feature selection wrapper algorithm that ranks the features according to their importance and determines which features are relevant, irrelevant, or uncertain.
* **Balanced Reconstructions:** Resampling strategies modify the original data distribution to meet specific user-defined criteria. The study evaluated the performance of different sampling techniques, including undersampling of the majority classes, oversampling of the minority classes, and two hybrid techniques that combined both approaches. The functions and the corresponding parameters are described in Appendix A.
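The context-related variables described above can be computed with a rolling time window. The sketch below assumes the consolidated table `wide` from the earlier sketch has a `date` column and one weight column per fish group (here prefixed `grp_`, an illustrative naming convention), and derives the six-month maximum, minimum, and mean of each group.

```python
import pandas as pd

wide["date"] = pd.to_datetime(wide["date"])
wide = wide.sort_values("date")
species_cols = [c for c in wide.columns if c.startswith("grp_")]

# Rolling statistics over the previous six months (strictly before each landing).
rolled = (wide.set_index("date")[species_cols]
          .rolling("180D", closed="left")
          .agg(["max", "min", "mean"]))
rolled.columns = [f"{col}_6m_{stat}" for col, stat in rolled.columns]

context = pd.concat([wide.reset_index(drop=True),
                     rolled.reset_index(drop=True)], axis=1)
```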
#### IV-A2 Predictive Solutions
Supervised learning models are commonly used in classification solutions [16]. Tree-based approaches are a popular class of algorithms that offer good model interpretability, making them suitable for our task [17]. Therefore, for our study, we have chosen three tree-based approaches:
* **Decision tree classifier**: can be used to create a hierarchical partition of the training data space, where the constraints on attribute values are used to split the data [18]. This iterative process continues until leaf nodes are formed, each containing a small number of records that can be used for classification purposes;
* **Random Forest**: The algorithm is based on an ensemble technique, where several trees are trained and then combined to make predictions [19];
* **Boosting**: This algorithm is a meta-heuristic approach based on machine learning, which aims to reduce bias and variance in supervised learning [20]. It belongs to a family of machine learning methods that transform a weak learner into a stronger one.
To ensure that our work is easily replicable, we used the implementations of these tools available in the free and open-source R environment. Concerning the parameter settings for each of these methods, we carried out a preliminary test to search for the optimal parameterization (i.e., the setting that obtains the best possible results within a given set of parameter values). This search was carried out for each combination of machine learning model and time window, described in Section IV-A3, and the results are detailed in Appendix A.
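The study itself uses R implementations of these learners; purely as an illustration, an equivalent scikit-learn sketch of fitting the three tree-based models on one training window could look as follows. The hyperparameter values and the `X_train`/`y_train`/`X_test`/`y_test` names are placeholders, not the tuned settings reported in Appendix A.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# X_train, y_train, X_test, y_test: features and metier labels for one time window.
models = {
    "decision_tree": DecisionTreeClassifier(max_depth=10),
    "random_forest": RandomForestClassifier(n_estimators=500),
    "boosting": GradientBoostingClassifier(n_estimators=300, learning_rate=0.05),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, model.score(X_test, y_test))  # overall accuracy on the test window
```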
#### IV-A3 Evaluation and Estimation
To evaluate the performance of predictive models on this dataset, we considered different time windows for training and testing. We selected three methods, sliding, growing, and full, described below; a minimal sketch of how these windows are generated follows the list.
* **Sliding Window:** Trains the model on two years of data and tests it on three months, with the two-year period being before the three-month test period. We increment the window by one month for training and testing over the years;
* **Growing Window:** Trains the model on all the available data before the test period (which is also three months), starting with a two-year window and incrementing it by one month for both training and testing.
* **Full Window:** The two methods above are commonly used in time series analysis, but since our goal is to fill in missing values, we also adopted a third approach, the full window method. This method considers all the data before and after the three-month test period, with increments of three months over the years for evaluation.
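A minimal sketch of how the three window schemes can be generated from the landing dates is given below; the two-year training span, the three-month test span, and the one- or three-month increments follow the description above, while the function itself and its argument names are only an illustration.

```python
import pandas as pd

def windows(dates, method, train_years=2, test_months=3):
    """Yield (train_mask, test_mask) boolean arrays for sliding, growing, or full windows."""
    start, end = dates.min(), dates.max()
    test_start = start if method == "full" else start + pd.DateOffset(years=train_years)
    while test_start < end:
        test_end = test_start + pd.DateOffset(months=test_months)
        if method == "sliding":
            train = (dates >= test_start - pd.DateOffset(years=train_years)) & (dates < test_start)
        elif method == "growing":
            train = dates < test_start
        else:  # "full": everything outside the test block
            train = (dates < test_start) | (dates >= test_end)
        test = (dates >= test_start) & (dates < test_end)
        yield train.values, test.values
        test_start += pd.DateOffset(months=3 if method == "full" else 1)
```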
### _Results_
This section provides an overview of the answers to the research questions proposed in this study. We aim to comprehensively understand the effectiveness of the different models and techniques used in this study.
#### IV-B1 **ML Models Performance Across Time Windows**
The first research question we aimed to address was to determine the optimal combination of predictive solution and time window for training. To this end, we analyzed the accuracy of each combination of predictive solution and time window method over the years, as illustrated in Figure 4. The results indicate that the full window produced more stable and better results across all models. Although both the boosting and random forest models exhibited good performance, we chose the boosting model with the full window method. We made this decision based on its consistent performance and stability over time and its ability to handle complex relationships between variables. We refer to this as the Base strategy.
#### IV-B2 **Add New Features**
We aimed to investigate whether adding new features to the model could improve its performance. To test this, we created a new data set by introducing the features explained in Section IV-A1 and applied the Base strategy. However, the results showed that the model with the new features yielded the same mean accuracy as the model without them (90%). Nevertheless, we believe that including these features would have a significant impact on the model's performance on a larger version of this data set. Therefore, we decided to use the boosting algorithm with the full window evaluation and the new features for our subsequent analyses. In Table III, we present the results of the strategies tested in this section on the new data set.
#### IV-B3 **Feature Selection**
The new data set has 180 variables, but we suspected some might be negatively affecting the model's performance. To investigate this, we performed feature selection as described in Section IV-A1 and ultimately identified 98 relevant variables, 76 irrelevant ones, and two uncertain variables. We used only these 98 selected variables for our subsequent analysis.
As shown in Table III, it is noteworthy that feature selection did not improve any of the classes relative to Model1, and the final accuracy even decreased. This suggests that even the least relevant variables play an important role in distinguishing between classes.
#### IV-B4 Resampling
Table III reveals that certain minority classes, such as FPO-CRU, FPO-PB, LLS-DEEP, and PS-PB (presented in Table I), had low accuracy, with some even having an accuracy of zero. On the other hand, the majority classes, including LLS-PD, LHP-CEF, and LHP-PB, demonstrated good performance, with an accuracy greater than 86%. In response to this imbalance, we constructed a more balanced dataset focusing on these minority classes, except for FPO-CRU, which occurred only twice in the entire dataset.
We employed various strategies to select the best combination of parameters for undersampling and oversampling, as detailed in Appendix A. This resulted in two optimal models, OverSamp and ImpSamp, and their performances for each class are illustrated in Table III.
The ImpSamp model was highly effective in improving the performance of the four proposed minority classes, as shown in Table III. On the other hand, OverSamp showed only a slight improvement in LHP-TUN and LLD-PP, which were not our primary focus. These results suggest that balancing the data set positively impacts the performance of minority classes, albeit with a trade-off of slightly reduced accuracy in the majority classes.
#### IV-B5 Ensemble
After analyzing the results obtained from the ImpSamp and Model1 models, we considered creating a compensatory ensemble model that could maintain the accuracy of the majority classes while preserving the gains made in the minority classes. To explore this option, we designed a new ensemble model named the Ensemble model.
We made predictions using the Model1 and ImpSamp strategies in the Ensemble model. We retained the results from the ImpSamp approach if it predicted the FPO-PB, LLS-DEEP, or PS-PB metiers. However, if the ImpSamp approach did not predict any of these metiers, we kept the results from the Model1 approach.
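The compensatory rule behind the Ensemble model is deliberately simple; a sketch of it, assuming the two prediction vectors are aligned sample by sample, is shown below.

```python
RESCUED_METIERS = {"FPO-PB", "LLS-DEEP", "PS-PB"}

def ensemble_predict(pred_model1, pred_impsamp):
    """Keep the ImpSamp prediction when it names one of the rescued minority
    metiers; otherwise fall back to the Model1 prediction."""
    return [imp if imp in RESCUED_METIERS else base
            for base, imp in zip(pred_model1, pred_impsamp)]
```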
We evaluated the performance of the Ensemble model and compared it to the results obtained by the previous models. As shown in Table III, the Ensemble model maintained the accuracy gained with the ImpSamp approach without compromising the performance of the majority classes obtained with the Base approach. This method achieves an accuracy higher than 52% for all classes, except for FPO-CRU and LHP-PBC, which only occurred in 2017 and have very few occurrences in the data.

Fig. 4: Machine learning methods and time windows
#### IV-B6 **Metric Evaluation**
In addition to evaluating the accuracy of each classification strategy individually, we also compared their results using three different metrics: the geometric mean of the per-class accuracies, the average of the per-class accuracies weighted by each class's proportion in the dataset (balanced mean), and the average weighted by the inverse of each class's proportion in the dataset (imbalanced mean). Table III shows that the Base strategy performed best on all metrics except the imbalanced mean. This metric gives more weight to underrepresented classes, on which the Ensemble strategy performed best. As mentioned, the Ensemble strategy is particularly effective in predicting minority classes. Therefore, it can be concluded that the Ensemble strategy provides the most accurate predictions for imbalanced datasets, while the Base strategy is more suitable for balanced datasets.
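One way to read the three summary metrics is sketched below: given the per-class accuracies and the class proportions in the dataset, the geometric mean treats all classes equally, the balanced mean weights them by frequency, and the imbalanced mean weights them by inverse frequency. The function name and argument names are illustrative.

```python
import numpy as np

def summary_metrics(per_class_acc, class_proportions):
    """per_class_acc and class_proportions are aligned 1-D arrays; proportions sum to 1."""
    acc = np.asarray(per_class_acc, dtype=float)
    p = np.asarray(class_proportions, dtype=float)
    geometric = float(np.prod(acc) ** (1.0 / len(acc)))
    balanced = float(np.sum(acc * p))               # weighted by class frequency
    inv_w = (1.0 / p) / np.sum(1.0 / p)
    imbalanced = float(np.sum(acc * inv_w))         # weighted by inverse frequency
    return geometric, balanced, imbalanced
```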
## V Discussion
It seems that the models' performance is very high for the most selective gears, i.e., those that target a small number of specific species, e.g., the LHP-CEF for catching squids, the LHP-TUN for catching tunas, or the PS-PPP for catching small pelagics like _Trachurus picturatus_.
Table IV provides insights into the three iconic species associated with each class of fishery art, along with their respective percentages of total weight within that class. On the other hand, Table V displays the confusion matrix resulting from the ensemble method. By examining these tables, we can identify similarities between fishery landings and explore the misclassification patterns.
For example, an analysis of the data reveals that GNS-PB is predicted as LHP-PB in 35% of the cases. This can be attributed to the low occurrence of the GNS-PB fishery art in the dataset (as shown in Table I), as well as the presence of common fishery species such as Demersais Costeiros and Demersais Talude. Similarly, LLS-PD is observed to be distributed between LHP-PB and LLS-DEEP. This can be explained by the fact that these three arts share iconic fishery species like Demersais Talude and Bentico Peleagicos, leading to similarities in their predicted classifications.
Understanding the similarity between fishery arts is crucial for assessing the impact of fishing activities on fish populations and their habitats. Researchers can identify common trends and drivers of changes in fish stocks by analyzing catch composition, fishing techniques, and spatial distribution across different arts. This knowledge is essential for evaluating the sustainability of fishing practices and implementing measures to prevent overfishing and protect vulnerable species.
As part of future work, we aim to enhance the predictiveness of our model. One approach we are considering is to use a graph-based representation of the total weight of each fish species over time, based on the work [21]. This would allow us to extract patterns that can be used as features to improve the model's accuracy further.
## VI Conclusions
In conclusion, our research addresses the crucial problem of incomplete datasets and missing data in fisheries by employing machine learning techniques to fill in missing information in the LOTACOR/OKEANOS-UAc fishery landings dataset. The study demonstrates the effectiveness of this approach in enhancing dataset completeness and uncovering valuable insights into fishery trends. The significance of our research lies in its implications for actionable policy-making and real-time sustainability decisions, as it enables decision-makers to make informed choices for resource management and conservation measures based on a more comprehensive and reliable dataset. By leveraging the potential of machine learning, we can contribute to the sustainable management of fisheries and the preservation of marine ecosystems by formulating evidence-based policies and making informed decisions in real time.
## Acknowledgments
This work received national funds through the FCT - Foundation for Science and Technology, I.P., under the projects UIDB/05634/2020 and UIDP/05634/2020, and through the Regional Government of the Azores under the project M1.1.A/FUNC.UI&D/003/2021-2024. We also acknowledge LOTACOR, S.A., a regional public company responsible for the fish auctions and fisheries landings statistics, and Joao Santos (OKEANOS - UAc), who manages the LOTACOR/OKEANOS-UAc fishing landings database.
|
2309.11569 | Revisiting Kernel Temporal Segmentation as an Adaptive Tokenizer for
Long-form Video Understanding | While most modern video understanding models operate on short-range clips,
real-world videos are often several minutes long with semantically consistent
segments of variable length. A common approach to process long videos is
applying a short-form video model over uniformly sampled clips of fixed
temporal length and aggregating the outputs. This approach neglects the
underlying nature of long videos since fixed-length clips are often redundant
or uninformative. In this paper, we aim to provide a generic and adaptive
sampling approach for long-form videos in lieu of the de facto uniform
sampling. Viewing videos as semantically consistent segments, we formulate a
task-agnostic, unsupervised, and scalable approach based on Kernel Temporal
Segmentation (KTS) for sampling and tokenizing long videos. We evaluate our
method on long-form video understanding tasks such as video classification and
temporal action localization, showing consistent gains over existing approaches
and achieving state-of-the-art performance on long-form video modeling. | Mohamed Afham, Satya Narayan Shukla, Omid Poursaeed, Pengchuan Zhang, Ashish Shah, Sernam Lim | 2023-09-20T18:13:32Z | http://arxiv.org/abs/2309.11569v1 | # Revisiting Kernel Temporal Segmentation as an Adaptive Tokenizer for Long-form Video Understanding
###### Abstract
While most modern video understanding models operate on short-range clips, real-world videos are often several minutes long with semantically-consistent segments of variable length. A common approach to process long videos is applying a short-form video model over uniformly sampled clips of fixed temporal length and aggregating the outputs. This approach neglects the underlying nature of long videos since fixed-length clips are often redundant or uninformative. In this paper, we aim to provide a generic and adaptive sampling approach for long-form videos in lieu of the de facto uniform sampling. Viewing videos as semantically-consistent segments, we formulate a task-agnostic, unsupervised and scalable approach based on Kernel Temporal Segmentation (KTS) for sampling and tokenizing long videos. We evaluate our method on long-form video understanding tasks such as video classification and temporal action localization, showing consistent gains over existing approaches and achieving the state-of-the-art performance on long-form video modeling.
## 1 Introduction
The majority of video understanding models are devised to learn representations of short-form videos ranging from 5 to 10 seconds [10, 37, 19, 43, 9, 3, 22, 26]. These models usually suffer from computation and memory bottlenecks when processing videos of longer length. A common approach to overcome this bottleneck is to uniformly divide long videos into fixed-length clips, process each clip separately and aggregate the results. This approach is highly redundant, as nearby clips often convey similar information, and short clips that overlap semantically meaningful segments are often uninformative.
Several works [28, 23, 41, 12, 20] have previously investigated adaptive sampling to learn video representations in an efficient manner. These methods often devise a learnable adaptive sampler to select more representative frames of the video based on the reward or penalty provided by the final prediction score. However, these methods are often limited to the classification task and are heavily dependent on the specific tasks and datasets on which they are trained and cannot easily transfer to unseen tasks or datasets.
Most of these adaptive sampling approaches are not scalable to sampling a large number of frames, which is required for understanding long-form videos. In fact, all the recent approaches [18, 38] for long-form video understanding use the de facto uniform sampling for sampling fixed-length clips from long videos.
In this work, we propose a task-agnostic, adaptive and unsupervised sampling approach for long videos. Motivated by the intuition that humans perceive videos as semantically-consistent segments of variable length, we decompose the video into semantically meaningful segments using Kernel Temporal Segmentation (KTS) [30]. KTS extracts features from sparsely sampled candidate frames, computes the matrix of frame-to-frame similarity, and outputs a set of optimal change points corresponding to the boundaries of temporal segments. We then sample frames uniformly from each segment, which comprise the input to the video understanding model. Our KTS-based input tokenization achieves the following desirable attributes: (a) it is agnostic to the downstream task, (b) it yields semantically-consistent segments without relying on training data, and (c) it is scalable to an arbitrary number of segments and frames for a given long video.
We validate the generalizability of KTS-based adaptive sampling on multiple downstream tasks and benchmarks. We evaluate KTS-based sampling for video classification on the Breakfast [21] and LVU [38] benchmarks, achieving state-of-the-art performance. We also report results for temporal action localization on ActivityNet [8], showing the effectiveness of KTS-based sampling over standard uniform sampling. Furthermore, we provide a comparison with existing adaptive frame sampling methods on ActivityNet video classification and show that our approach outperforms the baselines.
The main contributions of our work can be summarized
as follows:
* We propose an adaptive, unsupervised, and task-agnostic frame sampling mechanism for long videos based on Kernel Temporal Segmentation (KTS), which overcomes deficiencies of common sampling approaches.
* We extensively evaluate KTS-based adaptive sampling against existing sampling techniques on video classification and temporal action localization tasks, showing consistent improvements and achieving state-of-the-art performance on long-form video understanding.
## 2 Related Work
Most of the video understanding models are devised to learn the representations of short-form videos ranging from \(5\) to \(10\) seconds [10, 37, 19, 43, 9, 3, 22, 26]. While these approaches use various architectures such as 2D CNNs [19, 6, 43], 3D CNNs [5, 10, 37, 36] and Vision Transformers [3, 9, 22, 26], they often share uniform sampling for input tokenization. These models usually suffer from computation and memory bottlenecks when processing videos of longer length.
Recent approaches for long-form video modeling can be broadly divided into two categories: a) building specialized models for learning from long-form videos, and b) adaptive sampling approaches for selecting frames from long-form videos. We discuss the related works in both the areas below:
### Long-form Video Understanding
Several works have been introduced to study the capability of video models in modeling videos of longer length. A movie-based question answering dataset was introduced by Tapaswi _et al_. [35], and Bain _et al_. [4] introduced a text-to-video retrieval benchmark based on videos from movies. However, those lines of work explore the video-language learning ability of the model and are hence not ideal for video-only evaluation.
Recent works [15, 16, 38, 18] improve the ability to learn long-range dependencies in the temporal domain of videos in the video classification setting. ViS4mer [18] introduces a state-space sequence layer to model the extracted short-term clip features in a long video. The object transformer model in [38] aims to capture the long-range interactions between tracked objects. Wu _et al_. [38] recently introduced a long-video benchmark (LVU) comprised of 7 classification tasks and 2 regression tasks based on movie videos, which has become a standard benchmark for long-form video understanding. Another line of work focuses on the temporal action localization (TAL) task [44, 25], which requires modeling long-range dependencies and is evaluated on long-video datasets such as ActivityNet [8] and Thumos [17].
While the proposed approaches in video classification and temporal action localization show promising performance in the modeling aspect, uniform sampling is employed as the default input sampling strategy, and hence these approaches require a large number of frames for understanding long-form videos. Instead, in this work, we deploy KTS-based adaptive sampling as input tokenization for both video classification and temporal action localization to study its effectiveness over standard uniform sampling.
### Adaptive sampling
Several adaptive sampling based strategies [40, 20, 47, 23, 41, 46, 12] have been proposed to overcome the computation issues faced by standard uniform sampling in video classification. SCSampler [20] used a light-weight network to predict the saliency score of short clips sampled uniformly from the long video. AdaFrame [40] introduces an LSTM network augmented with a global memory to learn how to adaptively select frames conditioned on inputs for efficient video recognition. FrameExit [12] investigates an early exiting procedure by employing a cascade of gating modules to automatically determine the earliest point in processing where an inference is sufficiently reliable. OCSampler [23] effectively samples a few frames from a selected number of frame candidates to dynamically localize and attend to the instance-specific condensed clip of each video. Zhi _et al_. design an adaptive sampling strategy named MGSampler [47] aiming to choose a more effective fixed-length input for trimmed, short videos.
While our work is closely related to both OCSampler and MGSampler, in contrast to them, KTS-based sampling is task-agnostic and applicable to various long-video understanding tasks (_e.g.,_ video recognition, temporal action localization). Unlike prior adaptive sampling approaches, e.g. OCSampler [23], where the sampler is first trained separately, KTS-based sampling does not require two-stage training. KTS-based adaptive sampling is scalable and can be used to effectively sample a large number of frames from long-range videos, unlike prior works which focus on sampling a small number of frames and are not easily scalable. KTS-based sampling is also unsupervised and can be performed independently of the downstream task, as it is based on the change points of the video features.
## 3 Method
Conventional long-range video models process uniformly sampled short clips from the video and aggregate the results. However, relevant information in a long video is often not evenly distributed. Humans perceive videos as a sequence of coherent scenes/events and can have a semantic understanding of the scenes given a few sampled frames. Motivated by this intuition, we propose a similar approach for sampling and tokenizing long videos. We decompose videos into semantically consistent segments leveraging Kernel Temporal Segmentation (KTS) and sample frames uniformly from each segment. We first give an overview of the KTS algorithm in Sec. 3.1. Then we describe our sampling strategy for long-form video classification and action localization in Sec. 3.2. Finally, we elaborate on the effectiveness of KTS over the other adaptive sampling techniques in the context of long-form video understanding.
### Kernel Temporal Segmentation
The initial motivation behind KTS is to detect change points in the input and decompose the video into semantically consistent segments. KTS is a kernel-based algorithm that operates independently and in an unsupervised manner, hence it does not require any additional training to yield meaningful video segments. KTS has been extensively leveraged by several video summarization approaches [27, 45, 31, 42, 49] as the segmentation output provided by KTS has a significant impact on identifying highlights of the video and yielding a high-quality summarization of the video. Here we briefly describe the KTS algorithm.
Given a long-form video, we initially downsample it, e.g. to one frame per second, and extract frame-level features using a pre-trained feature extractor \(f_{\theta}\). Let \((x_{i})_{i=1}^{n}\in\mathbf{X}\) represent the sampled frames, \(\mathbf{K}:\mathbf{X}\times\mathbf{X}\rightarrow\mathbb{R}\) represent a kernel function (Gram matrix) over the descriptors \(f_{\theta}(x_{i})\), and \(\phi:\mathbf{X}\rightarrow\mathcal{H}\) be the associated feature map with norm \(\|.\|_{\mathcal{H}}\). Suppose we want to choose \(m-1\) change points \(x_{t_{1}},\cdots,x_{t_{m-1}}\), which correspond to \(m\) segments \([x_{t_{0}},x_{t_{1}}],[x_{t_{1}},x_{t_{2}}],\cdots,[x_{t_{m-1}},x_{t_{m}}]\) with \(x_{t_{0}}=0\) and \(x_{t_{m}}=T\), where \(T\) is the length of the video.
The KTS algorithm minimizes the sum of the within-segment variances:
\[\min_{m,t_{1},\cdots,t_{m-1}}\sum_{i=1}^{m}var(t_{i-1},t_{i}) \tag{1}\]
where:
\[var(t_{i-1},t_{i})=\sum\nolimits_{t=t_{i-1}}^{t_{i}-1}\|\phi(x_{t})-\mu_{i}\| ^{2} \tag{2}\]
and \(\mu_{i}\) is the within-segment mean:
\[\mu_{i}=\frac{\sum_{t=t_{i-1}}^{t_{i}-1}\phi(x_{t})}{t_{i}-t_{i-1}} \tag{3}\]
Figure 1: An overview of KTS-based adaptive sampling for Video Classification and Temporal Action Localization. The input video is initially downsampled and \(m-1\) change points are computed using the KTS algorithm. \(k\) frames are then uniformly sampled from each of the \(m\) segments and are processed for the downstream task.

We can also make KTS adaptive to each video by making the number of segments \(m\) variable. To avoid over-segmentation, we add a penalty term \(g(m,n)\) to the objective function. A common choice for \(g(m,n)\) is \(m\log(\frac{m}{n}+1)\). In this case, our final objective is:
\[\min_{m,t_{1},\cdots,t_{m-1}}\sum_{i=1}^{m}\mathit{var}(t_{i-1},t_{i})+g(m,n) \tag{4}\]
In order to solve Equations 1 and 4, we first compute the kernel for each pair of descriptors. We use a dot-product kernel in practice. Then the segment variances are computed for each possible starting point and segment duration. Finally, we use dynamic programming to minimize the objective and find the change points. Refer to [30] for more details.
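A minimal NumPy sketch of this procedure is given below. It assumes `features` is an \(n\times d\) array of descriptors from the downsampled candidate frames and uses a dot-product kernel; the `penalty_weight` knob is an illustrative addition on top of the \(g(m,n)\) penalty, and the quadratic-time cost table is acceptable because only a few hundred candidate frames are used per video.

```python
import numpy as np

def segment_costs(K):
    """Within-segment scatter of every candidate segment [i, j), via cumulative sums."""
    n = K.shape[0]
    diag_cum = np.concatenate(([0.0], np.cumsum(np.diag(K))))
    block = np.zeros((n + 1, n + 1))
    block[1:, 1:] = np.cumsum(np.cumsum(K, axis=0), axis=1)  # 2-D prefix sums of K
    cost = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            block_sum = block[j, j] - block[i, j] - block[j, i] + block[i, i]
            cost[i, j] = (diag_cum[j] - diag_cum[i]) - block_sum / (j - i)
    return cost

def kts(features, max_segments, penalty_weight=1.0):
    """Return change-point indices minimising the penalised within-segment variance."""
    K = features @ features.T                     # dot-product kernel (Gram matrix)
    n = K.shape[0]
    cost = segment_costs(K)
    # dp[m, j]: best cost of splitting the first j frames into m segments
    dp = np.full((max_segments + 1, n + 1), np.inf)
    back = np.zeros((max_segments + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for m in range(1, max_segments + 1):
        for j in range(m, n + 1):
            cand = dp[m - 1, :j] + cost[:j, j]
            back[m, j] = int(np.argmin(cand))
            dp[m, j] = cand[back[m, j]]
    # choose the number of segments using the penalty g(m, n) = m * log(m / n + 1)
    scores = [dp[m, n] + penalty_weight * m * np.log(m / n + 1)
              for m in range(1, max_segments + 1)]
    best_m = int(np.argmin(scores)) + 1
    change_points, j = [], n
    for m in range(best_m, 1, -1):                # backtrack the m - 1 change points
        j = back[m, j]
        change_points.append(j)
    return sorted(change_points)
```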
### Adaptive sampling with KTS
KTS algorithm yields a set of change points \(x_{t_{1}},\cdots,x_{t_{m-1}}\) which decompose the video into \(m\) segments. Note that unlike shot boundary detection methods which focus on local differences between consecutive frames, KTS takes into account the differences between all pairs of frames. Therefore it provides semantically consistent and general segments. To represent each segment we uniformly sample \(k\) frames from it. Long-form video models often consist of a backbone to process short-range clips and an aggregation mechanism (e.g. via a transformer or simple averaging). We feed sampled frames from each segment to the clip-level model which learns the representation for each segment/scene. The aggregation mechanism then combines scene-level information to obtain a global video-level representation. This is in line with how humans perceive videos. Despite its simplicity, we show that our sampling approach achieves state-of-the-art performance on long-form video modeling and outperforms existing samplers on several tasks and benchmarks.
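Given the change points, drawing \(k\) frames from each segment reduces to a few lines; the sketch below works over the index space of the candidate frames (one index per second of video in our setting), and very short segments simply repeat frames.

```python
import numpy as np

def sample_frames(num_candidates, change_points, k):
    """Uniformly draw k indices from each segment defined by the change points."""
    boundaries = [0] + list(change_points) + [num_candidates]
    sampled = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        idx = np.linspace(start, end - 1, num=k)   # evenly spaced within [start, end)
        sampled.append(np.round(idx).astype(int))
    return np.concatenate(sampled)                 # m * k indices in temporal order
```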
### Discussion
As explained in Sec. 2 there are several other adaptive sampling techniques proposed in the literature. Our approach differs from these samplers in several ways. KTS-based sampling is generic and can be applied to various downstream tasks without training on them. However, existing samplers are task-specific and are often limited to video classification. We show in our experiments that our approach outperforms existing samplers on the video classification task. Unlike current approaches which disregard large portions of the video, KTS-based sampling minimizes loss of information as it samples from all the segments. This makes our approach well-suited for tasks such as action localization which need to preserve local information.
## 4 Experiments
In this section, we present our experiments and results. We focus on video classification and temporal action localization tasks. We perform temporal action localization and classification on the ActivityNet [8] dataset, and video classification on the Breakfast dataset [21] and the LVU Benchmark [38]. We also perform ablation experiments to show the impact of the number of frames used for video classification, number of change points estimated by KTS, and the backbone used as the feature extractor.
### Datasets
Breakfast [21] is a human activity dataset focused on cooking-oriented actions. It comprises \(10\) categories of cooking breakfast. It contains \(1712\) videos in total, with \(1357\) for training and \(335\) for testing. The average length of a video is \(2.3\) minutes. The cooking actions were performed by \(52\) actors, with \(44\) for training and 8 for testing. This makes the task more challenging since the actors performing the actions during test time are not seen during training.
LVU (long-form video understanding benchmark) [38] is compiled from the publicly available MovieClips dataset [1] which contains around 30,000 short movie snippets. The benchmark consists of \(9\) diverse tasks that require long-form video understanding. These tasks can be broadly categorized into content understanding _('relationship','speaking style','scene/place')_, movie metadata prediction _('director', 'genre', 'writer','movie release year')_ and user engagement prediction _('YouTube like ratio', 'YouTube popularity')_. Content understanding and movie metadata prediction can be considered classification tasks and hence are evaluated using the top-1 accuracy metric, while user engagement prediction is a regression task and is evaluated using mean squared error (MSE). Each video is generally one to three minutes long. For both Breakfast and LVU datasets, we follow the training configuration suggested in ViS4mer.
The ActivityNet [8] dataset contains around 20,000 untrimmed videos spanning 200 action classes of daily activities. The average length of a video is 117 seconds, and the average length of action segments is 48 seconds. Thus it can be considered a long-form video dataset. Following standard practice [2, 44], we train on the training split of the dataset and evaluate on the validation split. We report average \(mAP@[0.5:0.05:0.95]\), similar to Actionformer [44], for a fair comparison.
### Video Classification
We perform video classification experiments on Breakfast [21] and LVU [38] datasets to study the effectiveness of KTS-based adaptive sampling.
**Baseline:** We adopt the recently introduced ViS4mer [18] as the baseline model to evaluate the performance of KTS-based adaptive sampling against the uniform sampling on
video classification tasks. ViS4mer is a long-range video classification model comprised of a standard Transformer encoder [7, 26] and a multi-scale temporal S4 [13] decoder. It extracts features from input video tokens using the Transformer encoder which are then fed to the multi-scale S4 decoder that learns hierarchical spatio-temporal video representations. ViS4mer uses Vision Transformer [7] to extract features for experiments on the LVU benchmark and uses Video Swin Transformer [26] to extract features in experiments on the Breakfast dataset. Despite innovation in the modeling aspect, ViS4mer leverages uniform sampling to tokenize the input video. We adopt KTS-based adaptive sampling in both settings owing to its task-agnostic nature.
**Implementation Details:** Given a video, we downsample it to one frame per second and use the downsampled frames as candidates for computing the change points. We use GoogleNet [34] pre-trained on ImageNet-1K for extracting the feature descriptors. We sample \(m\times k\) frames for each video as described in Sec. 3.2, and the sampled frames are then fed to the video classification model.
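Putting the pieces together, a hedged end-to-end sketch of the tokenization step could look as follows, reusing the `kts` and `sample_frames` sketches from Section 3 and assuming `frames` is a tensor of the video downsampled to one frame per second; the torchvision weights string and the 1024-dimensional pooled GoogLeNet feature are implementation details assumed here rather than taken from the paper.

```python
import torch
from torchvision.models import googlenet

# frames: (T, 3, 224, 224) tensor, one frame per second of the input video.
backbone = googlenet(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()              # keep the 1024-d pooled features
backbone.eval()

with torch.no_grad():
    descriptors = backbone(frames).numpy()     # (T, 1024) candidate-frame descriptors

change_points = kts(descriptors, max_segments=32)                 # Sec. 3.1 sketch
frame_ids = sample_frames(len(descriptors), change_points, k=32)  # m x k indices
clips = frames[torch.as_tensor(frame_ids)]     # tokens fed to the video classifier
```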
**Results:** Table 1 shows the video classification results on the Breakfast dataset. We observe that KTS-based adaptive sampling achieves state-of-the-art results on the Breakfast dataset while utilizing \(16\times\) fewer frames per video compared to the original ViS4mer baseline, which uses uniform sampling. When compared with uniform sampling in the same setting \([32\times 32]\), we observe a significant gain of \(4.23\%\) in accuracy with KTS-based adaptive sampling, showing its superiority over uniform sampling.
Table 2 shows the results on the LVU benchmark. KTS-based adaptive sampling achieves state-of-the-art performance on \(7\) out of \(9\) tasks and outperforms uniform sampling on \(8\) out of \(9\) tasks. In particular, on the _scene prediction_ task under content understanding, KTS-based tokenization yields a performance boost of \(12.79\%\), while on the _genre prediction_ task under movie metadata prediction, KTS-based tokenization outperforms uniform sampling by a significant margin of \(11.15\%\). Similar performance gains can be observed consistently across the benchmark tasks. Note that the LVU benchmark is more challenging than the Breakfast dataset. The tasks in LVU require long-term dependencies to be captured carefully, where current short-range video models fail even with strong pre-training mechanisms [38]. KTS-based input tokenization shows consistently promising performance throughout the benchmark, which underscores the need for an adaptive sampling strategy to process long-form videos.
### Temporal Action Localization
Temporal Action localization (TAL) aims to identify the action instances present in a video in the temporal domain and recognize the action categories. Despite the steady progress in TAL performance in the modeling aspects (_e.g.,_ action proposals [24], pretraining [2], single-stage TAL [44]), uniform sampling is adopted as the de facto sampling approach in most of the action localization models. We analyze the impact of the KTS-based adaptive sampling mechanism on action localization.
**Baseline:** We investigate the performance of KTS-based sampling on the strong Actionformer [44] baseline, which achieves the current state-of-the-art performance on TAL for ActivityNet. It comprises a multi-scale transformer encoder which encodes the sequence of embedded video clip features into a feature pyramid. The feature pyramid is then followed by a classification head and a regression head to recognize the action instance and estimate the action boundaries, respectively. The TSP [2] model pre-trained on the ActivityNet video classification task is used to extract non-overlapping clip-level features. Refer to [44] for a complete description of Actionformer.
**Implementation Details:** Given a video, we downsample it to one frame per second when computing the KTS change points and use ResNet-50 [14] pre-trained on ImageNet-1K to extract feature descriptors for KTS computation. We adopt a similar training configuration as the Actionformer to study the impact of KTS-based adaptive sampling in TAL. Actionformer employs clips of 16 frames at a frame rate of \(15\) fps and a stride of \(16\) frames (i.e., non-overlapping clips) as input to the feature extractor followed by the localization module. This gives one feature vector per \(\frac{16}{15}\approx 1.067\) seconds and \(M=\frac{15}{16}T\) segments where \(T\) is the video length. We can also consider \(\frac{M}{2}\), \(\frac{M}{4}\), \(\cdots\) segments by sampling every \(2^{nd}\), \(4^{th}\), \(\cdots\) frame. Similarly, we can choose \(\frac{M}{2}\), \(\frac{M}{4}\), \(\cdots\) segments in our KTS-based sampling strategy. For the baseline, all the segments have the same length while our adaptive sampling technique yields variable-length segments. Within each segment, we uniformly sample \(16\) frames in both cases. These frames are then fed to the action localization model. Fig. 2 provides a comparison of KTS vs. uniform sampling, showing improved performance, especially for a smaller number of segments.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Frames & Accuracy \\ \hline VideoGraph [15] & \(64\times 8\) & 69.50 \\ Timeception [16] & \(1024\times 8\) & 71.30 \\ GHRM [48] & \(64\times 8\) & 75.49 \\ \hline ViS4mer [18] & \(32\times 32\) & \(85.63\) \\ ViS4mer [18] & \(512\times 32\) & \(88.17\) \\ ViS4mer + KTS (Ours) & \(\mathbf{32\times 32}\) & \(\mathbf{89.86}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Video Classification results on Breakfast. We evaluate KTS-based sampling against uniform sampling with ViS4mer [18] as the baseline. Our approach achieves state-of-the-art performance with significantly less computation.
**Results:** Fig. 2 shows the empirical analysis of KTS-based sampling on TAL. Note that the performance gain of using KTS-based adaptive sampling is clearly observed for a smaller number of segments (_e.g.,_\(\frac{M}{3}\) and below), and the gap in performance increases when reducing the number of segments. In particular, for \(\frac{M}{6}\) segments uniform sampling achieves \(31.05\%\) average mAP while KTS-based sampling attains \(32.58\%\) average mAP on ActivityNet, yielding a \(1.53\%\) gain. For a larger number of segments, the performance of KTS is nearly similar to uniform sampling. For \(M\) segments, KTS reduces to uniform sampling as there are \(M\) change point candidates when using one frame per second for sampling candidates. Similarly, for \(\frac{M}{2}\) we select half of the candidates as change points, which makes it quite similar to uniform sampling.
### Comparison with Existing Adaptive Sampling Methods
Table 3 compares KTS-based adaptive tokenization with existing efficient frame sampling methods for video classification on the ActivityNet dataset. We use MobileNetV2 [32] pre-trained on ImageNet-1K to extract the features. For a fair comparison with previous methods in terms of accuracy and computational cost, we initially uniformly sample 16 frames resized to a smaller resolution (e.g., 112 \(\times\) 112) in a given video as the change point candidates and estimate change points. We sample one frame within each segment and train a ResNet-50 classifier (pre-trained on ImageNet-1K) for video classification on ActivityNet. Our results show that KTS-based sampling yields competitive performance when compared to existing adaptive sampling approaches. In particular, KTS-based sampling improves the classification accuracy by \(1.03\%\) over AR-Net [28] while reducing the computational cost by \(3.8\) GFLOPs. The KTS algorithm incurs only around \(0.004\) GFLOPs in our experiments, which is negligible compared to the computational cost incurred by ResNet-50 and MobileNetV2. The KTS-based sampling method also outperforms OCSampler [23] while incurring significantly lower computational cost.
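Given the estimated change points, adaptive tokenization reduces to uniformly drawing a fixed number of frames inside each variable-length segment (one frame per segment in the classification comparison above, 16 per segment in the Actionformer setup). A possible sketch, with illustrative names, is shown below.

```python
import numpy as np

def frames_from_segments(n_frames, change_points, frames_per_segment):
    """Pick frame indices by sampling uniformly within each KTS segment."""
    boundaries = [0] + sorted(change_points) + [n_frames]
    picked = []
    for start, end in zip(boundaries[:-1], boundaries[1:]):
        # Segments have different lengths; linspace adapts the stride to each.
        idx = np.linspace(start, end - 1, num=frames_per_segment)
        picked.extend(np.round(idx).astype(int).tolist())
    return picked

# e.g. 32 segments x 16 frames drawn from 512 candidate frames:
# indices = frames_from_segments(512, kts_change_points(feats, 32), 16)
```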
### Ablation and Analysis
In this section, we perform ablation experiments to show the impact of the number of frames used for video classification, number of change points estimated by KTS, and the backbone used as the feature extractor.
\begin{table}
\begin{tabular}{l|c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Content (\(\uparrow\))} & \multicolumn{4}{c}{Metadata (\(\uparrow\))} & \multicolumn{2}{c}{User (\(\downarrow\))} \\ \cline{2-10} & Relation & Speak & Scene & Director & Genre & Writer & Year & Like & Views \\ \hline SlowFast + NL [10] & 52.40 & 35.80 & 54.70 & 44.90 & 53.00 & 36.30 & **52.50** & 0.38 & 3.77 \\ VideoBERT [33] & 52.80 & 37.90 & 54.90 & 47.30 & 51.90 & 38.50 & 36.10 & 0.32 & 4.46 \\ Object Transformer [38] & 53.10 & 39.40 & 56.90 & 51.20 & 54.60 & 34.50 & 39.10 & **0.23** & 3.55 \\ \hline ViS4mer [18] & 57.24 & 40.79 & 67.44 & 62.62 & 54.71 & 48.80 & 44.75 & 0.26 & 3.63 \\ ViS4mer + KTS (Ours) & **59.52** & **40.79** & **80.23** & **69.16** & **65.86** & **54.16** & 48.25 & 0.29 & **3.29** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Evaluation of KTS-based sampling on the LVU benchmark. Our approach shows consistent improvements over uniform sampling on the majority of video understanding tasks.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Backbone & mAP (\%) & GFLOPs \\ \hline NSNet [41] & ResNet-101 & 74.9 & 73.2 \\ AdaFrame [40] & ResNet-101 & 71.5 & 78.7 \\ LiteEval [39] & ResNet-101 & 72.7 & 95.1 \\ KTS (Ours) \((84\times 84)\) [8 frames] & ResNet-101 & **80.9** & **67.1** \\ \hline Uniform & ResNet-50 & 72.5 & 65.8 \\ Random & ResNet-50 & 71.2 & 65.8 \\ SCSampler [20] & ResNet-50 & 72.9 & 41.9 \\ AdaMML [29] & ResNet-50 & 73.9 & 94.0 \\ AR-Net [28] & ResNet-50 & 73.8 & 33.5 \\ ListenToLook [11] & ResNet-50 & 72.3 & 81.4 \\ OCSampler [23] & ResNet-50 & 79.8 & 67.2 \\ KTS (Ours) \((84\times 84)\) [6 frames] & ResNet-50 & 74.8 & **29.7** \\ KTS (Ours) \((84\times 84)\) [8 frames] & ResNet-50 & **80.0** & 32.1 \\ KTS (Ours) \((112\times 112)\) [8 frames] & ResNet-50 & **80.3** & 37.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of our approach with existing adaptive sampling strategies on ActivityNet video classification.
Figure 2: KTS vs Uniform sampling comparison on ActivityNet Action Localization. We report average mAP when varying the number of segments. \(M\) corresponds to the number of segments when each segment length is \(\frac{16}{15}\) seconds as used in the Actionformer baseline.
#### 4.5.1 Impact of the number of frames
We evaluate KTS-based adaptive sampling against uniform sampling on the Breakfast and LVU datasets. We vary the number of input frames per video used for classification under both sampling approaches. Fig. 3 presents the results on the LVU benchmark on four tasks: scene prediction, genre prediction, writer prediction, and director prediction.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Total Frames & \(m\times k\) & Uniform & KTS & \(\Delta\) \\ \hline \multirow{2}{*}{\(256\)} & \(16\times 16\) & \(78.87\) & \(81.69\) & \(+2.82\%\) \\ & \(32\times 8\) & \(81.13\) & \(84.51\) & \(+3.38\%\) \\ \hline \multirow{2}{*}{\(512\)} & \(16\times 32\) & \(80.56\) & \(84.51\) & \(+3.95\%\) \\ & \(32\times 16\) & \(83.09\) & \(88.17\) & \(+5.08\%\) \\ & \(64\times 8\) & \(83.66\) & \(87.04\) & \(+3.38\%\) \\ \hline \multirow{2}{*}{\(1024\)} & \(\mathbf{32\times 32}\) & \(\mathbf{85.63}\) & \(\mathbf{89.86}\) & \(+\mathbf{4.23\%}\) \\ & \(64\times 16\) & \(85.63\) & \(86.76\) & \(+1.13\%\) \\ \hline \end{tabular}
\end{table}
Table 4: Impact of the number of change points in KTS-based adaptive video tokenization. We analyze the performance of our sampling approach on Breakfast video classification for different configurations of the number of change points. \(m\): Number of video segments., \(k\): Number of frames selected to process within each segment.
Figure 4: KTS vs Uniform sampling comparison on Breakfast video classification with a varying number of frames.
Figure 3: KTS vs Uniform sampling comparison on classification tasks of the LVU benchmark by varying the number of input frames. Consistent performance gain shows the effectiveness of KTS-based adaptive sampling over standard uniform sampling.
We vary the number of frames sampled for modeling. KTS-based adaptive sampling yields a consistent performance gain over standard uniform sampling in all configurations.
Fig. 4 demonstrates the results on the Breakfast dataset. KTS-based adaptive sampling shows a consistent performance gain over uniform sampling in all settings. In particular, KTS-based sampling gives an accuracy of \(84.51\%\) when using \(256\) frames per video in the \(32\times 8\) setting, while uniformly sampling \(512\) frames yields \(83.09\%\) in the same setting. This shows that KTS-based tokenization not only improves performance but also requires significantly fewer frames to process when compared to uniform sampling. Table 1 also validates this claim, where KTS-based adaptive input tokenization achieves state-of-the-art performance on Breakfast video classification with \(1/16\) of the number of frames used by uniform sampling.
Table 4 reports the empirical results for different configurations of change points given the number of frames input to the model. We observe that increasing the number of change points improves the performance of the model up to a certain point. We also observe that KTS-based sampling consistently achieves significantly better performance than uniform sampling in the same setting across all configurations. In particular, we obtain a performance gain of \(4.23\%\) in the \(32\times 32\) setting, where the number of change points estimated by the KTS algorithm is \(31\). Similarly, a \(5.08\%\) boost is observed in the \(32\times 16\) configuration over uniform sampling with the ViS4mer baseline.
Fig. 5 demonstrates the results on the regression task (view count prediction) of the LVU benchmark. We vary the number of frames sampled for modeling. KTS-based adaptive sampling consistently achieves lower mean-squared error (MSE) compared to uniform sampling across different configuration settings.
#### 4.5.2 Impact of the feature extractor on KTS
We investigate the choice of feature extractor in the KTS algorithm in Table. 5. We consider three standard image feature extractors namely: ResNet50 [14], MobileNetv2 [32] and GoogleNet [34]. We use the pre-trained models after discarding the final classification layer. We observe that in both \(32\times 16\) and \(64\times 8\) settings, the GoogleNet backbone produces better results on the Breakfast dataset. Hence, similar to the choice of the number of frames, the choice of feature extractor plays a significant role in the KTS algorithm.
## 5 Limitations and Future Work
We identify two major limitations of KTS-based input tokenization in long-form video understanding: (1) KTS, as a kernel-based change point detection algorithm, is not learnable. Segmentation defects in KTS directly affect downstream task performance. However, learning change points specific to a task (_e.g.,_ video classification on Breakfast) can limit transferability. We plan to investigate a learnable alternative to our current approach in the future. (2) While KTS performs better than uniform sampling in several training configurations as shown in our analysis, the optimal number of change points and the choice of feature extractor for KTS-based sampling still need to be hand-picked. A possible future direction is to investigate a learnable mechanism for selecting the optimal number of change points for a given video.
## 6 Conclusion
In this work, we present an adaptive and task-agnostic frame sampling mechanism for long-form video modeling. Our approach leverages Kernel Temporal Segmentation (KTS) to generate semantically-consistent segments used for sampling frames. We perform a comprehensive set of experiments on video classification and temporal action localization on several long-form video understanding datasets and benchmarks and show the superiority of KTS-based adaptive sampling over existing sampling strategies. In spite of its simplicity, our sampling approach achieves state-of-the-art performance on long-form video understanding benchmarks while being efficient. We plan to explore other variants of adaptive sampling based on temporal segmentation, which could operate in a learnable manner, in the future.
Figure 5: KTS vs Uniform sampling comparison on the view count prediction task of LVU benchmark.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(m\times k\) & ResNet50 & MobileNetv2 & GoogleNet \\ \hline \(64\times 8\) & 81.12 & 83.38 & **87.04** \\ \(32\times 16\) & 81.41 & 83.94 & **88.17** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Impact of the feature extractor of KTS on the Breakfast video classification task
|
2309.08038 | Efficient Rotating Synthetic Aperture Radar Imaging via Robust Sparse
Array Synthesis | Rotating Synthetic Aperture Radar (ROSAR) can generate a 360$^\circ$ image of
its surrounding environment using the collected data from a single moving
track. Due to its non-linear track, the Back-Projection Algorithm (BPA) is
commonly used to generate SAR images in ROSAR. Despite its superior imaging
performance, BPA suffers from high computation complexity, restricting its
application in real-time systems. In this paper, we propose an efficient
imaging method based on robust sparse array synthesis. It first conducts
range-dimension matched filtering, followed by azimuth-dimension matched
filtering using a selected sparse aperture and filtering weights. The aperture
and weights are computed offline in advance to ensure robustness to array
manifold errors induced by the imperfect radar rotation. We introduce robust
constraints on the main-lobe and sidelobe levels of filter design. The
resultant robust sparse array synthesis problem is a non-convex optimization
problem with quadratic constraints. An algorithm based on feasible point
pursuit and successive convex approximation is devised to solve the
optimization problem. Extensive simulation study and experimental evaluations
using a real-world hardware platform demonstrate that the proposed algorithm
can achieve image quality comparable to that of BPA, but with a substantial
reduction in computational time up to 90%. | Wei Zhao, Cai Wen, Quan Yuan, Rong Zheng | 2023-09-14T21:59:04Z | http://arxiv.org/abs/2309.08038v1 | # Efficient Rotating Synthetic Aperture Radar Imaging via Robust Sparse Array Synthesis
###### Abstract
Rotating Synthetic Aperture Radar (ROSAR) can generate a 360deg image of its surrounding environment using the collected data from a single moving track. Due to its non-linear track, the Back-Projection Algorithm (BPA) is commonly used to generate SAR images in ROSAR. Despite its superior imaging performance, BPA suffers from high computation complexity, restricting its application in real-time systems. In this paper, we propose an efficient imaging method based on robust sparse array synthesis. It first conducts range-dimension matched filtering, followed by azimuth-dimension matched filtering using a selected sparse aperture and filtering weights. The aperture and weights are computed offline in advance to ensure robustness to array manifold errors induced by the imperfect radar rotation. We introduce robust constraints on the main-lobe and sidelobe levels of filter design. The resultant robust sparse array synthesis problem is a non-convex optimization problem with quadratic constraints. An algorithm based on feasible point pursuit and successive convex approximation is devised to solve the optimization problem. Extensive simulation study and experimental evaluations using a real-world hardware platform demonstrate that the proposed algorithm can achieve image quality comparable to that of BPA but with a substantial reduction in computational time up to 90%.
Rotating SAR, sparse array, robust design, successive convex approximation
## I Introduction
Synthetic Aperture Radar (SAR) has been widely used in military reconnaissance and remote sensing because of its all-weather, all-day acquisition capabilities [1]. Conventional SAR working modes include "stripmap", "spotlight" and "scan" [2]. In these modes, high range resolutions are achieved by transmitting large-bandwidth signals, while the high resolution in the cross-range dimension is achieved by utilizing the Doppler effect induced by the relative motion between the radar platform and the target. However, the imaging swaths of these SAR modes are relatively small due to the limited beam footprint and the restricted moving track. Different from the aforementioned imaging schemes, Rotating SAR (ROSAR) systems mount antennas on the edge of a rotation platform with a certain radius [3]. Through platform rotation, ROSAR systems are able to scan the surrounding environment continuously and generate a 360\({}^{\circ}\) image using the collected data from a single moving track [4]. ROSAR can overcome the limited (angular) field-of-view of radar boards and allows imaging without translational movements of the platform, making it a promising low-cost solution in helicopter-borne SAR imaging [5, 6, 7], indoor imaging [8] and so on. In indoor environments, ROSAR can be used for mapping and localization in case of fire emergencies or situations where other sensors fail due to high heat and low visibility.
Due to the highly non-linear moving track of ROSAR, the Back-Projection Algorithm (BPA) [9, 10] is typically employed, whose basic idea is to perform range-azimuth matched filtering with prior knowledge of the distance between the target and each phase center. Although the conventional BPA can produce high-quality images without any limitation on the array geometry, it suffers from extremely high computational complexity, making it inadequate for real-time high-resolution imaging systems. The computational complexity of BPA is proportional to the number of pixels, the number of fast-time samples per pulse and the number of pulses needed to generate one image. In a practical system, all three parameters can be very large: the number of pixels depends on the image resolution; a high pulse repetition frequency (PRF), and consequently dense virtual array elements, is required to avoid aliasing [11]; and a large signal bandwidth, which results in a large number of fast-time samples, is needed to ensure high range resolution. However, due to the unique array geometry of ROSAR, frequency-domain processing algorithms such as the Chirp Scaling Algorithm [12] and Omega-K [13], which assume linear motions of the radar platform relative to the scene, are not applicable. In the past decades, much effort has been made to improve the efficiency of BPA and many algorithms have been proposed, for example, fast factorized Back-Projection (FFBP) [14, 15], Cartesian factorized BPA [16] and its variant [17]. The core idea of these algorithms is sub-aperture fusion, in which the entire aperture is split into many small apertures and BPA is applied to each sub-aperture to obtain coarse images. A high-quality image can then be obtained by fusing these coarse-grained images together. However, all of them also assume a linear aperture and cannot be applied directly to a circular aperture. In addition, sparse array synthesis is a technique with low complexity, but conventional ways to select sparse elements, e.g., randomly or uniformly, are not optimal under every condition. Compressive sensing-based algorithms [18, 19] still suffer from high complexity, the requirement of a sparse environment
and sensitivity to array manifold error.
In this work, we propose a new sparse array synthesis technique to reduce the computation complexity of BPA [20]. A key novelty of our design lies in the consideration of array geometry mismatch. Mismatch is prevalent in practical ROSAR systems due to imperfect rotational control or measurements. To solve for sparse complex weights of the virtual array elements, we formulate a robust constrained optimization problem and devise an algorithm based on feasible point pursuit (FPP) [21] and successive convex approximation (SCA) [22]. Compared with the aforementioned conventional methods, the proposed method is optimal subject to sidelobe constraints and robust to a certain level of array manifold error. Besides, thanks to the symmetry of the circular array, the algorithm only needs to be executed offline for one azimuth direction per range bin in the radar coverage area, and the results can be used for range bins in any direction. The resulting sparse weights effectively reduce the number of pulses needed in BPA. To further reduce the complexity of the proposed algorithm, we perform range-dimension matched filtering by employing the Fast Fourier Transform (FFT). For a specific target in space, only the signals from the appropriate range bins at each phase center are selected. The sparse array design constitutes an important step toward realizing ROSAR on mobile devices with limited space, battery power and computation capacity.
We have implemented the proposed algorithms in MATLAB. Extensive numerical simulations are conducted to evaluate the impact of the parameter settings on the sparsity of the design and array patterns. Additionally, we simulate radar transmission and receiving signals using the MATLAB Phased Array Toolbox and collect real-world data from indoor environments from a rotational hardware platform. The evaluation study shows that in both simulations and real experiments the proposed algorithm can reduce the total computation time by more than 90% while generating SAR images with comparable quality as BPA.
The rest of the paper is organized as follows. Section II gives the system model of ROSAR and formulates the sparse array synthesis problem for ROSAR. The problem transformation and the solution approach are presented in Section III. Section III-D introduces range-dimension filtering using the range-FFT to further reduce computation complexity. We validate our approach by numerical evaluation and simulation study in Section IV, as well as by experiments in a real environment in Section IV-F. Section V concludes the paper.
## II System Model and Problem Formulation
In this section, we introduce the ROSAR system geometry, signal model and preprocessing steps, and give the formal problem formulation of the sparse array design at the end. Frequently used symbols are summarized in Table I.
### _Radar Geometry_
Consider a stationary ROSAR system in Fig. 1. The radar is moving along the edge of a circle centered at the origin with radius \(r\). The bore-sight of the antenna always faces outwards along the radial direction. The antenna radiation pattern in azimuth is assumed to be cosine-shaped and non-zero within \(\left[-\frac{\pi}{2},\frac{\pi}{2}\right]\). The radar transmits chirp signals at a constant rate, e.g., \(N\) times per circle. Due to the symmetry, without loss of generality, we define a 2D coordinate frame such that a point target at distance \(R_{t}\) from the circle center is located at \((0,R_{t})\) and the first (indexed by 0) phase center (with respect to the \(X\)-axis counter-clockwise) is at \((r,0)\). Then, the bore-sight direction of the antenna at the \(n\)-th radar position (phase center) is
\[\phi_{n}=\frac{2\pi n}{N}, \tag{1}\]
where \(n=0,1,..,N-1\). Let \(R_{n}\) and \(\theta_{n}\) be the distance and the direction from the \(n\)-th phase center to the target with respect to its bore-sight direction, respectively. Due to the cosine antenna beam pattern, the target is in the field of
\begin{table}
\begin{tabular}{|c|l|} \hline \(N\) & \# of virtual phase centers per circle \\ \hline \(n\) & Index of phase centers \\ \hline \(\phi_{n}\) & Bore-sight direction of the \(n\)-th phase center \\ \hline \(\phi_{t}\), \(R_{t}\) & The direction and range of target \(t\) \\ \hline \(\phi_{v}\) & The angle such that if \(\phi_{n}\in(\pi/2-\phi_{v},\pi/2+\phi_{v})\), the target is visible to the \(n\)-th phase center \\ \hline \(\theta_{n}\), \(R_{n}\) & The direction and distance from the \(n\)-th phase center to the target \\ \hline \(p(\cdot)\) & Antenna radiation pattern \\ \hline \(x(t)\) & Transmitted chirp signal \\ \hline \(y_{n}(t)\) & Received signal at the \(n\)-th phase center \\ \hline \(y_{\text{IF},n}(t)\) & IF signal at the \(n\)-th phase center \\ \hline \(y_{\text{IF},n}(m)\) & Sampled IF signal at the \(n\)-th phase center \\ \hline \(Y_{\text{1D},n}(l)\) & The data in the \(l\)-th range bin at the \(n\)-th phase center \\ \hline \(\mathbf{y}_{\text{1D},n}\) & Data vector after applying range-FFT at the \(n\)-th phase center \\ \hline \(\mathbf{Y}_{\text{IF}}\), \(\mathbf{Y}_{\text{1D}}\) & Valid data matrices for SAR \\ \hline \(\mathbf{a}(\phi;R)\) & Steering vector \\ \hline \(F(\phi;R)\) & Array pattern \\ \hline \(\mathbf{w}\) & Weight vector \\ \hline \(\mathbf{e}\) & Array error vector \\ \hline \((\cdot)^{T}\) & Matrix transpose \\ \hline \((\cdot)^{H}\) & Matrix conjugate-transpose \\ \hline \end{tabular}
\end{table} TABLE I: Frequently Used Symbols
Fig. 1: Imaging geometry of a ROSAR system
view (FoV) of only a subset of antenna positions. Denote by \(\phi_{v}\) the angle such that if \(\phi_{n}\in(\pi/2-\phi_{v},\pi/2+\phi_{v})\), the target is visible to the \(n\)-th phase center. \(R_{n}\), \(\theta_{n}\) and \(\phi_{v}\) can be derived from trigonometry relationships, i.e.,
\[R_{n} = \sqrt{R_{t}^{2}+r^{2}-2rR_{t}\cos\left(\phi_{n}-\phi_{t}\right)}, \tag{2}\] \[\theta_{n} = \arctan\frac{R_{t}\cdot\sin\left(\left|\phi_{n}-\phi_{t}\right| \right)}{R_{t}\cdot\cos\left(\left|\phi_{n}-\phi_{t}\right|\right)-r},\] (3) \[\phi_{v} = \arccos\frac{r}{R_{t}}. \tag{4}\]
The indices of the phase centers where the target is in their FOV are given by
\[n=N_{\min},N_{\min}+1,N_{\min}+2,\ldots,N_{\max}, \tag{5}\]
where \(N_{\min}=\left\lceil\frac{\pi/2-\phi_{v}}{\phi_{\Delta}}\right\rceil\), \(N_{\max}=\left\lfloor\frac{\pi/2+\phi_{v}}{\phi_{\Delta}}\right\rfloor\) and \(\phi_{\Delta}=\frac{2\pi}{N}\) is the angular spacing between adjacent phase centers. To generate the image of the target, only signals received by those phase centers are used.
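The geometry relations (1)–(5) translate directly into a small numerical helper. The sketch below uses hypothetical names and `arctan2` in place of `arctan` for numerical robustness; it is an illustration of the equations above rather than code from the paper.

```python
import numpy as np

def visible_phase_centers(R_t, phi_t, r, N):
    """Phase centers that see a target at azimuth phi_t and range R_t (R_t > r)."""
    phi_delta = 2.0 * np.pi / N
    phi_v = np.arccos(r / R_t)                                   # Eq. (4)
    # Eq. (5); the paper's coordinate frame places the target at phi_t = pi/2.
    n_min = int(np.ceil((phi_t - phi_v) / phi_delta))
    n_max = int(np.floor((phi_t + phi_v) / phi_delta))
    n = np.arange(n_min, n_max + 1)
    phi_n = n * phi_delta                                        # Eq. (1)
    R_n = np.sqrt(R_t**2 + r**2 - 2 * r * R_t * np.cos(phi_n - phi_t))     # Eq. (2)
    theta_n = np.arctan2(R_t * np.sin(np.abs(phi_n - phi_t)),
                         R_t * np.cos(np.abs(phi_n - phi_t)) - r)          # Eq. (3)
    return n, R_n, theta_n
```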
### _Signal Model and Preprocessing_
Let the chirp signal transmitted by the radar be
\[x(t)=e^{j2\pi\left(f_{c}t+\frac{1}{2}Kt^{2}\right)}, \tag{6}\]
where \(f_{c}\) is the carrier frequency, \(K\) is the chirp slope and \(t\) is the fast time. The received signal at the \(n\)-th phase center is
\[y_{n}(t)=\alpha_{n}e^{j2\pi\left[f_{c}(t-\tau_{n})+\frac{1}{2}K(t-\tau_{n})^{ 2}\right]}+v(t), \tag{7}\]
where \(\alpha_{n}\) combines the complex reflection coefficient of the target, the antenna radiation pattern and channel fading, \(\tau_{n}=2R_{n}/c\) is the round-trip time delay, \(c\) is the speed of the light, \(v(t)\) is the Gaussian white noise at the receiver side. Specifically, the antenna radiation pattern is represented as
\[p\left(\theta\right)=\begin{cases}\cos\left(\theta\right)&\theta\in\left( \frac{-\pi}{2},\frac{\pi}{2}\right),\\ \mathbf{0}&\text{otherwise}.\end{cases} \tag{8}\]
After down-converting and deramping the resulting intermediate frequency (IF) signal is
\[y_{\text{IF},n}(t)=\begin{cases}\cos\left(\theta\right)&\theta\in\left(\frac {-\pi}{2},\frac{\pi}{2}\right),\\ \hline 0&\text{otherwise},\end{cases} \tag{9}\]
where \(v_{\text{IF}}(t)=v^{*}(t)\cdot x(t)\). The residual video phase (RVP) term \(-\frac{1}{2}K\tau_{n}^{2}\) is negligible compared with other phase terms. Let the sampling frequency, the total number of samples, sampling interval, sampling start time in one chirp be \(F_{s}\), \(M\), \(t_{s}\) (\(t_{s}=1/F_{S}\)), \(T_{Start}\), respectively. The sampled IF signal is
\[y_{\text{IF},n}(m)=\alpha_{n}e^{j2\pi\left[\tau_{n}K(mt_{s}+T_{Start})+f_{c} \tau_{n}\right]}+v_{\text{IF}}(m), \tag{10}\]
where \(m\) is the sampling index, \(0\leq m\leq M-1\). Combining all the samples, we get the following vector representation
\[\mathbf{y}_{\text{IF},n}=\begin{bmatrix}\alpha_{n}e^{j2\pi\left[\tau_{n}K(0\cdot t_{s}+T_{Start})+f_{c}\tau_{n}\right]}\\ \alpha_{n}e^{j2\pi\left[\tau_{n}K(1\cdot t_{s}+T_{Start})+f_{c}\tau_{n}\right]}\\ \vdots\\ \alpha_{n}e^{j2\pi\left[\tau_{n}K((M-1)\cdot t_{s}+T_{Start})+f_{c}\tau_{n}\right]}\end{bmatrix}+\mathbf{v}_{\text{IF}}, \tag{11}\]
where \(\mathbf{v}_{\text{IF}}=\left[v_{\text{IF}}(0),v_{\text{IF}}(1),\ldots,v_{ \text{IF}}(M-1)\right]^{T}\). Let \(k=2\pi\left(KT_{start}+f_{c}\right)/c\). Substituting the \(\tau_{n}\)'s in each entry by \(2R_{n}/c\) and rearranging items in (11), we have
\[\mathbf{y}_{\text{IF},n}=\begin{bmatrix}\alpha_{n}e^{j2\pi\tau_{n}K\cdot 0 \cdot t_{s}}e^{j2kR_{n}}\\ \alpha_{n}e^{j2\pi\tau_{n}K\cdot 1\cdot t_{s}}e^{j2kR_{n}}\\ \vdots\\ \alpha_{n}e^{j2\pi\tau_{n}K\cdot(M-1)\cdot t_{s}}e^{j2kR_{n}}\end{bmatrix}+ \mathbf{v}_{\text{IF}}. \tag{12}\]
Now, the data matrix from the effective phase centers used for SAR is
\[\mathbf{Y}_{\text{IF}}=\left[\mathbf{y}_{\text{IF},N_{\min}},\mathbf{y}_{\text{IF},N_{\min}+1},\ldots,\mathbf{y}_{\text{IF},N_{\max}}\right]. \tag{13}\]
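For intuition, the noise-free data matrix in (10)–(13) for a single point target can be synthesized in a few lines. The helper below is a hedged illustration (the names and the speed-of-light constant are ours), reusing the ranges and look angles from the geometry helper sketched above.

```python
import numpy as np

c = 3e8  # speed of light (m/s)

def simulate_if_matrix(R_n, theta_n, fc, K, Fs, M, T_start):
    """Noise-free IF data matrix Y_IF (M samples x valid phase centers), Eq. (10)-(13)."""
    ts = 1.0 / Fs
    m = np.arange(M)[:, None]                 # fast-time sample index (column)
    tau = 2.0 * R_n[None, :] / c              # round-trip delays per phase center
    alpha = np.cos(theta_n)[None, :]          # cosine antenna pattern, Eq. (8)/(15)
    phase = 2 * np.pi * (tau * K * (m * ts + T_start) + fc * tau)
    return alpha * np.exp(1j * phase)         # shape (M, N_max - N_min + 1)
```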
The conventional BPA images a point with parameters \((\phi_{t},R_{t})\) by computing the Hadamard product of \(\mathbf{Y}_{\text{IF}}\) and a matrix \(\mathbf{W}_{\text{BP}}\) of size \(M\times(N_{\max}-N_{\min}+1)\), where the element located at the \(i\)-th row and \(j\)-th column is

\[\mathbf{W}_{\text{BP},(i,j)}=\alpha_{N_{\min}+j-1}e^{-j2\pi\tau_{N_{\min}+j-1}K(i-1)t_{s}}e^{-j2kR_{N_{\min}+j-1}}.\]

Then, the intensity of the point is

\[I(\phi_{t},R_{t})=\mathbf{1}^{T}\cdot\left(\mathbf{W}_{\text{BP}}\odot\mathbf{Y}_{\text{IF}}\right)\cdot\mathbf{1}, \tag{14}\]
where \(\odot\) denotes the Hadamard product. If the target is indeed located at \((\phi_{t},R_{t})\) in polar coordinates, all the phases of the sampled data are perfectly compensated, and (14) achieves its maximum. However, the computational complexity of imaging a rectangular area using conventional BPA is \(O(L_{x}\times L_{y}\times M\times N)\), where \(L_{x}\) and \(L_{y}\) are the numbers of grid points along the \(X\) and \(Y\) directions of the area. Clearly, the complexity grows linearly with \(M\), \(N\) and the area size. From the complexity analysis, it can be deduced that two possible ways to lower the complexity of BPA are (1) reducing the number of phase centers to be used, i.e., reducing \(N\), and (2) applying range-dimension matched filtering and selecting the appropriate range bin instead of using all \(M\) data samples in each pulse.
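A direct transcription of (14) makes this cost visible: every pixel touches all \(M\) fast-time samples of every valid phase center. The sketch below is illustrative only (the amplitude term \(\alpha_{n}\) is omitted for brevity), not the authors' implementation.

```python
import numpy as np

c = 3e8

def bpa_pixel(Y_if, R_n, K, Fs, T_start, fc):
    """Back-project one pixel: Eq. (14), with W_BP built from this pixel's ranges R_n."""
    M = Y_if.shape[0]
    ts = 1.0 / Fs
    k = 2 * np.pi * (K * T_start + fc) / c
    tau = 2.0 * R_n / c
    m = np.arange(M)[:, None]
    # Phase-compensation matrix W_BP (amplitude weighting omitted in this sketch).
    W_bp = (np.exp(-1j * 2 * np.pi * tau[None, :] * K * m * ts)
            * np.exp(-1j * 2 * k * R_n[None, :]))
    return np.abs(np.sum(W_bp * Y_if))        # |1^T (W_BP ⊙ Y_IF) 1|
```

Repeating this per pixel over an \(L_{x}\times L_{y}\) grid gives exactly the \(O(L_{x}\times L_{y}\times M\times N)\) cost quoted above; the two ideas developed next each shrink one of the inner factors.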
In the subsequent sections, we first develop a Sparse Array Synthesis (SAS) method that selects a subset of the phase centers and assigns appropriate complex weights. Then, we investigate the use of range-dimension matched filtering (more commonly known as the range FFT) to further reduce computation complexity. The two approaches are abbreviated as "SAS" and "FFT+SAS", respectively, for simplicity.
### _Problem Formulation for Robust Sparse Array Synthesis_
For simplicity, we assume that the reflection coefficient is always 1. Due to complex multipath reflection, wall penetration in indoor environments and the small diameter of the rotation platform relative to the dimension of the environment, the channel fading factor can be approximated to be a constant for the same range bin in all directions and thus we omit it in the formulation. Thus,
\[\alpha_{n}=p\left(\theta_{n}\right). \tag{15}\]
BPA can be viewed as a form of range-azimuth two-dimension filtering. To generalize it to sparsely-selected phase centers, we
first apply a compensation matrix to \(\mathbf{Y}_{\text{IF}}\) to remove the phase items related to fast-time sampling, i.e.,
\[\mathbf{Y}_{\text{IF}}^{\prime} = \mathbf{W}_{\text{SA}}\odot\mathbf{Y}_{\text{IF}}, \tag{16}\] \[= \begin{bmatrix}\alpha_{N_{\text{min}}}e^{j2kR_{N_{\text{min}}}}& \cdots&\alpha_{N_{\text{max}}}e^{j2kR_{N_{\text{max}}}}\\ \alpha_{N_{\text{min}}}e^{j2kR_{N_{\text{min}}}}&\cdots&\alpha_{N_{\text{max}} }e^{j2kR_{N_{\text{max}}}}\\ \vdots&\ddots&\vdots\\ \alpha_{N_{\text{min}}}e^{j2kR_{N_{\text{min}}}}&\cdots&\alpha_{N_{\text{max} }}e^{j2kR_{N_{\text{max}}}}\end{bmatrix},\]
where the \(i\)-th row and \(j\)-th column element of \(\mathbf{W}_{\text{SA}}\) is \(\mathbf{W}_{\text{SA},(i,j)}=e^{-j2\pi\tau_{N_{\text{min}}+j-1}K\cdot(i-1)\cdot t_{s}}\). The steering vector of the ROSAR array to a near-field target located at range \(R\) can be represented as
\[\mathbf{a}(\phi;R)=\begin{bmatrix}\cos(\theta_{N_{\text{min}}}^{\prime})e^{j 2k\sqrt{R^{2}+r^{2}-2Rr\cos(\phi-\phi_{N_{\text{min}}})}}\\ \cos(\theta_{N_{\text{min}}+1}^{\prime})e^{j2k\sqrt{R^{2}+r^{2}-2Rr\cos(\phi- \phi_{N_{\text{min}}+1})}}\\ \vdots\\ \cos(\theta_{N_{\text{max}}}^{\prime})e^{j2k\sqrt{R^{2}+r^{2}-2Rr\cos(\phi- \phi_{N_{\text{max}}})}}\end{bmatrix}, \tag{17}\]
where \(\theta_{n}^{\prime}=\arctan\frac{R\sin(|\phi-\phi_{n}|)}{R\cos(|\phi-\phi_{n} |)-r}\), and the array pattern can also be calculated as
\[F(\phi;R)=\mathbf{w}^{H}\mathbf{a}(\phi;R), \tag{18}\]
where \(\mathbf{w}\) is a sparse complex weight vector to be designed, some of whose elements can be equal to or close to zero. Note that \(\mathbf{w}\) is also a function of \(R\), but we omit the subscript \(R\) for simplicity. To focus a point target located at \((\phi_{t},R_{t})\), we need to compute
\[I\left(\phi_{t},R_{t}\right) = \mathbf{1}^{T}\cdot\left(\mathbf{w}^{H}\circ\mathbf{Y}_{\text{ IF}}^{\prime}\right)\cdot\mathbf{1}\] \[= \mathbf{1}^{T}\cdot\left(\mathbf{w}^{H}\circ\mathbf{W}_{\text{SA} }\circ\mathbf{Y}_{\text{IF}}\right)\cdot\mathbf{1},\]
where \(\circ\) is the Khatri-Rao product.
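The near-field steering vector (17) and the array pattern (18) are straightforward to evaluate numerically, which is how array patterns such as those shown later can be produced. The helper names below are ours.

```python
import numpy as np

def steering_vector(phi, R, r, k, phi_n):
    """Near-field steering vector a(phi; R) over the valid phase centers, Eq. (17)."""
    R_n = np.sqrt(R**2 + r**2 - 2 * R * r * np.cos(phi - phi_n))
    theta_n = np.arctan2(R * np.sin(np.abs(phi - phi_n)),
                         R * np.cos(np.abs(phi - phi_n)) - r)
    return np.cos(theta_n) * np.exp(1j * 2 * k * R_n)

def array_pattern(w, phi_grid, R, r, k, phi_n):
    """|F(phi; R)| = |w^H a(phi; R)| evaluated on a grid of azimuth angles, Eq. (18)."""
    return np.array([np.abs(np.vdot(w, steering_vector(p, R, r, k, phi_n)))
                     for p in phi_grid])
```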
Due to the vibration of the rotation platform, odometry errors and antenna pattern mismatch (e.g., deviation from the cosine pattern), there exist array manifold errors, which may lead to blurred images. To obtain an SAR image with good quality in this situation, the sparse weight vector \(\mathbf{w}\) must be carefully designed with robustness to array errors in mind.
We formulate obtaining the desirable \(\mathbf{w}\) as an optimization problem. In the optimization, the first constraint is that the power of the main-lobe peak, located at \(\phi_{m}=\frac{\pi}{2}\), should be larger than or equal to a threshold \(U\), i.e.,

\[\left|\mathbf{w}^{H}\left(\mathbf{a}\left(\phi_{m};R\right)+\mathbf{e}_{m}\right)\right|^{2}\geq U,\ \ \ \|\mathbf{e}_{m}\|\leq\Delta_{R}, \tag{19}\]
where \(\mathbf{e}_{m}\) is the array error vector caused by measurement and imperfect radar rotations, and we assume that \(\mathbf{e}_{m}\) is bounded by a ball with radius \(\Delta_{R}\) (\(0\leq\Delta_{R}\leq\|\mathbf{a}(\phi;R)\|\)). Second, to restrict the main-lobe width and the sidelobe level, we put limitations on the received power at some uniformly spaced discrete directions except for the desired main lobe area, i.e.,
\[\left|\mathbf{w}^{H}\left(\mathbf{a}\left(\phi_{s};R\right)+\mathbf{e}_{s} \right)\right|^{2}\leq\eta U,\ \ \ s=1,2,\ldots,S;\|\mathbf{e}_{s}\|\leq\Delta_{R}, \tag{20}\]
where \(\mathbf{e}_{s}\) is the error vector for sidelobe area with the same properties as \(\mathbf{e}_{m}\), \(S\) is the number of uniformly spaced discrete directions, \(\phi_{s}\in\left[\phi_{N_{\text{min}}},\frac{\pi}{2}-\phi_{\text{MW}}\right] \cup\left[\frac{\pi}{2}+\phi_{\text{MW}},\phi_{N_{\text{max}}}\right]\), \(\phi_{\text{MW}}\) is the half of the desirable main-lobe width, \(\eta\) is the pre-set power ratio of the main-lobe to the sidelobe. Third, to avoid amplifying the noise level, we impose a constraint on the gain of noise power:
\[\left\|\mathbf{w}\right\|_{2}^{2}=1. \tag{21}\]
Lastly, to guarantee a sufficient gain on the target, we set another constraint \(U\geq U_{\text{min}}\).
The objective is to minimize the number of virtual phase centers given by \(\left\|\mathbf{w}\right\|_{0}\). To this end, we formulate the sparse array synthesis problem as
\[\min_{\mathbf{w},U,\mathbf{e}_{m},\mathbf{e}_{s}}\|\mathbf{w}\|_{0}\qquad s.t.\begin{cases}C1\!:\!\left|\mathbf{w}^{H}\left(\mathbf{a}(\phi_{m};R)+\mathbf{e}_{m}\right)\right|^{2}\geq U,\ \|\mathbf{e}_{m}\|\leq\Delta_{R},\\ C2\!:\!\left|\mathbf{w}^{H}\left(\mathbf{a}(\phi_{s};R)+\mathbf{e}_{s}\right)\right|^{2}\leq\eta U,\ s=1,2,\ldots,S,\ \|\mathbf{e}_{s}\|\leq\Delta_{R},\\ C3\!:\!\left\|\mathbf{w}\right\|_{2}^{2}=1,\\ C4\!:\!U\geq U_{\min}.\end{cases} \tag{22}\]
Problem (22) is a non-convex optimization problem since both the objective function and constraints C1 and C3 are non-convex. Because of the consideration of robustness to array errors, the problem formulation is markedly different from those in conventional sparse array synthesis [23, 24, 25, 26, 27, 28], rendering existing techniques inapplicable. In the next section, we develop a customized algorithm based on FPP and SCA to solve (22).
## III Solution Approach for Robust Sparse Array Synthesis (SAS)
### _Problem Transformation_
Directly solving (22) is hard, since \(l^{0}\)-norm minimization problem requires intractable combinatorial search. To reduce the complexity, we replace the \(l^{0}\)-norm objective function with \(l^{1}\)-norm, i.e., \(\left\|\mathbf{w}\right\|_{1}\) as suggested by [29, 30].
C1 and C2 contain additional control variables \(\mathbf{e}_{m}\) and \(\mathbf{e}_{s}\) to express robustness constraints. They can be simplified by considering the worst case scenarios. Specifically, by using the Cauchy-Schwarz inequality and the triangle inequality, we can find the minimum of main-lobe response and the maximum of sidelobe response respectively as follows
\[\begin{split}\left|\mathbf{w}^{H}(\mathbf{a}(\phi_{m};R)+\mathbf{e }_{m})\right|^{2}=&\left|\mathbf{w}^{H}\mathbf{a}(\phi_{m};R)+ \mathbf{w}^{H}\mathbf{e}_{m}\right|^{2}\\ \geq&(\left|\mathbf{w}^{H}\mathbf{a}(\phi_{m};R) \right|-\left|\mathbf{w}^{H}\mathbf{e}_{m}\right|)^{2}\\ \geq&(\left|\mathbf{w}^{H}\mathbf{a}(\phi_{m};R) \right|-\left\|\mathbf{w}\right\|_{2}\|\mathbf{e}_{m}\|_{2})^{2}\\ \geq&(\left|\mathbf{w}^{H}\mathbf{a}(\phi_{m};R) \right|-1\cdot\Delta_{R})^{2},\end{split} \tag{23}\]
The equalities hold when \(\mathbf{e}=a\cdot\mathbf{a}(\phi;R)\) (\(a\in\mathbb{R}\)). Substituting (23) and (24) into C1 and C2, we obtain the worst-case constraints as
\[(\left|\mathbf{w}^{H}\mathbf{a}(\phi_{m};R)\right|-\Delta_{R})^{2} \geq U \tag{24}\] \[(\left|\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)\right|+\Delta_{R})^{2} \leq \eta U,s=1,2,\ldots,S \tag{25}\]
Taking the square root of both sides of (24) and (25), and re-arranging items in the inequalities, we have
\[C1:(U^{{}^{\prime}}+\Delta_{R})^{2}-\mathbf{w}^{H}\mathbf{a}(\phi_{m};R) \mathbf{a}^{H}(\phi_{m};R)\mathbf{w}\leq 0, \tag{26}\]
\[C2:\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)\mathbf{a}^{H}(\phi_{s};R) \mathbf{w}-(\sqrt{\eta}U^{{}^{\prime}}-\Delta_{R})^{2}\leq 0,\] \[s=1,2,\ldots,S, \tag{27}\]
where \(U^{{}^{\prime}}=\sqrt{U}\). \(C3\) in (21) can be replaced by two inequality constraints as
\[C3:\left\|\mathbf{w}\right\|_{2}^{2}-1\leq 0, \tag{28}\]
\[C4:1-\left\|\mathbf{w}\right\|_{2}^{2}\leq 0, \tag{29}\]
and \(C4\) is replaced by
\[C5:U^{{}^{\prime}}\geq U_{\min}^{{}^{\prime}}, \tag{30}\]
where \(U_{\min}^{{}^{\prime}}=\sqrt{U_{\min}}\). The optimization problem is thus transformed to
\[\min_{\mathbf{w},U^{{}^{\prime}}}\left\|\mathbf{w}\right\|_{1}\] \[s.t.\begin{cases}C1\!:\!(U^{{}^{\prime}}+\Delta_{R})^{2}- \mathbf{w}^{H}\mathbf{a}(\phi_{m};R)\mathbf{a}^{H}(\phi_{m};R)\mathbf{w}\leq 0, \\ C2\!:\!\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)\mathbf{a}^{H}(\phi_{s};R)\mathbf{w} -(\sqrt{\eta}U^{{}^{\prime}}-\Delta_{R})^{2}\leq 0,\\ \hskip 14.226378pts=1,2,\ldots,S,\\ C3\!:\!\left\|\mathbf{w}\right\|_{2}^{2}-1\leq 0,\\ C4\!:\!1-\left\|\mathbf{w}\right\|_{2}^{2}\leq 0,\\ C5\!:\!U^{{}^{\prime}}\geq U_{\min}.\end{cases} \tag{31}\]
### _FPP-SCA-based Algorithm_
The reformulated problem (31) is still non-convex and hard to solve directly, and it is also tricky to find feasible initial solutions, but it is now amenable to the convex approximation technique. Inspired by the idea of FPP [21], we introduce slack variables \(b\), \(b_{1}\), \(b_{2}\) (\(b,b_{1},b_{2}\geq 0\)) for C1–C4, and we construct the following slacked surrogate problem of (31)

\[\min_{\mathbf{w},U^{\prime},b,b_{1},b_{2}}\left\|\mathbf{w}\right\|_{1}+\lambda_{b}\left(b+b_{1}+b_{2}\right)\]
\[s.t.\begin{cases}C1\!:\!(U^{\prime}+\Delta_{R})^{2}-\mathbf{w}^{H}\mathbf{a}(\phi_{m};R)\mathbf{a}^{H}(\phi_{m};R)\mathbf{w}-b_{1}\leq 0,\\ C2\!:\!\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)\mathbf{a}^{H}(\phi_{s};R)\mathbf{w}-(\sqrt{\eta}U^{\prime}-\Delta_{R})^{2}-b_{2}\leq 0,\\ \hskip 14.226378pts=1,2,\ldots,S,\\ C3\!:\!\left\|\mathbf{w}\right\|_{2}^{2}-1-b\leq 0,\\ C4\!:\!1-b-\left\|\mathbf{w}\right\|_{2}^{2}\leq 0,\\ C5\!:\!U^{\prime}\geq U_{\min}^{\prime},\end{cases} \tag{32}\]

where \(\lambda_{b}>0\) is a penalty weight that drives the slack variables toward zero. Constraints C1 and C4 in (32) are still non-convex because of the concave terms \(-\mathbf{w}^{H}\mathbf{a}(\phi_{m};R)\mathbf{a}^{H}(\phi_{m};R)\mathbf{w}\) and \(-\left\|\mathbf{w}\right\|_{2}^{2}\). Following the SCA principle [22], at the \(i\)-th iteration these terms are replaced by their first-order approximations around the current iterate \(\mathbf{w}_{(i)}\), which yields the surrogate problem

\[\min_{\mathbf{w},U^{\prime},b,b_{1},b_{2}}\left\|\mathbf{w}\right\|_{1}+\lambda_{b}\left(b+b_{1}+b_{2}\right)\]
\[s.t.\begin{cases}C1\!:\!(U^{\prime}+\Delta_{R})^{2}+\mathbf{w}_{(i)}^{H}\mathbf{a}(\phi_{m};R)\mathbf{a}^{H}(\phi_{m};R)\mathbf{w}_{(i)}-2\text{Re}\{\mathbf{w}_{(i)}^{H}\mathbf{a}(\phi_{m};R)\mathbf{a}^{H}(\phi_{m};R)\mathbf{w}\}-b_{1}\leq 0,\\ C2\!:\!\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)\mathbf{a}^{H}(\phi_{s};R)\mathbf{w}-(\sqrt{\eta}U^{\prime}-\Delta_{R})^{2}-b_{2}\leq 0,\\ \hskip 14.226378pts=1,2,\ldots,S,\\ C3\!:\!\left\|\mathbf{w}\right\|_{2}^{2}-1-b\leq 0,\\ C4\!:\!1-b+\left\|\mathbf{w}_{(i)}\right\|_{2}^{2}-2\text{Re}\{\mathbf{w}_{(i)}^{H}\mathbf{w}\}\leq 0,\\ C5\!:\!U^{\prime}\geq U_{\min}^{\prime}.\end{cases} \tag{36}\]

Problem (36) is solved repeatedly with the values from the previous iteration until the number of iterations reaches a pre-set value \(ITER\). Let \(J^{(i)}=\left\|\mathbf{w}_{(i)}^{*}\right\|_{1}+\lambda_{b}(b^{(i)}+b_{1}^{(i)}+b_{2}^{(i)})\) be the value of the objective function after the \(i\)-th iteration, \(\mathbf{R}\) be a vector of target range bins of concern, and \(\Delta_{\mathbf{R}}\) be a vector of the maximum \(l^{2}\)-norm of \(\mathbf{e}\) for all range bins. Algorithms 1 and 2 summarize the proposed approaches to determine the sparse weight vector for each range bin and to apply the resulting weight vector for SAR imaging, respectively.
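For reference, one iteration of this scheme can be prototyped with an off-the-shelf convex modeling tool. The sketch below uses Python/CVXPY purely as an illustration (the paper's implementation uses CVX with the Mosek solver in MATLAB); the sidelobe constraint is expressed in the equivalent second-order-cone form \(|\mathbf{w}^{H}\mathbf{a}(\phi_{s};R)|+\Delta_{R}\leq\sqrt{\eta}\,U^{\prime}+b_{2}\) so that the subproblem passes DCP checks, and all names are assumptions.

```python
import cvxpy as cp
import numpy as np

def sca_iteration(w_i, a_m, sidelobe_steering, Delta_R, eta, U_min, lam_b):
    """One FPP-SCA step: linearize the non-convex terms around w_i and solve.
    eta must be given as a linear power ratio (e.g., 10**(-33/10) for -33 dB)."""
    N = w_i.size
    w = cp.Variable(N, complex=True)
    U = cp.Variable(nonneg=True)               # plays the role of U' in the text
    b, b1, b2 = (cp.Variable(nonneg=True) for _ in range(3))

    A_m = np.outer(a_m, a_m.conj())            # main-lobe quadratic form a a^H
    lin_main = (2 * cp.real(w_i.conj() @ A_m @ w)
                - np.real(w_i.conj() @ A_m @ w_i))

    cons = [cp.square(U + Delta_R) <= lin_main + b1,                       # C1, linearized
            cp.sum_squares(w) <= 1 + b,                                    # C3
            1 - b <= 2 * cp.real(w_i.conj() @ w) - np.sum(np.abs(w_i)**2), # C4, linearized
            U >= U_min]
    for a_s in sidelobe_steering:                                          # C2, SOC form
        cons.append(cp.abs(a_s.conj() @ w) + Delta_R <= np.sqrt(eta) * U + b2)

    prob = cp.Problem(cp.Minimize(cp.norm(w, 1) + lam_b * (b + b1 + b2)), cons)
    prob.solve()
    return w.value, U.value
```

In practice the returned \(\mathbf{w}\) would be thresholded to zero out negligible entries before being reused as \(\mathbf{w}_{(i+1)}\) in the next iteration.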
### _Initial values and parameter settings for Algorithms 1 and 2_
#### Iv-C1 Initial Value Settings
The proposed approach is able to work with any initial values of the control variables since the constraints are always feasible due to the introduced slack variables. In our implementation, the initial value of \(\mathbf{w}\) is chosen as \(\mathbf{w}_{(0)}=\frac{1}{\sqrt{N_{\text{max}}-N_{\text{min}}+1}}\mathbf{1}\) and \(U_{(0)}^{\prime}=\left|\mathbf{w}_{(0)}^{H}\mathbf{a}(\phi_{m};R)\right|\).
#### Iv-C2 Parameter Settings
To ensure convergence, \(ITER\) and \(Th\) can be set to around \(50\sim 100\) and \(10^{-4}\sim 10^{-3}\), respectively. \(\phi_{\text{MW}}\) and \(\eta\) are design parameters of the sparse array. We found that any value lower than both \(\phi_{\text{MW}}=1^{\circ}\) and \(\eta=-33\)dB makes the SCA diverge. \(\Delta_{R}\) should be chosen by considering practical limitations of target platforms. For example, in Section IV-A, the simulation setup and experiments take the unstable rotation speed of a ROSAR platform into account. The direction of each phase center under unstable rotations is modeled as a Gaussian distribution
\[\widehat{\phi}_{n}\sim\mathcal{N}\left(\frac{2\pi n}{N},\sigma\right), \tag{37}\]
where \(\sigma\) is the standard deviation of the direction. The error vector is computed from \(\mathbf{e}=\widehat{\mathbf{a}}(\phi;R)-\mathbf{a}(\phi;R)\), where \(\widehat{\mathbf{a}}(\phi;R)\) is determined by substituting \(\phi_{n}\) with \(\widehat{\phi}_{n}\). Let \(\widehat{\Delta}_{R}=\|\mathbf{e}\|\). By repeatedly sampling from (37), we can obtain the cumulative distribution function (CDF) of \(\widehat{\Delta}_{R}\), and choose the \(\widehat{\Delta}_{R}\) corresponding to 99% of the cumulative probability as \(\Delta_{R}\). The testbed evaluations in Section IV-F show that this model is reasonable in realistic settings. Note that since the array error \(\mathbf{e}\) differs among range bins, one specific \(\Delta_{R}\) must be pre-computed for each range bin.
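This per-range-bin bound can be obtained by straightforward Monte Carlo sampling of the perturbed directions. The sketch below reuses the steering-vector helper sketched earlier and hypothetical names, with \(\sigma\) expressed in the same angular units (radians) as \(\phi_{n}\).

```python
import numpy as np

def delta_r_for_range(R, r, k, phi_n, sigma, n_trials=2000, q=0.99):
    """Estimate Delta_R as the 99th-percentile of ||a_hat - a|| under Gaussian
    perturbations of the phase-center directions, Eq. (37)."""
    a_nom = steering_vector(np.pi / 2, R, r, k, phi_n)     # target at phi = pi/2
    norms = []
    for _ in range(n_trials):
        phi_hat = phi_n + np.random.normal(0.0, sigma, size=phi_n.shape)
        # Perturbed manifold: same target, displaced phase-center directions.
        a_hat = steering_vector(np.pi / 2, R, r, k, phi_hat)
        norms.append(np.linalg.norm(a_hat - a_nom))
    return float(np.quantile(norms, q))
```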
To ensure accurate results, the angle interval should be less than or equal to the angular resolution of ROSAR. However, reducing the angle interval increases the number of grid points and leads to higher computation costs. Since there is no closed-form solution to the angular resolution of a circular array, we use the results of a linear array as a reference. The determination of \(U_{\min}\) is based on the expected image quality in target applications. It should be large enough to guarantee a sufficient gain for all range bins. Otherwise, there could be light and dark strips on the generated SAR image.
Increasing \(\lambda_{b}\) improves the sparsity of \(\mathbf{w}\), but when \(\lambda_{b}\) is too large, (36) is no longer feasible. As a general rule of thumb, \(\lambda_{b}\) should be several times larger than the maximum of \(\left\|\mathbf{w}\right\|_{1}\) to ensure that the slack variables converge to zero. Since the \(l^{1}\)-norm differs from the \(l^{0}\)-norm and cannot enforce exact sparsity by itself, we must manually set any term in \(\mathbf{w}^{*}\) lower than a pre-defined threshold to 0. Thus, in Algorithm 2, only the nonzero entries in \(\mathbf{w}^{*}\) are included in computing \(\mathbf{w}^{*H}\circ\mathbf{W}_{\text{SA}}\circ\mathbf{Y}_{\text{IF}}\). Moreover, a final step must be taken to verify the solution. This can be accomplished by checking whether the slack variables are sufficiently small, i.e., \(b_{1}+b_{2}<b_{\min}\), where \(b_{\min}\) is set to \(10^{-5}\) in the experiments.
### _Complexity Reduction for Real-implementation_
Since applying range compression along the fast-time samples [31] (denoted as "FFT+BPA") can reduce the processing time of conventional BPA, we borrow this idea to further reduce the complexity of the proposed SAS approach (denoted as "FFT+SAS"). Ignoring the noise term in (10) and applying a range FFT to \(\mathbf{y}_{\text{IF},n}\), we have
\[\begin{split} Y_{\text{1D},n}(l)&=\sum_{m=0}^{M-1}y_{\text{IF},n}(m)e^{-j2\pi\frac{l}{L}m}\\ &=\alpha_{n}e^{j2\pi(\tau_{n}KT_{Start}+f_{c}\tau_{n})}\sum_{m=0}^{M-1}e^{j2\pi\left(\tau_{n}Kt_{s}-\frac{l}{L}\right)m}\\ &=\alpha_{n}e^{j2kR_{n}}\sum_{m=0}^{M-1}e^{j2\pi\left(\tau_{n}Kt_{s}-\frac{l}{L}\right)m},\end{split} \tag{39}\]

where \(L\) is the FFT length and \(l=0,1,\ldots,L-1\) indexes the range bins. The magnitude of \(Y_{\text{1D},n}(l)\) peaks at the range bin

\[l_{n}^{\star}=\operatorname{round}\left(\tau_{n}Kt_{s}L\right)=\operatorname{round}\left(\frac{2R_{n}KLt_{s}}{c}\right). \tag{40}\]
To focus a point located at \((\phi_{t},R_{t})\) in polar coordinates, we need to compute

\[I(\phi_{t},R_{t})=\mathbf{w}^{H}\cdot\mathbf{Y}_{\text{1D}}(l^{\star}_{N_{\text{min}}},l^{\star}_{N_{\text{min}}+1},\ldots,l^{\star}_{N_{\text{max}}}), \tag{43}\]

where \(\mathbf{Y}_{\text{1D}}(l^{\star}_{N_{\text{min}}},l^{\star}_{N_{\text{min}}+1},\ldots,l^{\star}_{N_{\text{max}}})=[\mathbf{y}_{\text{1D},N_{\text{min}}}(l^{\star}_{N_{\text{min}}}),\mathbf{y}_{\text{1D},N_{\text{min}}+1}(l^{\star}_{N_{\text{min}}+1}),\ldots,\mathbf{y}_{\text{1D},N_{\text{max}}}(l^{\star}_{N_{\text{max}}})]^{T}\) and \(\mathbf{y}_{\text{1D},n}(l)\) represents the \(l\)-th entry of the vector \(\mathbf{y}_{\text{1D},n}\). \(\mathbf{w}\) in (43) can be the sparse weight vector of the corresponding range bin determined by Algorithm 2, or the weight vector of BPA with the \(j\)-th entry being \(\mathbf{w}_{\text{BP},j}=\alpha_{N_{\text{min}}+j-1}\cdot e^{-j2kR_{N_{\text{min}}+j-1}}\).
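Combining the pre-computed sparse weights with the range-FFT data, each pixel now costs only a short complex dot product. The sketch below is a hedged illustration of (43) with hypothetical names; the bin selection follows the peak location in (40).

```python
import numpy as np

c = 3e8

def fft_sas_pixel(Y_1d, w, R_n, K, Fs):
    """Focus one pixel from range-FFT data, Eq. (43).
    Y_1d: (L, N_valid) range-FFT of the IF data, one column per phase center.
    w:    sparse complex weights for this pixel's range bin (zero entries skipped).
    R_n:  pixel-to-phase-center ranges for the valid phase centers."""
    L = Y_1d.shape[0]
    ts = 1.0 / Fs
    tau = 2.0 * R_n / c
    l_star = np.rint(tau * K * ts * L).astype(int)   # nearest range bin per phase center
    nz = np.flatnonzero(np.abs(w) > 0)               # only the selected phase centers
    samples = Y_1d[l_star[nz], nz]
    return np.abs(np.vdot(w[nz], samples))           # |w^H Y_1D(l*_Nmin, ..., l*_Nmax)|
```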
**Remark 1**: _The steering vector may differ from the original one in (17) after applying the range-FFT, since substituting (40) into (39) cannot fully cancel the phase summation term in some cases (e.g., when \(R_{n}\) is not a multiple of the range-bin length). In our implementation, \(\mathbf{w}^{\star}\) is still obtained from the original steering vector. Thus, the array pattern could deviate from the desired one. However, doing so leads to a reduction in computation._
**Remark 2**: _The computation complexity of SAS is \(O(L_{x}\times L_{y}\times M\times N^{\prime})\), where \(N^{\prime}\) is the number of phase centers corresponding to non-zero weights. \(N^{\prime}\) is typically less than half of \(N\). As for FFT+SAS, the computation complexity is given by \(O(N^{\prime}\times M\log_{2}M+L_{x}\times L_{y}\times N^{\prime})\). When \(L_{x}\times L_{y}\gg M\), the second term dominates. Thus, the overall reduction in complexity by combining FFT and SAS is substantial compared with that of the conventional BPA algorithm. Since the proposed algorithms conduct filtering pixel-by-pixel independently, they can be further accelerated by separating these pixels into multiple groups and processing them in a parallel manner._
## IV Performance
In this section, we conduct experimental study to evaluate the effectiveness of the proposed ROSAR imaging algorithms.
### _Hardware and Software Implementation_
We implement the proposed algorithms on a PC equipped with an Intel Core 8700 CPU and 16GB RAM. MATLAB and the Phased Array Toolbox are installed on the PC. We also build a real ROSAR system, whose hardware consists of a radar board, a rotation plate, a motor and odometry sensors mounted on a rover platform, as shown in Fig. 2. The radar used is a Texas Instruments IWR6843ISK, which generates millimeter-wave (mmWave) signals. A 3D-printed plate holding the radar and a counterweight is connected to a step motor and a wheel encoder.
The whole system is controlled by the Robot Operating System (ROS), including radar signal transmission/reception and the rotation speed. Radar data is collected from the antenna board and the angle reading is obtained from the encoder of the rotation platform. Both data streams are sent to the PC and processed by MATLAB programs for SAR imaging.
To estimate phase center errors from the wheel encoder, we mount several optical markers on the rotation plate. The markers can be tracked by OptiTrack, an optical motion capture system treated as ground truth (position accuracy: 0.2 mm). We rotate the platform for multiple rounds and compare the angle difference between the data from the wheel encoder and the OptiTrack system. Fig. 3 shows the mean value is \(3.9\times 10^{-4}\) degree. If the data is fitted by a Gaussian distribution, the standard deviation is \(\sigma=0.086\). Thus, we can assume the direction of the \(n\)-th phase center follows \(\widehat{\phi}_{n}\sim\mathcal{N}\left(\frac{2\pi n}{N},0.086\right)\).
### _Parameter Settings_
The detailed parameters of the ROSAR system are summarized in Table II. From the settings, the range resolution is given by \(R_{\Delta}=\frac{c}{2B}=0.0435m\), where \(B\) is the bandwidth of the sampled chirp signal. The maximum unambiguous range is \(R_{\max}=\frac{cF_{s}}{4K}\approx 4.8686m\). If we choose \(\Delta_{R}\) to represent 99% of the probability of the cumulative distribution function (CDF), Fig. 3(a) shows an example CDF for \(R=2m\) and \(\Delta_{R}=0.035\). Fig. 3(b) shows \(\Delta_{R}\) as a function of range, from which we can see that \(\Delta_{R}\) decreases as the range becomes larger. This is because a small displacement from the desirable positions of phase centers has less impact when the radar is further away from the target. The detailed calculation steps have been given in Section III-C2.
All the sparse weight vectors \(\mathbf{w}\) for each range bin are computed in advance using Algorithm 1 with the parameters listed in Table III. We use CVX with the Mosek solver [32] to find the optimal values in each iteration of the SCA.
### _Baseline Algorithm and Metrics_
We implement six algorithms: "BPA", "FFT+BPA", "SAS", "FFT+SAS", "RBPA" (BPA with randomly selected phase centers) and "FFT+RBPA" for different comparison
Fig. 3: Direction deviation of phase centers
Fig. 2: ROSAR system
purposes. The following metrics are used in quantitative evaluations:
* _Half main lobe width_\(\phi_{\text{MW}}\): The half main lobe width is defined as the angle interval between the peak and the closest local minima on either side of the main lobe.
* _Peak-to-integral sidelobe ratio (PISR)_: The PISR \(\mathcal{R}\) for a specific range bin \(R\) is calculated as \[\mathcal{R}=\frac{\left|I(\phi_{m},R)\right|^{2}}{\sum_{s=1}^{S}\left|I\left(\widetilde{\phi}_{s},R\right)\right|^{2}}.\] (44)
* _SAR computation cost_: The elapsed time of generating a SAR image.
* _Image entropy_[33]: Let \(E=\sum_{\phi}\sum_{R}\left|I(\phi,R)\right|^{2}\) be the total energy of the image, and \(d_{(\phi,R)}=\frac{\left|I(\phi,R)\right|^{2}}{E}\) be the energy density of a pixel. The image entropy is defined as \[E_{I}=-\sum_{\phi,R}d_{(\phi,R)}\ln d_{(\phi,R)}.\] (45) A short computational sketch of both metrics is given after this list.
The targets are well focused on the SAR image if \(\mathcal{R}\) is large and \(E_{I}\) is small.
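Both metrics are simple functions of the complex-valued image \(I\); the sketch below (hypothetical function names) shows how they can be computed.

```python
import numpy as np

def image_entropy(I):
    """Eq. (45): entropy of the normalized pixel-energy distribution."""
    power = np.abs(I) ** 2
    d = power / power.sum()
    d = d[d > 0]                      # avoid log(0)
    return float(-(d * np.log(d)).sum())

def pisr(profile, main_idx, half_width):
    """Eq. (44): peak power over integrated sidelobe power for one range bin.
    profile: complex azimuth cut I(phi, R); main_idx: index of the main-lobe peak;
    half_width: number of samples treated as main lobe on each side of the peak."""
    power = np.abs(profile) ** 2
    side = np.ones(power.size, dtype=bool)
    side[max(0, main_idx - half_width): main_idx + half_width + 1] = False
    return float(power[main_idx] / power[side].sum())
```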
to the worst case assumption of robustness design as evident in (26) and (27). The PISRs are given in Table V. High values are better. The conventional BPA gives the best PISR due to its low sidelobes but needs much more computation time.
#### V-E3 Results for All Range Bins
Fig.7 shows the values of \(\left\|\mathbf{w}\right\|_{0}\) and \(U^{\prime}\) for all range bins when \(U_{\min}=0\). Clearly, the sparsity of the array holds in all range bins. We observe that all \(U^{\prime}\)s are small. In this case, although the array is very sparse, the SNR is low (recall that \(U^{\prime}\) is the magnitude of main-lobe peak and the noise power is constant from C3). Setting \(U_{\min}=5\) can bound the SNR at the cost of reduced sparsity as shown in Fig.8.
### _SAR Imaging Simulation for a Point Target_
By using the MATLAB Phased Array Toolbox, we simulated a point target located at \(\left(\frac{\pi}{2},2m\right)\), a rotating radar and the transmitted/received signals of the radar antennas. Subfigures (a), (c) and (e) show the imaging results of the target area by conventional BPA, SAS and RBPA, while subfigures (b), (d) and (f) show the SAR images obtained by employing FFT acceleration. The SAR image quality and computation cost are summarized in Table VI. Although the entropy of the SAR image generated by SAS is slightly worse than that of BPA, the computational time is significantly reduced. Furthermore, although BPA with randomly selected phase centers takes less time as well, the resulting image quality is much worse than the others.
### _Testbed Evaluation_
Although simulations can provide insights on the impacts of configuration parameters and the performance of the proposed approach in simulated environments, existing packages in
\begin{table}
\begin{tabular}{|c|c|c|} \hline Algorithms \& Settings & w/o error & w. error \\ \hline SAS w. Robust & 0.1256 & 0.1253 \\ \hline SAS w/o robust & 0.1308 & 0.1303 \\ \hline BPA w. error & N/A & 0.2339 \\ \hline \end{tabular}
\end{table} TABLE V: Peak-to-Integral Sidelobe Ratio
Fig. 8: The solutions of the SCA \(U_{\min}=5\) for different range bins
Fig. 6: The Array Pattern in Different Scenarios
Fig. 7: The solutions of the SCA \(U_{\min}=0\) for different range bins
MATLAB cannot model the reflection, diffusion and deflection properties of mmWave signals in indoor environments well. In this section, the performance and efficiency of the proposed approach are validated through two real-world experiments. The size of the SAR area is set to 9.8756m\(\times\)9.8756m with a grid size of 0.04m\(\times\)0.04m.
#### Vi-B1 Scenario 1: Corner Reflector
We put a radar platform (red dot) and a corner reflector (blue dot) in a lab (see Fig. 10). In addition to the corner reflector, which can be treated as a point target with strong reflection in practice, there are also computer desks, a wooden cabinet, metal cases and other equipment in the environment. Figs. 11, 12 and 13 illustrate the SAR images under different algorithms and settings. The numerical results are summarized in Table VII. It can be observed from the figures that BPA gives the clearest image among all approaches. The inclusion of the robust design can improve the sharpness of the image. At \(\phi_{\text{MW}}=1^{\circ}\) and \(\eta=-33\)dB, the image entropy from SAS is comparable to that of BPA while taking only one fifth of the total computation time. Range-FFT can work in conjunction with both BPA and SAS. When comparing all these figures, we find that range-dimension matched filtering degrades image sharpness slightly. This is also corroborated by the image entropy results in Table VII. Among all approaches, "FFT+SAS" incurs the least computation time, close to 13 times faster than BPA, while achieving acceptable image quality.
Due to the penetration of mmWave signals through drywalls, the steel bars inside the walls are visible in the figures. Additionally, an object in Room 128 and the contour of Room 132 are also visible. Similar to the case with the corner reflector, the consideration of the robust design can indeed improve image quality. When \(\eta\) is set to \(-33\)dB and the robust design is used, the sidelobes are less visible. Moreover, the inclusion of the range-domain FFT can indeed greatly reduce the computation time. BPA with randomly selected phase centers takes less time, but the generated image is blurred. The proposed robust design, with \(\phi_{\text{MW}}=1^{\circ}\) and \(\eta=-33\)dB, gives image quality comparable to that of BPA and consumes much less computation time.
## V Conclusion
In this paper, we propose a new fast imaging algorithm based on robust sparse array synthesis for ROSAR. Since the radar path is circular, such an algorithm only needs to pre-compute the complex weights of the imaging filter offline for one direction per range bin. Because external influences can affect the sidelobe level, we add a robust design to maintain the image quality. To meet our pre-set expectations and solve
\begin{table}
\begin{tabular}{c|c|c|c} \hline Algorithm & Settings & \(E_{I}\) & Time Cost (s) \\ \hline BPA & N/A & **6.4436** & 151.22 \\ \hline FFT+BPA & N/A & 6.6881 & 21.85 \\ \hline \multirow{3}{*}{SAS} & \(\phi_{\text{MW}}\)=\(1^{\circ}\),\(\eta\)\(=-30\)dB,w/o Robust & 7.3802 & 27.75 \\ \cline{2-4} & \(\phi_{\text{MW}}\) = \(1^{\circ}\),\(\eta\)\(=-30\)dB,Robust & 7.2871 & 29.30 \\ \cline{2-4} & \(\phi_{\text{MW}}\)=\(1^{\circ}\),\(\eta\)\(=-33\)dB,Robust & 7.0251 & 30.25 \\ \hline \multirow{3}{*}{FFT+SAS} & \(\phi_{\text{MW}}\)=\(1^{\circ}\),\(\eta\)\(=-30\)dB,Robust & 7.5309 & **12.50** \\ \cline{2-4} & \(\phi_{\text{MW}}\) = \(1^{\circ}\),\(\eta\)\(=-30\)dB,Robust & 7.4362 & **12.21** \\ \cline{2-4} & \(\phi_{\text{MW}}\) = \(1^{\circ}\),\(\eta\)\(=-33\)dB,Robust & 7.4245 & **12.47** \\ \hline RBPA & N/A & 8.3671 & 25.35 \\ \hline FFT+RBPA & N/A & 8.3801 & 13.28 \\ \hline \end{tabular}
\end{table} TABLE VIII: Imaging Performance with Different Approaches in Corridor Corner Case
Fig. 16: SAR images of corridor corner (FFT + BPA and FFT + SAS)
| Algorithm | Settings | \(E_{I}\) | Time Cost (s) |
| --- | --- | --- | --- |
| BPA | N/A | **6.0472** | 151.84 |
| FFT+BPA | N/A | 6.3400 | 21.69 |
| SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-30\) dB, w/o Robust | 7.2607 | 28.04 |
| SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-30\) dB, Robust | 7.177 | 28.94 |
| SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-33\) dB, Robust | 6.8913 | 30.56 |
| FFT+SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-30\) dB, w/o Robust | 7.3989 | **11.45** |
| FFT+SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-30\) dB, Robust | 7.3227 | **12.14** |
| FFT+SAS | \(\phi_{\text{MW}}=1^{\circ}\), \(\eta=-33\) dB, Robust | 7.1170 | **12.31** |
| RBPA | N/A | 8.3060 | 25.02 |
| FFT+RBPA | N/A | 8.3179 | 13.07 |

TABLE VII: Imaging Performance with Different Approaches in Corridor Corner Reflector Case
Fig. 14: Floor map of the corridor corner
Fig. 15: SAR images of corridor corner (BPA and SAS)
this problem, our proposed algorithm employs feasible point pursuit and successive convex approximation techniques. On that basis, we also present another algorithm based on range-FFT to further reduce the computational complexity.
According to the simulation and testbed results, we conclude that our approach can generate an SAR image with quality comparable to that of BPA. Meanwhile, the proposed approach reduces the computational cost significantly and is robust to array errors.
Nonetheless, some image quality must be sacrificed when FFT-based range-dimension matched filtering is employed. Exploring a better range-FFT-based processing approach is therefore a direction for future research.
|
2308.16891 | GNFactor: Multi-Task Real Robot Learning with Generalizable Neural
Feature Fields | It is a long-standing problem in robotics to develop agents capable of
executing diverse manipulation tasks from visual observations in unstructured
real-world environments. To achieve this goal, the robot needs to have a
comprehensive understanding of the 3D structure and semantics of the scene. In
this work, we present $\textbf{GNFactor}$, a visual behavior cloning agent for
multi-task robotic manipulation with $\textbf{G}$eneralizable $\textbf{N}$eural
feature $\textbf{F}$ields. GNFactor jointly optimizes a generalizable neural
field (GNF) as a reconstruction module and a Perceiver Transformer as a
decision-making module, leveraging a shared deep 3D voxel representation. To
incorporate semantics in 3D, the reconstruction module utilizes a
vision-language foundation model ($\textit{e.g.}$, Stable Diffusion) to distill
rich semantic information into the deep 3D voxel. We evaluate GNFactor on 3
real robot tasks and perform detailed ablations on 10 RLBench tasks with a
limited number of demonstrations. We observe a substantial improvement of
GNFactor over current state-of-the-art methods in seen and unseen tasks,
demonstrating the strong generalization ability of GNFactor. Our project
website is https://yanjieze.com/GNFactor/ . | Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, Xiaolong Wang | 2023-08-31T17:52:10Z | http://arxiv.org/abs/2308.16891v3 | # GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields
###### Abstract
It is a long-standing problem in robotics to develop agents capable of executing diverse manipulation tasks from visual observations in unstructured real-world environments. To achieve this goal, the robot needs to have a comprehensive understanding of the 3D structure and semantics of the scene. In this work, we present **GNFactor**, a visual behavior cloning agent for multi-task robotic manipulation with **G**eneralizable **N**eural feature **F**ields. GNFactor jointly optimizes a generalizable neural field (GNF) as a reconstruction module and a Perceiver Transformer as a decision-making module, leveraging a shared deep 3D voxel representation. To incorporate semantics in 3D, the reconstruction module utilizes a vision-language foundation model (_e.g._, Stable Diffusion) to distill rich semantic information into the deep 3D voxel. We evaluate GNFactor on 3 real robot tasks and perform detailed ablations on 10 RLBench tasks with a limited number of demonstrations. We observe a substantial improvement of GNFactor over current state-of-the-art methods in seen and unseen tasks, demonstrating the strong generalization ability of GNFactor.
**Keywords:** Robotic Manipulation, Neural Radiance Field, Behavior Cloning
## 1 Introduction
One major goal of introducing learning into robotic manipulation is to enable the robot to effectively handle unseen objects and successfully tackle various tasks in new environments. In this paper, we focus on using imitation learning with a few demonstrations for multi-task manipulation. Using imitation learning helps avoid complex reward design and training can be directly conducted on the real robot without creating its digital twin in simulation [1, 2, 3, 4]. This enables policy learning on diverse tasks in complex environments, based on users' instructions (see Figure 1). However, working with a limited number of demonstrations presents great challenges in terms of generalization. Most of these challenges arise from the need to comprehend the 3D structure of the scene, understand the semantics and functionality of objects, and effectively follow task instructions based on visual cues. Therefore, a comprehensive and informative visual representation of the robot's observations serves as a crucial foundation for generalization.
The development of visual representation for robot learning has mainly focused on learning within a 2D plane. Self-supervised objectives are leveraged to pre-train the representation from the 2D image observation [6, 7, 8] or jointly optimized with the policy gradients [9, 10, 11]. While these approaches improve sample efficiency and lead to more robust policies, they are mostly applied to relatively simple manipulation tasks. To tackle more complex tasks requiring geometric understanding (_e.g._, object shape and pose) and with occlusions, 3D visual representation learning has been recently adopted with robot learning [11, 12]. For example, Driess et al. [12] train the 3D scene representation by using NeRF and view synthesis to provide supervision. While it shows effectiveness over tasks requiring geometric reasoning such as hanging a cup, it only handles the simple
scene structure with heavy masking in a single-task setting. More importantly, without a semantic understanding of the scene, it would be very challenging for the robot to follow the user's language instructions.
In this paper, we introduce learning a language-conditioned policy using a novel representation leveraging both 3D and semantic information for multi-task manipulation. We train **G**eneralizable **N**eural Feature **F**ields (**GNF**) which distills pre-trained semantic features from 2D foundation models into the Neural Radiance Fields (NeRFs). We conduct policy learning upon this representation, leading to our model **GNFactor**. It is important to note that GNFactor learns an encoder to extract scene features in a feed-forward manner, instead of performing per-scene optimization in NeRF. Given a single RGB-D image observation, our model encodes it into a 3D semantic volumetric feature, which is then processed by a Perceiver Transformer [13] architecture for action prediction. To conduct multi-task learning, the Perceiver Transformer takes in language instructions to get task embedding, and reason the relations between the language and visual semantics for manipulation.
There are two branches of training in our framework (see Figure 3): (i) _GNF training_. Given the collected demonstrations, we train the Generalizable Neural Feature Fields using view synthesis with volumetric rendering. Besides rendering the RGB pixels, we also render the features of the foundation models in 2D space. The GNF learns from both pixel and feature reconstruction at the same time. To provide supervision for feature reconstruction, we apply a vision foundation model (_e.g._, pre-trained Stable Diffusion model [5]) to extract the 2D feature from the input view as the ground truth. In this way, we can distill the semantic features into the 3D space in GNF. (ii) _GNFactor joint training_. Building on the 3D volumetric feature jointly optimized by the learning objectives of GNF, we conduct behavior cloning to train the whole model end-to-end.
For evaluation, we conduct real-robot experiments on three distinct tasks across two different kitchens (see Figure 1). We successfully train a single policy that effectively addresses these tasks in different scenes, yielding significant improvements over the baseline method PerAct [3]. We also conduct comprehensive evaluations using 10 RLBench simulated tasks [14] and 6 designed generalization tasks. We observe that GNFactor outperforms PerAct with an average improvement of \(1.55\)x and \(1.57\)x, consistent with the significant margin observed in the real-robot experiments.
## 2 Related Work
**Multi-Task Robotic Manipulation.** Recent works in multi-task robotic manipulation have led to significant progress in the execution of complex tasks and the ability to generalize to new scenarios [15; 2; 1; 16; 17; 3; 18; 19]. Notable methods often involve the use of extensive interaction data to train multi-task models [2; 1; 16; 17]. For example, RT-1 [1] underscores the benefits of task-agnostic training, demonstrating superior performance in real-world robotic tasks across a variety of datasets. To reduce the need for extensive demonstrations, methods that utilize keyframes - which encode the initiation of movement - have proven to be effective [20; 21; 22; 23; 24]. PerAct [3] employs the Perceiver Transformer [13] to encode language goals and voxel observations and shows its effectiveness in real robot experiments. In this work, we utilize the same action prediction framework
Figure 1: **Left: Three camera views used in the real robot setup to reconstruct the feature field generated by Stable Diffusion [5]. We segment the foreground feature for better illustration. Right: Three language-conditioned real robot tasks across two different kitchens.**
as PerAct, while we focus on improving the generalization ability of this framework by learning a generalizable volumetric representation under limited data.
**3D Representations for Reinforcement/Imitation Learning (RL/IL).** To improve manipulation policies by leveraging visual information, numerous studies have concentrated on enhancing 2D visual representations [8; 7; 6; 25], while for addressing more complex tasks, the utilization of 3D representations becomes crucial. Ze et al. [11] incorporates a deep voxel-based 3D autoencoder in motor control, demonstrating improved sample efficiency compared to 2D representation learning methods. Driess et al. [12] proposes to first learn a state representation by NeRF and then use the frozen state for downstream RL tasks. While this work shows the initial success of utilizing NeRF in RL, its applicability in real-world scenarios is constrained due to various limitations: _e.g._, the requirement of object masks, the absence of a robot arm, and the lack of scene structure. The work closest to ours is SNeRL [26], which also utilizes a vision foundation model in NeRF. However, similar to NeRF-RL [12], SNeRL masks the scene structure to ensure functionality and the requirement for object masks persists, posing challenges for its application in real robot scenarios. Our proposed GNFactor, instead, handles challenging multi-task real-world scenarios, demonstrating the potential for real robot applications.
**Neural Radiance Fields (NeRFs).** Neural fields have achieved great success in novel view synthesis and scene representation learning these years [27; 28; 29; 30; 31; 32], and recent works also start to incorporate neural fields into robotics [33; 34; 35; 12; 26]. NeRF [29] stands out for achieving photorealistic view synthesis by learning an implicit function of the scene, while it requires per-scene optimization and is thus hard to generalize. Many following methods [36; 37; 38; 39; 40; 41; 42] propose more generalizable NeRFs. PixelNeRF [43] and CodeNeRF [37] encode 2D images as the input of NeRFs, while TransINR [36] leverages a vision transformer to directly infer NeRF parameters. A line of recent works [44; 45; 46; 47; 48; 49] utilize pre-trained vision foundation models such as DINO [50] and CLIP [51] as supervision besides the RGB image, which thus enables the NeRF to learn generalizable features. In this work, we incorporate generalizable NeRF to reconstruct different views in RGB and embeddings from a pretrained Stable Diffusion model [5].
## 3 Method
In this section, we detail the proposed GNFactor, a multi-task agent with a 3D volumetric representation for real-world robotic manipulation. GNFactor is composed of a volumetric rendering module and a 3D policy module, sharing the same deep volumetric representation. The volumetric rendering module learns a Generalizable Neural Feature Field (GNF), to reconstruct the RGB image from cameras and the embedding from a vision-language foundation model, _e.g._, Stable Diffusion [5]. The task-agnostic nature of the vision-language embedding enables the volumetric representation to learn generalizable features via neural rendering and thus helps the 3D policy module better handle multi-task robotic manipulation. The task description is encoded with CLIP [51] to obtain the task embedding \(T\). An overview of GNFactor is shown in Figure 3.
Figure 2: **Simulation environments and the real robot setup. We show the RGB observations for our 10 RLBench tasks in Figure (a), the sampled views for GNF in Figure (b), and the real robot setup in Figure (c).**
### Problem Definition
To effectively address complex real-world robotic problems, we structure the observation space as a 3D voxel space \(\mathcal{O}\in\mathbb{R}^{100^{3}\times 3}\), as opposed to the commonly used 2D images [1; 2; 7; 8]. The 3D voxel observation originates from an RGB-D image captured by a **single front camera** with known extrinsic and intrinsic parameters, ensuring our method's practical applicability in the real world. In addition to the front camera view used for policy training, we also gather additional \(k\) views for training the GNF. We collect only RGB images for these additional views instead of RGB-D images. In real-world scenarios, we use \(k=2\), while in simulated environments, we set \(k=19\).
The action of the robot arm with a gripper is represented by translation \(a_{\text{trans}}\in\mathbb{R}^{3}\), rotation \(a_{\text{rot}}\in\mathbb{R}^{(360/5)\times 3}\), gripper openness \(a_{\text{open}}\in[0,1]\), and collision avoidance \(a_{\text{collision}}\in[0,1]\). For the rotation \(a_{\text{rot}}\), each rotation axis is discretized into bins of \(R=5\) degrees, giving \(360/5=72\) bins per axis. The collision avoidance parameter \(a_{\text{collision}}\) instructs the motion planner regarding the necessity to avoid collisions, which is crucial as our tasks encompass both contact-based and non-contact-based motions.
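As an illustration of this action parameterization, the following sketch discretizes per-axis Euler angles into 5-degree bins and packs one keyframe action; the helper names and the dictionary layout are ours, not the authors' code.

```python
import numpy as np

ROT_RESOLUTION_DEG = 5                    # bin width per rotation axis
N_ROT_BINS = 360 // ROT_RESOLUTION_DEG    # 72 bins per axis, matching (360/5) x 3

def discretize_rotation(euler_deg):
    """Map per-axis Euler angles (degrees) to integer bin indices in [0, 71]."""
    euler_deg = np.asarray(euler_deg) % 360.0
    return (euler_deg // ROT_RESOLUTION_DEG).astype(int)

def encode_keyframe_action(trans_xyz, euler_deg, gripper_open, avoid_collision):
    """Pack one keyframe action: continuous translation, binned rotation,
    and binary gripper-openness / collision-avoidance flags."""
    return {
        "trans": np.asarray(trans_xyz, dtype=float),   # (3,) in metres
        "rot_bins": discretize_rotation(euler_deg),    # (3,) integers
        "open": int(bool(gripper_open)),
        "collision": int(bool(avoid_collision)),
    }
```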
Due to the inefficiency of continuous action prediction and the extensive data requirements that come with it, we reformulate the behavior cloning problem as a _keyframe-prediction_ problem [3; 52]. We first extract keyframes from expert demonstrations using the following metric: a frame in the trajectory is a keyframe when joint velocities approach zero and the gripper's open state remains constant. The model is then trained to predict the subsequent keyframe based on current observations. This formulation effectively transforms the continuous control problem into a discretized keyframe-prediction problem, delegating the intermediate motions to the RRT-connect motion planner [53] in simulation and to a linear motion planner on the real-world xArm7 robot.
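A minimal sketch of this keyframe heuristic is shown below, directly implementing the stated criterion (near-zero joint velocities and an unchanged gripper state); the velocity threshold is an assumed value and the function name is ours.

```python
import numpy as np

def extract_keyframes(joint_velocities, gripper_open, vel_eps=1e-2):
    """Return indices of keyframes in a demonstration trajectory.

    joint_velocities : (T, n_joints) array of joint velocities
    gripper_open     : (T,) array of binary gripper-openness states
    A frame is kept as a keyframe when all joint velocities are close to zero
    and the gripper state did not change relative to the previous frame.
    """
    keyframes = []
    for t in range(1, len(gripper_open)):
        arm_stopped = np.all(np.abs(joint_velocities[t]) < vel_eps)
        gripper_unchanged = gripper_open[t] == gripper_open[t - 1]
        if arm_stopped and gripper_unchanged:
            keyframes.append(t)
    return keyframes
```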
### Learning Volumetric Representations with Generalizable Neural Feature Fields
In our initial step, we transform the RGB-D image into a \(100^{3}\) voxel. Then the 3D voxel encoder encodes this 3D voxel and outputs our volumetric representation \(v\in\mathbb{R}^{100^{3}\times 128}\). To enhance the volumetric representation \(v\) with structural knowledge and language semantics, we learn a Generalizable Neural Feature Field (GNF) that takes the deep volume \(v\) as the scene representation and the model is learned by reconstructing the additional views and the features predicted by a 2D vision-language foundation model [5]. The entire neural rendering process is described as follows.
We denote \(v_{\mathbf{x}}\in\mathbb{R}^{128}\) as the sampled 3D feature for the 3D point \(\mathbf{x}\) using the volumetric representation \(v\). \(v_{\mathbf{x}}\) is formed with trilinear interpolation due to the discretized nature of the volume \(v\). Our GNF primarily consists of three functions: (i) one density function \(\sigma(\mathbf{x},v_{\mathbf{x}}):\mathbb{R}^{3+128}\mapsto\mathbb{R}_{+}\) that
Figure 3: **Overview of GNFactor. GNFactor takes an RGB-D image as input and encodes it using a voxel encoder to transform it into a feature in deep 3D volume. This volume is then shared by two modules: volumetric rendering (Renderer) and robot action prediction (Perceiver). These two modules are jointly trained, which optimizes the shared features to not only reconstruct vision-language embeddings (Diffusion Feature) and other views (RGB), but also to estimate accurate Q-values (\(Q_{\text{trans}}\), \(Q_{\text{rot}}\), \(Q_{\text{collide}}\), \(Q_{\text{open}}\)).**
maps the 3D point \(\mathbf{x}\) and the 3D feature \(v_{\mathbf{x}}\) to the density \(\sigma\), (ii) one RGB function \(\mathbf{c}(\mathbf{x},\mathbf{d},v_{\mathbf{x}}):\mathbb{R}^{3+3+128}\mapsto \mathbb{R}^{3}\) that maps the 3D point \(\mathbf{x}\), the view direction \(\mathbf{d}\), and the 3D feature \(v_{\mathbf{x}}\) to color, and (iii) one vision-language embedding function \(\mathbf{f}(\mathbf{x},\mathbf{d},v_{\mathbf{x}}):\mathbb{R}^{3+3+128}\mapsto \mathbb{R}^{512}\) that maps the 3D point \(\mathbf{x}\), the view direction \(\mathbf{d}\), and the 3D feature \(v_{\mathbf{x}}\) to the vision-language embedding. In Figure 3, the corresponding components of these three functions are illustrated. Given a pixel's camera ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\), which is defined by the camera origin \(o\in\mathbb{R}^{3}\), view direction \(\mathbf{d}\) and depth \(t\) with bounds \([t_{n},t_{f}]\), the estimated color and embedding of the ray can be calculated by:
\[\hat{\mathbf{C}}(\mathbf{r},v) =\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t),v_{\mathbf{x}(t)}) \mathbf{c}(\mathbf{r}(t),\mathbf{d},v_{\mathbf{x}(t)})\mathrm{d}t\,, \tag{1}\] \[\hat{\mathbf{F}}(\mathbf{r},v) =\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t),v_{\mathbf{x}(t)}) \mathbf{f}(\mathbf{r}(t),\mathbf{d},v_{\mathbf{x}(t)})\mathrm{d}t\,,\]
where \(T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(s)\mathrm{d}s\right)\). The integral is approximated with numerical quadrature in the implementation. Our GNF is then optimized to reconstruct the RGB image and the vision-language embedding from multiple views and diverse scenes by minimizing the following loss:
\[\mathcal{L}_{\text{recon}}=\sum_{\mathbf{r}\in\mathcal{R}}\|\mathbf{C}( \mathbf{r})-\hat{\mathbf{C}}(\mathbf{r})\|_{2}^{2}+\lambda_{\text{feat}}\| \mathbf{F}(\mathbf{r})-\hat{\mathbf{F}}(\mathbf{r})\|_{2}^{2}\,, \tag{2}\]
where \(\mathbf{C}(\mathbf{r})\) is the ground truth color, \(\mathbf{F}(\mathbf{r})\) is the ground truth vision-language embedding generated by Stable Diffusion, \(\mathcal{R}\) is the set of rays generated from camera poses, and \(\lambda_{\text{feat}}\) is the weight for the embedding reconstruction loss. For efficiency, we sample \(b_{\text{ray}}\) rays given one target view, instead of reconstructing the entire image. To help the GNF training, we use a coarse-to-fine hierarchical structure as the original NeRF [29] and apply depth-guided sampling [54] in the "fine" network.
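The per-ray quadrature of Eq. (1) and the per-ray loss of Eq. (2) can be sketched as follows; this is the standard NeRF-style discretization with illustrative function names, and the \(\lambda_{\text{feat}}\) value is a placeholder rather than the paper's setting.

```python
import numpy as np

def render_ray(sigmas, colors, feats, deltas):
    """Numerical quadrature along one camera ray.

    sigmas : (S,)    densities at the S sampled points
    colors : (S, 3)  RGB predictions at the sampled points
    feats  : (S, D)  vision-language feature predictions (D = 512 here)
    deltas : (S,)    spacing between consecutive samples along the ray
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))   # accumulated transmittance T(t)
    weights = trans * alphas
    rgb_hat = (weights[:, None] * colors).sum(axis=0)                # estimate of C(r)
    feat_hat = (weights[:, None] * feats).sum(axis=0)                # estimate of F(r)
    return rgb_hat, feat_hat

def ray_recon_loss(rgb_hat, rgb_gt, feat_hat, feat_gt, lambda_feat=0.01):
    """Per-ray version of the reconstruction loss in Eq. (2)."""
    return np.sum((rgb_hat - rgb_gt) ** 2) + lambda_feat * np.sum((feat_hat - feat_gt) ** 2)
```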
### Action Prediction with Volumetric Representations
The volumetric representation \(v\) is optimized not only to achieve reconstruction of the GNF module, but also to predict the desired action for accomplishing manipulation tasks within the 3D policy. As such, we jointly train the representation \(v\) to satisfy the objectives of both the GNF and the 3D policy module. In this section, we elaborate the training objective and the architecture of the 3D policy.
We employ a Perceiver Transformer [3] to handle the high-dimensional multi-modal input, _i.e._, the 3D volume, the robot's proprioception, and the language feature. We first condense the shared volumetric representation \(v\) into a volume of size \(20^{3}\times 128\) using a 3D convolution layer with a kernel size and stride of 5, followed by a ReLU function, and flatten the 3D volume into a sequence of small cubes of size \(8000\times 128\). The robot's proprioception is projected into a 128-dimensional space and concatenated with the volume sequence for each cube, resulting in a sequence of size \(8000\times 256\). We then project the language token features from CLIP into the same dimensions (\(77\times 256\)) and concatenate these features with a combination of the 3D volume, the robot's proprioception state, and the CLIP token embedding. The result is a sequence with dimensions of \(8077\times 256\).
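The shape bookkeeping of this tokenization step can be traced with the short PyTorch sketch below; the module names, the proprioception dimensionality and the CLIP token width are illustrative assumptions rather than the authors' implementation, and the resulting sequence is handed to the Perceiver Transformer as described in the next paragraph.

```python
import torch
import torch.nn as nn

batch = 1
v = torch.randn(batch, 128, 100, 100, 100)            # shared volumetric representation v

condense = nn.Sequential(nn.Conv3d(128, 128, kernel_size=5, stride=5), nn.ReLU())
small = condense(v)                                    # (1, 128, 20, 20, 20)
volume_tokens = small.flatten(2).transpose(1, 2)       # (1, 8000, 128): one token per cube

proprio = torch.randn(batch, 4)                        # robot proprioception (dimension assumed)
proprio_tok = nn.Linear(4, 128)(proprio)               # project to 128-d
volume_tokens = torch.cat(
    [volume_tokens, proprio_tok[:, None, :].expand(-1, 8000, -1)], dim=-1
)                                                      # (1, 8000, 256)

lang = torch.randn(batch, 77, 512)                     # CLIP token features (width assumed)
lang_tokens = nn.Linear(512, 256)(lang)                # (1, 77, 256)

sequence = torch.cat([volume_tokens, lang_tokens], dim=1)
print(sequence.shape)                                  # torch.Size([1, 8077, 256])
```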
This sequence is combined with a learnable positional encoding and passed through the Perceiver Transformer, which outputs a sequence of the same size. We remove the last \(77\) features for the ease of voxelization [3] and reshape the sequence back to a voxel of size \(20^{3}\times 256\). This voxel is then upscaled to \(100^{3}\times 128\) with trilinear interpolation and referred to as \(v_{\text{PT}}\). \(v_{\text{PT}}\) is shared across three action prediction heads (\(Q_{\text{open}}\), \(Q_{\text{trans}}\), \(Q_{\text{rot}}\), \(Q_{\text{collide}}\) in Figure 3) to determine the final robot actions at the same scale as the observation space. To retain the learned features from GNF training, we create a skip connection between our volumetric representation \(v\) and \(v_{\text{PT}}\). The combined volume feature \((v,v_{\text{PT}})\) is used to predict a 3D Q-function \(\mathcal{Q}_{\text{trans}}\) for translation, as well as Q-functions for other robot operations like gripper openness (\(\mathcal{Q}_{\text{open}}\)), rotation (\(\mathcal{Q}_{\text{rot}}\)), and collision avoidance (\(\mathcal{Q}_{\text{collide}}\)). The \(\mathcal{Q}\)-function here represents the action values of one timestep, differing from the traditional \(\mathcal{Q}\)-function in RL that is for multiple timesteps. For example, in each timestep, the 3D \(\mathcal{Q}_{\text{trans}}\)-value would be equal to \(1\) for the most possible next voxel and \(0\) for other voxels. The model then optimizes the cross-entropy loss like a classifier,
\[\mathcal{L}_{\text{action}}=-\mathbb{E}_{Y_{\text{trans}}}\left[\log\mathcal{V}_{\text{trans}}\right]-\mathbb{E}_{Y_{\text{rot}}}\left[\log\mathcal{V}_{\text{rot}}\right]-\mathbb{E}_{Y_{\text{open}}}\left[\log\mathcal{V}_{\text{open}}\right]-\mathbb{E}_{Y_{\text{collide}}}\left[\log\mathcal{V}_{\text{collide}}\right]\,, \tag{3}\]
where \(\mathcal{V}_{i}=\mathrm{softmax}(\mathcal{Q}_{i})\) for \(\mathcal{Q}_{i}\in[\mathcal{Q}_{\text{trans}},\mathcal{Q}_{\text{open}},\mathcal{ Q}_{\text{rot}},\mathcal{Q}_{\text{collide}}]\) and \(Y_{i}\in[Y_{\text{trans}},Y_{\text{rot}},Y_{\text{open}},Y_{\text{collide}}]\) is the ground truth one-hot encoding. The overall learning objective for GNFactor is as follows:
\[\mathcal{L}_{\text{GNFactor}}=\mathcal{L}_{\text{action}}+\lambda_{\text{recon}} \mathcal{L}_{\text{recon}}\,, \tag{4}\]
where \(\lambda_{\text{recon}}\) is the weight for the reconstruction loss to balance the scale of different objectives. To train the GNFactor, we employ a joint training approach in which the GNF and 3D policy module are optimized jointly, without any pre-training. From our empirical observation, this approach allows for better fusion of information from the two modules when learning the shared features.
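For concreteness, a compact sketch of how Eq. (3) and Eq. (4) can be assembled is given below, with the rotation term shown as a single flattened classification for brevity; the targets are passed as class indices and the \(\lambda_{\text{recon}}\) value is a placeholder, not the weight used in the paper.

```python
import torch.nn.functional as F

def action_loss(q_trans, q_rot, q_open, q_collide, y_trans, y_rot, y_open, y_collide):
    """Eq. (3): cross-entropy between the flattened Q-maps (treated as logits)
    and the expert targets (given here as class indices)."""
    return (F.cross_entropy(q_trans, y_trans)
            + F.cross_entropy(q_rot, y_rot)
            + F.cross_entropy(q_open, y_open)
            + F.cross_entropy(q_collide, y_collide))

def gnfactor_loss(l_action, l_recon, lambda_recon=0.01):
    """Eq. (4): joint objective balancing action prediction and GNF reconstruction."""
    return l_action + lambda_recon * l_recon
```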
## 4 Experiments
In this section, we conduct experiments to answer the following questions: (i) Can GNFactor surpass the baseline model in simulated environments? (ii) Can GNFactor generalize to novel scenes in simulation? (iii) Does GNFactor learn a superior policy that handles real robot tasks in two different kitchens with noisy and limited real-world data? (iv) What are the crucial factors in GNFactor to ensure the functionality of the entire system? Our concluded results are given in Figure 4.
### Experiment Setup
For the sake of reproducibility and benchmarking, we conduct our primary experiments in RLBench simulated tasks. Furthermore, to show the potential of GNFactor in the real world, we design a set of real robot experiments across two kitchens. We compare our GNFactor with the strong language-conditioned multi-task agent PerAct [3] in both simulation and the real world, emphasizing the universal functionality of GNFactor. Both GNFactor and PerAct use the single RGB-D image from the front camera as input to construct the voxel grid. In the multi-task simulation experiments, we also create a stronger version of PerAct by adding more camera views as input to fully cover the scene (visualized in Figure 10). Figure 2 shows our simulation tasks and the real robot setup. We briefly describe the tasks and details are left in Appendix A and Appendix B.
**Simulation.** We select \(10\) challenging language-conditioned manipulation tasks from the RLBench task suite [14]. Each task has at least two variations, totaling \(166\) variations. These variations encompass several types, such as variations in shape and color. Therefore, to achieve high success rates with very limited demonstrations, the agent needs to learn generalizable knowledge about manipulation instead of merely overfitting to the given demonstrations. We use the RGB-D image of size \(128\times 128\times 3\) from the single front camera as the observation. To train the GNF, we add \(19\) additional camera views to provide RGB images as supervision.
**Real robot.** We use the xArm7 robot with a parallel gripper in real robot experiments. We set up two toy kitchen environments to make the agent generalize manipulation skills across the scenes and designed three manipulation tasks, including _open the microwave door_, _turn the faucet_, and _relocate the teapot_, as shown in Figure 1. We set up three RealSense cameras around the robot. Among the three cameras, the front one captures the RGB-D observations for the policy training and the left/right one provides the RGB supervision for the GNF training.
Figure 4: **Main experiment results.** We present the average success rates in both the multi-task and generalization settings across RLBench tasks and real robot tasks. The error bar represents one standard deviation. The number in the bracket denotes the number of tasks.
**Expert Demonstrations.** We collect \(20\) demonstrations for each RLBench task with the motion planner. The task variation is uniformly sampled. We collect \(5\) demonstrations for each real robot task using a VR controller. Details for collection remain in Appendix C.
**Generalization tasks.** To further show the generalization ability of GNFactor, we design additional \(6\) simulated tasks and \(3\) real robot tasks based on the original training tasks and add task distractors.
**Training details.** One agent is trained with two NVIDIA RTX3090 GPUs for \(2\) days (\(100\)k iterations) with a batch size of \(2\). The shared voxel encoder of GNFactor is implemented as a lightweight 3D UNet with only \(0.3\)M parameters. The Perceiver Transformer keeps the same number of parameters as PerAct [3] (\(25.2\)M parameters), making our comparison with PerAct fair.
### Simulation Results
We report the success rates for multi-task tests on RLBench in Table 1 and for generalization to new environments in Table 2. We conclude our observations as follows:
**Dominance of GNFactor over PerAct for multi-task learning.** As shown by Table 1 and Figure 4, GNFactor achieves higher success rates across various tasks compared to PerAct, particularly excelling in challenging long-horizon tasks. For example, in sweep to dustpan task, the robot needs to first pick up the broom and use the broom to sweep the dust into the dustpan. We find that GNFactor achieves a success rate of \(28.0\%\), while PerAct could not succeed at all. In simpler tasks like open drawer where the robot only pulls the drawer out, both GNFactor and PerAct perform reasonably well, with success rates of \(76.0\%\) and \(54.7\%\) respectively. Furthermore, we observe that enhancing PerAct with extra camera views does not result in significant improvements. This underscores the importance of efficiently utilizing the available camera views.
**Generalization ability of GNFactor to new tasks.** In Table 2, we observe that the changes made to the environments, such as added distractors, impact all agents negatively, while GNFactor generalizes better than PerAct on 5 out of 6 tasks. We also find that both GNFactor and PerAct struggle with some challenging variations, such as the smaller block in the slide (S) task. This further emphasizes the importance of robust generalization skills.
**Ablations.** We summarize the key components in GNFactor that contribute to the success of the volumetric representation in Table 4. From the ablation study, we gained several insights:
(i) Our GNF reconstruction module plays a crucial role in multi-task robot learning. Moreover, the RGB loss is essential for learning a consistent 3D feature in addition to the feature loss, especially since the features derived from foundation models are not inherently 3D consistent.
(ii) The volumetric representation benefits from Diffusion features and depth-guided sampling, where the depth prior is utilized to enhance the sampling quality in neural rendering.
| Method / Task | close jar | open drawer | sweep to dustpan | meat off grill | turn tap |
| --- | --- | --- | --- | --- | --- |
| PerAct | 18.7 ± 8.2 | 54.7 ± 18.6 | 0.0 ± 0.0 | 40.0 ± 17.0 | 38.7 ± 6.8 |
| PerAct (4 Cameras) | 21.3 ± 7.5 | 44.0 ± 11.3 | 0.0 ± 0.0 | **65.3 ± 13.2** | 46.7 ± 3.8 |
| GNFactor | **25.3 ± 6.8** | **76.0 ± 7.7** | **28.0 ± 10.0** | 57.3 ± 18.9 | **50.7 ± 8.2** |

| Method / Task | slide block | put in drawer | drag stick | push buttons | stack blocks | **Average** |
| --- | --- | --- | --- | --- | --- | --- |
| PerAct | 18.7 ± 13.6 | 2.7 ± 3.3 | 5.3 ± 5.0 | 18.7 ± 12.4 | **6.7 ± 10.0** | 20.4 |
| PerAct (4 Cameras) | 16.0 ± 14.2 | **6.7 ± 6.2** | 12.0 ± 3.3 | 9.3 ± 19.9 | 5.3 ± 19.9 | 22.7 |
| GNFactor | **20.0 ± 15.0** | 0.0 ± 0.0 | **37.3 ± 3.2** | **18.7 ± 10.0** | 4.0 ± 3.3 | **31.7** |

Table 1: **Multi-task test results on RLBench.** We evaluate 25 episodes for each checkpoint on 10 tasks across 3 seeds and report the success rates (%) of the final checkpoints. Our method outperforms the most competitive baseline PerAct [3] with an average improvement of **1.55x** and still largely surpasses PerAct with 4 cameras as input. The additional camera views are visualized in Figure 10.
| Method / Task | drag (D) | slide (L) | slide (S) | open (N) | turn (N) | push (D) | **Average** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PerAct | 6.6 ± 4.7 | **33.3 ± 4.7** | 5.0 ± 4.1 | 25.0 ± 10.8 | 18.3 ± 6.2 | 20.0 ± 7.1 | 18.0 |
| GNFactor | **46.7 ± 30.6** | 25.0 ± 4.1 | **6.7 ± 6.2** | **31.7 ± 6.2** | **28.3 ± 2.4** | **31.7 ± 2.4** | **28.3** |

Table 2: **Generalization to unseen tasks on RLBench.** We evaluate 20 episodes for each task with the final checkpoint across 3 seeds. We denote "L" as a larger object, "S" as a smaller object, "N" as a new position, and "D" as adding a distractor. Our method outperforms PerAct with an average improvement of **1.57x**.
An intuitive explanation is that the GNF, when combined with depth-guided sampling (DGS), becomes more adept at learning depth and 3D structure information. This enhanced understanding allows the 3D representation to better concentrate on the surfaces of objects. Moreover, replacing Stable Diffusion with DINO [50] or CLIP [51] does not easily yield similar improvements, indicating the importance of our vision-language feature.
(iii) While the use of a skip connection is not novel and we merely follow the structure of PerAct, the result of removing the skip connection suggests that our voxel representation, which distills features from the foundation model, plays a critical role in predicting the final action.
(iv) Striking a careful balance between the neural rendering loss and the action prediction loss is critical for optimal performance, and the use of information from multiple views by our GNF module proves to be beneficial for the single-view decision module.
Furthermore, we provide real-world view synthesis results generated by GNFactor in Figure 5 and Figure 6, together with a quantitative evaluation measured by PSNR [29]. We observe that the rendered views are somewhat blurred, since the volumetric representation learned by GNFactor is optimized to minimize both the neural rendering loss and the action prediction loss; the rendering quality is largely improved when the behavior cloning loss is removed and only the GNF is trained. Notably, for the view synthesis in the real world, we do not have access to ground-truth point clouds for either training or testing. Instead, the point clouds are sourced from RealSense cameras and are therefore imperfect. Despite the limitations in achieving accurate pixel-level reconstruction, we focus on learning a semantic understanding of the whole scene by distilling Diffusion features, which is more important for policy learning.
### Real Robot Experiments
We summarize the results of our real robot experiment in Table 3. From the experiments, GNFactor outperforms the PerAct baseline on almost all tasks. Notably, in the _teapot_ task where the agent is required to accurately determine the grasp location and handle the teapot from a correct angle, PerAct fails to accomplish the task and obtains a zero success rate across two kitchens. We observed that it is indeed challenging to learn a delicate policy from only \(5\) demonstrations. However, by incorporating the representation from the embedding of a vision-language model, GNFactor gains an understanding of objects. As such, GNFactor does not simply overfit to the given demonstrations.
The second kitchen (Figure 1) presents more challenges due to its smaller size compared to the first kitchen. This requires higher accuracy to manipulate the objects effectively. The performance gap between GNFactor and the baseline PerAct becomes more significant in the second kitchen. Importantly, our method does not suffer the same performance drop transitioning from the first kitchen to the second, unlike the baseline.
We also visualize our 3D policy module by Grad-CAM [55], as shown in Figure 7. We use the gradients and the 3D feature map from the 3D convolution layer after the Perceiver Transformer to compute Grad-CAM. We observe that the target objects are clearly attended by our policy, though the training signal is only the Q-value for a single voxel.
## 5 Conclusion and Limitations
In this work, we propose GNFactor, a visual behavior cloning agent for real-world multi-task robotic manipulation. GNFactor utilizes a Generalizable Neural Feature Field (GNF) to learn a 3D volumetric representation, which is also used by the action prediction module. We employ the vision-language
Figure 6: **More novel view synthesis results. Both RGB and features are synthesized. We remove the action loss here for a better rendering quality. Videos are available on yanjieze.com/GNFactor.**
Figure 7: **Visualize the 3D policy module by Grad-CAM [55]. Though the supervision signal is only the Q-value for a single voxel during the training process, we observe in visualizations that the target objects are clearly attended by our policy. Videos are available on yanjieze.com/GNFactor.**
feature from the foundation model Stable Diffusion, besides the RGB feature, to supervise the GNF training, and observe that the volumetric representation enhanced by the GNF is helpful for decision-making. GNFactor achieves strong results in both simulation and the real world, across \(10\) RLBench tasks and \(3\) real robot tasks, showcasing the potential of GNFactor in real-world scenarios.
One major limitation of GNFactor is the requirement of multiple views for the GNF training, which can be challenging to scale up in the real world. Currently, we use three fixed cameras for GNFactor, but it would be interesting to explore using a cell phone to randomly collect camera views, where the estimation of the camera poses would be a challenge.
|
2309.06773 | Laser-induced heating for the experimental study of critical Casimir
forces with optical trapping | Critical Casimir interactions represent a perfect example of bath-induced
forces at mesoscales. These forces may have a relevant role in the living
systems as well as a role in the design of nanomachines fueled by environmental
fluctuations. Since the thermal fluctuations are enhanced in the vicinity of a
demixing point of a second-order phase transition, we can modulate the
magnitude and range of these Casimir-like forces by slight changes in the
temperature. Here, we consider two optical trapped colloidal beads inside a
binary mixture. The Casimir interaction is controlled by warming the mixture by
laser-induced heating, whose local application ensures high reproducibility.
Once this two-particle system is warmed, the critical behavior of different
observables allows the system to become its self-thermometer. We use this
experimental scheme for analyzing the energetics of a critical colloidal system
under a non-equilibrium-driven protocol. We quantify how the injected work can
be dissipated to the environment as heat or stored as free energy. Indeed, our
system allows us to use the fluctuation theorems framework for analyzing the
performance of this critically driven toy model. Our work paves the way for
future experimental studies on the non-equilibrium features of bath-induced
forces and the design of critically driven nanosystems. | Ignacio A. Martinez, Artyom Petrosyan, Sergio Ciliberto | 2023-09-13T07:47:19Z | http://arxiv.org/abs/2309.06773v2 | # Laser-induced heating for the experimental study of critical Casimir forces with optical trapping.
###### Abstract
Critical Casimir interactions represent a perfect example of bath-induced forces at mesoscales. These forces may have a relevant role in the living systems as well as a role in the design of nanomachines fueled by environmental fluctuations. Since the thermal fluctuations are enhanced in the vicinity of a demixing point of a second-order phase transition, we can modulate the magnitude and range of these Casimir-like forces by slight changes in the temperature. Here, we consider two optical trapped colloidal beads inside a binary mixture. The Casimir interaction is controlled by warming the mixture by laser-induced heating, whose local application ensures high reproducibility. Once this two-particle system is warmed, the critical behavior of different observables allows the system to become its self-thermometer. We use this experimental scheme for analyzing the energetics of a critical colloidal system under a non-equilibrium-driven protocol. We quantify how the injected work can be dissipated to the environment as heat or stored as free energy. Indeed, our system allows us to use the fluctuation theorems framework for analyzing the performance of this critically driven toy model. Our work paves the way for future experimental studies on the non-equilibrium features of bath-induced forces and the design of critically driven nanosystems.
## I Introduction
Temperature is a physical quantity that defines the amount of energy a system stores. In particular, the Brownian motion observed in mesoscopic systems is an intrinsic manifestation of the bath temperature. However, controlling temperature at the micrometre scale in micromanipulation setups is far from standard. The usual approach consists in working with macroscopic thermal baths which modify the temperature of the microscopic system, although light-induced local heating has been applied either directly [1] or by using highly absorbing spots [2]. Nevertheless, such protocols have been limited to stationary regimes [3] or to studying the thermalization after energy quenches [4].
A non-trivial example of thermal fluctuations with extreme sensitivity to temperature changes is a liquid mixture close to its critical demixing point. An approaching second-order phase transition enhances the thermal fluctuations by modifying the correlation length \(\xi\) and the relaxation time \(\tau\) of the fluctuation field \(\phi\) of the fluid. Once the liquid approaches its critical temperature \(T_{c}\), both parameters of the fluctuations, \(\xi\) and \(\tau\), diverge from their intrinsic values \(\xi_{0}\) and \(\tau_{0}\) following a universal scaling, \(\xi\approx\xi_{0}(\Delta T/T_{c})^{-\nu}\) and \(\tau\approx\tau_{0}(\Delta T/T_{c})^{-\nu z}\), where \(\nu\) and \(z\) are the static and dynamic exponents respectively and \(\Delta T=T_{c}-T\) is the distance to criticality (hereafter the _critical distance_). Critical binary liquid mixtures have been experimentally shown to produce Casimir-like forces between microscopic objects (critical Casimir forces, CCF) [5], to transfer energy in multi-particle systems [6], to react to bacterial swimming [7] or to generate self-assembly [8], and they have been proposed to induce non-Gaussian fluctuations [9], to react to the chemical affinity of the tracers by changing their viscosity [10] or to play a fundamental role in intracellular dynamics [11].
The critical Casimir force (CCF) is a paradigmatic case of a bath-induced force. In the vicinity of a critical demixing point, the confinement of the thermal fluctuations produces a force between the confining walls. The direction of the force depends on the symmetry of the boundary conditions: attractive in the case of symmetric boundaries, and repulsive in the case of antisymmetric ones. For two identically coated spheres of radius \(R\) with central positions \(x_{1}\) and \(x_{2}\) acting as confining walls, and under the Derjaguin approximation, the Casimir-like potential can be written as:
\[U_{\text{cas}}(d,\xi)=-\frac{AR\pi kT}{\xi}\exp\left(-\frac{d}{\xi}\right) \tag{1}\]
where \(d=x_{2}-x_{1}-2R\) is the distance between the surfaces, \(kT\) is the thermal energy, and \(A\approx 1.3\) is a numerical constant from the numerical approximation [12]. Notice how the temperature can be extracted from \(U_{\text{cas}}\) since the correlation length \(\xi\) is a function of the distance to the criticality.
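As a numerical illustration of Eq. (1), the sketch below evaluates the Casimir-like potential and the corresponding surface-to-surface force for the bead radius used here; the temperature value and the function names are illustrative assumptions.

```python
import numpy as np

KB = 1.380649e-23      # Boltzmann constant, J/K
A_CAS = 1.3            # numerical constant of Eq. (1)

def casimir_potential(d, xi, R=2.5e-6, T=301.0):
    """Derjaguin-approximated critical Casimir potential (J) between two
    identically coated spheres; d, xi and R in metres, T in kelvin."""
    return -A_CAS * R * np.pi * KB * T / xi * np.exp(-d / xi)

def casimir_force(d, xi, R=2.5e-6, T=301.0):
    """Force F = -dU/dd; the negative sign means the surfaces attract."""
    return -A_CAS * R * np.pi * KB * T / xi**2 * np.exp(-d / xi)

# example: surfaces 200 nm apart with a 40 nm correlation length
print(casimir_force(200e-9, 40e-9))    # attractive force in newtons for this illustrative case
```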
This article presents a method for accurately adjusting the interaction force between colloids using a laser source. We demonstrate that by gradually modulating the intensity of the laser on the colloidal solution, we can alter the force acting on two beads trapped in optical tweezers and change the energy fluxes. This method holds the potential for understanding the impact of bath-induced forces, which may be crucial in membrane interactions, and for constructing nano-devices that utilize forces tunable by slight temperature changes. Additionally, it opens avenues for quantitative investigations into the non-equilibrium characteristics of critical baths.
## II Experimental system.
Our critical bath consists of a micelle-solvent mixture, C\({}_{12}\)E\({}_{5}\) in water, at its critical concentration, whose demixing point is at \(T_{c}\approx 30.1^{\circ}\)C, see Methods. The sample has an intrinsic correlation length of 1.3 nm, which is about 5 times larger than that of previously studied liquid mixtures such as
lutidine-water. This enhances the critical effects, which become non-negligible even at relatively large critical distances. The fluid mixture is contained in a transparent cell of thickness \(40\mu\)m.
Inside the mixture, two beads (P1 and P2) are held by two optical tweezers (T1 and T2) at a distance \(\Delta x_{T}\) between their equilibrium positions (see Fig. 1). The two beads are identical (silica, \(5\mu\)m diameter), so the boundary conditions are symmetric and the critical Casimir force is attractive. Besides the trapping laser beam (\(\lambda=1064\) nm), an extra beam of wavelength \(1550\) nm is sent into the cell (see Fig. 1) to modulate the temperature around the particles through the light absorbed by the mixture. This beam has a negligible effect on the trapping strength (see Supplementary Video 1) because it has a focal depth of \(400\) microns and a diameter of \(40\mu\)m, much larger than the region of interest (ROI) around the two particles, whose size is about \(15\mu\)m. Its power \(P_{h}\) can be changed from \(0\) to \(200\) mW. With the cell kept at \(T_{g}\simeq 28.00^{\rm o}\)C, the critical distance \(\Delta T\) in the heated volume is about \(2.0\) K. The cell is illuminated by white light to image the two beads on a CCD camera (see Fig. 1c). The positions \(x_{1},x_{2}\) of the two particle centers are recorded at a sampling frequency of \(400\) Hz at different values of \(P_{h}\).
Furthermore, to obtain an independent measure of the critical distance without adding extra devices, we analyze the variance \(\delta I^{2}\) of the illumination light fluctuations in the corners of the image, in the spirit of dynamic light scattering (Fig. 1c). Since the micelle-rich phase has a different refractive index than the micelle-poor phase, once the transition is crossed we expect a large change in \(\delta I^{2}\) as a function of \(\Delta T\), which allows us to distinguish between the two phases. Furthermore, since the refractive index is expected to depend on the critical distance, we aim at quantifying the behavior of \(\delta I^{2}(\Delta T)\).
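A minimal sketch of this scattering observable, computed from a single grayscale camera frame, is given below; it assumes that the mean intensity is taken over the four corner boxes themselves, and the function name is ours.

```python
import numpy as np

def corner_intensity_variance(frame, box=30):
    """Compute delta I^2 = sum_{i,j} (I_ij - <I>)^2 over four 30x30 px corner
    boxes of a grayscale frame, with <I> the mean intensity of those pixels."""
    corners = np.concatenate([
        frame[:box, :box].ravel(),   frame[:box, -box:].ravel(),
        frame[-box:, :box].ravel(),  frame[-box:, -box:].ravel(),
    ])
    return float(np.sum((corners - corners.mean()) ** 2))
```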
## III Results
### CCF as nanothermoter.
In this section, we demonstrate the reproducibility of CCF with light-induced heating and how it can be used as the system's self-thermometer. In the physical system described before, the trap distance \(\Delta x_{T}=5570\)nm is kept fixed while we slightly change the power of the heating laser \(P_{h}(t)\). The following protocol is time-symmetrical and it has a triangular shape with a constant increase of the irradiated power of \(v_{\rm heat}=90\mu\)Ws\({}^{-1}=\max(P_{h})/\tau_{\rm heat}\), where \(\max(P_{h})=180\)mW is the maximum laser power achieved and \(\tau_{h}=2000\)s is half the duration of the heating cycle, see the black solid line in Fig. 2a) as a guide to the eye. The trajectory of the two particles is followed at a frequency of \(400\) Hz which is more than enough since the trap stiffness is small (\(\kappa=0.5\)pN\(\mu\)m\({}^{-1}\)) hence the beads relaxation time is about \(80\) milliseconds.This time is much shorter than \(\tau_{h}\) showing that the heating protocol is quasi-stationary, i.e. the system's temperature is always close to equilibrium. In Fig. 2a), we show the time-evolution of the observables extracted from the analysis of \(\delta I^{2}\) (left axis, blue line) and the beads trajectory (mean distance between the walls \(d\) over a \(20\) seconds block, right axis, red lines). Notice that the total time of the experiment is above one day with the same set of particles. Since the time evolutions of \(d\) and \(\delta I^{2}\) are correlated with \(P\), we performed an average conditioned to the \(P\) values on all of the cycles. The results \(\langle d\rangle\) and \(\langle\delta I^{2}\rangle\) of this conditional average are plotted in Fig. 2b) as a function of \(P\). The variance \(\langle\delta I^{2}\rangle\) has remarkable reproducibility and stability as a function of \(P_{h}\) and shows that the scattering
Figure 1: Sketch of the system. a) The heating laser raises the local temperature of an amount (\(\delta T\)) over the ground temperature (\(T_{g}\approx 28^{\rm o}\)C). The width of the heating beam (\(40\mu\)m) is larger than the region of interest (ROI) around the particles (size \(\simeq 15\mu\)m). The power of the heating laser is slowly changed to access different values of \(\delta T\). The temperature field is assumed to be homogeneous and with no thermophoretic flows within the ROI. b) Two particles are held in two independent optical traps (\(1\) and \(2\)). The two particles (\(2.5\) microns radius each) interact via the critical Casimir force produced by confining the concentration field fluctuations between the particle surfaces. c) The two particles are held in two independent tweezers at a distance of \(\Delta x_{\rm trap}=5570\)nm between them. We aim at extracting independent information from the system by using two parameters of each video frame. The positions, \(x_{1}\) and \(x_{2}\), of the particles are tracked while we analyze the dispersion \(\delta I^{2}=\sum_{i,j}(I_{i,j}-I)^{2}\) of the intensity at the four corners (\(I_{ij}\), \(30\)px x \(30\) px each) being \(I\) the mean intensity).
increases when the critical point is approached.
In Fig. 2b), we see that the mean distance \(\langle d\rangle\) between the two beads decreases for growing \(P_{h}\), demonstrating the appearance of an attractive force. The appearance of this force can be understood by measuring the probability distribution \(\rho(d)\) and hence the potential \(U(d)\propto-\ln(\rho(d))\), which is plotted in Fig. 2c) for two values of \(P_{h}\). At \(P_{h}=0\) mW, the well defined by the Casimir force and the electrostatic repulsion is small compared to that of the optical trap. Instead, at \(P_{h}=100\) mW, i.e. \(\Delta T\to 0\), the combination of the different interactions produces an energy potential landscape with two comparable equilibrium positions. For the sake of simplicity, we will call the optical trap and the Casimir wells 'OW' and 'CW' respectively. Since the profile of the critical force depends on \(\Delta T\), the occupation of each well also depends on \(\Delta T\), following the detailed balance between the wells. This explains the behavior of \(\langle d\rangle\) as a function of \(P_{h}\) in Fig. 2b). The mean distance between the surfaces \(\langle d\rangle\) decreases progressively from \(\langle d\rangle=500\) nm down to a saturation value of 110 nm. However, this is not a continuous approach between the particles, but a change in the proportion of the residence time in OW and CW. Indeed, if the traps were brought even closer, the total potential would evolve into a landscape with a single equilibrium position, since the range of the potential scales with \(\xi\).
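The potential reconstruction just described amounts to a Boltzmann inversion of the measured distance histogram, as sketched below; the bin count and the zero of the potential are illustrative choices.

```python
import numpy as np

def potential_from_distances(d_samples, bins=80):
    """Estimate U(d)/kT = -ln rho(d) + const from measured surface-to-surface
    distances, setting the minimum of the reconstructed potential to zero."""
    rho, edges = np.histogram(d_samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    occupied = rho > 0
    u_over_kt = -np.log(rho[occupied])
    u_over_kt -= u_over_kt.min()
    return centers[occupied], u_over_kt
```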
As described in previous sections, the correlation length is obtained from \(U(d,P_{h})\), plotted in Fig. 2c). We fit the critical Casimir contribution \(U_{CC}(d)\) and hence obtain the correlation length \(\xi\) of the fluid, assuming the Derjaguin approximation. Assuming the 3D Ising universality class, \(\xi(\Delta T)=\xi_{0}(\Delta T/T_{c})^{-\nu}\) with \(\nu\simeq 0.63\), we can assign a \(\Delta T\) to each value of the heating power \(P_{h}\). The measured \(\xi\) (blue points) and \(\Delta T\) (red dots) are plotted in Fig. 2d) as a function of \(P_{h}\). The blue dashed line is the estimated \(\xi(\Delta T)\) with \(\nu=0.63\). This allows us to find a relationship between the temperature \(T_{\text{ROI}}\) of the ROI and the heating power, specifically \(T_{\text{ROI}}=C_{h}P_{h}+T_{g}\), with a heating rate of \(C_{h}\approx(4.87\pm 0.2)\text{mK mW}^{-1}\).
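The calibration chain from heating power to local temperature and correlation length can be summarized in a few lines; the critical and ground temperatures below are illustrative values consistent with the numbers quoted in the text, not independently measured constants.

```python
T_C  = 303.25        # critical temperature in K (about 30.1 C, see Methods)
T_G  = 301.15        # ground temperature of the cell in K (28.00 C)
C_H  = 4.87e-3       # fitted heating rate in K per mW
XI_0 = 1.3e-9        # intrinsic correlation length in m
NU   = 0.63          # static critical exponent of the 3D Ising class

def correlation_length_from_power(P_mW):
    """Map heating power (mW) to xi via T_ROI = C_h * P + T_g and
    xi = xi_0 * (Delta T / T_c)^(-nu); valid only while T_ROI < T_c."""
    T_roi = T_G + C_H * P_mW
    delta_T = T_C - T_roi
    return XI_0 * (delta_T / T_C) ** (-NU)
```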
Using this relationship between \(\Delta T\) and \(P_{h}\), we observe that \(\langle\delta I^{2}\rangle\) has a power-law behavior (blue dashed line in Fig. 2b) as a function of \(\Delta T\), but with an exponent much larger than \(\nu\). Specifically, comparing \(\langle\delta I^{2}\rangle\) with \(\xi\) we find \(\langle\delta I^{2}\rangle\propto\xi^{\alpha}\propto\Delta T^{-\nu\alpha}\), where \(\alpha=4.7\pm 0.6\).
### Switching the energy transfer between colloids by light-induced heating
In this section, we apply this laser-induced heating method to a dynamical colloidal system. We designed a toy machine based on a protocol in which thermodynamic quantities (work, heat, ...) can be easily defined, so that their statistics can be analyzed within the stochastic thermodynamics framework. We inject energy as work into P1 to study how this energy is dissipated via the heat released by both particles, see the sketch in Fig. 3a). For that purpose, we displace the position of trap 2 following a time-symmetric protocol \(\Gamma\). The position of the movable trap is changed from \(\Delta x_{1}^{T}=5500\) nm to \(\Delta x_{2}^{T}=5840\) nm at a constant velocity \(v=342\) nm/s, see the black solid line in Fig. 3b). We call the action of pushing T2 away from T1 the forward (F) protocol, while the action of bringing the traps closer is the backward one (B). Both processes, of one second each, are connected by two equilib
Figure 2: Critical interactions as local nanothermometers. a) Time-evolution of \(\delta I^{2}\) (blue, left axis) and \(d=x_{2}-x_{1}-2R\) (red, right axis). The black solid line is a guide to the eye without a y-scale showing the power of the heating laser \(P(t)\). Notice the reproducibility over tens of repetitions over more than 24 hours of the experiment. b) \(\langle\delta I^{2}\rangle\) (blues circles and line) and \(\langle d\rangle\) (red points) are plotted as a function of the input heating power \(P_{\text{heat}}\). For increasing power, the mean distance decreases showing the appearance of an attractive force, whereas \(\langle\delta I^{2}\rangle\) increases approaching the critical point. c) The attraction potentials measured at two different heating powers \(P_{h}=0\) mW (blue) and \(P_{h}=100\) mW (red) are plotted as a function of \(d\). The correlation length is extracted from the critical Casimir potential using Derjaguin approximation (blue empty squares), d) The measured \(\xi\) is plotted as a function of \(P\) (blue dots). Since \(\xi/\xi_{0}=(\Delta T/T_{c})^{-\xi}\), we can infer a critical distance \(\Delta T\) for each value of the input power (red points, right axis) obtaining a heating rate of \(C_{h}=(4.87\pm 0.20)\text{mKmW}^{-1}\).
rium positions of the same duration to ensure the equilibration of the colloids within their corresponding traps, giving a total protocol time of 4 seconds. The stiffness of the traps is kept small to allow a broader exploration of the phase space (\(\kappa=0.5\)pN\(\mu\)m\({}^{-1}\)). The dynamics can be drastically modified by small temperature changes, which determine the relative depth of the critical (CW) and optical (OW) wells. Indeed, Fig. 3b) shows how the system, i.e. the colloidal particles, can evolve in different ways along the potential seascape represented by the color map in the background image. At the beginning of each cycle, the system can lie either in the OW (yellow and purple time series in Fig. 3b)) or in the CW (magenta and red). If the traps are close, thermal transitions between the states are allowed and the system may erase its previous state. Once T2 is pushed away, the system must make a choice, see the different trajectories in Fig. 3b). As the probability of choosing each option depends on the ratio between the well depths, the performance of the toy machine can be modulated by changing the distance to criticality, hence the heating laser power. In this analysis, our focus will be on the two events with higher statistical significance, specifically those that do not change the well. We refer to the events where the particles remain within the optical trap well (OW) as OFF events, while those with the particle residing in the critical well (CW) are labeled as ON events. The trajectories of the ON and OFF events are depicted by red and yellow curves, respectively, in Fig. 3b).
In Fig. 3c) and d) we show how the ensemble average of the system's energetics changes as a function of \(P_{\mathrm{h}}\), that is, of the temperature. From the trajectories, we obtain the values of the stochastic work (\(W\)) and of both released heats (\(Q_{1}\) and \(Q_{2}\)) within the framework of stochastic thermodynamics, see Methods. We calculate the ensemble average of each quantity \(\langle X\rangle\) and its probability density function \(\rho(X)\). The mean value of the heat released by each particle \(\langle Q_{i}\rangle\) and the mean injected work \(\langle W\rangle\) are shown in Fig. 3c), with their variances in Fig. 3d). Indeed, \(\langle W\rangle\) increases with temperature during both the F and B protocols, but there is a discrepancy between them. The same features appear in other magnitudes, like the heat released by P2 (\(\langle Q_{2}\rangle\), black) but not by P1 (\(\langle Q_{1}\rangle\), red). Finally, the mean change of internal energy (\(\langle U\rangle=\langle W\rangle+\langle Q_{1}\rangle+\langle Q_{2}\rangle\)) remains constant during both protocols as expected, although it changes with the critical distance. The variance of the same quantities is shown in Fig. 3d), where we observe different behaviors between F and B. Indeed, the existence of different options during the process produces a bimodal distribution in the energetics, see Fig. 4. If we compare the distribution of the injected work \(\rho(W)\) for different \(\Delta T\), we observe how the critical interactions start to dominate at small \(\Delta T\), i.e., the CW dominates over the OW. In Fig. 4 we also compare the distributions of the forward \(\rho_{F}(W)\) and backward \(\rho_{B}(-W)\) protocols in the spirit of the Crooks theorem, \(\rho_{F}(W)/\rho_{B}(-W)=\exp\left[(W-\Delta F)/kT\right]\). The global energetics of the system is the combination of purely dissipative events (OFF) and events that change the free energy of the system (ON).
The same analysis can be performed for each quantity, as presented in Fig. 5, which shows how the ON events dissipate much less energy than the OFF events. Indeed, the OFF events are dragging-like processes in a more complex environment (the non-negligible hydrodynamics due to the surfaces' proximity results in a non-homogeneous viscosity, the Casimir interactions, ...). The ON events allow a closer proximity between the surfaces, and pulling the traps apart increases the effect of the critical interaction, since the increasing distance between the particle and the trap equilibrium position implies a greater importance of the CCF. It is this increase of the critical Casimir force once the traps are pulled apart that increases the free energy of the system, while the small change in the relative distance between the particles is the reason for the small energy dissipation during ON events.
## IV Discussion
Since the temperature of the fluid is increased locally, there is no direct measurement of the temperature via thermometry.
Figure 3: a) Energetic sketch of the system. Two particles are held by independent optical tweezers within a critical bath with correlation length \(\xi\) at a given critical distance (\(\Delta T\)). Trap 1 remains static while trap 2 is moved back and forth at a constant velocity between two equilibrium positions. b) Color plot showing the temporal evolution of the energetic seascape \(U(d)\) felt by the relative distance degree of freedom \(d=x_{2}-x_{1}-2R\). The two-colloidal particle system has different options through the protocol. The single trajectories can start synchronized at the optical trapping well (OW) or the critical well (CW) to remain in the same state, OW \(\rightarrow\) OW (yellow triangles) or CW \(\rightarrow\) CW (red solid line), or change it, OW \(\rightarrow\) CW (purple squares) or CW \(\rightarrow\) OW (magenta circles). For the sake of simplicity in the analysis of the energetics, we will focus on the analysis of OW \(\rightarrow\) OW and CW \(\rightarrow\) CW, defining them as OFF and ON respectively. c-d) Energetics of the system. Work (blue circles), the heat released by particle 1 (red triangles) and heat released by particle 2 (black squares), and the change of internal energy (green diamonds) are shown for the forward and backward process (\(X_{f}\) filled symbols, \(-X_{b}\) empty symbols). c) Mean value \(\langle X\rangle\) and d) variance \(\sigma_{X}^{2}\) as functions of the input heating power \(P_{\mathrm{h}}\).
Indeed, obtaining a reproducible nano-thermometric technique from our typical experiments is one of the objectives of this article. Here, we have based our temperature measurement, or more precisely the _critical distance_ \(\Delta T\) from \(T_{c}\), on i) the trajectory of the particles via the CCF, and ii) the fluctuations of the pixels' luminosity. Assuming that the critical temperature is well defined at \(T_{c}=30.1^{\circ}\)C, we can infer the temperature of the fluid from its distance to this reference, \(T=T_{c}-\Delta T\). Indeed, this is one of the typical fingerprints of studying CCF using optical traps. The possible local heating of the laser trap and the illumination for video tracking may produce a mismatch between the temperature of the thermal bath and the temperature of the physical system. The influence of the critical distance on the ratio of the two potentials makes the statistics of the jumps, as well as the population rates, very sensitive to temperature changes in the environment. However, this method could be used by any physical system to detect temperature changes autonomously, for example, by confining a vesicle filled with a critical fluid and sensing its size or shape.
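To make the nanothermometric step concrete, the following minimal sketch (not the analysis code of this work) inverts the correlation-length scaling \(\xi=\xi_{0}(\Delta T/T_{c})^{-\nu}\) to obtain the critical distance \(\Delta T\), and hence the local temperature, from a measured \(\xi\); the 3D Ising exponent \(\nu\approx 0.63\) and the use of \(T_{c}\) in kelvin for the reduced temperature are assumptions of this sketch.

```python
# Minimal sketch of the nanothermometry step: invert xi = xi_0 * (DeltaT/T_c)**(-nu).
# Assumed inputs: xi_0 and T_c as quoted in the text, nu = 0.63 (3D Ising, assumed).
import numpy as np

XI_0 = 1.4e-9        # bare correlation length of the mixture [m]
T_C = 30.1 + 273.15  # critical (demixing) temperature [K]
NU = 0.63            # correlation-length critical exponent (assumed)

def delta_T_from_xi(xi):
    """Critical distance DeltaT = T_c - T [K] inferred from a measured xi [m]."""
    return T_C * (xi / XI_0) ** (-1.0 / NU)

def temperature_from_xi(xi):
    """Local fluid temperature [K] given the measured correlation length."""
    return T_C - delta_T_from_xi(xi)

if __name__ == "__main__":
    # values read off Fig. 4 at P = 0 mW and P = 150 mW
    for xi_nm in (65.0, 140.0):
        xi = xi_nm * 1e-9
        print(f"xi = {xi_nm:5.1f} nm -> DeltaT = {1e3 * delta_T_from_xi(xi):6.1f} mK, "
              f"T = {temperature_from_xi(xi) - 273.15:.3f} C")
```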
Using light as the heating source allows us to compare the changes of the different observables: the experiment is clean, very reproducible and, having no dependence on the statistics of the jumps, very accurate. However, the optical configuration of our setup makes it difficult to obtain an analytical expression accounting for the change of the dispersion with the critical distance via critical exponents, \(\langle\delta I^{2}\rangle\propto(\Delta T)^{\mu}\), where \(\mu=(3.0\pm 0.5)\) seems to be a combination of different critical exponents. As in previous studies of CCF with optical trapping, we did not report any change in the trap stiffness along the protocol. The index of refraction scales as the density, i.e., \(\delta n/n\propto\delta\rho/\rho\). However, the measurements based on light scattering rely either on the gradient or on the Laplacian of the index-of-refraction field. Therefore, the fluctuations are strongly enhanced by the derivatives.
Fluctuation theorems are powerful theoretical tools that generalize the second law of thermodynamics, allowing us to connect out-of-equilibrium measurements with equilibrium magnitudes like free energy or entropy changes. In particular, Crooks' theorem offers a very visual way of discriminating between purely dissipative and non-purely dissipative processes. Since \(kT\log\left[\rho_{F}(W)/\rho_{B}(-W)\right]=W-\Delta F\), on the one hand, if \(\Delta F=0\) (all the injected work is dissipated as released heat), both distributions cross at zero. On the other hand, if energy is stored in or released by the system, i.e. \(\Delta F\neq 0\), the pdfs will cross at \(W=\Delta F\). Since in our system the initial and final states are multiply defined, as shown in the previous sections, we build the pdf corresponding to each event (ON/OFF) and extract information from each of them. Namely, Fig. 4 shows multimodal distributions for \(\rho_{F}(W)\) and \(\rho_{B}(-W)\).
Figure 4: Work injected into the system during the forward process \(\rho_{F}(W)\) (full squares and colored area) and the backward process \(\rho_{B}(-W)\) (empty circles). a) Statistics of \(W\) far from criticality, \(P=0\,\)mW with \(\xi\approx 65\) nm. b) Close to criticality, \(P=150\,\)mW with \(\xi\approx 140\) nm. The bimodality of all the distributions can be interpreted as the contribution of the two possible equilibrium states. Moreover, the distributions can be decoupled and analyzed in the Crooks-theorem spirit, with a crossing of the forward and backward distributions close to zero for the OFF events and a crossing different from zero for the ON events.
Figure 5: Probability density function of the energetics (\(W\) blue circles, \(Q_{1}\) red triangles, and \(Q_{2}\) black squares) discriminating between a) ON events and b) OFF events (colored areas with full symbols and non-shadowed areas with empty symbols respectively) during the forward process at \(P=150\) mW. The decoupling of the energetics between the two different possibilities shows a higher dissipation during OFF events inferred from the non-zero mean distribution of both released heats (black and red open symbols with non-colored curve) compared with the zero mean heat distributions of the ON events (black and red full symbols with colored area). The difference between the work distributions of the ON and OFF events comes from the storage of free energy in the case of ON (both particles within the critical Casimir well).
However, we can easily notice that the distributions are not symmetric. This is due to the different contributions of the ON and OFF events. While the ON events activate the critical well, increasing \(\Delta F\) along the forward protocol or decreasing it along the backward one, the OFF events only dissipate the injected work as released heat. This is the reason why the forward and backward distributions of the ON events are almost identical (a very small amount of energy is dissipated), while the OFF distributions cross each other at \(W=0\,kT\), see Fig. 5. However, we expect that an increase in the resolution for a given \(\Delta T\) will reveal differences between \(F\) and \(B\) also for the OFF events. The energy dissipation is easier to visualize in Fig. 5, where we show the heat distributions for both particles, as well as the injected work, selecting only the desired realizations. We observe how in the case of ON events the heat distribution is centered almost at zero for both particles, while the distribution does not peak at zero in the case of OFF events. Indeed, the values of the mean work and mean heat shown in Fig. 3c) mainly evolve due to the changes in the probability of choosing between the different events. We want to point out that the forward and backward work distributions overlap each other in the case of the ON events. We interpret this as the critical Casimir force being always in equilibrium for those displacements, due to its fast relaxation time.
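As an illustration of this Crooks-type comparison, the sketch below (assumed, not the analysis code used here) histograms the forward work and the negated backward work for a set of realizations and locates their crossing, which estimates \(\Delta F\); applied separately to ON and OFF events, a crossing near zero flags purely dissipative events, while a finite crossing signals a change in free energy.

```python
# Minimal sketch: estimate Delta F from the crossing of rho_F(W) and rho_B(-W).
import numpy as np

def crooks_crossing(w_forward, w_backward, bins=60):
    """w_forward, w_backward: 1-D arrays of per-realization work (e.g. in kT).
    Returns the work value at which the two histograms cross (estimate of Delta F)."""
    w_f = np.asarray(w_forward)
    w_b_neg = -np.asarray(w_backward)
    grid = np.linspace(min(w_f.min(), w_b_neg.min()),
                       max(w_f.max(), w_b_neg.max()), bins + 1)
    rho_f, _ = np.histogram(w_f, bins=grid, density=True)
    rho_b, _ = np.histogram(w_b_neg, bins=grid, density=True)
    centers = 0.5 * (grid[:-1] + grid[1:])
    diff = rho_f - rho_b
    # the first sign change of rho_F(W) - rho_B(-W) marks the crossing bin
    idx = np.where(np.sign(diff[:-1]) * np.sign(diff[1:]) < 0)[0]
    return centers[idx[0]] if idx.size else float("nan")
```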
## Conclusions
In this article, we have used light-induced heating to control the bath-induced forces between two optically trapped colloidal particles. The heating laser beam is focused by the upper objective, irradiating the sample with a cylindrical profile that allows us to control the temperature in a range of 1 K below the demixing point (\(\approx 30.1^{\mathrm{o}}\mathrm{C}\)) with a precision of \(\pm 10\,\mathrm{mK}\). The temperature is smoothly changed over hours with a symmetric protocol to ensure quasistatic warming. We do not observe any hysteresis in the sample. Since the temperature calibration is done via the correlation length of the fluid, obtained by non-linear fits to the total energy landscape, the precision of the method is strongly affected by the duration of the measurement. To have an extra observable with the same optical setup, we study how the luminosity of the pixels shows a clear dependence on the critical distance. However, since the scattering geometry is highly non-trivial and beyond the scope of this article, linking these results to a quantitative value of the correlation length remains an open task.
The temperature modulation via laser-induced heating is applied to a two-particle toy model whose flux of energy changes as a function of the local temperature. The temperature control is stable enough to study the stochastic energetics of the two-particle system from a statistical point of view and to use the fluctuation theorems in the interpretation of the dissipation sources. From the use of Crooks' theorem, we infer that the critical Casimir force is always in equilibrium, since the forward and backward work distributions for the ON events overlap each other. Moreover, the light-induced heating technique paves the way for time-driven protocols of the local temperature in future experiments on the out-of-equilibrium performance of critical Casimir forces, such as their equilibration or their influence on the rheology of the sample.
As future applications, it is pertinent to highlight two potential approaches for harnessing useful work from the depicted scheme. Firstly, the system's dual-equilibrium configuration exhibits a notable resemblance to the colloidal Szilard engine [13]. By effectively manipulating the probabilities associated with each equilibrium position, one can intentionally change the system's ergodicity, thereby facilitating the extraction of useful work within the framework of the thermodynamics of information. Secondly, a more advantageous avenue emerges: by carefully adjusting the temperature differentials during the forward and reverse processes, it becomes possible to readily access and extract the stored energy in the form of free energy. This rudimentary model aligns with the foundational principles that underlie the impact of critical interactions on biological systems [14].
We are only scratching the surface of the possibilities of fluctuation-induced forces in the performance of nanomachines or their use in nanothermometry. Indeed, although the range would be limited, critical interactions may be used in specific experiments whose temperature dependence is large in a narrow temperature range. In line with the final sentence of [15], "the consciousness of the environment as a part of the whole system is important not only in the ecology but also at the micron or nanoscale physics", we expect these results to pave the way to future experiments where the thermal bath is not just a passive actor but a tunable agent during a physical process.
## Acknowledgements
IAM acknowledges funding from the Spanish Government through grants Contract (FIS2017-83709-R) and Juan de la Cierva program as well as the CNRS visiting researcher program and the MSCA-IF NEQLIQ - 101030465. AP and SC acknowledge funding from ANR18-CE30-0013. This work was partially financed by ERC-OUTEFLUCOP.
## Methods
_Experimental methods._ Our experiments are done in a low critical temperature micelle-solvent solution, C\({}_{12}\)E\({}_{5}\) in Milli-Q water at 1.2% mass concentration. This mixture has a correlation length of \(\xi_{0}\approx 1.4\mathrm{nm}\) and a critical temperature \(T_{C}\approx(30.5\pm 0.1)^{\mathrm{o}}\mathrm{C}\)[16; 17]. A few microspheres (Fluka silica, \(R=(2.50\pm 0.35)\mu\mathrm{m}\)) per milliliter are added to the final mixture in a low concentration to allow long-term measurements without intrusive particles. The mixture is injected into a custom-made cell \(40\mu\mathrm{m}\) thick and mechanically sealed to avoid contamination. This chamber is made by sandwiching
a parafilm mask between a microscope slide and a sapphire optical window. The chamber thickness is reduced to a minimum to avoid thermophoretic or convective effects. Within the fluid cell, the two optical traps are created by a near-infrared laser beam (LaserQuantum, \(\lambda=1064\,\)nm), which is focused by a high-NA oil-immersion objective (Leica \(\times 63\), NA=1.4). The dynamic optical trap is based on the control of the laser beam by an acousto-optical deflector (AA optoelectronics), which allows us to create two different traps using the time-sharing regime at 10 kHz. The two optical traps are kept \(15\,\mu\)m from the cell bottom slide. The beads' images are acquired by a high-speed camera (Mikrotron MC1310) and their positions are tracked in real time by suitable software. The tracking resolution is \(\pm 5\) nm with a conversion factor of \(S=105.4\,\mathrm{nm/px}\). The acquisition frequency is fixed at 400 frames per second for all experiments. Our particle tracking is restricted to the XY plane, while we only analyse the trajectories along the x-axis. We neglect the cross-coupling between the two axes (x and y) since this perturbation is second order in the Rotne-Prager diffusion tensor. The optical traps are calibrated using standard methods such as the power spectral density. From the time series, we obtain the total energy potential via the Boltzmann relation \(\rho(r)\propto\exp\left(-U(r)/kT\right)\). The total potential can be split into its different components: electrostatic \(U_{cf}\), Casimir \(U_{CCF}\), and trapping \(U_{OT}\). The critical Casimir contribution, \(U_{CCF}\), is fitted assuming the Derjaguin approximation (valid for \(d/R\ll 1\)). The correlation length is obtained from the non-linear fit of \(U_{CCF}\) at different values of the irradiating power \(P_{h}\), and hence \(\Delta T=T_{c}-T\).
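A minimal sketch of this potential reconstruction is given below; the Boltzmann inversion follows directly from the relation quoted above, while the fitting function is only a crude, exponentially screened stand-in for the full Derjaguin scaling form (an assumption of this sketch, not the expression used in the actual analysis).

```python
# Minimal sketch: Boltzmann inversion of the sampled relative distance d, followed by
# a fit of the residual (trap- and electrostatics-subtracted) potential to extract xi.
import numpy as np
from scipy.optimize import curve_fit

def potential_from_trajectory(d, bins=80):
    """Return bin centers [m] and U(d) in units of kT from a 1-D array of d samples."""
    counts, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = counts > 0
    u_kT = -np.log(counts[mask])          # up to an irrelevant additive constant
    return centers[mask], u_kT - u_kT.min()

def casimir_model(d, amp, xi):
    # crude stand-in for the Derjaguin scaling function: an attractive,
    # exponentially screened potential of range xi (assumption of this sketch)
    return -amp * np.exp(-d / xi)

def fit_correlation_length(d_centers, u_cas_kT):
    """Fit the residual potential (in kT) and return the correlation length xi [m]."""
    popt, _ = curve_fit(casimir_model, d_centers, u_cas_kT,
                        p0=(1.0, 100e-9), maxfev=10000)
    return popt[1]
```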
The ground temperature (\(T_{g}\)) is controlled by a double feedback system, one on the objective and one inside the cell. The temperature is registered with two independent sensors (Pt 1000\(\Omega\)) and sent to a programmable temperature controller (PID, Stanford Research Instruments). The objective and the cell are heated with heater mats (_Minco_, 40 \(\Omega\) and 80 \(\Omega\) respectively). The whole system is thermally isolated from the environment to reduce the effect of environmental perturbations both on the position of the particles and on the temperature. Once the bulk is thermalized at \(T_{g}=28.00^{\mathrm{o}}\mathrm{C}\), the sample is irradiated using an infrared _heating_ laser (Thorlabs, \(\lambda=1550\,\mathrm{nm}\)) focused by the upper objective (Leica, NA 0.53). The heating laser beam has a width of \(20\,\mu m\) and a depth of field of 400 \(\mu m\), which is much larger than the chamber thickness (40 \(\mu m\)). Therefore, a cylindrical shape can be assumed in the description of the irradiating heating beam.
_Stochastic thermodynamics_. Work (\(W\)) and heat (\(Q\)) are defined as the energy exchanged by the colloidal system with the external agent (the change in the movable trap position \(x^{T}\)) and with the thermal bath, respectively. In differential form, \(\delta W_{i}=\kappa(x_{i}-x_{i}^{T})\circ\mathrm{d}x_{i}^{T}\) and \(\delta Q_{i}=-\kappa(x_{i}-x_{i}^{T})\circ\mathrm{d}x_{i}\), where \(i=1,2\) labels the two particles and \(\circ\) stands for Stratonovich integration. From this definition, the work is only injected through particle 2, since \(x_{1}^{T}\) is static. The events are discriminated between ON and OFF by comparing the mean surface-to-surface distance during a single protocol with a threshold: if this mean relative distance is below \(400\) nm, we consider the realization an ON event. The ensemble average \(\langle X\rangle\) of any quantity \(X\) over \(N\) processes of \(M\) points each is defined as \(\langle X(t_{j})\rangle=\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{j}\delta X_{i}(t_{k})\).
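For illustration, the following sketch (assumed, not the authors' code) implements the bookkeeping defined above: midpoint (Stratonovich) increments for the work and heat of each realization, and the ON/OFF labelling based on the mean surface-to-surface distance.

```python
# Minimal sketch of the stochastic-energetics bookkeeping defined in the Methods.
import numpy as np

KAPPA = 0.5e-6          # trap stiffness [N/m] (kappa = 0.5 pN/um, as quoted above)
D_THRESHOLD = 400e-9    # ON/OFF threshold on the relative surface distance [m]
RADIUS = 2.5e-6         # particle radius [m] (R = 2.5 um, from the sample description)

def work_and_heat(x, x_trap):
    """Cumulative work and heat [J] for one realization, from particle and trap
    positions [m], using midpoint (Stratonovich) increments."""
    elong = x - x_trap                                  # (x - xT)
    elong_mid = 0.5 * (elong[1:] + elong[:-1])          # midpoint rule
    w = np.cumsum(KAPPA * elong_mid * np.diff(x_trap))  # dW = kappa (x - xT) o dxT
    q = np.cumsum(-KAPPA * elong_mid * np.diff(x))      # dQ = -kappa (x - xT) o dx
    return w, q

def is_on_event(x1, x2):
    """Label a realization ON if the mean surface-to-surface distance is below threshold."""
    d = x2 - x1 - 2 * RADIUS
    return np.mean(d) < D_THRESHOLD
```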
|
2309.08986 | Discovery of a Planar Black Hole Mass Scaling Relation for Spiral
Galaxies | Supermassive black holes (SMBHs) are tiny in comparison to the galaxies they
inhabit, yet they manage to influence and coevolve along with their hosts.
Evidence of this mutual development is observed in the structure and dynamics
of galaxies and their correlations with black hole mass ($M_\mathrm{BH}$). For
our study, we focus on relative parameters that are unique to only disk
galaxies. As such, we quantify the structure of spiral galaxies via their
logarithmic spiral-arm pitch angles ($\phi$) and their dynamics through the
maximum rotational velocities of their galactic disks ($v_\mathrm{max}$). In
the past, we have studied black hole mass scaling relations between
$M_\mathrm{BH}$ and $\phi$ or $v_\mathrm{max}$, separately. Now, we combine the
three parameters into a trivariate $M_\mathrm{BH}$-$\phi$-$v_\mathrm{max}$
relationship that yields best-in-class accuracy in prediction of black hole
masses in spiral galaxies. Because most black hole mass scaling relations have
been created from samples of the largest SMBHs within the most massive
galaxies, they lack certainty when extrapolated to low-mass spiral galaxies.
Thus, it is difficult to confidently use existing scaling relations when trying
to identify galaxies that might harbor the elusive class of intermediate-mass
black holes (IMBHs). Therefore, we offer our novel relationship as an ideal
predictor to search for IMBHs and probe the low-mass end of the black hole mass
function by utilizing spiral galaxies. Already with rotational velocities
widely available for a large population of galaxies and pitch angles readily
measurable from uncalibrated images, we expect that the
$M_\mathrm{BH}$-$\phi$-$v_\mathrm{max}$ fundamental plane will be a useful tool
for estimating black hole masses, even at high redshifts. | Benjamin L. Davis, Zehao Jin | 2023-09-16T13:01:54Z | http://arxiv.org/abs/2309.08986v1 | # Discovery of a Planar Black Hole Mass Scaling Relation for Spiral Galaxies
###### Abstract
Supermassive black holes (SMBHs) are tiny in comparison to the galaxies they inhabit, yet they manage to influence and coevolve along with their hosts. Evidence of this mutual development is observed in the structure and dynamics of galaxies and their correlations with black hole mass (\(M_{\bullet}\)). For our study, we focus on relative parameters that are unique to only disk galaxies. As such, we quantify the structure of spiral galaxies via their logarithmic spiral-arm pitch angles (\(\phi\)) and their dynamics through the maximum rotational velocities of their galactic disks (\(v_{\rm max}\)). In the past, we have studied black hole mass scaling relations between \(M_{\bullet}\) and \(\phi\) or \(v_{\rm max}\), separately. Now, we combine the three parameters into a trivariate \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relationship that yields best-in-class accuracy in prediction of black hole masses in spiral galaxies. Because most black hole mass scaling relations have been created from samples of the largest SMBHs within the most massive galaxies, they lack certainty when extrapolated to low-mass spiral galaxies. Thus, it is difficult to confidently use existing scaling relations when trying to identify galaxies that might harbor the elusive class of intermediate-mass black holes (IMBHs). Therefore, we offer our novel relationship as an ideal predictor to search for IMBHs and probe the low-mass end of the black hole mass function by utilizing spiral galaxies. Already with rotational velocities widely available for a large population of galaxies and pitch angles readily measurable from uncalibrated images, we expect that the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) fundamental plane will be a useful tool for estimating black hole masses, even at high redshifts.
Astrostatistics (1882) -- Galaxy evolution (594) -- Hubble classification scheme (757) -- Intermediate-mass black holes (816) -- Late-type galaxies (907) -- Regression (1914) -- Scaling relations (2031) -- Spiral galaxies (1560) -- Spiral pitch angle (1561) +
Footnote †: journal: _The Astrophysical Journal Letters_
Benjamin L. Davis (0000-0002-4880-237X)
Zehao Jin (0000-0002-2886-0307)
## 1 Introduction
Black hole mass scaling relations (_i.e._, relations with central black hole mass as the dependent variable and some physical property of its host galaxy as the independent variable) have evolved and proliferated over the past quarter-century, beginning with the identification of a correlation between black hole mass (\(M_{\bullet}\)) and the stellar mass of its host galaxy's bulge (Magorrian et al., 1998). The most reliable of these scaling relations are those that are built and calibrated upon samples of galaxies with dynamically-measured black hole masses (_e.g._, Graham & Scott, 2013; Graham 2023a,b; Graham & Sahu, 2023a,b; Savorgnan et al., 2013, 2016; Savorgnan, 2016a,b; Davis et al., 2017, 2018, 2019a,b,c, 2021; Sahu et al., 2019a,b, 2020, 2022a,b; Sahu, 2021, 2022; Jin & Davis, 2023).1 To-date, only about 150 such supermassive black holes (SMBHs) with \(M_{\bullet}\gtrsim 10^{6}\,{\rm M}_{\odot}\) have been directly measured in just the nearest and most massive galaxies.2 With SMBHs expected to reside in the hearts of most every massive galaxy (Rees, 1984),
one can take their pick of scaling relations (see Graham, 2016; D'Onofrio et al., 2021, for informative reviews) to perform black hole mass estimates for large numbers of galaxies in surveys to construct black hole mass functions (_e.g._, Graham et al., 2007; Davis et al., 2014; Mutlu-Pakdil et al., 2016).
The accuracy of scaling relations can vary significantly based on which independent variable is selected; some variables are less accurate or not applicable for certain galaxy morphologies (_e.g._, bulge relations are useless for bulgeless galaxies). Moreover, the prevalence of morphologically-dependent black hole mass scaling relations (_e.g._, Davis et al., 2018, 2019; Sahu et al., 2019) hints that particular independent variables alone might not be sufficient to cover different galaxy morphologies. Specifically, this is evident from the different coefficients required for the same variables when applied to separate morphologies in isolation. As such, it is problematic to be restricted to using only one predictor of black hole mass. Thus, we seek a methodology that incorporates multiple mass predictors in one relation.
Dynamical measurements of smaller black holes like intermediate-mass black holes (IMBHs) are much more difficult to obtain because the gravitational sphere of influence radius of a black hole is directly proportional to the mass of the black hole. As expected, an observational bias exists among the catalog of directly-measured black holes, _i.e._, only black holes that are sufficiently massive and/or nearby are measurable (Batcheldor, 2010). Our current sample of 145 dynamically-measured black holes spans \(4\times 10^{5}\,\mathrm{M}_{\odot}\lesssim M_{\bullet}\lesssim 2\times 10^{10}\,\mathrm{M}_{\odot}\) (Jin and Davis, 2023). Although this sample reaches down almost to the IMBH regime (\(10^{3}\leq M_{\bullet}<10^{5}\,\mathrm{M}_{\odot}\)), the sample of black holes is very top-heavy with a median mass of \(\approx\)\(10^{8}\,\mathrm{M}_{\odot}\). Therefore, interpolation of existing black hole mass scaling relations is incapable of predicting IMBHs, and extrapolation is heavily reliant on the largest SMBHs. As such, several studies have instead relied on meta-analyses to combine the predictions of multiple black hole mass scaling relations to more securely extrapolate down into the IMBH regime (Koliopanos et al., 2017; Graham and Soria, 2019; Graham et al., 2019; Davis and Graham, 2021; Davis et al., 2023).
In a larger study (Jin and Davis, 2023), we used modern machine learning methods to identify higher-dimensional (_n_-D) black hole mass scaling relations that have lower intrinsic scatters than existing two-dimensional (2-D) black hole mass scaling relations. With 145 galaxies and as much as a hundred different measured quantities for every galaxy, the task of checking the vast number of permutations of possible _n_-D black hole mass scaling relations is an immense undertaking. Therefore, we applied modern machine learning methods to find the best scaling relations, which ideally are an optimized combination of accuracy and simplicity. For this task, we ran symbolic regression software PySR(Cranmer, 2023) to find the best combination of variables and mathematical operations to describe our dataset of directly-measured SMBH masses and their host galaxy parameters.
In this letter, we describe in detail one such solution found in our study: a trivariate relationship between \(M_{\bullet}\), logarithmic spiral-arm pitch angle (\(\phi\)), and maximum rotational velocity (\(v_{\mathrm{max}}\)). We will present our sample, fit, and analysis of the planar black hole mass scaling relation in §2. In §3, we will discuss benefits, reasons, comparisons, implications, and utility of our \(M_{\bullet}\)-\(\phi\)-\(v_{\mathrm{max}}\) fundamental plane. Finally, we provide a summary of our findings and remark on future work (in §4). We represent black hole masses (\(M_{\bullet}\)) throughout this work as logarithmic (solar) masses (\(\mathcal{M}_{\bullet}\)), such that \(\mathcal{M}_{\bullet}\equiv\log(M_{\bullet}/\mathrm{M}_{\odot})\). All uncertainties are quoted at \(1\,\sigma\equiv 68.3\%\) confidence intervals; median absolute deviations are given as uncertainties associated with medians.
## 2 Data and Analysis
### Sample
Our sample consists of all spiral galaxies with \(M_{\bullet}\), \(\phi\), and \(v_{\mathrm{max}}\) measurements from Davis et al. (2019). This yields a set of 41 galaxies (not including the Milky Way), all with dynamically-measured black hole masses (see references compiled by Davis et al., 2017, 2019). Pitch angles were consistently measured by Davis et al. (2017, 2019)3 and rotational velocities were compiled by Davis et al. (2019, see references therein). The sample that we use to construct the planar relation is listed in Table 1.
Footnote 3: For details regarding the measurement of galactic logarithmic spiral-arm pitch angles, see additional reading (Davis et al., 2012; Davis and Hayes, 2014; Davis, 2015; Shields et al., 2022).
\begin{table}
\begin{tabular}{c c c c} \hline \hline Galaxy & \(|\phi|\) & \(v_{\mathrm{max}}\) & \(\mathcal{M}_{\bullet}\) \\ & [\({}^{\circ}\)] & [\(\frac{\mathrm{km}}{\mathrm{s}}\)] & [dex] \\ (1) & (2) & (3) & (4) \\ \hline Circinus & \(17\fdg 0\pm 3\fdg 9\) & \(153\pm 7\) & \(6.25\pm 0.11\) \\ IC 2560 & \(22\fdg 4\pm 1\fdg 7\) & \(196\pm 3\) & \(6.52\pm 0.11\) \\ \hline \end{tabular}
\end{table}
Table 1: Sample of Spiral Galaxies
The sample of SMBH host galaxies exhibits a broad range in each of the three variables. To illustrate this, we have plotted probability density functions (PDFs) of each parameter in Figure 1. From these distributions, we normalize the \(\phi\) and \(v_{\rm max}\) values about their respective medians to minimize the covariance between the estimated coefficients during regression analysis. Following an initial symbolic regression, we then use the outputs from PySR(Cranmer, 2023) as our input initial guesses for Hyper-Fit(Robotham and Obreschkow, 2015, 2016).4
Footnote 4: The combination of PySR and Hyper-Fit is necessary to produce a relation that takes into account the errors on the individual measurements. PySR only considers the uncertainties on the dependent variable (_i.e._, \(M_{\bullet}\)), without accounting for the uncertainties on the independent variables (_i.e._, \(\phi\) and \(v_{\rm max}\)), and produces a best-fit relation without uncertainties on the derived coefficients and without computing the intrinsic scatter of the relation. In contrast, Hyper-Fit is able to refine the fit found by PySR while taking into account errors on every measurement, producing uncertainties on each derived coefficient, and determining the intrinsic scatter of the plane.
### Finding the Plane via Machine Learning
Symbolic Regression is a sub-field of machine learning that aims to find mathematical expressions that best fit a given set of data. Symbolic Regression searches over equations made of possible selections and combinations of variables, operators, and constants, and judges these equations with a score defined by both accuracy and simplicity.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Galaxy & \(|\phi|\) & \(v_{\rm max}\) & \(\mathcal{M}_{\bullet}\) \\ & [\({}^{\circ}\)] & [\(\frac{\rm km}{\rm s}\)] & [dex] \\ (1) & (2) & (3) & (4) \\ \hline Milky Way* & \(13\fdg 1\pm 0\fdg 6\) & \(198\pm 6\) & \(6.60\pm 0.02\) \\ NGC 224 & \(8\fdg 5\pm 1\fdg 3\) & \(257\pm 6\) & \(8.15\pm 0.16\) \\ NGC 253 & \(13\fdg 8\pm 2\fdg 3\) & \(196\pm 3\) & \(7.00\pm 0.30\) \\ NGC 613 & \(15\fdg 8\pm 4\fdg 3\) & \(289\pm 5\) & \(7.57\pm 0.15\) \\ NGC 1068 & \(17\fdg 3\pm 1\fdg 9\) & \(192\pm 12\) & \(6.75\pm 0.08\) \\ NGC 1097 & \(9\fdg 5\pm 1\fdg 3\) & \(241\pm 34\) & \(8.38\pm 0.04\) \\ NGC 1300 & \(12\fdg 7\pm 2\fdg 0\) & \(189\pm 28\) & \(7.86\pm 0.14\) \\ NGC 1320 & \(19\fdg 3\pm 2\fdg 0\) & \(183\pm 13\) & \(6.77\pm 0.22\) \\ NGC 1365 & \(11\fdg 4\pm 0\fdg 1\) & \(198\pm 3\) & \(6.60\pm 0.30\) \\ NGC 1398 & \(9\fdg 7\pm 0\fdg 7\) & \(289\pm 7\) & \(8.03\pm 0.11\) \\ NGC 1566 & \(17\fdg 8\pm 3\fdg 7\) & \(154\pm 14\) & \(6.83\pm 0.30\) \\ NGC 1672 & \(15\fdg 4\pm 3\fdg 6\) & \(213\pm 8\) & \(7.70\pm 0.10\) \\ NGC 2273 & \(15\fdg 2\pm 3\fdg 9\) & \(211\pm 16\) & \(6.95\pm 0.06\) \\ NGC 2748 & \(6\fdg 8\pm 2\fdg 2\) & \(188\pm 27\) & \(7.54\pm 0.21\) \\ NGC 2960 & \(14\fdg 9\pm 1\fdg 9\) & \(257\pm 34\) & \(7.07\pm 0.05\) \\ NGC 2974 & \(10\fdg 5\pm 2\fdg 9\) & \(284\pm 26\) & \(8.23\pm 0.07\) \\ NGC 3031 & \(13\fdg 4\pm 2\fdg 3\) & \(237\pm 10\) & \(7.83\pm 0.09\) \\ NGC 3079 & \(20\fdg 6\pm 3\fdg 8\) & \(216\pm 6\) & \(6.38\pm 0.12\) \\ NGC 3227 & \(7\fdg 7\pm 1\fdg 4\) & \(240\pm 10\) & \(7.97\pm 0.14\) \\ NGC 3368 & \(14\fdg 0\pm 1\fdg 4\) & \(218\pm 15\) & \(6.89\pm 0.11\) \\ NGC 3393 & \(13\fdg 1\pm 2\fdg 5\) & \(193\pm 48\) & \(7.49\pm 0.05\) \\ NGC 3627 & \(18\fdg 6\pm 2\fdg 9\) & \(188\pm 7\) & \(6.94\pm 0.09\) \\ NGC 4151 & \(11\fdg 8\pm 1\fdg 8\) & \(272\pm 16\) & \(7.69\pm 0.37\) \\ NGC 4258 & \(13\fdg 2\pm 2\fdg 5\) & \(222\pm 8\) & \(7.60\pm 0.01\) \\ NGC 4303 & \(14\fdg 7\pm 0\fdg 9\) & \(214\pm 7\) & \(6.78\pm 0.17\) \\ NGC 4388 & \(18\fdg 6\pm 2\fdg 6\) & \(180\pm 5\) & \(6.90\pm 0.10\) \\ NGC 4395 & \(22\fdg 7\pm 3\fdg 6\) & \(145\pm 11\) & \(5.62\pm 0.17\) \\ NGC 4501 & \(12\fdg 2\pm 3\fdg 4\) & \(272\pm 4\) & \(7.31\pm 0.08\) \\ NGC 4594 & \(5\fdg 2\pm 0\fdg 4\) & \(277\pm 22\) & \(8.81\pm 0.03\) \\ NGC 4699 & \(5\fdg 1\pm 0\fdg 4\) & \(258\pm 7\) & \(8.27\pm 0.09\) \\ NGC 4736 & \(15\fdg 0\pm 2\fdg 3\) & \(182\pm 5\) & \(6.83\pm 0.11\) \\ NGC 4826 & \(24\fdg 3\pm 15\fdg 5\) & \(167\pm 9\) & \(6.18\pm 0.12\) \\ NGC 4945 & \(22\fdg 2\pm 3\fdg 0\) & \(171\pm 2\) & \(6.13\pm 0.30\) \\ NGC 5055 & \(4\fdg 1\pm 0\fdg 4\) & \(270\pm 14\) & \(8.94\pm 0.10\) \\ NGC 5495 & \(13\fdg 3\pm 15\fdg 4\) & \(202\pm 43\) & \(7.04\pm 0.08\) \\ NGC 5765b & \(13\fdg 5\pm 3\fdg 9\) & \(238\pm 15\) & \(7.72\pm 0.05\) \\ NGC 6926 & \(9\fdg 1\pm 0\fdg 7\) & \(246\pm 10\) & \(7.68\pm 0.50\) \\ NGC 7582 & \(10\fdg 9\pm 1\fdg 6\) & \(200\pm 9\) & \(7.72\pm 0.12\) \\ UGC 3789 & \(10\fdg 4\pm 1\fdg 9\) & \(210\pm 14\) & \(7.07\pm 0.05\) \\ \hline \end{tabular} Note. – This sample of 41 spiral galaxies (not including the Milky Way) consists of all spiral galaxies with \(\phi\), \(v_{\rm max}\), and \(M_{\bullet}\) measurements from our larger sample of all galaxy types. The full parent dataset of 145 galaxies is available online via Jin and Davis(2023). **Column (1):** galaxy name. **Column (2):** absolute value of the _face-on_ (_i.e._, de-projected from the plane of the sky) spiral-arm pitch angle (in degrees), from Davis et al.(2017, 2019). 
**Column (3):** physical maximum velocity rotation (in km s\({}^{-1}\)) corrected for inclination and compiled by Davis et al.(2019) from references therein. **Column (4):** dynamical black hole mass (in dex, solar masses) measurement compiled by Davis et al.(2017, 2019) from references therein.
* We do not include the Milky Way in our preferred determination of the fundamental plane (see Appendix A for further details).
\end{table}
Table 1: _continued_
In this work, we adopt the symbolic regression package PySR (Cranmer, 2023), which conducts the equation search through a multi-population evolutionary algorithm. The accuracy is defined by the mean squared error loss, and the simplicity is characterized by a complexity score, where each use of variables, operators, and constants adds some pre-defined complexity. The final score of an equation aims to maximize the accuracy and penalize the complexity with a parsimony constant.
The variable pool that we input to PySR includes all of the data from Davis et al. (2017, 2019c), parameters modeled by the bulge/disk decompositions of Davis et al. (2019b), and their derived spheroid stellar density properties (Sahu et al., 2022b). These variables include, but are not limited to: pitch angle, central stellar velocity dispersion, maximum rotational velocity, galaxy stellar mass, and several properties of the spheroid, including Sersic index, half-light radius, stellar mass, and densities (apparent, projected, and de-projected). We also included all available measurements from Hyper-Leda (Makarov et al., 2014), _e.g._, colors, diameters, _etc._ The variable pool also included multiple copies of each variable in different forms of natural numbers, their logarithms, and trigonometric functions (_e.g._, \(|\phi|\) or \(\tan|\phi|\) and \(v_{\rm max}\) or \(\log v_{\rm max}\)). The arithmetic operator pool is simply \(+\), \(-\), \(\times\), and \(\div\), with additional \(\log_{10}\), power, and exponentiation in rare cases. Based upon a search with these criteria, PySR found an optimal correlation between \(\mathcal{M}_{\bullet}\), \(\tan|\phi|\), and \(\log(v_{\rm max}/{\rm km\,s^{-1}})\). A presentation and discussion of other interesting scaling relations found by PySR are beyond the intended scope of this letter; they will be addressed in a more comprehensive work (Jin and Davis, 2023).
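As a rough illustration of this setup, the sketch below configures a PySR search over pre-transformed features restricted to the four arithmetic operators; the file name, column names, and all settings shown are assumptions for illustration rather than the exact configuration used in this work.

```python
# Minimal sketch of a PySR search over pre-transformed host-galaxy features.
import numpy as np
import pandas as pd
from pysr import PySRRegressor

# hypothetical table with one row per galaxy; file and column names are placeholders
tab = pd.read_csv("spiral_sample.csv")
X = pd.DataFrame({
    "tan_phi": np.tan(np.radians(tab["pitch_deg"])),
    "log_vmax": np.log10(tab["vmax_kms"]),
    # ... further pre-transformed copies of other host-galaxy properties ...
})
y = tab["logMbh"]                                # dynamically measured log(M_BH/M_sun)

model = PySRRegressor(
    niterations=200,
    binary_operators=["+", "-", "*", "/"],       # the arithmetic operator pool
    maxsize=20,                                  # cap on equation complexity
    model_selection="best",                      # trade accuracy against complexity
)
model.fit(X, y, weights=1.0 / tab["logMbh_err"] ** 2)  # down-weight uncertain masses
print(model)                                     # inspect the recovered equations
```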
The functional form and initial parameters were identified by PySR and refined via Hyper-Fit. The final fitted equation for the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relationship is
\[\mathcal{M}_{\bullet}\sim\mathcal{N}\left[\mu=\alpha(\tan|\phi|-0.24)+\beta\log\left(\frac{v_{\rm max}}{211\,{\rm km\,s^{-1}}}\right)+\gamma,\ \sigma=0.22\pm 0.06\right], \tag{1}\]
with \(\alpha=-5.58\pm 0.06\), \(\beta=3.96\pm 0.06\), \(\gamma=7.33\pm 0.05\), and intrinsic scatter (\(\sigma\)) in the \(\mathcal{M}_{\bullet}\)-direction.5 We present a 3-D plot of the resulting plane in Figure 2. The orientation of the plane intuitively matches the expectation of the extreme cases:
Footnote 5: Hyper–Fit minimizes the intrinsic scatter orthogonal to the plane and then performs a transformation from normal to Cartesian coordinates and outputs the intrinsic scatter along the axis of the dependent variable.
* the **most** massive black holes reside in host galaxies with tightly wound spiral arms _and_ high rotational velocities,
* the **least** massive black holes are found in galaxies with loosely wound spiral arms _and_ low rotational velocities,
* **no** black holes are found in galaxies with tightly wound spiral arms _and_ low rotational velocities, and
* **no** black holes are found in galaxies with loosely wound spiral arms _and_ high rotational velocities.
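For convenience, a minimal numerical sketch of Equation 1 as a mass predictor is given below, using the best-fit coefficients quoted above; the two example inputs are hypothetical galaxies chosen to illustrate the extreme cases listed here.

```python
# Minimal sketch of Equation 1 (median mu only) as a black hole mass predictor.
import numpy as np

ALPHA, BETA, GAMMA = -5.58, 3.96, 7.33
SIGMA_INTRINSIC = 0.22  # dex, intrinsic scatter in the M_BH direction

def log_mbh(pitch_deg, vmax_kms):
    """log10(M_BH / M_sun) predicted by the fundamental plane for a spiral galaxy."""
    tan_phi = np.tan(np.radians(abs(pitch_deg)))
    return ALPHA * (tan_phi - 0.24) + BETA * np.log10(vmax_kms / 211.0) + GAMMA

# e.g. a tightly wound, fast-rotating disk versus an open-armed, slow rotator
print(log_mbh(7.0, 280.0))   # lands toward the high-mass (SMBH) end of the plane
print(log_mbh(25.0, 140.0))  # lands toward the low-mass (IMBH) end of the plane
```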
For additional analyses and discussions, see Jin and Davis (2023) for higher-dimensional relations featuring all galaxy types.6
Footnote 6: In the symbolic regression analysis of the spiral galaxies in our sample, we investigated and considered higher-dimensional versions of Equation 1 that incorporated additional quantities such as colors and bulge-to-total ratios. However, none of our higher-dimensional combinations improved upon the optimization of Equation 1, while all came with the added expense of increased complexity and error propagation. We find the planar relation is valid because it is built upon parameters (\(\phi\) and \(v_{\rm max}\)) that are unique to disk galaxies. In our forthcoming work (Jin and Davis, 2023), we will present higher-dimensional relations that we were able to find due to the larger combined sample of late-type and early-type galaxies, with their more varied ranges of colors, bulge-to-total ratios, etc., as compared to our sample of just spiral galaxies in the current work.
### Error Analysis
One sign of a robust multi-parameter relationship is when different variables contribute equitably. That is, one variable should not have an overly dominant influence on the relationship. Therefore, we need to check the relative change in \(M_{\bullet}\) when there is a proportional change in \(\phi\) or \(v_{\rm max}\). To check this, we test Equation 1 with equivalent 10% variations in the median values of \(\phi\) or \(v_{\rm max}\). Doing so, we find that a 10% change in \(\phi\) leads to a 37.39% change in \(M_{\bullet}\), while a 10% change in \(v_{\rm max}\) leads to a 48.59% change in \(M_{\bullet}\). Ergo, in terms of overall weight, \(v_{\rm max}\) accounts for a slight majority (56.51%) of the variation in \(M_{\bullet}\) as compared to a similarly-sized variation in \(\phi\). Thus, neither variable has an outsized influence on \(M_{\bullet}\).
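The equitability check described above can be reproduced schematically with the short sketch below; the exact percentages depend on the adopted medians and on the sign of the perturbation, so the printed values are only indicative.

```python
# Minimal sketch of the sensitivity check: perturb each input by +/-10% about the
# medians implied by Equation 1 and compare the fractional response of the (linear,
# not logarithmic) predicted black hole mass.
import numpy as np

ALPHA, BETA, GAMMA = -5.58, 3.96, 7.33
PHI_MED_DEG = np.degrees(np.arctan(0.24))   # median pitch angle implied by Eq. 1
VMAX_MED = 211.0                            # km/s, median rotational velocity

def mbh(pitch_deg, vmax_kms):
    tan_phi = np.tan(np.radians(pitch_deg))
    logm = ALPHA * (tan_phi - 0.24) + BETA * np.log10(vmax_kms / VMAX_MED) + GAMMA
    return 10.0 ** logm

m0 = mbh(PHI_MED_DEG, VMAX_MED)
for frac in (-0.10, +0.10):
    dm_phi = mbh(PHI_MED_DEG * (1 + frac), VMAX_MED) / m0 - 1
    dm_v = mbh(PHI_MED_DEG, VMAX_MED * (1 + frac)) / m0 - 1
    print(f"{frac:+.0%} in phi -> {dm_phi:+.1%} in M_BH ; "
          f"{frac:+.0%} in v_max -> {dm_v:+.1%} in M_BH")
```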
We used the Hyper-Fit routine to robustly fit the equation of the plane to the (\(\phi\), \(v_{\rm max}\), \(M_{\bullet}\)) variable set, with consideration of the individual uncertainties on all three parameters and accounting for intrinsic scatter in the relation. As suggested by its name, Hyper-Fit is uniquely designed to fit "linear models to multi-dimensional data with multi-variate Gaussian uncertainties." Additionally, Hyper-Fit calculates the intrinsic scatter of a scaling relation, which can be considered as the root-mean-square deviation in the observed data from the fitted function in the case of zero measurement error.7 Therefore, intrinsic scatter is the ideal parameter to judge and compare the accuracy of various scaling relations.
Footnote 7: For a more detailed description of intrinsic scatter and its determination in galaxy scaling relations, see Stone et al. (2021).
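Hyper-Fit itself is an R package; as a simplified illustration of the kind of fit it performs, the sketch below maximizes a Gaussian likelihood for the plane with a free intrinsic scatter, folding only the \(\mathcal{M}_{\bullet}\) measurement errors into the per-point variance (Hyper-Fit additionally propagates the uncertainties on \(\phi\) and \(v_{\rm max}\), which this sketch does not).

```python
# Simplified sketch of a plane fit with intrinsic scatter (not Hyper-Fit itself).
import numpy as np
from scipy.optimize import minimize

def fit_plane(tan_phi, log_vmax, log_mbh, log_mbh_err):
    """Return (alpha, beta, gamma, sigma_intrinsic) for the median-centred plane."""
    x = tan_phi - np.median(tan_phi)          # median-centred, as described in the text
    y = log_vmax - np.median(log_vmax)

    def neg_log_like(p):
        alpha, beta, gamma, log_sigma = p
        sigma_int = np.exp(log_sigma)
        mu = alpha * x + beta * y + gamma
        var = sigma_int**2 + log_mbh_err**2   # intrinsic scatter plus measurement error
        return 0.5 * np.sum((log_mbh - mu) ** 2 / var + np.log(2 * np.pi * var))

    res = minimize(neg_log_like, x0=[-5.0, 4.0, 7.0, np.log(0.3)], method="Nelder-Mead")
    alpha, beta, gamma, log_sigma = res.x
    return alpha, beta, gamma, np.exp(log_sigma)
```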
Our determination of a fundamental plane of black hole mass in spiral galaxies is ultimately advantageous because of its combination of the \(M_{\bullet}\)-\(\phi\) (\(\sigma=0.33\pm 0.08\) dex) and \(M_{\bullet}\)-\(v_{\rm max}\) (\(\sigma\sim 0.45\) dex) relations, reducing the intrinsic scatter down to \(\sigma=0.22\pm 0.06\) dex in the \(M_{\bullet}\)-direction. Previously, the \(M_{\bullet}\)-\(\phi\) relation had the lowest level of intrinsic scatter among black hole mass scaling relations for spiral galaxies. This reduction down to \(\sigma=0.22\pm 0.06\) dex (now below a factor of 2, _i.e._, \(\log 2\approx 0.3\) dex) with the planar relation significantly improves upon the previous accuracy of the \(M_{\bullet}\)-\(\phi\) relation, which was already well below the intrinsic scatters available for black hole mass scaling relations built from samples of late- and early-type galaxies. Such a low level of intrinsic scatter makes the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation the preeminent scaling relation for black hole mass in spiral galaxies.
## 3 Discussion
### The Benefit of Combining Two Relations
Because the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation is a combination of the \(M_{\bullet}\)-\(\phi\) and \(M_{\bullet}\)-\(v_{\rm max}\) relations (see the bottom panels of Figure 2 for projections of these relations), we begin by comparing our 3-D relation to each of the 2-D relations (see Figure 3). First (in the left column of plots in Figure 3), we compare the fundamental plane with the \(M_{\bullet}\)-\(\phi\) relation (Davis et al., 2017, equation 8). At a glance, we find that both relations display tight correlations without significant outliers. For a more complete comparison, we have included subplots that break down the performance of each relation versus the planar relation across eight bins. One subplot shows the root-mean-square error (\(\Delta_{\rm rms}\)) in each bin and the other subplot shows the mean absolute scatter (\(\bar{\Delta}\)) in each bin. As we can see, the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation generally equals or outperforms the \(M_{\bullet}\)-\(\phi\) relation in all except for the most massive bin, although the planar relation does tend to be biased towards slightly over-massive black holes in the middle bins, as compared to the \(M_{\bullet}\)-\(\phi\) relation.
Second (in the right column of plots in Figure 3), we compare the fundamental plane with the \(M_{\bullet}\)-\(v_{\rm max}\) relation (Davis et al., 2019, equation 10). We can see that the planar relation is significantly more accurate than the \(M_{\bullet}\)-\(v_{\rm max}\) relation in the four most massive bins. Here, the plane is only slightly lopsided towards over-massive black hole predictions in the central bins, relative to the \(M_{\bullet}\)-\(v_{\rm max}\) relation. Overall, these comparisons to each of the 2-D relations demonstrate that the fundamental plane performs well, particularly so at the low-mass end, which should make it advantageous for extrapolating toward lower-mass black holes, _i.e._, IMBHs.
### Explanation of the Fundamental Plane
The existence of a tight \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation built upon \(M_{\bullet}\)-\(\phi\) and \(M_{\bullet}\)-\(v_{\rm max}\) relations is not a revelation, but rather an expected consequence of numerous prior
studies. It first goes back to the so-called "Hubble" tuning fork diagram (Jeans, 1928; Hubble, 1936)8, which established a clear and understandable sequence that organized spiral galaxies into morphological classes based upon the prominence of their bulges and the winding geometry of their spiral arms. To simplify morphological trends, we can use the Hubble sequence morphological stage number, \(T\), where spiral galaxies are defined as \(T>0\) and higher numbers are considered to be "later" types. In this way, the Hubble sequence qualitatively establishes \(T\propto|\phi|\) and \(T\propto M_{\star,{\rm sph}}\), where \(M_{\star,{\rm sph}}\) is the stellar mass of the spheroid (bulge) component of a spiral galaxy.
Figure 2: The three-dimensional plot (viewed from four different vantage points) of the planar \(M_{\star}\)–\(\phi\)–\(v_{\rm max}\) relationship (Equation 1). Onto the surface (\(\,\Uparrow\)), we show the locations of the 41 spiral galaxies (\(\bullet\)) from Jin & Davis (2023) used to define the plane. The fainter gray planes above and below the darker gray middle plane depict the intrinsic scatter bounds (\(\pm 0.22\,\)dex in the \(M_{\star}\)-direction). This plot illustrates that our galaxies are dispersed over the area of the plane, demonstrating a lack of degeneracy between the parameters by the apparent embedding of the two-dimensional manifold (_i.e._, surface) in three-dimensional space. For an animation of this plot (showing also the intrinsic scatter bounds above and below the plane), see the following link, [http://surl.li/iggdg](http://surl.li/iggdg).
Decades after dissemination of the Hubble sequence, many works conducted quantitative studies showing that indeed \(|\phi|\propto T\)(Kennicutt, 1981; Seigar and James, 1998; Ma et al., 1999; Baillard et al., 2011; Yu et al., 2018; Diaz-Garcia et al., 2019; Yu and Ho, 2019, 2020) and Davis et al. (2019) showed that \(|\phi|\propto M_{\star,\rm sph}\).9 It follows from these correlations that there should be a correlation between \(\phi\) and \(M_{\bullet}\)(Seigar et al., 2008; Berrier et al., 2013; Davis et al., 2017) as both are strongly correlated to bulge mass.
Footnote 9: The \(\phi\)-\(M_{\star,\rm sph}\) relation is actually a projection of the fundamental plane of spiral structure in disk galaxies (Lin and Shu, 1966; Davis et al., 2015).
As for uncovering the \(M_{\bullet}\)-\(v_{\rm max}\) relation, we can look first at the correlation between \(v_{\rm max}\) and \(T\) identified by Roberts (1978). By substituting \(|\phi|\) as a proxy for \(T\), we then arrive at the \(\phi\)-\(v_{\rm max}\) relation (Kennicutt, 1981; Davis et al., 2019). Armed with the knowledge of both \(\phi\)-\(M_{\bullet}\) and \(\phi\)-\(v_{\rm max}\) relations, Davis et al. (2019) produced an \(M_{\bullet}\)-\(v_{\rm max}\) relation that is informed by, and consistent with, the Tully-Fisher relation (Tully and Fisher, 1977; Tiley et al., 2019). Thus, we now arrive at a unified \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation that is a manifestation of the gravitational potential well of a spiral galaxy. Ergo, in more massive galaxies with deeper potential wells, we find more massive black holes, more tightly-wound spiral patterns, and higher rotational velocities.
### Fundamental Planes with Black Hole Mass
There have been a couple of prior attempts at obtaining a fundamental plane scaling relation for SMBHs. The most relevant example is the trivariate relation between \(M_{\bullet}\)-\(\sigma_{e}\)-\(R_{e}\), where \(R_{e}\) is a galaxy's half-light radius and \(\sigma_{e}\) is the stellar velocity dispersion inside an aperture equal to \(R_{e}\) (Marconi and Hunt, 2003; van den Bosch, 2016). For the \(M_{\bullet}\)-\(\sigma_{e}\)-\(R_{e}\) relation, van den Bosch (2016) found an intrinsic scatter of \(\sigma=0.49\pm 0.03\,\rm dex\) in the \(\mathcal{M}_{\bullet}\)-direction. However, this is insignificant because van den Bosch (2016) also found an identical intrinsic scatter for the \(M_{\bullet}\)-\(\sigma_{e}\) relation, meaning that the addition of the third parameter, \(R_{e}\), serves no purpose, and in practice makes things worse because it introduces another variable that contributes to error propagation. Moreover, van den Bosch (2016) utilized a large sample of 230 black hole mass measurements that is "very heterogeneous" because the masses are derived from a variety of measurement methods, most of which are indirect and not from dynamical methods.

Figure 3: Plots between the dynamically-measured \(M_{\bullet}\) on the \(x\)-axis and the \(M_{\bullet}\) predicted by Equation 1 (\(\bullet\)) on the \(y\)-axis, compared with the \(M_{\bullet}\)–\(\phi\) relation from Davis et al. (2017, equation 8) in the _left column_ of plots and the \(M_{\bullet}\)–\(v_{\rm max}\) relation from Davis et al. (2019, equation 10) in the _right column_ of plots (both depicted with \(\blacksquare\)). The subplots show histograms (spread across eight bins, each 0.41 dex wide) for the \(\Delta_{\rm rms}\) scatter (top subplots) and \(\bar{\Delta}\) average residual (bottom subplots) about the 1:1 line for both the planar relation (\(\xrightarrow{}\)) and the \(M_{\bullet}\)–\(\phi\) (_left column_) or \(M_{\bullet}\)–\(v_{\rm max}\) (_right column_) relations (both depicted with \(\xrightarrow{}\)). The dashed line (\(\xrightarrow{}\)) depicts the 1:1 correlation between observed and predicted masses in the main plots and \(\bar{\Delta}=0.0\,\rm dex\) in the lower subplots. For clarity, predicted errors (along the \(y\)-axis) are not shown because they are directly proportional to (and always greater than) the errors along the \(x\)-axis.
The other notable example is the so-called fundamental plane of black hole activity (Merloni et al., 2003; Falcke et al., 2004). This plane is one between \(M_{\bullet}\)-\(L_{\rm R}\)-\(L_{\rm X}\), where \(L_{\rm R}\) and \(L_{\rm X}\) are radio and X-ray luminosity, respectively. The fundamental plane of black hole activity is based upon the interpretation of scale-invariant disk-jet coupling, manifesting as an empirical relation between the jet power probed in the radio and the mass accretion rate probed in X-rays. Perceiving the intrinsic disk-jet coupling mechanism requires simultaneous radio and X-ray observations to account for the duty cycle of active galactic nuclei. However, given that the processes that govern the \(M_{\bullet}\)-\(L_{\rm R}\)-\(L_{\rm X}\) relation are highly secular, this leads to an intrinsic scatter of \(\sigma=0.96\pm 0.13\) dex in the \(\mathcal{M}_{\bullet}\)-direction (Gultekin et al., 2019), "indicating a large amount of unexplained variance." Additionally, Gultekin et al. (2022) caution against using the fundamental plane of black hole activity without additional constraints beyond just straightforward X-ray and radio observations. Altogether, the noted problems with both of the aforementioned planar relations further cement the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation's superiority as a best-in-class black hole mass scaling relation for spiral galaxies.
### Implications
One advantageous application we envision for the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation is to use it to construct black hole mass functions (BHMFs) from surveys of spiral galaxies. Already, the \(M_{\bullet}\)-\(\phi\) relation has been utilized to model the local BHMF derived from spiral galaxies (Davis et al., 2014; Fusco et al., 2022). The simple addition of \(v_{\rm max}\), which is widely available for many spiral galaxies, could better aid in modeling the shape of the BHMF with lower scatter. This is particularly useful as the BHMF is well known at the high-mass end, but lacks clarity at the low-mass end, which is the purview of spiral galaxies.
The BHMF is virtually unknown at \(M_{\bullet}<10^{5}\) M\({}_{\odot}\) because of a dearth of observational evidence of IMBHs. Extrapolating the \(M_{\bullet}\)-\(\phi\) (Davis et al., 2017) and \(M_{\bullet}\)-\(v_{\rm max}\) (Davis et al., 2019) relations down to the low-mass end predicts \(M_{\bullet}<10^{5}\) M\({}_{\odot}\) IMBHs at \(|\phi|>26\fdg 8\pm 2\fdg 3\) and \(v_{\rm max}<130\pm 15\) km s\({}^{-1}\). Using these aforementioned values as inputs to Equation 1, we similarly find a line across the plane defining the upper limit at \(\mathcal{M}_{\bullet}<5.0\pm 0.4\) dex. Thus, these values of \(|\phi|\) and \(v_{\rm max}\) serve as a sort of midline path down the plane into the IMBH regime. However, because of the flexibility of the plane, these need not be hard and fast values for identifying potential IMBH-hosting galaxies. That is, a galaxy that misses one of these cuts, _e.g._, with a slightly smaller \(|\phi|\), could still lie below \(M_{\bullet}=10^{5}\) M\({}_{\odot}\) on the fundamental plane if its \(v_{\rm max}\) is correspondingly smaller, and _vice versa_. Particularly, in a forthcoming work (Davis et al., 2023), we use the fundamental plane to identify strong candidates for IMBH hosts among a sample of late-type spiral galaxies.
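To illustrate how the plane can be used to select IMBH candidates, the sketch below solves Equation 1 for the \(v_{\rm max}\) at which the predicted mass equals \(10^{5}\,{\rm M}_{\odot}\) for a given pitch angle; galaxies with more open arms and lower rotational velocities than this locus would be candidates under a straight extrapolation.

```python
# Minimal sketch: the IMBH selection boundary implied by Equation 1.
import numpy as np

ALPHA, BETA, GAMMA = -5.58, 3.96, 7.33

def vmax_at_logmbh(pitch_deg, target_logmbh=5.0):
    """v_max [km/s] on the plane for a given pitch angle and target log black hole mass."""
    tan_phi = np.tan(np.radians(abs(pitch_deg)))
    log_ratio = (target_logmbh - GAMMA - ALPHA * (tan_phi - 0.24)) / BETA
    return 211.0 * 10.0 ** log_ratio

for phi in (20.0, 25.0, 30.0):
    print(f"|phi| = {phi:4.1f} deg -> v_max(1e5 Msun) = {vmax_at_logmbh(phi):6.1f} km/s")
```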
## 4 Conclusions
Arguably, spiral galaxies are the most interesting galaxies. This is not just because of their intrinsic beauty, but also because they are galactic laboratories of ongoing star formation, growth, and evolution. Moreover, the most interesting discoveries await in the realm of low-mass spiral galaxies as potential hosts of elusive IMBHs. However, these interesting characteristics make them more difficult to analyze than their more massive, simpler, and older cousins, _i.e._, early-type galaxies. Indeed, commonly-used black hole mass scaling relations like the \(M_{\bullet}\)-\(\sigma_{0}\) (central/bulge stellar velocity dispersion) or \(M_{\bullet}\)-\(M_{\star,\rm sph}\) relations are far less accurate for late-type galaxies than early-type galaxies. Specifically, the \(M_{\bullet}\)-\(\sigma_{0}\) relation has an intrinsic scatter of \(\sigma=0.32\) dex for early-type galaxies (Sahu et al., 2019), _cf._\(\sigma=0.57\) dex for late-type galaxies (Davis et al., 2017; Sahu et al., 2019); the \(M_{\bullet}\)-\(M_{\star,\rm sph}\) relation has an intrinsic scatter of \(\sigma=0.41\) dex for early-type galaxies (Sahu et al., 2019), _cf._\(\sigma=0.48\) dex for late-type galaxies (Davis et al., 2019; Sahu et al., 2019). Judged upon these criteria, the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation (with \(\sigma=0.22\pm 0.06\) dex) is not only more accurate than other late-type black hole mass scaling relations, but also those for early-type galaxies.
Verily, one might expect that a tighter relationship would exist for single-component galaxies like elliptical galaxies, rather than for multi-component spiral galaxies. However, if we consider the evolution of these systems and the increase in entropy from spiral to elliptical galaxies, it becomes evident that this initial assumption may not hold. Although late-type galaxies may be _complex_, early-type galaxies are _complicated_; the distinction being that **complexity** implies _many understandable components_, while **complication** implies _fewer components, but more chaos and disorder_. This can be understood by tracking the impact of merger histories and the genesis of morphologically-dependent black hole mass scaling relations (Graham, 2023a,b; Graham & Sahu, 2023a,b). As such, the unique merger history of a galaxy can effectively muddle ordered, rotationally-supported disk galaxies by transforming them into dispersion-supported elliptical galaxies. Moreover, this
helps explain why we find that the strongest correlation with black hole mass is not via bulge properties, which are similar to the disordered spheroids of elliptical galaxies and may be the result of disk cloaking (Hon et al., 2022).
The fact that the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation correlates the black hole mass with global properties of its host galaxy's disk, rather than with bulge properties as in the \(M_{\bullet}\)-\(\sigma_{0}\) and \(M_{\bullet}\)-\(M_{\star,\rm sph}\) relations, shows that BH-galaxy coevolution is active over large scales. Indeed, recent work (Davies et al., 2019; Oppenheimer et al., 2020; Sanchez et al., 2023) has shown that \(M_{\bullet}\) is inversely correlated with the fraction of baryons in the circumgalactic medium of its host galaxy. This is thought to be because more massive black holes are more energetic, and thus transport more baryons beyond their host galaxies' virial radii, all while reducing gas accretion and star formation over time. This is clear evidence that the processes of a central SMBH are capable of effecting change on scales over eleven orders of magnitude larger than the extent of their event horizons!10
Footnote 10: The Milky Way’s Sgr A\({}^{*}\) has a shadow with a radius of \(0.21\pm 0.01\,\)AU (Event Horizon Telescope Collaboration et al., 2022) and the virial radius of the Galaxy is \(258\,\)kpc (Klypin et al., 2002), which is an astounding difference in scale of \((2.52\pm 0.12)\times 10^{11}\). For comparison, there is a similar difference in scale between the width of a human hair and the radius of Earth.
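As a quick consistency check of that ratio using rounded values (taking \(1\,\mathrm{pc}\approx 2.06\times 10^{5}\,\mathrm{AU}\), a conversion factor quoted here only for illustration):

\[\frac{R_{\rm vir}}{r_{\rm shadow}}\approx\frac{258\,\mathrm{kpc}}{0.21\,\mathrm{AU}}=\frac{2.58\times 10^{5}\,\mathrm{pc}\times 2.06\times 10^{5}\,\mathrm{AU\,pc^{-1}}}{0.21\,\mathrm{AU}}\approx 2.5\times 10^{11},\]

consistent with the quoted value.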
Extrapolation of black hole mass scaling relations down into the IMBH range is important for future studies, including the design and predictions for space-based gravitational-wave interferometers (Amaro-Seoane et al., 2023). Therefore, we anticipate that the fundamental plane will be advantageous for estimating the demographics of IMBHs hosted by spiral galaxies. With more than one parameter, there is redundancy built into the planar relationship that makes it more resilient to abnormalities in any single parameter. This adds a degree of confidence when using it to predict black hole masses below the limits of our sample (NGC 4395 with \(\mathcal{M}_{\bullet}=5.62\pm 0.17\,\)dex). Moreover, spiral-arm pitch angle is straightforward enough to measure that it could be accomplished, most basically, with just an uncalibrated image and a protractor. What is more, \(v_{\rm max}\) values are readily available from large 21-cm line width surveys and easily accessible in online archives. Therefore, we hope that the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) relation will facilitate new and impactful studies and influence further advancements in black hole mass scaling relations and galaxy evolution.
The authors are grateful for stimulating discussions with Andrea Maccio, Joseph Gelfand, Ingyin Zaw, and Ivan Katkov. This material is based upon work supported by Tamkeen under the NYU Abu Dhabi Research Institute grant CASS. This research has made use of NASA's Astrophysics Data System, and the NASA/IPAC Extragalactic Database (NED) and Infrared Science Archive (IRSA). We acknowledge the use of the HyperLeda database ([http://leda.univ-lyon1.fr](http://leda.univ-lyon1.fr)).
Astropy (Astropy Collaboration et al., 2013, 2018), Hyper-Fit (Robotham & Obreschkow, 2015, 2016), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), Pandas (McKinney, 2010), PySR (Cranmer, 2023), Python (Van Rossum & Drake, 2009), SciPy (Virtanen et al., 2020), uncertainties
ORCID IDS
Benjamin L. Davis
[https://orcid.org/0000-0002-4306-5950](https://orcid.org/0000-0002-4306-5950)
Zehao Jin
[https://orcid.org/0009-0000-2506-6645](https://orcid.org/0009-0000-2506-6645)
## Appendix A The Milky Way
Sgr A\({}^{*}\) in our Galaxy has been robustly studied by many independent methods, determining its black hole mass with incredible accuracy and precision. We adopt the mass determined by the multi-star orbit analysis of Boehle et al. (2016), but many other such studies present consistent masses, most notably the black hole mass determined by the size of its shadow (Event Horizon Telescope Collaboration et al., 2022). Therefore, we strongly intended to include the Milky Way in our sample, just as it was included in the determination of the \(M_{\bullet}\)-\(\phi\) and \(M_{\bullet}\)-\(v_{\rm max}\) relations. However, our home galaxy stands out as a significant outlier below the \(M_{\bullet}\)-\(\phi\)-\(v_{\rm max}\) plane, _i.e._, its dynamically-measured \(M_{\bullet}\) is under-massive with respect to that predicted by the plane. Specifically, the fundamental plane predicts \(\mathcal{M}_{\bullet}=7.26\pm 0.24\,\mathrm{dex}\), whereas Sgr A\({}^{*}\) is highly-constrained to \(\mathcal{M}_{\bullet}=6.60\pm 0.02\,\mathrm{dex}\). The Milky Way's outlying position is apparent in Figure 4.
In absolute terms, this does not necessarily make the Milky Way the most extreme outlier in the sample, but the accuracy of its dynamically-measured \(M_{\bullet}\) means that it is weighted heavily in the regression of the fundamental plane. The coefficients for Equation 1, with the Milky Way included, change to \(\alpha=-5.57\pm 0.06\), \(\beta=3.95\pm 0.06\), \(\gamma=7.31\pm 0.06\), and intrinsic scatter \(\sigma=0.28\pm 0.06\,\mathrm{dex}\) in the \(\mathcal{M}_{\bullet}\)-direction. This represents a small change in the predicted black hole mass; for a galaxy with the median \(\phi\) and \(v_{\mathrm{max}}\), the plane without the Milky Way yields \(M_{\bullet}=(2.16\pm 1.13)\times 10^{7}\,\mathrm{M}_{\odot}\) and including the Milky Way it becomes \(M_{\bullet}=(2.06\pm 1.36)\times 10^{7}\,\mathrm{M}_{\odot}\). Thanks to its position relatively near the balance point of the fundamental plane, its effect does not noticeably tug the plane off in any direction. However, it does diminish the accuracy of the fundamental plane, as evidenced by the \(0.06\,\mathrm{dex}\) increase in the intrinsic scatter due entirely to our Galaxy. Thus, one can choose to use the alternative coefficients for Equation 1 that include the Milky Way, which yield highly-consistent black hole mass predictions with only a small decrease in accuracy, and which are still more accurate than either the \(M_{\bullet}\)-\(\phi\) or \(M_{\bullet}\)-\(v_{\mathrm{max}}\) relation alone.
Since the mass of Sgr A\({}^{*}\) is practically unassailable, the fault in our Galaxy must lie in its \(\phi\) and/or \(v_{\mathrm{max}}\). Of course, both quantities are difficult to measure from inside the Galaxy; even with modern data, it is hard to truly represent the geometric shape that astronomers from the Andromeda Galaxy would see or what \(v_{\mathrm{max}}\) they would measure from long-slit spectroscopic observations of the Milky Way. As shown in Figure 4, the \(M_{\bullet}\)-\(\phi\) relation predicts a less accurate black hole mass, whereas the \(M_{\bullet}\)-\(v_{\mathrm{max}}\) relation actually predicts a more accurate black hole mass than the fundamental plane. Indeed, rearranging Equation 1 to solve for \(\phi\) yields a prediction of \(|\phi|=19\fdg 3\pm 2\fdg 1\) for the Galaxy, which is not far-fetched to envision. Our adopted value of \(13\fdg 1\pm 0\fdg 6\)(Vallee, 2015) is the median pitch angle derived from a meta-analysis of 50 studies with a range of \(3\arcdeg\leq|\phi|\leq 28\arcdeg\). Moreover, with our intimate vantage of the Milky Way, observations can be overwhelmed with small-scale structures, such as a high pitch angle structure in the Sagittarius Arm (Kuhn et al., 2021), that can complicate determinations of the global Galactic pitch angle.
For a final consideration, it could be that Sgr A\({}^{*}\) is simply under-massive. Indeed, Oppenheimer et al. (2020) point out that Sgr A\({}^{*}\) is under-massive with respect to other SMBHs in galaxies with halos of similar size to that of the Milky Way. In fact, it is thought that an under-massive Sgr A\({}^{*}\) could be conducive to supporting the genesis of life in the Milky Way (_e.g._, Lingam et al., 2019). Thus, the Milky Way, and its central black hole, must be at least consistent with the anthropic principle (Dicke, 1957).
Figure 4: This figure is identical to Figure 3, except it includes the Milky Way (marked with \(\star\)). For an animation of Figure 2 that also includes the Milky Way, see the following link, [http://surl.li/iggks](http://surl.li/iggks).
|
2302.00070 | Debiasing Vision-Language Models via Biased Prompts | Machine learning models have been shown to inherit biases from their training
datasets. This can be particularly problematic for vision-language foundation
models trained on uncurated datasets scraped from the internet. The biases can
be amplified and propagated to downstream applications like zero-shot
classifiers and text-to-image generative models. In this study, we propose a
general approach for debiasing vision-language foundation models by projecting
out biased directions in the text embedding. In particular, we show that
debiasing only the text embedding with a calibrated projection matrix suffices
to yield robust classifiers and fair generative models. The proposed
closed-form solution enables easy integration into large-scale pipelines, and
empirical results demonstrate that our approach effectively reduces social bias
and spurious correlation in both discriminative and generative vision-language
models without the need for additional data or training. | Ching-Yao Chuang, Varun Jampani, Yuanzhen Li, Antonio Torralba, Stefanie Jegelka | 2023-01-31T20:09:33Z | http://arxiv.org/abs/2302.00070v2 | # Debiasing Vision-Language Models via Biased Prompts
###### Abstract
Machine learning models have been shown to inherit biases from their training datasets, which can be particularly problematic for vision-language foundation models trained on uncurated datasets scraped from the internet. The biases can be amplified and propagated to downstream applications like zero-shot classifiers and text-to-image generative models. In this study, we propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding. In particular, we show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models. The closed-form solution enables easy integration into large-scale pipelines, and empirical results demonstrate that our approach effectively reduces social bias and spurious correlation in both discriminative and generative vision-language models without the need for additional data or training. The code is available at [https://github.com/chingyaoc/debias_vl](https://github.com/chingyaoc/debias_vl).
## 1 Introduction
Foundation vision-language models, such as CLIP (Radford et al., 2021), DALLE-2 (Ramesh et al., 2022), Imagen (Saharia et al., 2022), and Stable Diffusion (Rombach et al., 2022), which are trained on extensive multimodal data at a massive scale, have led to a significant shift in the landscape of machine learning systems. Specifically, contrastive vision-language encoders like CLIP have the ability to perform zero-shot inferences without fine-tuning, and language embeddings can be used to train high-quality text-to-image models (Rombach et al., 2022).
While vision-language models demonstrate impressive capabilities, it is important to recognize that they may also exacerbate biases (Mehrabi et al., 2021; Agarwal et al., 2021; Wang et al., 2021). Recent studies (Birhane et al., 2021) have shown that the datasets these models are trained on can contain inappropriate image-text pairs with stereotypes, racist content, and ethnic slurs. The biases are then propagated to downstream applications (Agarwal et al., 2021; Wang et al., 2021), resulting in biased predictions. In addition to social biases, zero-shot models derived from vision-language models can also suffer from more general forms of spurious correlation, such as image background, leading to poor group robustness (Zhang and Re, 2022). Biases also exist in generative models, where generated images may exhibit bias towards certain genders and races (Cho et al., 2022; Mishkin et al., 2022). Substantial progress has been made recently toward mitigating biases in vision-language models (Parraga et al., 2022; Berg et al., 2022; Zhang and Re, 2022). However, many current approaches for addressing bias in models require training or fine-tuning the models using resampled datasets or modified objectives, which can be computationally intensive for foundation models.
In this work, we propose a general approach for self-debiasing foundation vision-language models by projecting out biased directions in the text embedding. Given a vision-language encoder such as CLIP, we define a set of biased directions in the embedding using prompts that describe the biases. For instance, prompts like "a photo of a male/female" define a biased subspace in the latent space. One approach to mitigating these biases is to construct a projection matrix, a linear transformation of the text embedding that projects out the biased directions (Bolukbasi et al., 2016). However, solely relying on prompts to define biased directions may be unstable and noisy (Gonen and Goldberg, 2019). To address this issue, we propose a calibration loss that minimizes the discrepancy of a pair of prompt embeddings. For example, given a projection matrix that removes gender information, the projected vectors of prompts "a photo of a male doctor" and "a photo of a female doctor" should be similar. Based on this principle, we design an objective to calibrate the projection matrix, which has an easily solvable closed-form solution. This makes the construction of the projection matrix _training-free, requiring no downstream dataset or labels_, and thus suitable for large-scale models. Empirically, we find that debiasing only the text embedding with a calibrated projection matrix suffices to improve the group robustness of zero-shot models on well-established benchmarks.
We then extend our approach to generative models such as Stable Diffusion (Rombach et al., 2022), a widely adopted text-to-image model conditioned on text embeddings from CLIP (Radford et al., 2021). Interestingly, we find that debiasing the text embedding does not fully mitigate the bias in the generative models. In particular, our empirical evidence suggests that generative models may further reinforce bias, as they are trained on datasets that differ from CLIP's training data but are also biased. To address this issue, we introduce a weighting term in the calibration loss to counteract the bias in generative models. The weight can be efficiently optimized via an iterative algorithm with feedback from generative models. Similar to debiasing zero-shot models, the projection matrix can be used to debias text-to-image models without altering the model parameters.
In short, this work makes the following contributions:
* We present a simple and general approach for debiasing vision-language models;
* The proposed approach does not require training, data, or labels, making it computationally efficient for use with foundation models;
* We evaluate our approach through experiments with both discriminative (zero-shot, text-image retrieval) and generative (text-to-image) vision-language models.
## 2 Related Works
Vision-Language models (Radford et al., 2021; Ramesh et al., 2022; Saharia et al., 2022; Rombach et al., 2022) have become increasingly widespread in recent years. However, these models are known to suffer from spurious correlations and can be biased toward certain races and genders. Birhane et al. (2021) study the datasets these models are trained on and show that their biases can be inherited by the models. Various methods have been proposed to address biases, but many of them only address single-modality models.
**Biases in Language Models.** Large-scale language models are shown to contain harmful or misrepresentative biases (Blodgett et al., 2020; Nadeem et al., 2020; Weidinger et al., 2021). Previous research has demonstrated the presence of gender bias in natural language processing systems (Bolukbasi et al., 2016; Zhao et al., 2019) as well as racial bias (Manzini et al., 2019; Garg et al., 2018). Bolukbasi et al. (2016) first proposed the use of orthogonal projection to remove gender biases in word embeddings. This approach was later extended to debiasing sentence embeddings (Liang et al., 2020). Alternative methods include regularizing the models with constraints on training data (Zhao et al., 2017; Huang et al., 2019) or directly modifying the dataset (Sun et al., 2019; Zhao et al., 2019). However, scaling these approaches to large foundation models can be challenging as they often require retraining the backbone encoders.
**Biases in Vision Models.** Gender and racial biases have also been widely explored in computer vision (Alvi et al., 2018; Wang and Deng, 2020), in terms of discriminative models (Wang et al., 2019) and generative models (Xu et al., 2018; Grover et al., 2019; Cho et al., 2022). Many debiasing approaches aim to learn good representations via adversarial training (Madras et al., 2018; Wang et al., 2020), or augmenting the biased dataset (Ramaswamy et al., 2021; Chuang and Mroueh, 2021). Beyond social bias, many works study spurious correlations, a more general form of bias that can include features such as image background or other non-target attributes that are correlated with labels. This problem of spurious correlations is often studied and tackled as a group robustness problem (Sagawa et al., 2019; Izmailov et al., 2022). Kirichenko et al. (2022) show that last layer re-training is sufficient for robustness to spurious correlations, which aligns with our finding that debiasing the zero-shot weights suffices to yield robust classifiers.
**Biases in Vision-Language, and Beyond.** Recently, biases in multimodal settings have gained significant attention (Agarwal et al., 2021). Wang et al. (2021) propose to remove dimensions in the CLIP embedding that are highly correlated with gender attributes. Berg et al. (2022) debias the CLIP models with prompt learning via an adversarial approach. Recently, Zhang and Re (2022) address the group robustness of vision-language models with contrastive learning. These previous works are data-oriented, where models are trained or finetuned on labeled datasets. In contrast, our approach is fully zero-shot, which does not require any downstream dataset and model training.
## 3 Biases and Spurious Correlations
We consider a dataset in which each input \(x\in\mathcal{X}\) is associated with multiple attributes, including the target class \(y\in\mathcal{Y}\) and a spurious attribute \(a\in\mathcal{A}\). We focus on the case where biases are present and the attribute \(a\) is spuriously correlated with the label \(y\). For instance, the class "doctor" could be correlated with the spurious attribute "gender" in the datasets foundation models are trained on (Birhane et al., 2021). Importantly, these biases can be transferred to downstream tasks, both discriminative and generative.
**Discriminative Models.** In this work, we examine the biases present in zero-shot classifiers obtained via a vision-language encoder such as CLIP. These classifiers are built by assigning each row of the linear classifier weight \(\beta\in\mathbb{R}^{K\times d}\) to be the embedding of a "class prompt", for example, "a photo of a [class name]" (Radford et al., 2021). Importantly, it does not require any data or training to construct these zero-shot classifiers. However, it is possible for these zero-shot classifiers to inherit biases from the dataset used to train the vision-language models. To study these biases, we utilize the group robustness framework proposed by Sagawa et al. (2019). In this setting, groups are defined by a combination of the labels and spurious attributes: \(\mathcal{G}\in\mathcal{Y}\times\mathcal{A}\).
Given a distribution \(P_{g}\) conditioned on \(g\in\mathcal{G}\) and a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}\), group robustness requires that the classifier \(f:\mathcal{X}\rightarrow\mathcal{Y}\) achieves a small gap between its worst-group error and average error:
\[\max_{g\in\mathcal{G}}\mathbb{E}_{x,y\sim P_{g}}\left[\ell(f(x),y)\right]- \mathbb{E}_{x,y\sim P}\left[\ell(f(x),y)\right]. \tag{1}\]
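As a minimal illustration (not taken from the original paper), the gap in Eq. (1) can be estimated from per-example losses and group labels, assuming both are already available as arrays:

```python
import numpy as np

def robustness_gap(losses, groups):
    """Worst-group loss minus average loss, as in Eq. (1).

    losses: 1-D array of per-example losses l(f(x), y).
    groups: 1-D array of group ids g in Y x A, aligned with `losses`.
    """
    worst_group = max(losses[groups == g].mean() for g in np.unique(groups))
    return worst_group - losses.mean()
```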
The definition of metrics for text-image retrieval, such as maximal skewness (Geyik et al., 2019), is deferred to the experiment section.
**Generative Models.** A text-to-image model learns a conditional distribution \(\hat{P}(X|Z=z)\), where \(z\) is the embedding of the prompt. However, the biased nature of the dataset used to train the generative model can also affect the distribution \(\hat{P}\). To measure the bias present in generative models, recent works (Choi et al., 2020; Teo and Cheung, 2021) propose using statistical parity. Specifically, given a classifier \(h:\mathcal{X}\rightarrow\mathcal{A}\) for the spurious attribute, the discrepancy of the generative distribution \(\hat{P}\) is defined as follows:
\[\max_{a\in\mathcal{A}}\mathbb{E}_{x\sim\hat{P}}\left[\mathbb{1}_{h(x)=a}\right] -\min_{a\in\mathcal{A}}\mathbb{E}_{x\sim\hat{P}}\left[\mathbb{1}_{h(x)=a}\right]. \tag{2}\]
A fair generative model minimizes the discrepancy by ensuring that each attribute \(a\in\mathcal{A}\) has an equal probability.
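A minimal sketch of this parity metric, assuming the attribute classifier \(h\) has already been applied to a batch of generated images:

```python
import numpy as np

def attribute_discrepancy(predicted_attrs, attribute_set):
    """Eq. (2): gap between the most and least frequent predicted attribute."""
    preds = np.asarray(predicted_attrs)
    rates = [np.mean(preds == a) for a in attribute_set]
    return max(rates) - min(rates)

# Example: 3 of 4 generated images classified as "male".
# attribute_discrepancy(["male", "male", "female", "male"], ["male", "female"]) -> 0.5
```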
## 4 Debiasing Discriminative Models
It is essential for a robust classifier to avoid depending on irrelevant features present in images. This requires the classifier to be invariant to image backgrounds and insensitive to attributes such as race or gender. Prior research has employed datasets with target labels and spurious attributes to quantify and eliminate biases (Sagawa et al., 2019; Zhang and Re, 2022). However, this approach is not feasible in a zero-shot setting, where no downstream data or training is available.
### Measuring Biases with Prompts
In contrast to previous approaches, our proposed method for measuring biases utilizes prompts, drawing inspiration from studies on debiasing word embeddings (Bolukbasi et al., 2016). The use of vision-language contrastive training allows for the description of irrelevant features through natural language. As such, embeddings of prompts such as "a photo of a [irrelevant attribute]" can capture these spurious features in the visual embedding. Consequently, the bias of a classifier can be quantified by computing the cosine similarity between its weights and the corresponding spurious feature. Table 1 illustrates the cosine similarity between the embeddings of prompts that describe the target classes and irrelevant attributes, using two popular group robustness benchmarks: Waterbird (Sagawa et al., 2019) and CelebA (Liu et al., 2015). The details of datasets and the specific prompts can be found in section 5 and appendix C.1. The results demonstrate that the classifier weights are inclined towards certain irrelevant attributes (gender or image background), implying that the classifiers are using these spurious directions to make predictions.
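The measurement itself only requires encoding the prompts and comparing normalized embeddings. Below is a minimal sketch using the OpenAI CLIP package; the prompt wording is illustrative rather than the exact prompts behind Table 1:

```python
import clip
import torch

model, _ = clip.load("ViT-B/32", device="cpu")

class_prompts = ["a photo of a landbird", "a photo of a waterbird"]
spurious_prompts = ["a photo of a land background", "a photo of a water background"]

with torch.no_grad():
    z_class = model.encode_text(clip.tokenize(class_prompts)).float()
    z_spur = model.encode_text(clip.tokenize(spurious_prompts)).float()

# Normalize so that dot products are cosine similarities.
z_class = z_class / z_class.norm(dim=-1, keepdim=True)
z_spur = z_spur / z_spur.norm(dim=-1, keepdim=True)

print(z_class @ z_spur.T)  # rows: class prompts, columns: spurious prompts
```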
### Debiasing via Orthogonal Projection
As the zero-shot weights can also be viewed as natural language embeddings, a straightforward approach is to follow the debiasing pipeline employed in word and sentence embeddings (Bolukbasi et al., 2016; Liang et al., 2020). In particular, to make the classifier invariant to these irrelevant features, we align the classifier weights with the orthogonal complement of these embeddings. Let \(A\in\mathbb{R}^{d\times m}\) be a matrix whose columns are the embeddings of spurious prompts. The orthogonal projection matrix is then:
\[P_{0}=I-A(A^{T}A)^{-1}A^{T}.\]
We can use the projection matrix to eliminate spurious directions in a text embedding \(z\) as \(P_{0}z\).
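In code, the projection is a few lines of linear algebra. A minimal NumPy sketch, assuming \(A\) stores one spurious-prompt embedding per column:

```python
import numpy as np

def orthogonal_projection(A):
    """P0 = I - A (A^T A)^{-1} A^T: projects onto the orthogonal complement of span(A)."""
    A = np.asarray(A, dtype=float)  # shape (d, m), one spurious direction per column
    return np.eye(A.shape[0]) - A @ np.linalg.inv(A.T @ A) @ A.T

# Usage: z_debiased = orthogonal_projection(A) @ z
```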
### Calibrating the Projection Matrix
It is essential to acknowledge that the estimation of the irrelevant feature directions may introduce an approximation error in the projection matrix (Gonen and Goldberg, 2019). Additionally, in certain scenarios, it may be challenging to thoroughly describe the irrelevant attribute using a limited number of prompts, resulting in increased uncertainty in the projection matrix estimation. This issue is also evident in our empirical results (Tables 2 and 4), where the use of orthogonal projection fails to enhance performance.
To improve the estimation of the projection matrix, we leverage _positive pairs_ of prompts that are expected to have the same semantic meaning after projection. In particular, the embedding of prompts such as "a photo of a [class name] with [spurious attribute]" should only contain information about "[class name]" after projecting out the spurious information, as Figure 1 illustrates. Motivated by this intuition, we propose to regularize the difference between the projected embeddings using a set of positive pairs \(S\):
\[\min_{P}\ \left\|P-P_{0}\right\|^{2}+\frac{\lambda}{|\mathcal{S}|}\sum_{(i,j) \in\mathcal{S}}\left\|Pz_{i}-Pz_{j}\right\|^{2}, \tag{3}\]
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline & \multicolumn{2}{c}{**CelebA**} & \multicolumn{2}{c}{**Waterbird**} \\ & male & female & land & water \\ \hline dark hair / landbird & 0.83 & 0.78 & 0.75 & 0.66 \\ blond hair / waterbird & 0.77 & 0.85 & 0.65 & 0.70 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Cosine similarity between classifier weights and spurious directions:** in both datasets, the classifier weights are biased toward certain spurious attributes. The labels and spurious attributes are binary variables in both datasets.
where \((z_{i},z_{j})\) are the embeddings of the pair \((i,j)\in\mathcal{S}\), and \(i\) and \(j\) are prompts that describe the same class but different spurious attributes. The loss encourages the linear projection \(P\) to be invariant to the difference between \(i\) and \(j\), i.e., the spurious attributes. The optimization problem has a convenient closed-form solution, as demonstrated in Lemma 4.1.
**Lemma 4.1**.: _The minimizer of the calibration loss is_
\[P^{*}=P_{0}\Big{(}I+\frac{\lambda}{|\mathcal{S}|}\sum_{(i,j)\in\mathcal{S}}(z _{i}-z_{j})(z_{i}-z_{j})^{T}\Big{)}^{-1}.\]
We can obtain an interpretation of the minimizer by relating it to singular value decomposition (SVD). Let \(Z_{\text{diff}}\in\mathbb{R}^{d\times|\mathcal{S}|}\), where the columns of \(Z_{\text{diff}}\) enumerate the pairwise differences \(z_{i}-z_{j}\) for all \((i,j)\in\mathcal{S}\). The matrix \(Z_{\text{diff}}\) defines a subspace that represents the variation in the embedding when the irrelevant feature is changed. Using \(Z_{\text{diff}}\), the minimizer can be written as \(P^{*}=P_{0}(I+\lambda^{\prime}Z_{\text{diff}}Z_{\text{diff}}^{T})^{-1}\) where we define \(\lambda^{\prime}=\lambda/|\mathcal{S}|\) to simplify the notation. Assume that the SVD of \(Z_{\text{diff}}\) is \(U\Sigma V^{T}\). Then we have \(Z_{\text{diff}}Z_{\text{diff}}^{T}=U\Sigma^{2}U^{T}\). The optimal solution \(P^{*}\) can then be rewritten as
\[P^{*}=P_{0}(U(I+\lambda^{\prime}\Sigma^{2})U^{T})^{-1}=P_{0}\underbrace{U(I+ \lambda^{\prime}\Sigma^{2})^{-1}U^{T}}_{\text{Calibration Matrix}}.\]
We can see that \(U(I+\lambda^{\prime}\Sigma^{2})^{-1}U^{T}\) acts as a calibration term. Before multiplying the text embedding with the projection matrix \(P_{0}\), variation due to the change of the spurious feature, namely, the eigenvectors with large squared singular value in \(Z_{\text{diff}}\) (spurious direction) will be down-weighted due to the inverse \((I+\lambda^{\prime}\Sigma^{2})^{-1}\). Therefore, varying the spurious attributes should result in similar embeddings after multiplying the calibration matrix.
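A minimal NumPy sketch of Lemma 4.1, assuming the positive pairs are supplied as two aligned arrays whose \(k\)-th rows form one pair \((z_{i},z_{j})\):

```python
import numpy as np

def calibrated_projection(P0, Z_i, Z_j, lam):
    """Lemma 4.1: P* = P0 (I + (lam / |S|) * sum_k (z_i - z_j)(z_i - z_j)^T)^{-1}."""
    diff = np.asarray(Z_i) - np.asarray(Z_j)  # (|S|, d): one difference per positive pair
    d = P0.shape[0]
    calibration = np.linalg.inv(np.eye(d) + (lam / len(diff)) * diff.T @ diff)
    return P0 @ calibration  # debias a prompt embedding z as P_star @ z
```

Since \(P^{*}\) does not depend on the class prompt, the matrix can be computed once and reused to debias every zero-shot classifier weight.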
### Relation to an Equalization Loss
Finally, we provide an equivalent form of the calibrated projection and relate it to an equalization loss. Ideally, we want each row of the classifier weight \(\beta\in\mathbb{R}^{K\times d}\) to have similar cosine similarity to pairs of embeddings in \(\mathcal{S}\). For instance, the embedding of "a photo of a doctor" should be equally similar to "a photo of a male doctor" and "a photo of a female doctor". In this section, we will show that the optimum of the calibration loss does satisfy this criterion.
We consider the following objective for obtaining a debiased text embedding \(z\in\mathbb{R}^{d}\) of a prompt given its initialization \(z_{0}\in\mathbb{R}^{d}\) from the text encoder:
\[\min_{z}\ \|z-z_{0}\|^{2}+\frac{\lambda}{|\mathcal{S}|}\sum_{(i,j)\in \mathcal{S}}(z^{T}z_{i}-z^{T}z_{j})^{2}. \tag{4}\]
The loss encourages the embedding \(z\) to have similar cosine similarity to embeddings in positive pairs while maintaining proximity to the initialization \(z_{0}\). Objective (4) has the same optimal solution as the calibration loss (3).
**Lemma 4.2**.: _The minimizer of objective (4) reads_
\[z^{*}=\underbrace{\Big{(}I+\frac{\lambda}{|\mathcal{S}|}\sum_{(i,j)\in \mathcal{S}}(z_{i}-z_{j})(z_{i}-z_{j})^{T}\Big{)}^{-1}}_{\text{Calibration Matrix}}z_{0}\]
_In particular, we have \(P_{0}z^{*}=P^{*}z_{0}\) where \(P^{*}\) is the minimizer of the calibration loss (3)._
Lemma 4.2 shows that the optimal solution of (4) is equivalent to multiplying the original embedding \(z\) with the calibration matrix defined before. Applying the projection \(P_{0}\) to \(z^{*}\) leads to the same weight in Lemma 4.1. This interpretation is particularly useful in cases where the ideal solution does not lie in the middle of \(z_{i}\) and \(z_{j}\), as will be shown in section 6 where we address biases in generative models.
The equalization objective has a similar motivation as the equalization step proposed by Bolukbasi et al. (2016) in their work on removing gender bias from word embeddings. Similar to the idea of positive pairs, given a set of word embeddings that has the same semantic meaning except for gender, their approach centers these embeddings by setting them to the average embedding of the set. After centering, any word in the dictionary will be equidistant to all words in the set. However, our approach differs in that we modify the embedding of the target prompt \(z\), rather than the embedding of positive pairs, making it more suitable for debiasing zero-shot classifiers as we are primarily concerned with the embedding of \(z\).
## 5 Experiments: Discriminative Models
We now evaluate our approach with experiments in discriminative (zero-shot classifier, text-image retrieval) and generative (text-to-image) models.
### Group Robustness against Spurious Correlations
We evaluate our approach on two popular spurious-correlation benchmarks, Waterbird (Sagawa et al., 2019) and CelebA (Liu et al., 2015), by following the setting of Zhang and Re (2022). On Waterbird, a water/land background is a confounding factor for the waterbirds/landbirds class, while on CelebA the binary gender is the spurious feature for blond/dark hair. Therefore, both datasets contain four groups defined by the labels and the spurious attributes.
Figure 1: **Calibration with Positive Pairs.** Upon projecting out irrelevant features (such as gender), the embeddings of group prompts should exhibit similarity and contain only information pertaining to the target class (e.g. doctor).
We evaluate our approach against several baselines, including zero-shot classification (Radford et al., 2021), empirical risk minimization (ERM) with linear probing (Kumar et al., 2022), and ERM with non-linear adapter (Gao et al., 2021). Additionally, we also consider three recent methods designed to improve the group robustness of vision-language foundation classifiers:
* Weight Space Ensembling (WiSE-FT) (Wortsman et al., 2022), which trains a linear classifier first using ERM and then combines the classifier outputs with the initial zero-shot predictions;
* Deep Feature Reweighting (DFR) (Kirichenko et al., 2022), which trains a linear probe on embeddings obtained from a pre-trained model using group-balanced data. Following Zhang and Re (2022), the group labels are replaced with zero-shot predictions;
* Contrastive Adapter (CA) (Zhang and Re, 2022), which trains adapters using contrastive learning to bring embeddings in the same class closer.
It is important to note that **all of the baselines** except the zero-shot classifier **require at least training data and class labels**, while our debiasing approach does not require access to any input data, labels, or group labels, which follows the principles of zero-shot learning.
We evaluate the performance of our proposed approach using two CLIP backbones: ResNet-50 (He et al., 2016) and ViT-L/14 (Dosovitskiy et al., 2020). The results are presented in Table 2. The results indicate that a simple application of the orthogonal projection (Orth-Proj) by itself only yields limited improvement of the worst group accuracy, whereas the calibration loss (Orth-Cali) significantly improves robustness across datasets and base models. The proposed Orth-Cali method achieves comparable or even smaller gaps between average and worst group accuracy compared to the state-of-the-art contrastive adapter (Zhang and Re, 2022), without the need for any data or labels. Note that the baselines generally achieve better average accuracy as they require fine-tuning on the target datasets.
Empirically, we found that gradually increasing the parameter \(\lambda\) improves the worst group accuracy and leads to a stable solution as shown in Table 3. Therefore, for all the experiments on discriminative models, we set \(\lambda\) to \(1000\) by default. To investigate the importance of orthogonal projection and calibration, we present an ablation study in Table 4. The results indicate that the calibration loss alone (\(P_{0}=I\)) performs well on the CelebA dataset, as the spurious feature (gender) is relatively easy to describe with prompts. However, performance drops on the Waterbird dataset without a good initialization from the orthogonal projection. More ablation studies can also be found in Appendix D, where we demonstrate the importance of class names in positive pairs.
### Text-Image Retrieval

Following prior work (Berg et al., 2022), we propose to utilize the MaxSkew metric, introduced by Geyik et al. (2019), to evaluate the level of fairness in the retrieval results. We conduct our analysis on the FairFace dataset (Karkainen and Joo, 2019), which is specifically designed to address issues of fairness in facial recognition systems. Given a ranked list of images in response to a text query, let \(r_{a,k}\) be the ratio of the top \(k\) images that are labeled with attribute \(a\). Then MaxSkew@k is defined as \(\max_{a\in\mathcal{A}}\log\frac{r_{a,k}}{1/|\mathcal{A}|}\), where \(\mathcal{A}\) represents the set of sensitive attributes: it quantifies the maximal discrepancy between the ratio \(r_{a,k}\) of top-\(k\) images labeled with a specific sensitive attribute and the uniform weight \(1/|\mathcal{A}|\). A small MaxSkew value therefore indicates that the retrieved images are distributed close to uniformly across the different sensitive attributes.
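A minimal sketch of MaxSkew@k, assuming the retrieved images are already ranked and their sensitive-attribute labels collected:

```python
import numpy as np

def max_skew_at_k(ranked_attrs, attribute_set, k):
    """MaxSkew@k = max_a log(r_{a,k} / (1 / |A|)) over sensitive attributes a."""
    top_k = np.asarray(ranked_attrs[:k])
    uniform = 1.0 / len(attribute_set)
    # Small floor avoids log(0) when an attribute is absent from the top k.
    ratios = [max(np.mean(top_k == a), 1e-12) for a in attribute_set]
    return max(np.log(r / uniform) for r in ratios)
```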
To measure the bias, we query the validation set of FairFace based on 10 prompts that are uncorrelated with facial expressions or sensitive attributes, e.g., "a photo of a [concept] person", where the [concept] is a neutral concept such as evil or smart. The detailed prompts are described in Appendix C. We measure the MaxSkew based on three labeled attributes of FairFace: gender, race, and age. Table 5 shows the average MaxSkew@1000 over concepts, demonstrating that our approach significantly reduces the MaxSkew across different attributes and backbones.
## 6 Debiasing Generative Models
In this section, we explore the possibility of extending the methodology we developed for discriminative models to generative models. Our primary focus is on addressing social group biases, specifically gender and race discrepancy, as measured by metric (2). Given a set of sensitive attributes, we aim to ensure an equal probability of representation for each attribute within the output distributions.
To achieve this, we optimize the equalization loss (4) without applying the initial orthogonal projection matrix \(P_{0}\). This is because our goal is to balance rather than completely eliminate biased information in the generated images. The positive pairs used for the generative models consist of an enumeration of "a photo of a [attribute]", where the attribute is a member of the set of genders or races. For instance, to mitigate gender bias, we adopt \(\mathcal{S}=\{\)("a photo of a male", "a photo of a female")\(\}\). This is distinct from the positive pairs used in Section 4.3, as we have removed the [class name] with the intention of generalizing the solution beyond the specific training class name. The motivations will be further discussed in the experimental sections.
### Generative Models Reinforce Biases
However, we found that the calibrated embeddings did not perform well in practice. This is due to the fact that we have overlooked the biases inherent in the generative model itself. The generative model and the prompt encoder are often trained on different datasets, which can result in the generated images inheriting biases from both datasets. For example, the dataset used to train the stable-diffusion model (LAION-2B(en) (Schuhmann et al., 2022)) is different from the one used to train CLIP, the prompt encoder. It may be necessary to carefully consider the biases present in both the generative model and the prompt encoder. Indeed, our empirical investigation corroborates this hypothesis. As an example, the "calibrated" prompt for "a photo of a doctor" still results in biases in the generated images. In contrast, an unbiased embedding of the prompt "a photo of a doctor" is slightly inclined towards the female embedding rather than being equidistant, as illustrated in Figure 2. Hence, tilting the text embedding has the potential to counteract the bias inherent in the generative model.
### Reweighted Equalization Loss
In order to address the biases inherent in both the generative model and the prompt encoder, we introduce a term \(w_{i}\) into the equalization loss function that adjusts the relative influence of the similarity to \(z_{i}\) and \(z_{j}\).
\[\min_{z}\ \|z-z_{0}\|^{2}+\frac{\lambda}{|\mathcal{S}|}\sum_{(i,j)\in \mathcal{S}}\left(w_{i}z^{T}z_{i}-w_{j}z^{T}z_{j}\right)^{2}.\]
At the optimum, the penalty drives \(w_{i}z^{T}z_{i}\approx w_{j}z^{T}z_{j}\), so increasing \(w_{i}\) relative to \(w_{j}\) reduces the similarity of \(z\) to \(z_{i}\), while decreasing \(w_{i}\) tilts \(z\) towards \(z_{i}\). This allows us to counteract the biases present in the generative model. Note that the weighted loss can be solved via Lemma 4.2 by replacing \(z_{i}\) with \(w_{i}z_{i}\).
Figure 2: **Bias of Generative Models.** The unbiased prompt embedding is slightly inclined towards the female embedding in order to counteract the bias present in generative models.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{**CLIP ViT-B/32**} & \multicolumn{3}{c}{**CLIP ViT-L/14**} \\ & Gen & Race & Age & Gen & Race & Age \\ \hline Zero-shot & 0.206 & 0.743 & 0.797 & 0.206 & 0.768 & 0.703 \\ Orth-Proj & 0.146 & 0.755 & **0.635** & 0.349 & 0.605 & 0.706 \\ Orth-Cali & **0.102** & **0.638** & 0.641 & **0.200** & **0.461** & **0.662** \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Measuring biases on FairFace.** We report MaxSkew@1000 (smaller is better) on the FairFace validation set.
**Iterative Optimization.** We present an iterative algorithm to optimize the weight via feedback from generative models. The goal is to minimize the discrepancy defined in equation (2). In each iteration, a set of images is generated conditioned on the optimal embedding given the current weight. Let \(r_{i}^{t}\) be the ratio of an attribute \(a_{i}\in\mathcal{A}\) in the output distribution at time \(t\), classified with zero-shot CLIP, where \(\mathcal{A}\) is the set of sensitive attributes. We adopt the following update rule:
\[w_{i}^{t+1}=\max\left(w_{i}^{t}+\eta\cdot\left(r_{i}^{t}-1/|\mathcal{A}| \right),\epsilon\right)\]
where \(\epsilon\) is a small positive value (such as 1e-5) to ensure the positivity of the weights and \(\eta\) is the learning rate. In each iteration, if the ratio \(r_{i}^{t}\) of an attribute \(a_{i}\) is smaller than the uniform weight \(1/|\mathcal{A}|\), the weight \(w_{i}^{t}\) will decrease by an amount based on the difference between the ratio and the uniform weight, which encourages the embedding to incline towards \(z_{i}\). In contrast, when the ratio \(r_{i}^{t}\) is greater than \(1/|\mathcal{A}|\), the weight will increase and thus reduce the impact of \(z_{i}\). We summarize the approach in Algorithm 1.
A key advantage of Algorithm 1 is that it does not require backpropagating through either CLIP or the generative model; the computational cost of backpropagating through the text-to-image model's diffusion process would be substantial. Empirically, the algorithm typically converges within 10 to 20 iterations.
```
Require: initial weight vector w^{t=0}, embeddings {z_i}_{i=1}^{|A|}, learning rate eta, batch size n, number of iterations T
for t = 1, ..., T do
    Solve z* given weight w^t
    Generate a set of images {x^k}_{k=1}^{n} conditioned on z*
    Estimate the ratios r_i^t with zero-shot CLIP
    Calculate the maximal gap Delta^t = max_i r_i^t - min_j r_j^t
    Update w_i^{t+1} = max(w_i^t + eta * (r_i^t - 1/|A|), epsilon)
end for
Set t* = argmin_t Delta^t and solve z* given weight w^{t*}
return z*
```
**Algorithm 1** Debiasing Generative Models
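A minimal Python sketch of Algorithm 1. Here `generate_images`, `classify_attribute`, and `solve_embedding` are hypothetical placeholders for, respectively, the text-to-image model, the zero-shot CLIP attribute classifier, and the closed-form solution of the re-weighted equalization loss; attributes are assumed to be integer-coded.

```python
import numpy as np

def debias_prompt_embedding(z0, z_attr, generate_images, classify_attribute,
                            solve_embedding, lr=0.1, n_images=32, n_iters=10, eps=1e-5):
    """Iteratively tune per-attribute weights so generated attributes become balanced."""
    n_attr = len(z_attr)
    w = np.ones(n_attr)
    best_gap, best_w = np.inf, w.copy()
    for _ in range(n_iters):
        z_star = solve_embedding(z0, z_attr, w)          # re-weighted Lemma 4.2
        images = generate_images(z_star, n_images)       # condition the generator on z_star
        attrs = np.asarray(classify_attribute(images))   # one integer attribute per image
        ratios = np.array([np.mean(attrs == a) for a in range(n_attr)])
        gap = ratios.max() - ratios.min()
        if gap < best_gap:
            best_gap, best_w = gap, w.copy()
        # Over-represented attributes get larger weights, tilting z* away from them.
        w = np.maximum(w + lr * (ratios - 1.0 / n_attr), eps)
    return solve_embedding(z0, z_attr, best_w)
```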
## 7 Experiments: Generative Models
In order to evaluate the effectiveness of our approach in the context of generative models, we conducted experiments utilizing the Stable Diffusion v1.4 framework (Rombach et al., 2022). The primary focus of the experiments is to query the generative model using profession-related prompts, specifically "a photo of a [profession]", where the professions we consider are listed in Table 6. Empirically, the generated images were found to exhibit a strong bias towards certain genders and races, and we will debias the generative models with the proposed equalization loss in this section.
**Auto-Evaluation.** Evaluating generative models can be challenging without the use of human labels. Inspired by Cho et al. (2022), we use sensitive attribute classifiers to predict the sensitive attributes of the generated images, and then calculate the discrepancy defined in equation (2). In particular, we leverage zero-shot CLIP models to predict the sensitive attributes.
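A minimal sketch of this auto-evaluation step with the OpenAI CLIP package; the attribute prompts are illustrative, and `pil_images` is assumed to be a list of generated PIL images:

```python
import clip
import torch

model, preprocess = clip.load("ViT-B/32", device="cpu")
attribute_prompts = ["a photo of a male person", "a photo of a female person"]

def predict_attributes(pil_images):
    """Zero-shot CLIP prediction of the sensitive attribute for each generated image."""
    with torch.no_grad():
        text = model.encode_text(clip.tokenize(attribute_prompts)).float()
        text = text / text.norm(dim=-1, keepdim=True)
        imgs = torch.stack([preprocess(im) for im in pil_images])
        image_feats = model.encode_image(imgs).float()
        image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    return (image_feats @ text.T).argmax(dim=-1).tolist()  # index into attribute_prompts
```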
**Human Evaluation.** Despite its scalability, the predictions from CLIP can be erroneous, especially for complicated tasks such as race classification. Therefore, we also evaluate our approach with human evaluation, where we invite in-house annotators of different genders and races to label the sensitive attributes of the generated images. More details and the interface are included in Appendix C.2.
For both settings, we generate 100 images for each profession for evaluation. The debiased and biased models share the same random seed for fair comparison.
### Debiasing Models with Known Classes
**Gender and Racial Biases.** In alignment with the framework proposed by Karkainen and Joo (2019), we consider the gender attributes of male and female, and racial attributes of White, Asian, Black, Indian, and Latino. We first evaluate the class-dependent setting, in which we independently debias each profession in the training set using Algorithm 1, by calculating a separate set of weights for each profession. The results, presented in the first part of Table 7, demonstrate a significant reduction in gender discrepancy after debiasing. The discrepancy between males and females drops roughly 50% and 60% in terms of human and automatic evaluation. To further illustrate the effectiveness of our approach, we present qualitative results for the prompt "a photo of a doctor" in Figure 3. By applying the calibration matrix to balance the male and female directions, the gender diversity of the generated images significantly improved. Additional examples can be found in Appendix D.
We found that addressing racial bias is a more challenging task than addressing gender bias. One source of complexity is the ambiguity of ethnicity, as individuals may identify with multiple races. This is reflected in the results of our experiments, where we observed disagreements between CLIP and human annotators, as well as among different human annotators. Even though our approach performs similarly to the baseline under human evaluation, it demonstrates non-trivial improvement under automatic evaluation. This suggests that our equalizing objective is effective in minimizing discrepancy, but further human feedback may be necessary while implementing Algorithm 1 in order to ensure optimal performance.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Train** & **Test** \\ \hline athlete,doctor,nurse & builder,dentist,farmer,lecturer,lawyer, \\ scientist, teacher & manager,musician,pilot,politician,singer \\ \hline \hline \end{tabular}
\end{table}
Table 6: **List of Professions.** We average the weights obtained from debiasing the training prompts and test them on the unseen professions.
**Beyond Social Biases.** Our proposed approach can also be applied to address biases in general spurious attributes beyond social biases. As an example, we draw inspiration from the WaterBird dataset (Sagawa et al., 2019) and debias the prompt "a photo of a land bird" by using "a photo of a land background" and "a photo of a water background" as positive pairs. As illustrated in Figure 4, the results show that our approach successfully generates images of land birds in both land and water backgrounds, whereas the original models only generated images with land backgrounds.
### Generalization to Unseen Classes
Solving the iterative optimization for each prompt can be computationally intensive. Ideally, we would like to obtain a debiasing matrix that works for every prompt, and one can treat it as a standard preprocessing step before inputting the embedding into the generator. Therefore, in this section, we investigate a much more challenging setting: the feasibility of obtaining a single calibration matrix that effectively mitigates biases for all prompts.
Interestingly, we found that the weights for each profession do not deviate too much, which implies the potential to use a single set of weights for debiasing all classes. In particular, we average the weights obtained for the professions in the training set and apply the resulting calibration matrix to the professions in the testing set. The results are presented in Table 8. As evident from the results, this task proves to be more challenging. However, our approach still achieves a significant improvement over the original model in mitigating gender bias, where the discrepancy drops roughly 20% under human evaluation. We also notice that some classes are harder to debias, e.g., farmer and builder, which lead to nearly maximal discrepancy, while the discrepancies of manager and lawyer are less than \(0.1\).
Nevertheless, we can see that it remains difficult to fully mitigate racial biases. Even though our approach outperforms the baseline in terms of automatic evaluation, the faulty supervision from CLIP during iterative optimization results in a greater discrepancy under human evaluation. This again suggests that removing racial biases is hard to achieve without proper feedback from humans. It is an interesting future direction to combine Algorithm 1 with human feedback, where similar methodologies have achieved great success in other tasks such as language modeling (Bakker et al., 2022).
## 8 Conclusion
In this work, we present a new approach to debiasing vision-language foundation models by utilizing prompts to measure and eliminate biases. The proposed calibrated projection effectively mitigates biases in both discriminative and generative vision-language models.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & & **Gender** & **Race** \\ \hline \multirow{2}{*}{Auto} & Stable-Diff & 0.688\(\pm\)0.298 & 0.466\(\pm\)0.111 \\ & Debiased (Ours) & **0.076\(\pm\)0.052** & **0.252\(\pm\)0.113** \\ \hline \multirow{2}{*}{Human} & Stable-Diff & 0.641\(\pm\)0.229 & 0.704\(\pm\)0.080 \\ & Debiased (Ours) & **0.166\(\pm\)0.049** & **0.664\(\pm\)0.134** \\ \hline \end{tabular}
\end{table}
Table 7: **Discrepancy between Groups.** The maximal discrepancy defined in (2) drops after debiasing.
Figure 4: **Generation against Non-social Biases.** The results demonstrate the ability of the proposed method to generate images of land birds in both land and water backgrounds.
\begin{table}
\begin{tabular}{c|c|c|c} \hline & & **Gender** & **Race** \\ \hline \multirow{2}{*}{Auto} & Stable-Diff & 0.609\(\pm\)0.256 & 0.641\(\pm\)0.192 \\ & Debiased (Ours) & **0.378\(\pm\)0.331** & **0.477\(\pm\)0.190** \\ \hline \multirow{2}{*}{Human} & Stable-Diff & 0.675\(\pm\)0.249 & **0.620\(\pm\)0.181** \\ & Debiased (Ours) & **0.452\(\pm\)0.359** & 0.738\(\pm\)0.206 \\ \hline \end{tabular}
\end{table}
Table 8: **Generalizability to Unseen Classes.** We use a single calibration matrix to minimize the discrepancy for unseen classes.
Figure 3: **Removing Gender Bias of Stable Diffusion.** We fix the random seed of initial latent noise of Stable Diffusion (Rombach et al., 2022) and generate the images with the prompt “a photo of a doctor”. The results demonstrate that applying the calibration matrix to the prompt embedding significantly improves the balance between male and female in the generated images.
**Acknowledgements.** Thanks to Arjun Akula, Susanna Rico, Joshua Robinson, Lucy Chai, Kabir Swain, Manel Baradad, Shobhita Sundaram, Pei-Ling Chiang, and Yi-Yi Chu for their helpful comments and suggestions. This work was in part supported by NSF BIGDATA IIS-1741341, NSF CAREER 1553284, and NSF AI Institute TILOS. CC is supported by an IBM PhD Fellowship.
|
2309.06983 | Creating Community in a Data Science Classroom | A community is a collection of people who know and care about each other. The
vast majority of college courses are not communities. This is especially true
of statistics and data science courses, both because our classes are larger and
because we are more likely to lecture. However, it is possible to create a
community in your classroom. This article offers an idiosyncratic set of
practices for creating community. I have used these techniques successfully in
first and second semester statistics courses with enrollments ranging from 40
to 120. The key steps are knowing names, cold calling, classroom seating, a
shallow learning curve, Study Halls, Recitations and rotating-one-on-one final
project presentations. | David Kane | 2023-09-13T14:18:22Z | http://arxiv.org/abs/2309.06983v1 | # Creating Community in a Data Science Classroom
###### Abstract
A community is a collection of people who know and care about each other. The vast majority of college courses are not communities. This is especially true of statistics and data science courses, both because our classes are larger and because we are more likely to lecture. However, it is possible to create a community in your classroom. This article offers an idiosyncratic set of practices for creating community. I have used these techniques successfully in first and second semester statistics courses with enrollments ranging from 40 to 120. The key steps are knowing names, cold calling, classroom seating, a shallow learning curve, Study Halls, Recitations and rotating-one-on-one final project presentations. **Keywords**: education, data science.
## Names
Community starts with names. If two people don't know each other's names, then it is hard to say that they really belong to the same "community." The more that students know each other's names, the tighter the classroom community will be.
Learn all your students' names. Is that easy? No! But all it takes is time and concentration. Few of us can become more charismatic. All of us can learn our students' names. Most schools have a system for sharing student photos with their instructors. Make use of it. Of course, students do change their appearance over time. Changes in hair style and color are often tricky. As an instructor, I have two advantages. Since students (see below) sit in the same area of the classroom each day, I can use location as a cue. I can also study students during class while they work on in-class assignments.
Use students' names. Greet them when they come into the lecture hall. Will that freak them out? You bet! But it will also impress them, and please them, although they will never admit that to you. All of us want to be seen. Teach students each other's names. The typical lecture classroom is a collection of strangers. Students come to class alone. They sit alone, often in the same seat or at least region of the classroom. If they have a friend in the class, they will often come into the class with that friend, sit with that friend, and then leave with that friend, never having interacted with another student in the class. If they have a couple of friends in the class -- a common scenario for members of a sports team -- they will travel and sit in a pack. They don't mean to be unfriendly but other students will often perceive them to be. Breaking down these barriers between students is the single most important trick to creating a classroom community.
## Seating
Here are quotes from my syllabus, along with commentary:
* Seating is organized, by campus geography, into several large "Groups" of 20 to 30 students: first years, Eliot House, Quadlings, et cetera. Details depend on enrollment.
Groups based on student housing are probably easiest, not least because you want groups in which students are more likely to run into each other outside of class. Groups based on class year are also sensible, especially keeping all the first years together. The best approach will depend on the details of your campus and student body. Each group will, naturally, be diverse on dimensions other than the grouping criteria.
* Students work in "Pairs" of two "Partners." Sometimes, this will be "side-by-side," each of you with a computer open, each writing code, but talking with each other throughout. Other times, we will "pair program," meaning just one computer open and both of you collaborating on a single project. You will work with a different partner every class.
Students forced to work together for an entire class will have no choice but to learn each other's names. Student culture changes over time, but one aspect has been constant for decades. Almost all students wish they knew more other students, and are happy to be introduced to them. At the same time, few students will sit down next to a stranger in a classroom and introduce themselves. Requiring students to work in pairs solves this collective action problem. They want to meet each other. I require students to work with different Partners each class because, without that requirement, they won't.
If you are the stronger student in a Pair, do not simply charge ahead. Instead, make sure that your Partner keeps up with you. Help each other! If you aren't talking with each other often, then you are doing it wrong. There is no better way to learn than to teach. The stronger student should type less and talk more.
There is nothing more exciting than a lecture hall with 50 conversations going on simultaneously.
Besides your Partner, the students sitting immediately beside, behind and in front of you are members of your Circle that day. Introduce yourself to them when you/they arrive.
Students won't want to do this. It will feel strange. Yet awkwardness in the pursuit of community is no vice.
Sadly, this will only happen if you enforce it. Fortunately, enforcement is easy. I begin each class with a random cold call and ask the lucky student to introduce me to both her Partner and to the students in her Circle. (If the first student fails at this, I will give the entire class 30 seconds to do some quick introductions around their Circle before I ask another student.) By the second week, students are doing this on their own, as soon as they enter the classroom. This creates a very different atmosphere. Recall the slogan for the TV show _Cheers_: "Where everybody knows your name." A well-functioning classroom community begins with everyone knowing your name, you knowing theirs, and them knowing each other's.
Record the name of your Partner in the Google sheet for the day and the names of your Circle in a different Google sheet. Each person does this, even though doing so leads to duplication. (Don't stress about spelling.)
Requiring that names are recorded makes things easier for shy students. They have no choice but to record the other students' names. It is a course requirement. Each Google sheet is pre-formatted with all the students in the class. Students will list Partner/Circle names next to their own. This makes it easy for me to confirm that students are all present, without wasting class time on calling attendance. If I notice a student is missing -- which is easy to do with a glance at the sheet -- I call on a member of their Group and inquire about the missing student's well-being. When that student professes ignorance, as they always will, I ask them to text/e-mail the missing student to "make sure she is OK." And I really do care about the health/safety of my students! But I am also well-aware that the causal effect of this practice is to maximize lecture attendance.
### Cold Calling
Nothing keeps students engaged more than cold calling. These cold calls are low pressure. Wrong answers do not matter. They are not counted in a student's grade. To illustrate this, I often ask questions -- like "What is my favorite soccer team?" -- which students can't possibly answer. Whenever a student is stumped, or even appears stumped, I quickly answer the question myself or move on to the next student. But, in general, questions are so simple, tied so directly to what the students have just been working on, that students can easily answer. Yet just the fact that they might be cold called ensures that they are all paying attention, all the time.
I use an interactive R function to randomly select the student to call on. Students can see my RStudio session, projected onto the big screen at the front of the room, and watch me run the function. They know that my cold calling is random, that there is no favoritism. (They are often surprised at how non-random a truly random algorithm will appear, with the same student being called on twice or even three times in a single class session, even in a large
class.) Cold calling becomes a bit of a game, one in which students are both observers and participants.
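My actual selection function is written in R, but a minimal Python sketch of the same idea -- drawing uniformly at random, with replacement across calls, from a made-up roster -- looks like this:

```python
import random

def cold_call(roster, rng=random):
    """Pick one student uniformly at random from the class roster.

    Each call is an independent draw (sampling with replacement across
    calls), which is why a truly random procedure can pick the same
    student two or three times in a single session.
    """
    return rng.choice(roster)

# Hypothetical names; in class the list holds every enrolled student.
roster = ["Alice", "Bob", "Carmen", "Deng", "Esi"]
for _ in range(3):  # three cold calls in one class session
    print(cold_call(roster))
```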
### Working classroom
You learn how to play soccer with the ball at your feet. You learn how to program with your hands on the keyboard.
The term "flipped classroom" (Lemov (2015)) has two implications: the first about what happens in the classroom and the second about what happens outside the classroom. I prefer the term "working classroom," because it references what goes in the classroom -- working not lecturing -- without making claims about what should occur outside the classroom.
In particular, lectures, whether given in class or required outside of class, are among the worst methods for transferring information. First, lectures (either in person or video) are too slow for almost 50% of the class. They have covered this topic in another class. They understood this concept from the reading. You are wasting their time by explaining X again. Second, lectures are too fast for almost 50% of the class, for the mirror-image reasons: they have not seen this topic elsewhere and did not absorb it from the reading. By definition, a lecture can only be correctly paced for, at most, a handful of students.
A "working classroom" is about what occurs in the class itself. For me, this work is either programming or talking/writing about statistics. Because students work in pairs, they are always working. They are always either typing or directing what their partner should type. They are always either talking, or listening to their partner. It is impossible to not be engaged in a working classroom.
A working classroom creates a pit of success. Students can't help but to learn something, even if it is only to practice a skill. (Soccer players practice passing every day. Data scientists
should practice using the computer to work with data every day. You can always get better, even at something you already "know" how to do.)
### No student left behind
The weakest students are most at risk for estrangement from the classroom community. In every class, there will be strong students and weak students. The vast majority of my focus as an instructor is devoted to the weakest 20% of the students, especially the bottom 5%. Those are the students who most need my help and are most likely to benefit from it. Those are the students I want the teaching staff to take care of. We should not ignore our stronger students, of course. But, at the end of the semester, I judge myself most on the causal effect I have had on the students who struggled most at the beginning.
First, I create the shallowest possible learning curve. Each step is as small as possible. Once I have taught you A, I want B to be easy. Once you understand B, C should be simple. And so on. All steps are baby steps. By the end of the semester, we will have covered as much material as a traditional class. Students learn as much, if not more. But, instead of just problem sets every few weeks and a high pressure exam or two, they have 30 or more assignments, each building on the previous ones, all required.
Second, work should be spread out as much as possible. Learning statistics is like learning a new language: you should practice every day. In a typical week, students have tutorials due on Monday, several hours of simple questions which are easy to do as long as you open the textbook. It is hard to make students do the reading. It is easy to ask them 100 questions which make it almost impossible for them not to do the reading. Class on Tuesday (and Thursday) features 75 minutes of intense collaborative data science work. Problem sets are due on Wednesday. Students are encouraged to work together, to ask and answer questions of each other and of the course staff. Final project milestones are due on Fridays. If you want students to work about 10 hours per week outside of class, then they will learn the most if they spend 1 or 2 hours per day. Spreading out their work like this is not their natural inclination. They need our help.
Third, students need to come to class. From my syllabus:
Missing Class: You expect me to be present for lecture. I expect the same of you. There is nothing more embarrassing, for both of us, than for me to call your name and have you not be there to answer. But, at the same time, conflicts arise. It is never a problem to miss class if, for example, you are out of town or have a health issue. Just email me and your assigned TF explaining the situation. Please do so on the day you will be missing class. We don't need advance warning.
Note how cold calling provides a justification for enforcing lecture attendance. Since the algorithm is random, nothing prevents it from producing the name of a student who is not present. Of course, this would not really be a problem in class, but it does provide an excuse for insisting that students attend all classes, or inform us ahead of time that they won't be present. The more often that students attend class, the more that they will learn, the more that they will feel a part of the classroom community.
### Teaching Staff
The larger the course, the more important the efforts of the teaching staff toward nurturing a community. Colleges vary dramatically in the types of teaching support they provide to data science courses. The raw number of teaching staff, while almost always a function of the number of students in the course, varies. Teaching staff can be anyone from junior undergraduates to senior graduate students, even post-docs. The titles and (permitted) duties of teaching staff
often depend on their undergraduate/graduate status. The number of hours is a function of both the policies of the institution and the availability of the teaching staff themselves. I will ignore that variation and address common issues, referring to all teaching staff as teaching fellows (TFs). Advice:
* Think in terms of hours rather than positions or roles. The total number of hours per week is the key resource, whether that is one TF who works 20 hours or four TFs who each work 5 hours. One key advantage of hours is that it is an institution-approved metric of workload. You may think that a specific TF is, for example, responsible for grading problem sets, but your institution does not use "grade the problem sets" as a metric. Another advantage of hours is that it helps to alleviate the principal-agent problem between you and your TFs. It is tough to ensure that TFs devote the correct amount of effort to their responsibilities. Specifying hours rather than tasks makes conflicts easier to manage.
* Minimize time spent on grading. You and your TFs should automate it as much as possible. Services like Gradescope (Singh et al. (2017)) and PrairieLearn (West et al. (2015)) are helpful. Don't bother providing much written feedback, both because doing so is time consuming and because students often ignore it. Have a TF or two who specialize in grading. In that way, almost every hour that other TFs are paid for will involve time spent with students.
* Make use of the beginning and the end of the semester. Many schools pay TFs for the entire semester, even for weeks before and after classes are actually meeting. Those are hours you can use even if other classes don't have student/TF meetings at those times.
* Maximize the amount of time which TFs spend with students, either in small groups or one-on-one. Instead of (often optional) sections in which TFs lecture to students,
arrange Recitations, small 30 or 60 minute meetings between TFs and 1 to 4 students. I call these "Recitations" to highlight that they are different from the "sections" which students are used to. Use other terminology if you prefer.
## Recitations
TFs should not attend your lectures. Although there are (maybe!) benefits to having them in lecture, the opportunity cost is huge. Instead of lurking in the back of the lecture hall, reading Twitter/X, for 2-3 hours per week, they could be meeting with small groups of students.
TFs should not have (traditional) office hours. Most office hours are unused by students. (Note that TFs have the incentive to schedule their office hours at times and locations that students are less likely to attend.)
Recitations are different from traditional sections for two reasons. First, they involve much smaller groups. Instead of a single 60 minute section with 20 students -- who might or might not attend, who might or might not participate -- listening to a TF lecture, that same TF would meet with students in groups of 4, for 60 minutes each. I am not recommending that your TFs work more hours than they do now. They/you are saving the 2-3 hours which they would have spent in lecture and the 1-2 hours they would have spent preparing their own lectures each week. They spend those 3-5 extra hours with students in Recitations.
A community consists of people who care about one another. We want our teaching staff to be invested in the success of their students. We want our students to care about the opinions of the teaching staff, beyond the brute cudgel provided by grading. The best way to create a meaningful relationship between TFs and students is via hours spent together, sitting around a table, talking about data science and, ideally, working toward a common goal.
From the point of view of building community, the topic of the Recitations is almost irrelevant. My recommendation is to focus those meetings on final projects. A good structure is to have milestones for your final projects due on Friday each week. The Recitation for that week will focus on the TF helping students to complete the milestone. There is nothing wrong with spending time answering questions or discussing topics from class, but the main focus is the final project. We want TFs and students to think of the projects as something they work on together. We want TFs to be proud of their students when they present their final projects. We want students to want to make their TFs proud. Recitations make them care more about each other than they otherwise would.
### Study Halls
The best replacement for office hours is Study Halls: 3-hour blocks of time, located in a large space like a dining hall, hosted by a single TF. From my instructions to students and teaching staff:
At every Study Hall, the TF will ensure that everyone knows everyone else's name. These classes are communities and community begins with names. The process starts with the first student arriving and sitting at the table. They and the TF chat. (It is always nice for the student to take the initiative and introduce themselves to the TF. Remembering all your names is hard!) A second person arrives and sits at the same table, followed by introductions. Persons 3 and 4 arrive. More introductions. Help your TF by introducing yourself, even if you are 90% sure they remember your name. Be friendly!
At this point, the table is filled. Another person arrives. Instead of that person starting a new table, the TF gives the new student their spot and moves their
belongings to a new table. No student ever sits alone. The TF hovers around the table until more students arrive and start filling out table #2. And so on. At each stage, students are responsible for, at a minimum, introducing themselves to the TF and, even better, to the other students. Best is when students who are already present shower newly arriving students with welcomes and introductions.
All students benefit from your efforts to create a community around your class. But the students who benefit the most are the ones least likely to have a community of their own. Popular, sociable students will always have someone to study with, someone to work on the problem sets with. Shy students, those with fewer friends and worse social skills, love Study Halls because the structure ensures that there will always be a place for them. They will be welcomed because we have created a community in which being welcoming is a requirement.
### Final projects
Research projects in statistics and data science classes often work well.1 Rotating one-on-one presentations (ROOOP) can work in any class in which students create a final project. The only necessary requirement is something to show, something around which to center the discussion.
Footnote 1: See Ledolter (1995), Wardrop (1999), and White (2018) for discussion about the use of projects.
The mechanics of the process are outlined in this example e-mail to students, interspersed with my comments.
Below are details on the process for Demo Day. But, really, don't sweat it. Everything just sort of works out. Just make sure you bring your (fully charged) laptop. Do not arrive late or points will be deducted.
My framing is intended to minimize student stress, to make the event fun. Calling it "Demo Day" highlights the connection to the non-academic world, a connection which my courses try to cultivate and which students appreciate. (Unlike us, almost all of them will leave academia.) The two most important logistical issues are student laptop readiness and an on-time start, so I mention both in the opening paragraph, the only part of the e-mail which I am confident most students will read.
Main purpose of Demo Day is to get feedback from your peers so that you can use the next 10 days to make your final submission even better.
Although Demo Day is graded, the final versions of student projects are not due for another week or so. Without "guidance," students will often not start to work on their projects till the last possible minute. By having Demo Day so far in advance of the final due date, we enable/force students to spread out the work on their projects.
Student presentations themselves are not graded. First, doing so is stressful to students. Second, it is hard for course staff to "see" every presentation, at least from start to finish. Third, because there are so many more students than staff, we inevitably see some students during their first presentation and other students for their 7th or 8th. The latter are much smoother and more comfortable than the former, unsurprisingly.
However, we do grade the quality of the code and the other materials associated with the presentation. This forces students to have completed their projects, even though they have another week or more before the final version is due.
Arrive a few minutes early. We start on time! If, for some reason, you need to present in the first slot, arrive 15 minutes early. Once you arrive, put your stuff (backpack, coat) off to the side of the room. Print your name clearly on the sign up sheet at the front of the room.
In a class with 20 students, mechanics are easy. With scores, even hundreds, of students, details matter. You need a mechanism for keeping track of which students actually showed up. You need to plan for movement around the room.
The bigger the room you can use, the better. The process does work, however, even if you are in a small room with students all presenting next to each other, sitting around a seminar table. Just have them keep their voices down.
Students are split into two groups: A and B. The A group starts as "presenters." Grab your computer and sit down one seat in from the edge of the aisle. Spread out around the room, not too near anyone else. Bring up your website. Load up your GitHub repo in a browser tab.
In my introductory data science class, all students complete individual projects using R and Quarto. The final product is a Quarto website featuring a few pages with graphics and analysis.
Members of group B will select a seat next to a presenter. We rotate. It doesn't matter where you start. Introduce yourself! Chat with your new friend.
If we have an even number, there should be one listener next to each presenter. If there is an odd number, we will have one extra presenter in group A. That person will just sit quietly during the first round.
A bell announces the start of the first round. The presenter starts with their four sentence elevator pitch about their project. Then the listener asks questions, about anything they want! Maybe they want to look at the code on GitHub to see how an effect was created. Maybe they want to talk about the model. Maybe they
want a tour of the data cleaning code. Maybe they just want to poke around the data. Whatever they like!
I require students to write and then memorize a four sentence opening summary about their projects. The world is filled with busy people. If you want them to spend time with your work, you need to give them a smooth, coherent case for doing so.
A bell, rung after 4 minutes, announces the end of the round. The presenters stay where they are. The listeners get up, and move on to the next presenter. (Pay attention to the flow of people around the room so that you know where to go next.) The bell goes off and we start another round. This may all seem complex but it just naturally works.
In practice, we generally don't end up using a bell. Instead, I, standing at the front of the room and raising my voice, yell "Time for all listeners to stand up and move on to the next presentation!" Even with that drama, students often need to be shimmied along. On the one hand, this is nice to see. They are so engaged with the current presentation that they don't want to leave it. But, like a game of musical chairs (without the missing chair), student N needs student N+1 to move before she can sit down. Once the next student has arrived, the presenter begins.
Organizing the movement around the room is more difficult than you might expect because students don't always pay attention to where they are supposed to go next.
After 8 rounds, the groups switch. B will now present and A will listen. A puts away their computers and leaves the room to allow B time to set up. B students go to the front of the room and (anonymously) write down the names of one or two members of A who created impressive websites and/or gave nice presentations.
(These will not affect their grades, but we will let students know if their peers thought they did excellent work.) B students then get their computers and set up, just as A students did. Students from A then come back in the room and sit down next to a presenter. Bell goes off and the presentations start again. After another 8 rounds, we are done. B students leave the room. A students go to the front and write down their favorite presentations. B students come back, everyone gathers their stuff, and Demo Day is over. We finish 75 minutes after we start, just like a regular class session.
Sadly, we can't assume that students will read this e-mail closely, or at all.
Rotating one-on-one presentations work virtually, as we found out in 2020. The basic structure is the same on-line as it is in-person, and for all the same reasons. The more that students speak, the more that they get out of the session. When listening, the fewer other listeners, the more that they will pay attention.
One possible modification would be to have a single student present to a small group of other students. I have found this to be a bad idea for several reasons.
* Students pay close attention to their peers in a one-on-one presentation. They won't even try to look at their phones, except during the transitions between sessions. That is, sadly, not true in even small groups. Fewer listeners mean a more engaged, albeit smaller, audience.
* Group presentations allow for fewer presentations by each student. Consider a group of 16. With ROOOP, each student presents 8 times. If, instead of groups of 2, we used groups of 4, then each student would only present 4 times. The more times that a student is allowed to present her work, the better.
Another modification is a "poster style" presentation in which students set up to present, either a physical poster or just with their laptop, and other students wander around the room, listening to various presentations. This is better than nothing, but far worse in terms of creating a community because it does not maximize the number of students spending one-on-one time with each other. A community is built up from small group interactions, from a meeting of the minds. Although such meetings can occur during poster style presentations, they are much less likely.
## Conclusion
Good teaching begins with community. If your students feel that they are part of a community, they will work harder and learn more. There is no single trick which creates a community. Instead, there are one hundred or so tricks, each of which has a small effect on its own. The most important of these are knowing names, cold calling, classroom seating, a shallow learning curve, Recitations, Study Halls and rotating-one-on-one final project presentations. No charisma required.
|
2309.08776 | Projected Task-Specific Layers for Multi-Task Reinforcement Learning | Multi-task reinforcement learning could enable robots to scale across a wide
variety of manipulation tasks in homes and workplaces. However, generalizing
from one task to another and mitigating negative task interference still
remains a challenge. Addressing this challenge by successfully sharing
information across tasks will depend on how well the structure underlying the
tasks is captured. In this work, we introduce our new architecture, Projected
Task-Specific Layers (PTSL), that leverages a common policy with dense
task-specific corrections through task-specific layers to better express shared
and variable task information. We then show that our model outperforms the
state of the art on the MT10 and MT50 benchmarks of Meta-World consisting of 10
and 50 goal-conditioned tasks for a Sawyer arm. | Josselin Somerville Roberts, Julia Di | 2023-09-15T21:42:06Z | http://arxiv.org/abs/2309.08776v2 | # Projected Task-Specific Layers for Multi-Task Reinforcement Learning
###### Abstract
Multi-task reinforcement learning could enable robots to scale across a wide variety of manipulation tasks in homes and workplaces. However, generalizing from one task to another and mitigating negative task interference still remains a challenge. Addressing this challenge by successfully sharing information across tasks will depend on how well the structure underlying the tasks is captured. In this work, we introduce our new architecture, Projected Task-Specific Layers (PTSL), that leverages a common policy with dense task-specific corrections through task-specific layers to better express shared and variable task information. We then show that our model outperforms the state of the art on the MT10 and MT50 benchmarks of Meta-World consisting of 10 and 50 goal-conditioned tasks for a Sawyer arm.
## I Introduction
Complex manipulation is common in a number of desirable real-world robotic use cases--such as wiping various kitchen surfaces in a busy restaurant, routing cables in a datacenter, or screwing parts in a manufacturing assembly. While these individual tasks are different, they are often composed of similar manipulation primitives. Humans intuitively recognize and scaffold upon these primitives to implement different tasks, but it is challenging for robots, which are often only trained on individual, specific tasks, to do the same. Enabling robots to reason through multiple related tasks with multi-task reinforcement learning [1] can unlock more general-purpose robotics--allowing efficient learning across similar tasks and using their shared structure to learn a better performing policy [2, 3, 4].
Multi-task learning often, however, requires a careful choice of tasks and balanced sampling, and even then may not always improve learning. For complex manipulation, learning-based approaches may generalize over unseen tasks [5] but can still be difficult to scale successfully [6, 7]. Recent work has argued that learning methods should use shared task structures [8, 9] but most approaches still learn a single shared policy used across all tasks which may not adequately represent variations between tasks [6, 10].
Instead, we propose a new backbone architecture, the Projected Task-Specific Layers (PTSL), which combines a large, shared fully-connected policy with low-rank task-specific layers as shown in Fig. 1. After each layer, the hidden state from the shared policy and the low-rank task-specific policy are combined, making PTSL expressive of different tasks. We evaluate PTSL as a standalone backbone or on top of the Contextual Attention-based Representation (CARE) [4] encoder that leverages text descriptions of the task as metadata to project a mixture of encodings.
The main contributions of this work are:
* We propose the **PTSL** architecture for deep multi-task reinforcement learning.
* Our results show that PTSL outperforms CARE [4] on both the MT10 and MT50 Goal-conditioned benchmarks from Meta-World [11].
* Further, our results suggest that multi-task learning with a shared projection is more sample efficient and can improve learning on individual tasks.
* Finally, our results provide insights into the benefits of intermediate architectures sharing an embedding space between task-specific layers and a backbone.
Fig. 1: Simplified diagram of different architectures for multi-task reinforcement learning: Shared backbone for all tasks _(left)_, Individual backbone for each task _(center)_ and Projected Task-Specific Layers **(ours)**_(right)_
## II Related work
### _Reinforcement Learning for Robotics_
Reinforcement learning (RL) has recently shown success in various domains such as Atari [12], Go [13], and Starcraft [14]. In robotics, RL has been well studied in self-driving cars [15], manipulation [16, 17, 18], and locomotion [19]. Multi-task reinforcement learning is a subfield of RL for teaching a single agent to solve multiple tasks. Whereas single-task RL optimises for one reward function, multi-task learning optimises for multiple objectives [2].
One of the main challenges of multi-task learning is _task interference_ or _negative transfer_: learning a new task can deteriorate the performance of a previously learned task. This has been documented in previous literature [3, 20] and several approaches have been proposed for mitigation. One approach is to train single policies for each task and use knowledge transfer [21] or policy distillation [22], but this requires separate networks or a large number of network parameters. Other previous approaches [23, 24] have addressed this issue by modifying the training algorithm or having a model architecture that can support different sub-policies [4, 25], but these methods are often slow and scale poorly to the number of tasks. Thus, in this work we choose to address this issue by focusing not on the optimization method but the underlying architecture.
### _Soft Actor-Critic_
The Soft Actor-Critic (SAC) [26] algorithm optimizes the maximum-entropy RL objective using off-policy data to learn. SAC has been demonstrated to perform better [11] than other algorithms such as Proximal Policy Optimization (PPO) [27] or Trust Region Policy Optimization (TRPO) [28]. Accordingly, we choose to use the multi-task adaptation of SAC with disentangled alphas (which refers to having separate alpha coefficients for every task learned by the policy [29]) to focus on the architecture of the agent.
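For reference, SAC maximizes an entropy-regularized return; with disentangled alphas, the single temperature \(\alpha\) is replaced by one learnable temperature \(\alpha_{T_{i}}\) per task (the notation below is an informal reminder rather than a verbatim reproduction of [26, 29]):

\[J(\pi)=\sum_{t}\mathbb{E}_{(s_{t},a_{t})\sim\rho_{\pi}}\Big[r(s_{t},a_{t})+\alpha_{T_{i}}\,\mathcal{H}\big(\pi(\cdot\mid s_{t})\big)\Big],\]

so that each task can settle on its own exploration (entropy) level instead of sharing a single coefficient across all tasks.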
### _Multi-task Architectures_
Two common and opposing architectures for multi-task learning are the multi-headed actor and the shared actor architecture [11]. The multi-headed actor consists of a single network with one head per task, but does not scale well due to the sheer number of parameters. Meanwhile, the shared actor consists of having a single network for all tasks, which does not allow for task-specific corrections and often leads to poor performance as the number of tasks increases [30].
Within these paradigms, a number of approaches exist. Mixture of Experts (MOE) [31, 32] methods have independent experts with a learned gating network to output weights. Similar work has been done with Hard routing [33] which consists of having a routing network that selects the expert for each task. However, good expert assignment can be nontrivial. A third improvement on the multi-headed architecture is called Soft Modularization [25], where each "step" of the network is composed of multiple linear layers and a routing network decides how to route the linear layers of one step to the ones of the next step.
Another approach is to have a shared network with a preprocessor that produces a vector to represent the state, often denoted as an encoder. CARE (Contextual Attention-based Representation learning) proposes a mixture of encoder architecture [4]. The idea is to have several independent encoders that encode the task in a feature vector. Then, another component produces attention cores based only on the task to combine these embeddings. This embedding is then fed to a fully linear network. However, while a task-dependent encoder is useful, a shared policy like in CARE [4] may not be the best approach because some tasks may require small variations in the shared policy.
## III Method
In this section, we introduce our **Projected Task-Specific Layers (PTSL)** framework, which can be used alone or to replace the backbone of methods such as CARE [4]. PTSL reconciles encoding approaches and routing approaches for multi-tasking.
### _Projected Attention Layers_
Encoding the state in a task-specific way, like in CARE, may not be relevant for all tasks. For example, in manipulation, pushing and pulling may not be as relevant to the action of picking and placing objects. Yet it is still beneficial to have a shared policy, because these tasks are similar.
PTSL's architecture permits small task-specific variations in the policy. This was inspired by Projected Attention Layers (PAL) [34], a method that allows a general Transformer to have small task-specific variations in Natural Language Processing (NLP). For each task, the input goes through a series of transformer layers.
Each layer is made of two parts: one large shared vanilla attention layer and one small task-specific attention layer. The latter is computed by projecting down the input to a smaller dimension and then applying a vanilla attention layer that is trained specifically for the task before projecting it back up to the original dimension and adding it to the shared attention layer output.
This process is then repeated for each layer of the transformer. The idea is to have a general backbone that will contain most of the knowledge (akin to grammar rules for an NLP Transformer), but allow small task-specific variations to be added to the backbone as some tasks may require different treatment (e.g. classification versus summarization). PALs have been shown to be very efficient in NLP tasks and we believe that this approach can be applied to robotic manipulation as well.
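To make the mechanism concrete, here is a minimal PyTorch-style sketch of one PAL-style layer. It is illustrative only: it omits layer normalization, residual connections and the feed-forward sub-layer of a full Transformer block, and the dimensions and head counts are placeholders rather than the values used in [34].

```python
import torch
import torch.nn as nn

class PALLayer(nn.Module):
    """One PAL-style layer: a large shared attention block plus a low-rank,
    task-specific attention block whose output is added to the shared path."""

    def __init__(self, d_model=768, d_small=204, n_tasks=8, n_heads=12):
        super().__init__()
        self.shared_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Shared down/up projections into and out of the small task-specific space.
        self.down = nn.Linear(d_model, d_small, bias=False)
        self.up = nn.Linear(d_small, d_model, bias=False)
        # One small attention block per task.
        self.task_attn = nn.ModuleList(
            [nn.MultiheadAttention(d_small, 4, batch_first=True) for _ in range(n_tasks)]
        )

    def forward(self, h, task_id):
        shared, _ = self.shared_attn(h, h, h)       # large shared path
        z = self.down(h)                            # project down to the small dimension
        z, _ = self.task_attn[task_id](z, z, z)     # task-specific attention
        return shared + self.up(z)                  # project back up and combine
```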
### _Projected Task-Specific Layers_
We propose **Projected Task-Specific Layers (PTSL)**, which adapts the PAL architecture to a linear layer setup instead of a transformer, with a few modifications. This architecture can replace the backbone of other methods to obtain complex agents such as **CARE + PTSL**.
Similarly to PAL, PTSL is made of a shared backbone and low-rank task-specific layers as detailed in Fig. 2. The
backbone is a linear layer that is shared between all tasks. The task-specific layers are linear layers that are specific to each task. Where our approach differs from PAL is regarding the projections to the task-specific dimension.
### _Problem formulation and Preliminaries_
In a multi-task setting, we have a set of tasks \(\mathcal{T}=\{T_{1},T_{2},...,T_{n}\}\) and a set of environments \(\mathcal{E}=\{E_{1},E_{2},...,E_{n}\}\) where each task \(T_{i}\) is associated with an environment \(E_{i}\). Each environment \(E_{i}\) is a Markov Decision Process (MDP) defined by a tuple \((\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\) where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(\mathcal{P}\) is the transition probability function, \(\mathcal{R}\) is the reward function, and \(\gamma\) is the discount factor. The goal of the agent is to learn a policy \(\pi_{\theta}\) that maximizes the expected return \(J(\pi_{\theta})\) where \(\theta\) are the parameters of the policy.
In a multi-task setting, the agent has to learn a policy \(\pi_{\theta_{i}}\) for each task \(T_{i}\). The agent is evaluated on its ability to learn a policy for each task \(T_{i}\); we are therefore interested in the average return over all tasks. Since we evaluate our method on Meta-World, we will focus on the discrete signal that is the success rate of the agent on each task. Because we consider a multi-task setting with no unseen tasks during training, we do not consider the problem of generalization to unseen tasks in this work.
#### III-C1 Notation
We introduce the following notation:
* \(I\) the input dimension,
* \(O\) the output dimension,
* \(H\) the hidden dimension
* \(D\) the task-specific dimension,
* \(T\) the number of tasks,
* \(N\) the number of hidden layers (meaning that we have \(N+1\) layers in total).
In addition, we denote the overall input by \(x\) and the input of the \(i\)-th layer by \(x_{i}\) (so \(x=x_{0}\)). We also define:
* \(\text{SH}^{i}\) the shared \(i\)-th linear layer.
* \(\text{TS}^{i}_{j}\) the \(i\)-th task specific layer for the \(j\)-th task.
* \(P^{i}_{\text{down}}\) the \(i\)-th projection layer that projects from \(H\) to \(D\) (except for \(P^{0}_{\text{down}}\) that projects from \(I\) to \(D\)).
* \(P^{i}_{\text{up}}\) the \(i\)-th projection layer that projects from \(D\) to \(H\) (except for \(P^{N}_{\text{up}}\) that projects from \(D\) to \(O\)).
#### III-C2 Projection to the task-specific dimension
In a transformer setup, we often have \(I=H=O\), which allows PAL to have a single shared down projection \(P^{*}_{\text{down}}\) and a single shared up projection \(P^{*}_{\text{up}}\). Under a maximum number of parameters constraint, it is better to have a single shared projection than a projection for each layer [34].
This is less relevant in our case as \(I\) and \(O\) are not necessarily equal to \(H\), and the gain of sharing the projection is less important as transformers such as BERT have 12 layers while we only have 3 hidden layers (meaning 4 layers in total). Thus, we experiment with both options: a single shared projection and a projection for each layer. In the case of a single shared projection, we will denote the shared projection layers by \(P^{*}_{\text{down}}\) and \(P^{*}_{\text{up}}\). Note that we still have two independent projections \(P^{0}_{\text{down}}\) and \(P^{N}_{\text{up}}\) that are not shared due to the difference in input and output dimensions.
In our case, we investigate whether having these two specific projections is worthwhile. Indeed if \(I\) and \(O\) are small, it may be more beneficial to skip the projection so that \(\text{TS}^{0}_{j}\) goes from \(I\) to \(D\) and \(\text{TS}^{N}_{j}\) goes from \(D\) to \(O\). We can compute the number of parameters for each case:
* **First Task-specific layer \(\text{TS}^{0}_{j}\)**: with a projection the number of parameters is \(I\times D+N.(D\times D+D)\) while without a projection it is \(N.(I\times D+D)\).
* **Last Task-specific layer \(\text{TS}^{N}_{j}\)**: with a projection the number of parameters is \(D\times O+N.(D\times D+D)\) while without a projection it is \(N.(D\times O+O)\).
For Meta-World, it was more beneficial to have an individual down projection \(P^{0}_{\text{down}}\) but no up projection \(P^{N}_{\text{up}}\), as \(I=104\) (the output of the CARE encoder) and \(O=8\); the sketch below reflects this choice.

Fig. 2: PTSL Architecture **(ours)**; see Section III-B for details. The dotted red lines represent residual connections (not always present). See Section III-C3 for details. Projection modules that are reused are represented with the same color. See Section III-C2 for details.
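To make the resulting architecture concrete, below is a minimal PyTorch-style sketch of a PTSL backbone with shared projections, no residual connection across task-specific layers, and the Meta-World choices just described (individual \(P^{0}_{\text{down}}\), no \(P^{N}_{\text{up}}\)). It is a sketch rather than an exact reproduction of the released implementation: in particular, the shared and task-specific paths are combined with a simple sum before the activation, following the PAL recipe, and the activation function is an illustrative choice.

```python
import torch
import torch.nn as nn

class PTSL(nn.Module):
    """Sketch of a PTSL backbone: shared linear layers SH^i plus low-rank
    task-specific layers TS^i_j reached through down/up projections."""

    def __init__(self, n_tasks, in_dim=104, out_dim=8, hidden=326, small=50, n_hidden=3):
        super().__init__()
        dims = [in_dim] + [hidden] * n_hidden + [out_dim]          # N+1 layers in total
        self.shared = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        )
        self.p0_down = nn.Linear(in_dim, small, bias=False)        # individual P^0_down (I -> D)
        self.p_down = nn.Linear(hidden, small, bias=False)         # shared P*_down (H -> D)
        self.p_up = nn.Linear(small, hidden, bias=False)           # shared P*_up (D -> H)
        # Task-specific layers: D -> D everywhere except the last one,
        # which maps D -> O directly (no final up projection).
        self.task = nn.ModuleList(
            [
                nn.ModuleList(
                    [nn.Linear(small, small) for _ in range(n_hidden)]
                    + [nn.Linear(small, out_dim)]
                )
                for _ in range(n_tasks)
            ]
        )
        self.act = nn.ReLU()

    def forward(self, x, task_id):
        ts, n_layers = self.task[task_id], len(self.shared)
        for i in range(n_layers):
            z = self.p0_down(x) if i == 0 else self.p_down(x)      # project to the task-specific dim
            t = ts[i](z)                                           # low-rank task-specific correction
            if i < n_layers - 1:
                x = self.act(self.shared[i](x) + self.p_up(t))     # sum shared and task-specific paths
            else:
                x = self.shared[i](x) + t                          # last layer: no up projection
        return x
```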
#### III-C3 Residuals across task-specific layers
We also add a residual connection between the task-specific layers, as illustrated by the dotted red lines in Figure 2. Let us define the function \(g:(\mathbb{R}^{D},\mathbb{R}^{D})\rightarrow\mathbb{R}^{D}\) that combines the projection of our input, \(x^{\text{proj}}_{i}=P^{i}_{\text{down}}(x_{i})\), with the output of the previous task-specific layer, \(y^{\text{task}}_{i-1}=\text{TS}^{i-1}_{j}(\cdot)\), to obtain the input of the current task-specific layer \(\text{TS}^{i}_{j}\).
In this paper we will consider four different functions \(g\) (a short sketch of these variants is given after the list):
* **No residual**: \(g(x^{\text{proj}}_{i},y^{\text{task}}_{i-1})=x^{\text{proj}}_{i}\).
* **Addition**: \(g(x^{\text{proj}}_{i},y^{\text{task}}_{i-1})=x^{\text{proj}}_{i}+y^{\text{task}}_ {i-1}\).
* **Learnable sum**: \(g(x^{\text{proj}}_{i},y^{\text{task}}_{i-1})=\alpha x^{\text{proj}}_{i}+\beta y ^{\text{task}}_{i-1}\) with \(\alpha,\beta\in\mathbb{R}\) learnable parameters.
* **Learnable projection**: two vectors are concatenated to a vector of \(\mathbb{R}^{2D}\) that is projected back to \(\mathbb{R}^{D}\). \(g(x^{\text{proj}}_{i},y^{\text{task}}_{i-1})=P_{g}\times\text{Concat}\left(x ^{\text{proj}}_{i},y^{\text{task}}_{i-1}\right)\) with \(P_{g}\in\mathbb{R}^{D\times 2D}\), a learnable projection shared across tasks and layers.
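Below is a compact sketch of these four variants as they would plug into the backbone sketched above; the initialization of \(\alpha\), \(\beta\) and \(P_{g}\) is an illustrative choice and is not specified here.

```python
import torch
import torch.nn as nn

class Residual(nn.Module):
    """The four combination functions g(x_proj_i, y_task_{i-1}) defined above."""

    def __init__(self, mode="none", small=50):
        super().__init__()
        self.mode = mode
        if mode == "learnable_sum":
            self.alpha = nn.Parameter(torch.ones(1))   # initialization is an assumption
            self.beta = nn.Parameter(torch.ones(1))
        elif mode == "learnable_projection":
            self.proj = nn.Linear(2 * small, small, bias=False)  # shared across tasks and layers

    def forward(self, x_proj, y_prev):
        if self.mode == "none" or y_prev is None:      # no residual (or first layer)
            return x_proj
        if self.mode == "add":
            return x_proj + y_prev
        if self.mode == "learnable_sum":
            return self.alpha * x_proj + self.beta * y_prev
        return self.proj(torch.cat([x_proj, y_prev], dim=-1))   # learnable projection
```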
## IV Experiments
In this section, we evaluate PTSL in the Meta-World multi-task RL environment and compare against baselines. We also conduct ablation studies to verify the effectiveness of our method. To support the community, our code is made publicly available at [https://github.com/JosselinSomervilleRoberts/PTSL](https://github.com/JosselinSomervilleRoberts/PTSL) which is a fork of the CARE [4] repository that can be found at [https://github.com/facebookresearch/mtrl](https://github.com/facebookresearch/mtrl). In addition to the code, the repository contains all the commands to reproduce our experiments, our run results, and some implementation and training tips.
The first goal of our experimental evaluation is to assess whether the PTSL architecture improves the performance of a multi-task agent without increasing the number of parameters. This comparison is done in two different settings: a short horizon, to evaluate the sample efficiency of PTSL; and a long horizon, to make sure that our method converges to an efficient policy. We compare our method to current state-of-the-art architecture baselines: CARE [4], Soft Modularization [25], and MT-SAC. The choice of baselines is described in detail in Section IV-B.
We also assess which variant of PTSL yields the best results, and evaluate whether having a shared or independent projection is better and what residual function should be used. The residual functions considered are no residual, the addition, the learnable sum, and the learnable projection (See Section III-C3 for the definitions).
### _Benchmark_
We use Meta-World's MT10 and MT50 Goal-conditioned tasks as our benchmarks. Meta-World is a multi-task RL benchmark containing 50 robotic manipulation tasks performed by a simulated Sawyer robot. MT10 and MT50 are two evaluation protocols based on Meta-World, where MT10 contains 10 tasks (shown in Fig. 3), and MT50 contains all 50 tasks. The state space is 12-dimensional and consists of tuples of 3D Cartesian end-effector position, 3D Cartesian positions of one or two objects, and the goal position. All our tasks in MT10 and MT50 are goal-conditioned tasks.
### _Baselines_
We will compare our method to the following baselines that are implemented in the CARE and our repository:
**MT-SAC**: a SAC [26] baseline with disentangled alphas [29] and a simple shared fully-connected backbone. There are \(N=3\) hidden layers with a hidden dimension of \(H=400\), resulting in \(P=1,641,222\) parameters for both MT10 and MT50.
**Soft Modularization**: the Soft Modularization baseline, which learns different policies for each task using a routing network. It also has \(N=3\) hidden layers with a hidden dimension of \(H=400\), resulting in \(P=485,766\) parameters for MT10 and \(P=485,806\) for MT50.
**CARE**: the CARE [4] baseline1. It also has \(N=3\) hidden layers with a hidden dimension of \(H=400\). There are \(A=4\) experts in the Mixture of Experts with a hidden dimension of \(H_{A}=50\), resulting in \(P=1,871,534\) parameters for both MT10 and MT50.
Footnote 1: Although we used the implementation provided by CARE with the same parameters and the same version of Meta-World, we were unable to reproduce their results even after averaging over 10 or 20 seeds. This is a known issue (see https://github.com/facebookresearch/mtrl/issues). The results provided here may appear different from what was published in the original paper. In the rest of this work, for the purpose of comparison, all the reported CARE values will be the results that we were able to reproduce on the same horizon and not results published in the original paper.
**CARE + PTSL (_ours_).** The proposed method on top of CARE. In order to keep the comparison fair, we use nearly the same number of parameters and layers as CARE, with \(P=1,871,532\) parameters and \(N=3\) layers. In order to obtain these numbers, we had to reduce \(H\) to \(326\) and set \(D\) to \(50\) for MT10 (this is with shared projections) and we kept the same encoder parameters as CARE. For MT50, we reduced \(D\) to \(32\) and \(H\) to \(274\) in order to reach \(P=1,869,908\) parameters.
**CARE + PTSL (shallow) (_ours_).** The proposed PTSL with fewer layers on top of CARE: \(N=2\) hidden layers, \(H=400\), and \(D=50\), resulting in \(P=1,556,454\) parameters. All other parameters are identical to the previous CARE + PTSL architecture for MT10. We will show that with fewer layers and fewer parameters, we obtain similar performance on a short horizon. This method is only tested for MT10.

Fig. 3: The MT10 benchmark from Meta-World contains 10 tasks: reach, push, pick and place, open door, open drawer, close drawer, press button top-down, insert peg side, open window, and open box.
**PTSL only (ours)** The proposed PTSL as a simple standalone shared backbone with disentangled alphas. Once again we set \(N=3\) and we chose \(H\) and \(D\) to match the number of parameters of CARE. This means that for MT10, \(H=367\) and \(D=50\) resulting in \(P=1,869,833\) parameters. For MT50, \(H=325\) and \(D=32\) resulting in \(P=1,871,187\) parameters.
### _Comparative evaluation_
Figure 4(a) shows the average success rate on the 10 tasks of the MT10 Goal-conditioned benchmark from Meta-World [11] for MT-SAC, Soft Modularization, CARE, and CARE + PTSL (both deep and shallow architectures). Because the success rate is a binary variable, it is noisy, so we averaged the results across multiple seeds (the number of seeds is noted as \(n\), set to 10 for the short horizon and 4 for the long horizon). For each value, we report the mean and standard error.
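Concretely, the aggregation is a mean and a standard error over seeds; the snippet below shows the computation (the example values are made up, and the use of the sample standard deviation is an assumption).

```python
import numpy as np

def mean_and_stderr(per_seed_success):
    """Aggregate per-seed success rates into a mean and a standard error."""
    x = np.asarray(per_seed_success, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(len(x))

# e.g. n = 10 seeds of a noisy success rate on the short horizon
print(mean_and_stderr([0.52, 0.49, 0.55, 0.47, 0.53, 0.50, 0.48, 0.56, 0.51, 0.50]))
```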
We consider 1 million steps as our long horizon, and we consider 200 thousand steps as our short horizon because it is 5 times shorter than the long horizon. As noted earlier, all methods are trained using SAC with disentangled alphas.
Table I and Fig. 4(a) show that our method outperforms all the baselines on the short horizon on MT10. For comparison, in the Meta-World paper [11], it takes around 1.5M steps for the Multitask SAC agent to reach the accuracy that our CARE + PTSL agent reaches within 200K steps. This suggests that our method is highly sample-efficient. As explained in Section IV-B, CARE does not perform as well as described in the original paper [4]. Furthermore, we show that using a shallow PTSL network yields very similar results on MT10 after a short horizon, suggesting that lighter PTSL architectures can still generalize.
Table I and Figure 4(c) also show that PTSL outperforms all baselines with more tasks (MT50) even if this means reducing the size of the hidden layers (to keep the same number of parameters). On the short horizon, CARE + PTSL outperforms all methods.
Tables II and III as well as Figures 4(b) and 4(d) show that PTSL is able to learn a good policy on the long horizon. For MT10, CARE + PTSL performs best and reaches a top success rate of \(0.772\). For MT50, we noticed that CARE stopped learning after 400K steps and that the simple Multitask SAC was catching up, so we trained a standalone PTSL agent.
\begin{table}
\begin{tabular}{l c c} \hline \hline Success — MT10-Cond. & After 1M steps & Best \\ \hline Multitask SAC [11] & 0.706 \(\pm\) 0.050 & 0.737 \(\pm\) 0.055 \\ Soft Mod. [25] & 0.533 \(\pm\) 0.039 & 0.554 \(\pm\) 0.050 \\ CARE [4] & 0.648 \(\pm\) 0.060 & 0.683 \(\pm\) 0.066 \\
**CARE + PTSL (Ours)** & **0.742 \(\pm\) 0.067** & **0.772 \(\pm\) 0.053** \\
**PTSL only (Ours)** & 0.697 \(\pm\) 0.043 & 0.721 \(\pm\) 0.050 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Success rate of the baselines on MT10 Goal-conditioned on the long horizon (1M steps per task). Results are reported both at the end of the 1M steps and at the best average value. Results are averaged over \(n=4\) seeds for each method. We report the mean and standard error.
\begin{table}
\begin{tabular}{l c c} \hline \hline Success — MT50-Cond. & After 1M steps & Best \\ \hline Multitask SAC [11] & 0.466 \(\pm\) 0.013 & 0.489 \(\pm\) 0.016 \\ Soft Mod. [25] & 0.154 \(\pm\) 0.010 & 0.206 \(\pm\) 0.018 \\ CARE [4] & 0.388 \(\pm\) 0.028 & 0.495 \(\pm\) 0.024 \\ CARE + PTSL (Ours) & 0.354 \(\pm\) 0.015 & 0.427 \(\pm\) 0.020 \\
**PTSL only (Ours)** & **0.610 \(\pm\) 0.021** & **0.614 \(\pm\) 0.020** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Success rate of the baselines on MT50 Goal-conditioned on the long horizon (1M steps per task). Results are reported both at the end of the 1M steps and at the best average value. Results are averaged over \(n=4\) seeds for each method. We report the mean and standard error.
Fig. 4: Training curves of different methods on all benchmarks. For MT10, PTSL converges faster than baselines, and for MT50, we see a gain in sample efficiency. The bolded line represents the mean over \(n=10\) runs for the short horizon and \(n=4\) for the long horizon. The shaded area represents the standard error.
While the CARE + PTSL agent does not perform exceptionally (slightly worse than CARE alone), the standalone PTSL outperforms all methods. Importantly, PTSL achieves a **score of 0.61 on MT50 Goal-Conditioned after only 1 million steps**. This surpasses the reported results from the original CARE paper [4] of \(0.54\) after 2 million steps, and the Soft-Modularization paper [25] of \(0.60\) after 1 million steps **on MT50-Fixed** (which is easier than Goal-Conditioned).
### _Ablation study_
The results in the previous section suggest that PTSL (with CARE or standalone) is both sample-efficient and yields a good policy on long horizons. In this section, we further examine the influence of the different components of the PTSL architecture on the performance of the model.
**Shared versus independent projections.** In Section III-C2, we discussed the relevance of using a shared projection in our context since we have only a small number of layers. We decided to compare the two approaches in the short horizon setting with the same parameters: \(H=326\), \(D=50\), and \(N=3\). This means that the independent projection has slightly more parameters (about 12% more). We show the results in Figure 5(a).
Figure 5(a) shows that the shared projection is better than the independent projection, and this is the case even though the independent projection has more parameters. The shared projection is therefore more efficient than the independent projection in our context, as it helps the network to learn a consistent mapping between the shared embedding space and the task-specific embedding space. This result is in agreement with the results from Stickland et al. [34], where they show that the shared projection is better than the independent projection for NLP. We also note that the shared projection is more stable than the independent projection as it has a lower variance.
**Comparison of residual functions.** In Section III-C3, we discussed the relevance of using residual functions as they have shown great results in other Deep Learning tasks such as computer vision [35]. To verify if residuals are relevant for PTSL, we evaluated four variants in the short horizon setting, this time with the same number of parameters. This means that we used \(H=326\), \(D=50\), and \(N=3\) for all methods except the _Learnable projection_, which uses \(H=321\), resulting in roughly \(1.871\) million parameters. We show the results in Figure 5(b).
Figure 5(b) shows that having no residuals is better on the short horizon. This suggests that while residuals make the model more expressive, the additional complexity makes it less sample-efficient and is not beneficial here.
## V Conclusion
In this work, we present **Projected Task-Specific Layers (PTSL)**, a novel method inspired by NLP that surpasses the state of the art on the MT10 and MT50 Goal-Conditioned benchmarks from Meta-World.
In this paper, we showed that our method is able to learn a high-performing policy faster than other popular methods with higher sample efficiency and without introducing more parameters. Furthermore, PTSL can be integrated _with_ existing methods like CARE to improve them. Finally, we showed the benefits of sharing a low-dimensional embedding space between the shared backbone and the task-specific layers. The benefit of PTSL becomes even more obvious with more diverse tasks (MT50).
In future work, we will explore transfer learning of individual layers from MT10 to MT50, hierarchical individual layers to scale better, and a routing network for the individual layers to better share parameters.
## Acknowledgment
The authors would like to thank Saurabh Kumar, Yoni Gozlan, Paul-Emile Giacomelli and Brian Park for their feedback and advice during the preparation of this manuscript.
\begin{table}
\begin{tabular}{l l} \hline \hline Success — MT10-Cond. & 200K (\(n=10\)) \\ \hline
**No residual** & **0.511 \(\pm\) 0.034** \\ Learnable sum & 0.410 \(\pm\) 0.031 \\ Learnable projection & 0.385 \(\pm\) 0.032 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Success rate of CARE + PTSL with various residual functions. Results are averaged over \(n=4\) seeds for each method. We report the mean and standard error. The Sum residual is not indicated as it did not converge.
Fig. 5: Training curves for the ablation studies on MT10 Goal-conditioned (short horizon): (a) shared versus independent projections and (b) residual functions. The bolded line represents the mean over \(n=10\) runs. The shaded area represents the standard error.
\begin{table}
\begin{tabular}{l l} \hline \hline Success — MT10-Cond. & 200K (\(n=10\)) \\ \hline Independent projection & 0.369 \(\pm\) 0.045 \\
**Shared projection** & **0.511 \(\pm\) 0.034** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Success rate of CARE + PTSL with independent versus shared projections on MT10 Goal-Conditioned. Results are averaged over \(n=4\) seeds for each method. We report the mean and standard error. |
2309.07445 | SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic
Classification in 200+ Languages and Dialects | Despite the progress we have recorded in the last few years in multilingual
natural language processing, evaluation is typically limited to a small set of
languages with available datasets which excludes a large number of low-resource
languages. In this paper, we created SIB-200 -- a large-scale open-sourced
benchmark dataset for topic classification in 200 languages and dialects to
address the lack of evaluation dataset for Natural Language Understanding
(NLU). For many of the languages covered in SIB-200, this is the first publicly
available evaluation dataset for NLU. The dataset is based on Flores-200
machine translation corpus. We annotated the English portion of the dataset and
extended the sentence-level annotation to the remaining 203 languages covered
in the corpus. Despite the simplicity of this task, our evaluation in
full-supervised setting, cross-lingual transfer setting and prompting of large
language model setting show that there is still a large gap between the
performance of high-resource and low-resource languages when multilingual
evaluation is scaled to numerous world languages. We found that languages
unseen during the pre-training of multilingual language models,
under-represented language families (like Nilotic and Atlantic-Congo), and
languages from the regions of Africa, Americas, Oceania and South East Asia,
often have the lowest performance on our topic classification dataset. We hope
our dataset will encourage a more inclusive evaluation of multilingual language
models on a more diverse set of languages. https://github.com/dadelani/sib-200 | David Ifeoluwa Adelani, Hannah Liu, Xiaoyu Shen, Nikita Vassilyev, Jesujoba O. Alabi, Yanke Mao, Haonan Gao, Annie En-Shiun Lee | 2023-09-14T05:56:49Z | http://arxiv.org/abs/2309.07445v3 | SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects
###### Abstract
Despite the progress we have recorded in the last few years in multilingual natural language processing, evaluation is typically limited to a small set of languages with available datasets, which excludes a large number of low-resource languages. In this paper, we created SIB-200--a large-scale open-sourced benchmark dataset for topic classification in 200 languages and dialects to address the lack of evaluation datasets for Natural Language Understanding (NLU). For many of the languages covered in SIB-200, this is the first publicly available evaluation dataset for NLU. The dataset is based on the Flores-200 machine translation corpus. We annotated the English portion of the dataset and extended the sentence-level annotation to the remaining 203 languages covered in the corpus. Despite the simplicity of this task, our evaluation in the fully-supervised, cross-lingual transfer, and large-language-model prompting settings shows that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. We found that languages unseen during the pre-training of multilingual language models, under-represented language families (like Nilotic and Atlantic-Congo), and languages from the regions of Africa, Americas, Oceania and South East Asia, often have the lowest performance on our topic classification dataset. We hope our dataset will encourage a more inclusive evaluation of multilingual language models on a more diverse set of languages.1
Footnote 1: [https://github.com/dadelani/sib-200](https://github.com/dadelani/sib-200)
## 1 Introduction
In the last few years, developing massively multilingual Pre-trained Language Models (PLMs) to scale to several written languages is a very active area of research--e.g. covering 100 languages Devlin et al. (2019); Conneau et al. (2020); Liu et al. (2020); Xue et al. (2021); He et al. (2023). However, evaluation is often limited to a few tens of languages with benchmark datasets Conneau et al. (2018); Hu et al. (2020); Ruder et al. (2021); Zhang et al. (2022), limiting the large-scale evaluation of current multilingual language models on many languages especially the truly low-resourced languages.
While there is evidence from previous works that languages not covered during pre-training often lead to lower performance, such analysis is also limited to a small selection of languages with annotated datasets Ponti et al. (2020); Pfeiffer et al. (2020); Adelani et al. (2022); Lee et al. (2022).
Recently, there has been a push to scale evaluation datasets to more than 100 languages, but this requires a very expensive annotation effort in terms of money and time. Often, this scaling is only carried out by a large community effort that spans many years, like the Universal Dependency (UD) project Nivre et al. (2017); Nivre et al. (2020); de Marneffe et al. (2021), or is financed by BigTech companies Goyal et al. (2022); NLLB-Team et al. (2022); Federmann et al. (2022); Conneau et al. (2022); Pratap et al. (2023). However, the focus of these large-scale evaluations is on machine translation and speech recognition tasks--ideal for text generation tasks. In reality, there are only a few benchmarks for NLU tasks that cover all the languages seen during the pre-training of multilingual PLMs ImaniGooghari et al. (2023).
The large benchmark datasets that are available are UD, Taxi1500 Ma et al. (2023), WikiANN Pan et al. (2017), and Belebele Bandarkar et al. (2023) for dependency parsing, text classification, named entity recognition, and reading comprehension, respectively. The largest is Taxi-1500, covering 1500 languages--it is based on the Bible, but the dataset is not publicly available due to copyright issues. WikiANN, on the other hand, was automatically annotated and has few instances for low-resource languages. UD and Belebele were
manually annotated and covered between 100 and 125 languages. However, many low-resource languages are still missing in the evaluation.
In this paper, we created SIB-200--a large-scale open-sourced benchmark dataset for topic classification to address the lack of evaluation datasets for natural language understanding. The dataset is based on the Flores-200 dataset (Goyal et al., 2022; NLLB-Team et al., 2022)--a multi-way parallel corpus (i.e. the same sentences are available in 204 languages). We annotated the English portion of the Flores-200 dataset and extended the sentence-level annotation to the remaining 203 languages covered in the machine translation corpus.
Our evaluation shows that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. Languages unseen during the pre-training of multilingual PLMs, under-represented language families (like Nilotic and Atlantic-Congo languages), and languages from the regions of Africa, Americas, Oceania and South East Asia, often have the lowest performance on our text classification dataset. We also find that blindly scaling up the number of languages without scaling up the domains in pre-training is unhelpful (e.g., Glot-500 pre-trained on 500 languages largely under-performs XLM-R pre-trained on 90 languages). It is crucial to mix text from various domains. For languages unseen during pre-training, we show the potential of multilingual language adaptive fine-tuning (MAFT)² (Tang et al., 2020; Alabi et al., 2022) in improving the performance of these languages by leveraging synthetic data for languages with tiny monolingual data. Evaluation of this approach on African languages results in significant improvement (up to +5% in accuracy on average) for the previously unseen languages.
Footnote 2: adaptation of an existing multilingual PLM to multiple or new sets of languages simultaneously.
Finally, we extend our evaluation to the zero-shot settings by training individually on English, French, Arabic and Chinese (Simplified) using XLM-R (Conneau et al., 2020), and performing zero-shot evaluation on the other languages. We compared these results with prompting large language models (LLMs) like GPT-4. Our results show that LLMs perform poorly, with less than 70% accuracy, on over 63.6% of the languages (131 out of 204), while zero-shot adaptation from the English model leads to less than 70% accuracy on only 81 languages (or 39.3% of the languages)³. This shows that leveraging cross-lingual transfer from high-resource languages is much better than prompting LLMs for many low-resourced languages in this task.
Footnote 3: Performance of XLM-R on English is 92.1% in accuracy while prompting GPT-4 in English gave 76.6% in accuracy.
## 2 SIB-200 dataset
### Data source
We introduce our new dataset, SIB-200--a Simple Inclusive and Big topic classification dataset for over 200 languages and dialects. We leveraged the multi-way parallel Flores-200 dataset (NLLB-Team et al., 2022) for the creation of the dataset. Flores-200 corpus is an extension of Flores-101 (Goyal et al., 2022)--for 101 languages. In both datasets, the source sentences were collected in English and translated by professional translators to several languages. In total, the corpus contains 3,001 sentences divided into DEV (997 sentences), DEVTEST (1,012 sentences) and TEST (992 sentences) sets. However, the authors did not release the TEST set.
Flores-200 released additional information to provide meta-data information about the domains and topics of the articles covered in the dataset. The domains are based on WikiNews, WikiJunior, and WikiVoyage with a total of 842 articles while the topics are based on "crime", "disasters", "entertainment", "geography", "health", "nature", "politics", "science", "sports", and "travel".4 However, a quick review of the dataset revealed that at the sentence level, the article can belong to more than one topic. Therefore, we decided to add our topic categorization at the sentence level. Performing annotation at
| **Label** | **TRAIN** | **DEV** | **TEST** | **TOTAL** |
| --- | --- | --- | --- | --- |
| science/technology | 176 | 25 | 51 | 252 |
| travel | 138 | 20 | 40 | 198 |
| politics | 102 | 14 | 30 | 146 |
| sports | 85 | 12 | 25 | 122 |
| health | 77 | 11 | 22 | 110 |
| entertainment | 65 | 9 | 19 | 93 |
| geography | 58 | 8 | 17 | 83 |
| **Total** | 701 | 99 | 204 | 1,004 |

Table 1: **SIB-200 dataset**. We provided the data size of the annotated data by their split and category.
the sentence level also gives us the additional advantage of having more samples to annotate (2,009 rather than 562 instances5).
Footnote 5: Although 842 articles are in Flores-200, only 562 articles are open-sourced as part of DEV and DEVTEST sets.
### Data annotation
We recruited four annotators who are native speakers of English to label 2,009 sentences obtained from the DEV and DEVTEST sets of Flores-200⁶. We make use of an internal annotation tool for text classification. The annotation labelling scheme covers 15 categories: 10 are from the original Flores-200 categorization of articles (§2.1), and the others are "business", "religion", "technology", "education", and "uncategorized". We assigned sentences that do not fit any of the defined categories, as well as sentences lacking sufficient context about their topic, to "uncategorized". An example of a sentence labelled as "uncategorized" is _"In Berlin, police estimated 6,500 protestors"_.
Footnote 6: All annotators are also authors of this paper.
The annotators took about two weeks to complete the task; however, on average it took up to 60 seconds to annotate a sentence (approximately 33 hours of annotation time in total).
### Quality control
We report the Fleiss Kappa score [10] to measure the agreement of the annotation. The Fleiss Kappa score among the four annotators is **0.44**--which signifies a moderate level of agreement.
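For reference, a minimal sketch of computing this statistic from the raw votes (the per-annotator column layout, file name, and the use of statsmodels are our own assumptions, not the authors' tooling):

```python
import pandas as pd
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

votes = pd.read_csv("annotations.csv")                 # hypothetical file: columns a1..a4, one row per sentence
cols = ["a1", "a2", "a3", "a4"]

# map the string labels to integer codes shared across all annotator columns
labels = sorted(set(votes[cols].to_numpy().ravel()))
codes = votes[cols].replace({lab: i for i, lab in enumerate(labels)}).to_numpy()

table, _ = aggregate_raters(codes)                     # subjects x categories count table
print(round(fleiss_kappa(table, method="fleiss"), 2))  # the paper reports 0.44
```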
**Choosing the final label per sentence** We assigned the final label to a sentence by majority voting. Specifically, we assign a label to a sentence if at least two annotators agree on the category, but we excluded the situation where any two annotators conflicted with the other two annotators. For example, for the sentence _"The major organ of the circulatory system is the heart, which pumps the blood."_, the first two annotators assigned "science" while the last two assigned "health". In total, we assigned a single label to 1,695 sentences, but there were 314 sentences with conflicts in the annotation. We asked the lead annotator to adjudicate the sentences with conflicting annotations and assign a single label to each sentence. We later combined the fixed conflicting annotations with the others to give us back a total of 2,009 annotated sentences.
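As a small illustration of this aggregation step (the four-annotator setup and the 2-2 conflict rule follow the description above; the data layout is a hypothetical one of ours):

```python
from collections import Counter

def resolve_label(votes):
    """Return (label, needs_adjudication) for one sentence given the four annotator votes."""
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    if top == 2 and len(counts) == 2:   # 2-2 split between two categories -> lead annotator adjudicates
        return None, True
    if top >= 2:                        # at least two annotators agree on a category
        return label, False
    return None, True                   # no agreement at all

print(resolve_label(["science", "science", "health", "health"]))   # (None, True)
print(resolve_label(["travel", "travel", "travel", "sports"]))     # ('travel', False)
```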
**Final classification dataset** For the final dataset, we excluded sentences with the label "uncategorized" and only selected label categories with more than 80 sentences; this removed categories such as "business" (80 sentences), "disasters" (73 sentences), "crime" (72 sentences), "education" (52 sentences), and "religion" (46 sentences). We note that having too many categories with few sentences makes building text classification models more difficult, leading to lower performance. Also, we combined the "science" (138 sentences) and "technology" (114 sentences) categories into a single "science/technology" category. Finally, we removed the "nature" category because it conflicts heavily with the "science" and "geography" categories. Our preliminary experiments show that adding "nature" significantly lowers the performance of our classifier. About half of Flores-200 is part of the SIB-200 dataset (i.e. 1,004 out of 2,009 sentences).
Table 1 shows the number of sentences per label in each of the TRAIN, DEV, and TEST splits. We divided the sentences into the splits using a 70%/10%/20% ratio.
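The filtering, label merging, and splitting described in this subsection can be sketched as follows (a non-authoritative illustration; the file name, column names, and the use of scikit-learn's stratified split are our assumptions rather than the authors' released code):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("sib200_annotated_eng.csv")                 # hypothetical file with columns "text", "label"

df = df[~df["label"].isin(["uncategorized", "nature"])]      # drop uncategorizable sentences and "nature"
df["label"] = df["label"].replace({"science": "science/technology",
                                   "technology": "science/technology"})
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts > 80].index)]         # keep only categories with more than 80 sentences

# 70% / 10% / 20% split, stratified by label so every split contains all categories
train, rest = train_test_split(df, test_size=0.30, stratify=df["label"], random_state=0)
dev, test = train_test_split(rest, test_size=2 / 3, stratify=rest["label"], random_state=0)
print(len(train), len(dev), len(test))
```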
## 3 Languages and their categorizations
Table 2 and Table 3 shows the grouping of languages in the SIB-200 dataset. We categorized them based on the following characteristics: (1) geographical regions, (2) language family, (3) coverage in multilingual PLMs, and (4) Joshi's classification [11]--a categorization based on their labelled/unlabelled resources on the web--making it easy to analyze results.
**Categorization by geographical regions** Table 2 shows the grouping of languages into regions according to the United Nations geoscheme⁷. The regions are: Africa, Americas, Asia 1 or Western & Central Asia, Asia 2 or Southern Asia, Asia 3 or South-Eastern & Eastern Asia, Europe 1 or Northern/Western/Southern Europe, Europe 2 or Eastern Europe, and Oceania. The Asia, Europe, and Africa regions have the largest numbers of languages, with 82, 57, and 56 languages respectively. The Oceania and Americas regions have the lowest numbers of languages, with four and five respectively.
Footnote 7: [https://en.wikipedia.org/wiki/United_Nations_geoscheme](https://en.wikipedia.org/wiki/United_Nations_geoscheme)
**Categorization by language family** SIB-200 languages are grouped into 21 language families as shown in Table 3; the largest groups are: Indo-European (79 languages), Atlantic-Congo (34 languages), Afro-Asiatic (21 languages), Austronesian
(21 languages) and Turkic (11 languages).
Glot-500 (395M), which are trained on several languages: XLM-R and Glot-500 were trained on 100 and 500 languages respectively. We also fine-tune region-specific PLM trained on multiple country-level or continent-level languages: AfriBERTa (126M), AfroXLMR (550M), MuRIL (236M) and IndicBERTv2 (278M). We restrict region-level analysis to Africa and India because we only found these two regions with multilingual PLMs covering many languages.
**MAFT with fewer data and synthetic data** We explore how to improve over regional PLMs using MAFT--the adaptation of an existing multilingual PLM to multiple or new sets of languages simultaneously--which was effective for adapting XLM-R to 20 languages spoken in Africa [1]. To extend to more languages, we apply MAFT to 61 African languages with at least 10MB of monolingual data (AfroXLMR-61). The data was obtained from the concatenation of different web sources like the AfroXLMR training corpus, MT560 [13] (mostly religious articles), Flores-200 (multi-domain), and Wikipedia. In total, this results in 17GB of data. To further extend to more languages with less than 10MB of data, we generate machine-translated data using NLLB for 34 African languages (including 18 in AfroXLMR-61). The selected 34 languages are the ones with less than 10MB of data or with only MT560 (religious domain) data. We make use of the English news commentary dataset⁸ [10] with over 600,000 sentences to translate to these 34 languages. We refer to the resulting model after adaptation as AfroXLMR-75, which has been pre-trained on 21GB of data.
Footnote 8: we used version 16 of the data released for WMT.
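A sketch of how such synthetic data can be produced with an off-the-shelf NLLB checkpoint (the distilled 600M model and the Hausa target code are illustrative choices of ours, not necessarily the exact setup used to build AfroXLMR-75):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(model_name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# a couple of English news-commentary style sentences to translate into Hausa (hau_Latn)
sentences = ["The central bank kept interest rates unchanged on Tuesday.",
             "Negotiators will meet again next week to discuss the trade deal."]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
generated = model.generate(**inputs,
                           forced_bos_token_id=tokenizer.convert_tokens_to_ids("hau_Latn"),
                           max_length=128)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```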
**Large Language Models** Lastly, we also report results by prompting two popular large language models: GPT-3.5-Turbo (gpt-3.5-turbo-0613) and GPT-4 (gpt-4-0613). Compared with the smaller language models from MLM and MAFT, they feature strong instruction-following capabilities without task-specific fine-tuning.
### Training and evaluation scenarios
**Fully-supervised** In this setting, we trained on each language in SIB-200 and evaluated on the same language. We did this evaluation for 204 languages and compared the performance of different text classification models.
| **Language Family** | **Count** | **MLP** | **Glot-500** | **XLM-R (base)** | **XLM-R** | **English** | **French** | **Chinese** | **Arabic** | **GPT-3.5-Turbo** | **GPT-4** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| English | - | 59.9 | 82.8 | 90.0 | 92.1 | 92.1 | 91.9 | **92.5** | 91.2 | 71.8 | 76.6 |
| Indo-European | 79 | 62.3 | 72.4 | 81.4 | **86.2** | 82.4 | 83.2 | 82.8 | 83.0 | 55.3 | 66.6 |
| Atlantic-Congo | 34 | **61.3** | 49.6 | 50.5 | 57.9 | 41.4 | 41.4 | 41.9 | 42.0 | 29.2 | 29.2 |
| Afro-Asiatic | 21 | 61.4 | 59.2 | 67.1 | **72.6** | 67.4 | 68.1 | 67.7 | 68.4 | 43.4 | 54.6 |
| Austronesian | 21 | 59.8 | 62.1 | 68.8 | **73.9** | 64.0 | 64.3 | 64.5 | 64.9 | 44.1 | 47.1 |
| Turkic | 11 | 64.8 | 74.2 | 79.8 | **85.1** | 80.2 | 80.9 | 80.4 | 80.9 | 50.2 | 59.2 |
| Sino-Tibetan | 9 | **68.8** | 66.2 | 62.2 | 65.4 | 57.9 | 58.3 | 57.1 | 57.1 | 30.7 | 40.6 |
| Nilotic | 5 | **58.6** | 35.0 | 48.2 | 53.7 | 34.8 | 33.0 | 34.0 | 34.0 | 16.1 | 10.1 |
| Dravidian | 4 | 64.7 | 76.1 | 84.4 | **87.9** | 87.8 | 88.1 | 88.2 | 88.0 | 57.2 | 69.6 |
| Tai-Kadai | 3 | 67.7 | 61.3 | 70.9 | **76.8** | 68.4 | 67.8 | 68.9 | 69.2 | 35.6 | 44.7 |
| Uralic | 3 | 62.1 | 74.1 | 86.5 | **89.6** | 89.1 | 90.4 | 90.2 | 89.6 | 62.4 | 74.8 |
| Austroasiatic | 3 | 66.5 | 65.5 | 66.2 | **68.1** | 67.5 | 66.8 | 67.2 | 66.2 | 34.8 | 48.7 |
| Mande | 2 | **57.4** | 36.1 | 42.7 | 48.7 | 32.5 | 32.4 | 32.3 | 32.1 | 18.0 | 13.3 |
| Japonic | 1 | 73.8 | 81.5 | 87.9 | **89.9** | 89.3 | 90.3 | 89.7 | 88.8 | 63.4 | 75.8 |
| Koreanic | 1 | 67.8 | 76.5 | 86.5 | **88.5** | 88.7 | 89.4 | 89.2 | 88.7 | 67.8 | 78.2 |
| Mongolic-Khitan | 1 | 66.2 | 74.8 | 82.9 | **88.5** | 86.1 | 85.8 | 85.5 | 86.2 | 57.7 | 67.6 |
| Constructed | 1 | 61.4 | 72.8 | 87.5 | **89.4** | 88.5 | 89.2 | 90.4 | 88.6 | 58.7 | 70.3 |
| Quechuan | 1 | 53.7 | 59.4 | 57.9 | **64.1** | 46.3 | 48.3 | 49.1 | 50.8 | 36.2 | 18.5 |
| Basque | 1 | 62.9 | 72.4 | 83.5 | **89.2** | 89.2 | 90.0 | 89.7 | 88.9 | 55.3 | 53.1 |
| Aymaran | 1 | **55.7** | 37.4 | 42.5 | 52.5 | 39.1 | 40.4 | 38.5 | 41.3 | 15.9 | 6.6 |
| Tupian | 1 | 57.7 | 63.7 | 69.6 | **76.3** | 61.3 | 61.7 | 61.7 | 61.1 | 32.3 | 28.2 |
| Kartvelian | 1 | 63.7 | 78.4 | 83.4 | **88.5** | 89.1 | 89.8 | 89.7 | 88.6 | 44.7 | 66.1 |
| Average | - | 62.8 | 64.2 | 71.0 | **75.9** | 69.1 | 69.5 | 69.5 | 69.5 | 43.3 | 48.7 |

Table 4: **Overall result of the performance of different text-classification models across different language families.** We compared different settings: fully-supervised (columns MLP, Glot-500, XLM-R (base) and XLM-R), cross-lingual transfer (columns English, French, Chinese and Arabic, the four source languages) and zero-shot prompting of LLMs (columns GPT-3.5-Turbo and GPT-4). Cross-lingual transfer is based on the XLM-R model as it is the best-performing PLM.
The MLP models were trained for 300 iterations, and we used either word _ngram tokens_ or _XLM-R tokens_. For the multilingual PLMs, we fine-tune on each language's training data for 20 epochs, with a maximum sequence length of 164, a batch size of 16, and a learning rate of \(0.00001\) on a single Nvidia A10 GPU. **Here, we assume access to labelled data in the target language.**
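A compact sketch of this per-language fine-tuning setup (hyperparameters follow the description above; the data files, label mapping, and output directory are placeholder assumptions of ours rather than the authors' exact training script):

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

labels = ["science/technology", "travel", "politics", "sports",
          "health", "entertainment", "geography"]
label2id = {l: i for i, l in enumerate(labels)}

ds = load_dataset("csv", data_files={"train": "eng_Latn_train.csv",      # hypothetical per-language files
                                     "validation": "eng_Latn_dev.csv"})
tok = AutoTokenizer.from_pretrained("xlm-roberta-large")                 # XLM-R (550M)

def preprocess(batch):
    enc = tok(batch["text"], truncation=True, max_length=164)
    enc["labels"] = [label2id[l] for l in batch["label"]]
    return enc

ds = ds.map(preprocess, batched=True, remove_columns=["text", "label"])
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-large", num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)}, label2id=label2id)
args = TrainingArguments("sib200-eng", num_train_epochs=20, learning_rate=1e-5,
                         per_device_train_batch_size=16)
trainer = Trainer(model=model, args=args, train_dataset=ds["train"],
                  eval_dataset=ds["validation"], tokenizer=tok)
trainer.train()
trainer.save_model("sib200-eng")
```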
**Cross-lingual transfer** For this setting, we **fine-tune** XLM-R on a language in Joshi's class 5 (we call it a "source" language), and **evaluate** on other languages. We trained on four languages with three different scripts, i.e. English, French, Arabic and Chinese (Simplified). **Here, we assume access to labelled data in a few high-resource languages.**
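Because SIB-200 is multi-way parallel, the model fine-tuned on a source language can then be scored on every other language with a simple loop (again under the placeholder file-naming assumption used above):

```python
import pandas as pd
from transformers import pipeline

clf = pipeline("text-classification", model="sib200-eng")   # directory where the source-language model was saved

accuracy = {}
for lang in ["hau_Latn", "yor_Latn", "fra_Latn"]:           # in practice: all 204 language codes
    test = pd.read_csv(f"{lang}_test.csv")                  # hypothetical per-language test file
    preds = [p["label"] for p in clf(list(test["text"]), truncation=True)]
    accuracy[lang] = (pd.Series(preds) == test["label"]).mean()
print(accuracy)
```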
**Zero-shot prompt** We prompt GPT-3.5/4 for text classification for the 204 languages using an English template. We make use of a simple template from Sanh et al. (2022): _"Is this a piece of news regarding {science, technology, travel, politics, sports, health, entertainment, or geography}? {INPUT}"_. **Here, we assume no access to labelled data in any language.**
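A minimal sketch of this prompting setup, assuming the 2023-era openai Python client (pre-1.0 interface) and the model snapshots named in Section 4; the deterministic temperature and the post-processing note are our own choices:

```python
import openai  # openai<1.0 style client; set openai.api_key before use

TEMPLATE = ("Is this a piece of news regarding {science, technology, travel, politics, "
            "sports, health, entertainment, or geography}? {INPUT}")

def classify(sentence, model="gpt-4-0613"):
    prompt = TEMPLATE.replace("{INPUT}", sentence)
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # the free-form answer still has to be mapped back onto the seven SIB-200 labels
    return resp["choices"][0]["message"]["content"].strip().lower()

print(classify("Die Regierung kündigte gestern neue Steuererleichterungen an."))
```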
## 5 Results
### Baseline results
In order to demonstrate the effectiveness of our dataset for multilingual evaluation, we benchmark the performance across various models and group the results by categorizations (Table 4). As XLM-R consistently outperforms Glot-500 across almost all language families, we use XLM-R as the baseline model in the cross-lingual transfer experiments⁹. Comparing English versus other languages, fine-tuning XLM-R on English achieved an accuracy of 92.1%, indicating that _the task itself is not difficult if given a properly pre-trained MLM and \(\sim 700\) training samples_. However, when fine-tuning the same model on other languages, the performance drops vastly to an average accuracy of 75.9%. Similarly, in the cross-lingual transfer and zero-shot prompt scenarios, the performance drops further.
Footnote 9: the full results are in Appendix A
**Performances across language families** The distribution of accuracy scores is imbalanced across language families. _Atlantic-Congo, Nilotic, Mande, Aymaran and Quechuan languages have the lowest accuracy scores_. Even under the fully supervised scenario, the best-performing model reaches <65% accuracy on these languages. There also tends to be a larger performance gap between the fully-supervised and cross-lingual transfer scenarios, suggesting a poor semantic alignment Conneau and Lample (2019) for these languages. Surprisingly, Tupian is the only additional language family that has a >10% drop from the fully supervised to the cross-lingual transfer scenario. When moving further to the zero-shot prompt scenario, Basque shows the biggest performance drop (-36%); next come the above-mentioned languages. Interestingly, despite this large decrease, Basque scores exceptionally high (\(\approx\)90%) in the fully supervised and cross-lingual transfer scenarios.
**Performances across Joshi's classes and geographical regions** Figure 1 visualizes the performance of XLM-R¹⁰ across different regions and Joshi's classes. We see a clear trend that languages with higher Joshi's classes perform better. Specifically, all languages with Joshi's class \(\geq\)3 have accuracy scores of \(\approx\)90%. _For languages in the same Joshi's class, African languages perform the worst, and European languages perform the best._
Figure 1: Heatmap of the performance by Region in each Joshi’s class.
In Joshi's class 0, African languages are at least 20% worse than languages from other continents. Notably, there is no language with Joshi's class >3 in Africa, and no American/Oceania language has Joshi's class >1. _African and Oceania languages are also the only exceptions where MLP outperforms XLM-R_, implying poorly learned representations for them. Future research should focus more on languages from these regions. Appendix B provides the evaluation across eight sub-regions instead of the four in Figure 1.
**Performances across models** In the fully supervised scenario, XLM-R performs the best on 16 out of the 22 language families. Among the remaining 6 language families, applying the simplest MLP classifier with n-gram input features outperforms more complex transformer-based MLMs (Glot-500 and XLM-R), suggesting they are not well adapted to these 6 language families. _Glot-500, despite being pre-trained with many more languages, outperforms XLM-R only on Sino-Tibetan languages_. Even on Sino-Tibetan languages, it fails to outperform the simplest MLP baseline. Cross-lingual transfer results are similar when using different source languages. On most language families, the results are comparable to the fully supervised ones. Zero-shot prompting leads to a big drop due to the lack of supervised samples. The performance is good only for a few language families such as Indo-European, Uralic, Japonic and Koreanic.
### Factors affecting performance
In order to determine the critical factor in this multilingual classification task, we conducted in-depth case studies on the model architecture choices and language categorizations.
**Effect of language coverage in pre-training** Figure 2 compares the MLP, XLM-R and Glot-500 models based on language and script coverage in pre-training, using four groups: (1) language seen, script seen in XLM-R; (2) language unseen, script seen in XLM-R; (3) script unseen in XLM-R, language seen in Glot-500; (4) script unseen by both models. The results in each group are sorted by their performance on the fine-tuned XLM-R model. Overall, _XLM-R performs the best on all languages seen in its pre-training corpus without any exception_. Even for languages unseen in the pre-training corpus of XLM-R, it outperforms Glot-500 in most cases as long as the written scripts are seen. Glot-500 performs the best for only 3 out of all the 204 languages, implying its learned representations are far from sufficient. The reason could be that Glot-500 is pre-trained and evaluated on a religious corpus, which is quite different from the news domain in our task.
Figure 3: Accuracy of the XLM-R model vs Pre-Training corpus size in the fully supervised scenario. Bigger pre-training corpus in a target language generally improves the model performance.
Figure 2: **Fully supervised Model Performance**. We group languages by whether they and their scripts are seen in the pre-training corpus of XLM-R. Languages are ordered by the XLM-R performance in every group.
In order to achieve a better generalization, we may have to mix text from various domains in the pre-training stage.
**Effect of pre-training size** Figure 3 shows the change of accuracy scores with increasing corpus size in the pre-training stage of XLM-R, where the corpus size is logarithmically scaled for better visualization. We can see that _with as little as 0.1GB of pre-training corpus, the XLM-R model can already achieve \(>\)80% accuracy for almost all languages_, which further verifies that this task itself is not difficult. The accuracy generally grows with increasing corpus size, though the model performance starts to saturate with \(>1\)GB of pre-training corpus. Since African languages typically have tiny or no pre-training corpora while European languages have large ones, this can explain the poor model performance on African languages.
**Effect of script** To see how the choice of script affects the model performance, we choose eight languages that can be written in different scripts, and visualize the performance of XLM-R, MLP with n-gram features (MLP-ngram), and MLP with words from the XLM-R tokenizer (MLP-XLMR) in Figure 4. We can see that (1) the performance of MLP-XLMR usually correlates with that of XLM-R. This implies that under the XLM-R tokenizer, languages have their own preferred written scripts regardless of the effects from pre-training (because this preference stays the same even with the simplest MLP classifier); (2) the slope of XLM-R is often steeper than that of MLP-XLMR, implying the preferred script for a language also has better pre-trained representations; (3) the slope of MLP-ngram is often less steep. This implies that n-gram features are more robust across different scripts compared with word features obtained from the XLM-R tokenizer; (4) the preferred script is often the more commonly used one for every language, suggesting _future work can focus on one single preferred script for every language_.
### Comparison of different scenarios
**Fine-tune vs. Prompted** Out of all the 204 languages, GPT-4 outperforms GPT-3.5-Turbo on 157 languages. Only on Buginese, Kabiyè, Mizo, Nuer and Ayacucho Quechua does GPT-3.5-Turbo outperform GPT-4 by \(>10\%\). However, zero-shot prompting consistently underperforms fine-tuned methods. This might be due to the subjectivity of this task. It is hard to include extensive descriptions of the classification criteria in the prompt. Adding more examples to the prompt might improve the performance.
**Cross-lingual transfer vs. Fully supervised** Figure 5 takes a closer look at the comparison between cross-lingual transfer and fully-supervised methods. We can see that for all languages that are included in the pre-training corpus of XLM-R, cross-lingual transfer performs similarly to the fully supervised methods. The best source language for cross-lingual transfer is, surprisingly, French rather than English, which has the largest amount of pre-training corpus, though the difference among the various source languages is tiny. This suggests _languages included in the XLM-R pre-training corpus are pretty well aligned with all four chosen high-resource languages_. The advantage of fully supervised methods over cross-lingual transfer becomes prominent mainly when the target language is not included in the pre-training corpus of XLM-R but its script is included. In this case, fully supervised methods can improve the performance by fine-tuning the model on the target languages, but cross-lingual transfer fails to capture the alignment with high-resource languages.
### Region-specific pre-training
**Evaluation of region-specific PLMs** While our evaluation is primarily focused on multilingual PLMs trained on 100 languages or more, models pre-trained on a group of linguistically or geographically related languages often lead to better performance, as observed for Indian languages (Table 5)
Figure 4: Script performance differences when one language has two different scripts. XLM-R and the MLPs show the same trend. Using ngram features is more robust to script changes than using the XLM-R tokenizer.
and African languages (Table 6). IndicBERTv2 and MurilBERT achieved better overall performance than XLM-R (550M parameters) despite their smaller capacity (236M-278M parameters), especially for the Indian languages they both support, and they are better for languages not covered by XLM-R. Similarly for African languages, AfroXLMR--an adaptation of XLM-R through multilingual adaptive fine-tuning (MAFT) (Alabi et al., 2022) to 17 African languages--gave roughly a \(+9\) improvement in performance. AfriBERTa, on the other hand, gave a slightly worse result than XLM-R despite seeing the same number of African languages during pre-training (although not the exact same languages) because it was pre-trained on a smaller amount of data (1GB). Despite the improvement of AfroXLMR, it performs terribly for Nilotic, Mande and many Atlantic-Congo families, which shows that including more African languages in pre-training could improve performance.
**Performance of applying MAFT to more African languages** We evaluated the two MAFT models described in §4.1. Our evaluation of AfroXLMR-75 shows that MAFT with synthetic data was effective in improving the performance over AfroXLMR across many languages in Africa, especially for Nilotic (\(+6.6\)), Mande (\(+6.9\)) and Atlantic-Congo (\(+6.3\)) languages, similar to the findings of Urbizu et al. (2023). The performance improvement for AfroXLMR-61 was smaller on average (\(+2.4\)). There are a few cases where it leads to a slight drop in performance on more-resourced languages due to the curse of multilinguality (Conneau et al., 2020). The two newly developed PLMs are available on HuggingFace¹¹. We provide the full results in Appendix C.
Footnote 11: [https://huggingface.co/Davlan](https://huggingface.co/Davlan)
## 6 Related Work
**Multilingual evaluation datasets:** There have been several efforts to curate multilingual evaluation datasets, including various downstream tasks such as part-of-speech tagging (Nivre et al., 2016, 2020; Dione et al., 2023), named entity recognition (Pan et al., 2017; Adelani et al., 2022; Mhaske et al., 2023), natural language inference (Conneau et al., 2018), text classification (Ma et al., 2023), machine translation (Adelani et al., 2022; Goyal et al., 2022; NLLB-Team et al., 2022), and question answering (Lewis et al., 2020; Ogundepo et al., 2023; Shen et al., 2023; Doddapaneni et al., 2023; Bandarkar et al., 2023). All these initiatives have played a pivotal role in advancing the field of cross-lingual and multilingual NLP.
| **Models** | **A.Congo (34)** | **Afro-A. (12)** | **Nilotic (5)** | **Mande (2)** | **Aust. (1)** | **Indo-E (1)** | **All (55)** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MLP | 61.3 | 59.6 | 58.6 | 57.4 | 61.1 | 57.6 | 60.5 |
| AfriBERTa | 58.7 | 50.9 | 54.2 | 49.6 | 50.5 | 53.7 | 56.1 |
| XLM-R | 57.9 | 65.4 | 53.7 | 48.7 | 85.3 | 89.8 | 59.9 |
| AfroXLMR | 70.8 | 69.2 | 55.7 | 57.1 | **88.4** | **90.4** | 69.2 |
| AfroXLMR-61 | 74.8 | 68.3 | 57.2 | 56.0 | 88.2 | 89.1 | 71.6 |
| AfroXLMR-75 | **77.1** | **69.5** | **62.3** | **64.0** | 87.5 | 89.7 | **74.1** |

Table 6: **African-centric Evaluation on SIB-200**. Columns are language families, with the number of languages per family given in parentheses.
Figure 5: **Comparison of Various Scenarios**. We group languages by whether they and their scripts are seen in the pre-training corpus of XLM-R. Languages are ordered by the XLM-R fully-supervised performance in every group.
| **Models** | **Indo-E (18)** | **Dravidian (4)** | **Austro-Asia (1)** | **Sino-Tib (1)** | **All (24)** |
| --- | --- | --- | --- | --- | --- |
| XLM-R | 86.5 | 87.9 | 24.6 | 48.7 | 82.6 |
| IndicBERTv2 | 85.4 | 88.3 | **65.5** | 43.2 | 83.3 |
| MurilBERT | **87.5** | **89.9** | 23.5 | **66.3** | **84.4** |

Table 5: **Indic-centric Evaluation on SIB-200**. Columns are language families, with the number of languages per family given in parentheses.
Our work, which focuses on the creation of an extensive multilingual text classification dataset covering 200 languages, builds upon a line of related works that have significantly contributed to the expansion of the NLP community.
Specific to text classification, a few multilingual datasets are IndicNLP BBC news (Kunchukutan et al., 2020), KINNEWS & KIRNEWS (Niyongabo et al., 2020), ANTC (Alabi et al., 2022), MasakhaNEWS (Adelani et al., 2023), and Taxi1500 (Ma et al., 2023). To the best of our knowledge, Taxi1500 is the most recent and largest of them all covering 1500 languages. However, this dataset is focused on the religious domain as the data comes from the Bible. Our work addresses a gap in multilingual text classification datasets by curating SIB-200 that covers a broader range of topics and domains.
**Multilingual Large Language Models:** In this work, we evaluated two categories of LLMs based on their pre-training objectives. These are Masked Language Models and Autoregressive Language Models (Yang et al., 2023; Zhao et al., 2023). Masked language modelling is the training paradigm for BERT-style¹² models. On the other hand, Autoregressive Language Models, which are GPT-style decoder-only architectures, are trained by generating the next word in a sequence given the preceding words.
Footnote 12: BERT is one of the first neural language models that uses an encoder-only architecture, masked pre-training, and a discriminative task.
While it is possible to train these models on a single language (Conneau and Lample, 2019), such monolingual models typically demonstrate less cross-lingual capabilities compared to their multilingual variants. Hence, recent developments have seen the emergence of multilingual LLMs like mBERT (Devlin et al., 2019), XLM-Roberta (Conneau et al., 2020), Glot-500 (Imani-Googhari et al., 2023), XGLM (Lin et al., 2022), and GPT-3 (Brown et al., 2020). These models are trained on a diverse set of languages, primarily high-resource ones due to available corpora. This also includes the development of region-specific models like AfriBERTa (Ogueji et al., 2021), MuRIL (Khanuja et al., 2021), and IndicBERTv2 (Doddapaneni et al., 2023). However, these models tend to underperform on low-resource languages unseen during pretraining despite their cross-lingual capabilities (Philippy et al., 2023; Winata et al., 2022).
Consequently, there have been works to enhance LLMs for a broader range of languages, including vocabulary expansion (Wang et al., 2019), the use of lexicons (Wang et al., 2022), multilingual adaptive fine-tuning (MAFT) (Alabi et al., 2022; ImaniGooghari et al., 2023), and parameter-efficient methods (Pfeiffer et al., 2021, 2022). All these techniques have been shown to improve the performance of LLMs for low-resource languages and in cross-lingual transfer settings. In this work, we leveraged MAFT to adapt an existing LLM to 75 languages. In addition, we evaluated several LLMs, including region-specific LLMs, and showed the capabilities of the different models on SIB-200.
## 7 Conclusion
In this paper, we created SIB-200--a large-scale open-sourced benchmark dataset for topic classification in 200 languages and dialects to address the lack of evaluation datasets for natural language understanding, especially for low-resource languages. We performed extensive evaluation across the fully supervised setting, the cross-lingual transfer setting and the LLM prompting setting. Furthermore, we grouped the 200 languages into different categories based on language families, geographical regions, Joshi's class and coverage in multilingual pre-trained language models to provide insights into which groups of languages have poor performance on this simple and inclusive benchmark.
Our findings are: (1) There is a large performance gap between high-resource languages and low-resource ones, especially for under-represented languages concentrated in Africa, the Americas, Oceania and South East Asia. (2) Written scripts can have a big impact on model performance. We suggest using one single preferred script for every language. (3) Including languages in the pre-training corpus is important for learning good representations and cross-lingual alignments. (4) It is crucial to mix text from various domains in the pre-training stage. Pre-training on a single domain can significantly deteriorate the performance on other domains. (5) Continued pre-training to cover more languages is an effective way to improve performance, even when pre-training on synthetic data obtained from MT systems.
We hope our dataset will encourage a more inclusive evaluation of multilingual language models on a more diverse set of languages.
## 8 Limitations
One of the main limitations of our work is that the labelled datasets created for the non-English languages are based on human translation and may suffer from translationese effects, including a slight drop in performance. However, we believe this is an important contribution for many languages that often do not have news articles or Wikipedia articles that can be used for such annotation.
Another limitation is the choice of multilingual pre-trained language models. We note that XLM-R may not be the best multilingual encoder available; there are other publicly available ones like InfoXLM (Chi et al., 2021), mDeBERTa (He et al., 2023) and others. However, due to the scale of the experiments, we limited our evaluation to three multilingual models (XLM-R-base, XLM-R, and Glot-500). We believe our results may still be consistent with newer PLMs since they often cover a similar set of languages as XLM-R.
## 9 Acknowledgement
David Adelani acknowledges the support of Deep-Mind Academic Fellowship programme. Jesujoba Alabi was partially funded by the BMBF project SLIK under the Federal Ministry of Education and Research grant 01IS22015C. This work was supported in part by Oracle Cloud credits and related resources provided by Oracle. We thank Google for providing GCP credits to train the AfroXLMR-61 model. Finally, we are grateful to OpenAI for providing API credits through their Researcher Access API programme to Masakhane for the evaluation of GPT-3.5 and GPT-4 large language models.
|
2309.11227 | Insights into neutron star equation of state by machine learning | Due to its powerful capability and high efficiency in big data analysis,
machine learning has been applied in various fields. We construct a neural
network platform to constrain the behaviors of the equation of state of nuclear
matter with respect to the properties of nuclear matter at saturation density
and the properties of neutron stars. It is found that the neural network is
able to give reasonable predictions of parameter space and provide new hints
into the constraints of hadron interactions. As a specific example, we take the
relativistic mean field approximation in a widely accepted Walecka-type model
to illustrate the feasibility and efficiency of the platform. The results show
that the neural network can indeed estimate the parameters of the model at a
certain precision such that both the properties of nuclear matter around
saturation density and global properties of neutron stars can be saturated. The
optimization of the present modularly designed neural network and extension to
other effective models are straightforward. | Ling-Jun Guo, Jia-Ying Xiong, Yao Ma, Yong-Liang Ma | 2023-09-20T11:38:56Z | http://arxiv.org/abs/2309.11227v2 | # Insights into neutron star equation of state by machine learning
###### Abstract
Due to its powerful capability and high efficiency in big data analysis, machine learning has been applied in various fields. We construct a neural network platform to constrain the behaviors of the equation of state of nuclear matter with respect to the properties of nuclear matter at saturation density and the properties of neutron stars. It is found that the neural network is able to give reasonable predictions of parameter space and provide new hints into the constraints of hadron interactions. As a specific example, we take the relativistic mean field approximation in a widely accepted Walecka-type model to illustrate the feasibility and efficiency of the platform. The results show that the neural network can indeed estimate the parameters of the model at a certain precision such that both the properties of nuclear matter around saturation density and global properties of neutron stars can be saturated. The optimization of the present modularly designed neural network and extension to other effective models are straightforward.
## I Introduction and motivation
With the development of technologies, the amount of experimental data is increasing rapidly. These massive data are usually from different experimental targets and provide information on different aspects of the same system. Therefore, in order to get a complete understanding of the physics involved, one needs to combine these data and analyze them systematically. However, it is usually a challenge for researchers to handle this kind of process due to the complexity of the data and the large number of parameters in the models. The recently developed machine learning (ML) or artificial intelligence (AI)-driven technologies provide a way out. ML methods have already earned credit in the field of big data analysis due to their efficiency and adaptivity [1; 2; 3; 4], and they have already been applied in many different fields of physics, e.g., Refs. [5; 6; 7; 8; 9; 10; 11; 12; 13] and references therein.
In nuclear physics, the properties of nuclear matter (NM) have been investigated for a long period, but no consensus has been reached. Several fundamental questions are waiting for clarification, for example: what are the constituents of NM, is a phase transition involved in dense compact-star matter or not, and is dense nuclear matter in states other than a Fermi liquid (see, e.g., the reviews in Refs. [14; 15; 16; 17; 18; 19; 20; 21; 22] and references therein)? To resolve these questions, all the existing information from both experiments and theories should be combined in the corresponding analysis, and a reliable technique, like the ML approach developed here, is necessary.
The constraints on nuclear matter come from both terrestrial experiments and astrophysical observations. Owing to the analysis of the structures of heavy nuclei, e.g., \({}^{24}\)Mg, \({}^{90}\)Zr, \({}^{116}\)Sn and \({}^{208}\)Pb, and the data from heavy-ion collisions, one can obtain information about NM properties around nuclear saturation density (\(n_{0}\approx 0.16\)fm\({}^{-3}\)) [23; 24; 25; 26], such as the binding energy per nucleon \(e_{0}\), the symmetry energy \(E_{\rm sym}\), the incompressibility coefficient \(K_{0}\), the skewness coefficient \(L_{0}\), and so on. Besides the above information obtained from terrestrial experiments, there are also constraints from astrophysics. The signals from, e.g., PSR J1614-2230, J0348+0432, PSR J0740+6620, J0030+0451 and PSR J0740+6620 constrain the mass-radius (MR) relation of neutron stars (NSs) [27; 28; 29; 30; 31; 32]. The observation of gravitational waves (GWs) from the binary neutron star (BNS) merger GW170817 [33; 34] yields new, independent constraints--tidal deformations--on the MR relations [35; 36], and the multi-messenger era of NSs has begun [37; 38]. Therefore, in the study of NM, both the data from terrestrial experiments and astrophysical observations should be considered, and these massive data increase the difficulty of the analysis process.
On the other hand, the equation of state (EoS) plays a key role in the studies on NM, and it's
usually parameterized by effective models or theories, e.g., the one-boson-exchange (OBE) model [39] and chiral effective field theory (\(\chi\)EFT) [40; 41], due to the nonperturbative nature of quantum chromodynamics (QCD). The information of QCD is encoded into the low-energy constants (LECs) of these models and theories, which are determined by fitting experimental data, but the fitting process involves a degree of fine-tuning because of the cancellation mechanism between attractive and repulsive terms. It is easy to imagine that this kind of search of the parameter space can be quite labor- and time-consuming.
In practice, the simultaneous treatment of the massive data and the fine-tuning of the LECs to obtain a physical EoS is very difficult, if not unfeasible. With the help of ML methods, the cost of this labor- and time-consuming work can be reduced. Recently developed artificial intelligence (AI) technologies [42; 43] that apply ML methods simplify such a process, and some works have already been done along this line recently; see Refs. [36; 44; 45] for details.
The purpose of this work is to propose a ML platform to carry out the above idea. We shall construct a neural network (NN) with respect to the properties of NM around saturation density and the MR relations of NSs to constrain the behaviors of EoS. A specific but widely accepted nucleon force model is adopted as a preliminary illustration of the efficiency of this NN. The platform developed can be easily extended to other complex, general and practical models.
## II Neural network framework and its application
The structure of our NN platform for the parameter-searching process is shown in Fig. 1. Concretely, the strategy of the NN is the following: At first, a model or theory with its pre-chosen parameter space should be given to train the computation module of the NN, and it does not matter whether the pre-chosen parameter space is a physical one or not. Then, the trained computation module predicts possible values of the parameters according to the Bayesian-module calculation using experimental data from NM and NSs. If the self-supervised module judges that the data set indicates a parameter set beyond the training set, it will make possible predictions and generate a corresponding training set. In this situation, the new training set and the previous one will be combined by an adaptive algorithm to train the computation module again, and this process will repeat until the probability from the Bayesian analysis converges to a fixed value, which represents the confidence level of the input model.
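The iterative strategy can be summarized by a short schematic driver loop (illustrative pseudocode only; the three module callables stand for the components described above and are not an actual released implementation):

```python
def run_platform(train_computation_module, bayesian_module, self_supervised_module,
                 initial_parameter_sets, experiments, tol=1e-3, max_iters=50):
    """Schematic driver of the NN platform; the module callables are supplied by the user."""
    train_set = list(initial_parameter_sets)
    posterior, confidence, confidence_prev = None, None, None
    for _ in range(max_iters):
        emulator = train_computation_module(train_set)          # NN that solves the EoMs / builds the EoS
        posterior, confidence = bayesian_module(emulator, experiments)
        if confidence_prev is not None and abs(confidence - confidence_prev) < tol:
            break                                                # Bayesian confidence level has converged
        confidence_prev = confidence
        # self-supervised module proposes parameter sets beyond the current training set
        train_set = train_set + list(self_supervised_module(posterior, train_set))
    return posterior, confidence
```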
### Application in a specific model
After a discussion on the basic idea of the NN, we illustrate its application in this part by using a widely used nuclear force model [46; 47]. The model takes the following form:
\[{\cal L}_{\rm RMF} = \bar{\psi}\left[i\gamma_{\mu}\partial^{\mu}-M-g_{\sigma}\sigma-g_{ \omega}\gamma_{\mu}\omega^{\mu}-g_{\rho}\gamma_{\mu}\tau_{a}\rho^{a\mu}\right]\psi \tag{1}\] \[\mbox{}+\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma- \frac{1}{2}m_{\sigma}^{2}\sigma^{2}-\frac{1}{3}g_{2}\sigma^{3}-\frac{1}{4}g_{ 3}\sigma^{4}\] \[\mbox{}-\frac{1}{4}W_{\mu\nu}W^{\mu\nu}+\frac{1}{2}m_{\omega}^{2 }\omega_{\mu}\omega^{\mu}+\frac{1}{4}c_{3}\left(\omega_{\mu}\omega^{\mu} \right)^{2}\] \[\mbox{}-\frac{1}{4}R_{\mu\nu}^{a}R^{a\mu\nu}+\frac{1}{2}m_{\rho}^ {2}\rho_{\mu}^{a}\rho^{a\mu},\]
where \(\psi\) is the iso-doublet of nucleon field with mass \(M\), \(\sigma\) is the iso-scalar scalar meson field and, \(\omega^{\mu}\) and \(\rho^{\mu}\) are, respectively, the iso-scalar and iso-vector meson fields with \(W^{\mu\nu}\) and \(R^{a\mu\nu}\) being their field strength tensors
\[W^{\mu\nu} = \partial^{\mu}\omega^{\nu}-\partial^{\nu}\omega^{\mu}\,\] \[R^{a\mu\nu} = \partial^{\mu}\rho^{a\nu}-\partial^{\nu}\rho^{a\mu}+g_{\rho} \epsilon^{abc}\rho^{b\mu}\rho^{c\nu}. \tag{2}\]
In this model, the nucleon-meson coupling terms are the most simple one-boson-exchange type interactions, the non-linear \(\sigma\) terms are crucial to describe incompressibility and, the non-linear \(\omega\) terms are added in order to reproduce the density dependence of nucleon self-energy.
In practice, to calculate the nuclear matter properties using model (1), some approximations should be applied. In the relativistic mean field (RMF) approximation which is widely used, the
Figure 1: Structure of NN platform.
problem is reduced to solving the following coupled equations of motion (EoMs):
\[m_{\sigma}^{2}\sigma+g_{2}\sigma^{2}+g_{3}\sigma^{3} = -\;g_{\sigma}\left(\rho_{n,s}+\rho_{p,s}\right)\;,\] \[m_{\omega}^{2}\omega+c_{3}\omega^{3} = g_{\omega}\left(\rho_{p}+\rho_{n}\right)\;,\] \[m_{\rho}^{2}\rho = g_{\rho}\left(\rho_{p}-\rho_{n}\right)\;, \tag{3}\]
where \(\rho_{n(p)}\) and \(\rho_{n,s(p,s)}\) are, respectively, the density and scalar density of neutron (proton)
\[\rho_{n(p),s} = \frac{m_{N}^{\star 3}}{\pi^{2}}\int_{0}^{t_{n(p)}}\mathrm{d}x \frac{x^{2}}{\sqrt{1+x^{2}}} \tag{4}\] \[= \frac{m_{N}^{\star 3}}{\pi^{2}}\left[\frac{1}{2}\left(t_{n(p)} \sqrt{1+t_{n(p)}^{2}}-\mathrm{arcsinh}t_{n(p)}\right)\right]\;,\]
with \(m_{N}^{\star}=M+g_{\sigma}\sigma\) being the effective mass of nucleons and \(t_{n(p)}=\frac{(3\pi^{2}\rho_{n(p)})^{1/3}}{m_{N}^{\star}}\). After determining the expectation values of the meson fields by solving the EoMs (3), the energy density \({\cal E}\) can be obtained by calculating the Hamiltonian, and the pressure \(P\) can be obtained via \(P=-{\cal E}+\rho_{N}\frac{\mathrm{d}{\cal E}}{\mathrm{d}\rho_{N}}\).
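For orientation, the same step can also be done with a conventional root finder; the sketch below solves Eq. (3) for symmetric matter at saturation density with placeholder couplings (illustrative values only, not the fitted parameters of Table 2):

```python
import numpy as np
from scipy.optimize import fsolve

M, g_s, g_w, m_s, m_w = 938.0, 9.8, 11.8, 530.0, 782.0      # MeV; illustrative coupling values only
g2, g3, c3 = -1500.0, 1.3, 70.0

def scalar_density(k_f, m_eff):
    """Scalar density of one species, Eq. (4), with Fermi momentum k_f and effective mass m_eff."""
    t = k_f / m_eff
    return m_eff**3 / np.pi**2 * 0.5 * (t * np.sqrt(1.0 + t**2) - np.arcsinh(t))

def eoms(fields, rho_n, rho_p):
    """Residuals of the sigma and omega equations of motion, Eq. (3)."""
    sigma, omega = fields
    m_eff = M + g_s * sigma
    rho_s = scalar_density((3 * np.pi**2 * rho_n)**(1 / 3), m_eff) \
          + scalar_density((3 * np.pi**2 * rho_p)**(1 / 3), m_eff)
    return [m_s**2 * sigma + g2 * sigma**2 + g3 * sigma**3 + g_s * rho_s,
            m_w**2 * omega + c3 * omega**3 - g_w * (rho_n + rho_p)]

rho0 = 0.16 * 197.327**3                                     # n0 converted from fm^-3 to MeV^3 (hbar c = 197.327 MeV fm)
sigma0, omega0 = fsolve(eoms, x0=[-20.0, 10.0], args=(rho0 / 2, rho0 / 2))
print(sigma0, omega0)                                        # sigma < 0 and omega > 0, as expected physically
```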
Usually, in a model of nuclear matter, the solution of the EoMs and the prediction of the EoS suffer from the constraints from the nuclear matter properties around saturation density and from the mathematical structure of the EoMs. From the mathematical structure of the EoMs (3), it can be seen that even in the simple Model (1) the EoMs have a multi-root problem, which seemingly leads to an NN convergence problem. However, one can expect the results given by the NN to be smooth and physical for the following mathematical reasons: (i) the identity theorem of analytic functions; (ii) the full-rank property of the coefficient matrix of the equation group. Therefore, the solution planes of Eq. (3) will only intersect or be tangent if the natural condition is assumed. In either case, the solution plane given by the NN will be defined uniquely by choosing a starting point, which will be the origin in the field-expectation solution space in the following discussions.
In the explicit calculation, the computation module of the NN#1 used to solve the above equation group is based on forty fully connected layers, where there is a batch normalization layer between every two fully connected layers to accelerate the training [49]. The fully connected layers are constructed as in Fig. 2. The loss function of this NN is defined as
Footnote #1: The NN in this work is built on PyTorch platform. [48]
\[f_{\mathrm{loss},\sigma} = \left[m_{\sigma}^{2}\sigma+g_{2}\sigma^{2}+g_{3}\sigma^{3}+g_{ \sigma}\left(\rho_{n,s}+\rho_{p,s}\right)\right]^{2}\;,\] \[f_{\mathrm{loss},\omega} = \left[m_{\omega}^{2}\omega+c_{3}\omega^{3}-g_{\omega}\left(\rho_{ p}+\rho_{n}\right)\right]^{2}\;, \tag{5}\]
which enhances the ability of the NN to give a smooth and physical prediction. The pre-training set is generated randomly in the region \(\left\{\rho_{n},\;\rho_{p}\right\}\in\left[\left\{0,0\right\},\left\{10n_{0},10n_{0}\right\}\right]\) with the distribution function
\(\mathcal{P}(\rho_{n},\rho_{p})=\frac{\sqrt{10}}{20}\sqrt{\frac{n_{0}}{\rho_{n}+\rho_{p}}}\) in order to improve the ability to describe the low-density region. The learning process is done with the help of the Adam algorithm [51]. Based on the EoMs solved in this way, the EoS can be obtained from Lagrangian (1), and the star properties can be calculated by solving the Tolman-Oppenheimer-Volkoff (TOV) equation [52; 53].
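A compressed PyTorch sketch of such a computation module is given below (far fewer layers than the forty used in the paper, placeholder couplings, and toy density samples; it only illustrates the layer structure of Fig. 2 and the physics-residual loss of Eq. (5)):

```python
import math
import torch
import torch.nn as nn

class MesonFieldNet(nn.Module):
    """Maps (rho_n, rho_p) to the sigma and omega field expectation values."""
    def __init__(self, width=64, depth=6):                   # the paper uses forty fully connected layers
        super().__init__()
        layers, dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.BatchNorm1d(width), nn.ReLU()]
            dim = width
        layers.append(nn.Linear(dim, 2))
        self.net = nn.Sequential(*layers)

    def forward(self, rho):
        return self.net(rho)

def scalar_density(rho, sigma, pars):
    """Eq. (4), written with torch ops so it stays differentiable in sigma."""
    m_eff = pars["M"] + pars["g_s"] * sigma
    t = (3 * math.pi**2 * rho).pow(1 / 3) / m_eff.unsqueeze(1)
    per_species = m_eff.unsqueeze(1)**3 / math.pi**2 * 0.5 * (t * torch.sqrt(1 + t**2) - torch.asinh(t))
    return per_species.sum(dim=1)

def eom_residual_loss(fields, rho, pars):
    """Squared residuals of the sigma and omega equations of motion, as in Eq. (5)."""
    sigma, omega = fields[:, 0], fields[:, 1]
    rho_s = scalar_density(rho, sigma, pars)
    f_sigma = pars["m_s"]**2 * sigma + pars["g2"] * sigma**2 + pars["g3"] * sigma**3 + pars["g_s"] * rho_s
    f_omega = pars["m_w"]**2 * omega + pars["c3"] * omega**3 - pars["g_w"] * rho.sum(dim=1)
    return (f_sigma**2 + f_omega**2).mean()

pars = {"M": 938.0, "g_s": 9.8, "g_w": 11.8, "m_s": 530.0, "m_w": 782.0,
        "g2": -1500.0, "g3": 1.3, "c3": 70.0}                # placeholder couplings in MeV units
model = MesonFieldNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rho = torch.rand(256, 2) * 1.0e6                             # toy samples; the paper draws them with P ~ sqrt(n0/(rho_n+rho_p))
loss = eom_residual_loss(model(rho), rho, pars)
loss.backward()
opt.step()
print(float(loss))
```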
In the numerical calculation using the NN constructed above, we impose the constraints from the neutron star properties and the nuclear matter properties around saturation density. Our choices are listed in Table 1; they are estimated from [54; 55; 56; 34; 57; 58].
With respect to the constraints shown in Table 1, we are ready to calculate the parameters of Model (1) using the NN constructed in this work. Our results for the optimal solutions are
Figure 2: Layers to solve EoMs of meson fields. ReLU is used as activation function [50]. The layers construction for \(\rho\) field is ignored in current work for simplicity, since its EoM in Eq. (3) is just a linear function of nucleon density.
shown in Table 2#2. The optimal values of the parameter set lead to the NM properties shown in Table 1 and the corresponding MR relation illustrated in Fig. 3. From the NM properties and the MR relation, one can see that the optimal values from NN are globally consistent with the empirical values and the observations from GW170817. This demonstrates the rationality of the NN approach.
Footnote #2: The weight of parameter point is defined as \(\mathcal{W}=\prod\mathcal{P}_{i}\), where \(\mathcal{P}\) refers to probability density and \(i\) represents physical quantities.
We next plot the regions of the NM properties and the MR relations from the predicted parameter space with 10.1% confidence level within the data error band #3. In Fig. 4, the bands of the NM properties with and without GW170817 data are presented. One can see that the inclusion of the constraints from GW170817 shrinks the bands and softens the EoS globally. This observation shows the necessity to analyze both the constraints from NS relations and NM properties simultaneously.
Footnote #3: The confidence level is calculated by \(P_{c}=\int_{V}J\mathcal{W}\mathrm{d}V\), where the measure \(V\) is the parameter space region with \(O\) being the optimal point and \(J\) refers to the determinant of Jacobian matrix mapping data space to parameter space.
From Fig. 3 and Table 1, one can see an interesting fact: the quantities predicted by the NN with 7 free parameters are not located at the central values of the 6 constraints. The possible reasons are: (i) the parameter space is redundant; (ii) the number of degrees of freedom in the data exceeds 7. The redundancy of the parameter space means that at least two parameters in this system can be expressed by one
| \(g_{\sigma}\) | \(g_{\omega}\) | \(g_{\rho}\) | \(g_{3}\) | \(c_{3}\) | \(g_{2}\) | \(m_{\sigma}\) |
| --- | --- | --- | --- | --- | --- | --- |
| 9.82 | 11.8 | 3.42 | 1.26 | 72.6 | \(-1550\) MeV | 531 MeV |

Table 2: Optimal values of the parameters calculated from the NN. The empirical values \(M_{N}=938\) MeV, \(m_{\omega}=782\) MeV and \(m_{\rho}=765\) MeV are physical vacuum values taken as inputs for simplicity.
Figure 3: MR relation obtained by the optimal solution.
parameter, which would indicate that the optimal solution is actually an open area rather than a specific point in the parameter space. After scanning the area around the optimal solution listed in Table 2, we find that it is indeed a point, which excludes the possibility of redundancy of the parameter space. The remaining reason is then that the dimension of the data space is larger than 7. This is imaginable since the MR relation is obtained via the integral of the EoS over the corresponding density interval, which means that the NS properties are affected by the whole EoS line shape, not by some specific points. Meanwhile, from the NM properties and MR relations obtained in this work and shown in Fig. 4, it is found that the statistical significance of the NM properties in the Bayesian analysis is reduced by including the constraints of the MR relation, because the area obtained in the case with the MR constraint deviates more from the \(e_{0}\) and \(E_{\text{sym}}\) constraints but meets the MR constraint better. It again shows the need to take care of both NM and NS properties simultaneously in nuclear force studies. Besides, it is found that the MR lines lie only on the right side of the constraint in Fig. 4, which provides evidence of the NN's ability to identify data-favored models.
Figure 4: Regions of the NM properties and MR relations predicted by NN.
Since parameter space is high dimensional, it's hard to fully describe the error bands of parameters. To have an idea of the distribution of a certain parameter, we vary it in the parameter space while fixing the others at the optimal values #4. The distributions of the parameters are shown in Fig. 5 and it can be seen that the probability density distributions of parameters are not regular normal distributions, which indicates the correlations between parameters, especially those multi-meson couplings.
Footnote #4: The number of events is set to be 10000.
## III Summary and Outlook
An NN platform is constructed for analysis on bands of strong interaction parameters with respect to both NS properties and NM properties. It has three main modules:
1. The computation module is used to solve the coupled EoMs to obtain the corresponding physical quantities, and the loss function used during the training process is the EoM itself, so that this kind of process can be applied to other physical problems straightforwardly;
2. The Bayesian module is used to calculate the confidence level of the parameter space given by the computation module with reasonable physical error bars;
3. The self-supervised module, the most important part, gives the platform the ability to search parameter space automatically and adaptively with the advantages of NN, and will try to find the optimal solutions of parameters.
Figure 5: Distributions of the parameters estimated by NN.
As an example, we apply the constructed NN to a specific nucleon force model to show that the platform is able to give a reasonable prediction of parameter space. The numerical results suggest the necessity to analyze NS properties with the NM properties simultaneously since the predicted parameter space with only NM properties usually makes it hard to describe the MR relations. The results also indicate the NN platform's ability to identify data favored theoretical parametrization with the help of the optimal solution's confidence level, \(P_{c}\). The NN applied in this example is modularly designed so that the computation module can be adjusted to solve other RMF EoS and the other two modules can be applied directly to find the optimal parameter space for corresponding physical data.
In our future work, other NS and NM properties, such as tidal deformations, the symmetry energy density slope, and the skewness coefficient, will be added to the Bayesian module to provide more rigorous constraints on the parameters, and the algorithms involved will be replaced by AI-governed ones to improve efficiency and adaptivity. A correlation analysis of the parameter space will be carried out to distinguish the parameters carrying the leading-order physical contributions from those that merely introduce noise. The algorithms built in this work will be applied to other \(\chi\)EFTs, such as the setup in Ref. [59], which are nonlinear realizations of QCD symmetry. The constraints on the LECs of these \(\chi\)EFTs can be of great help in understanding non-perturbative properties of QCD, such as the quark condensate and the trace anomaly in the low-energy and dense regions.
In modern physics, multi-source data handling and error propagation are becoming increasingly important for understanding EFTs and for providing rigorous constraints on possible new physics. The statistical analysis built into this platform can be extended to other model-verification processes, especially those with massive data or complex mappings between physical quantities and parameters. We hope it provides a heuristic approach for theoretical physics research.
###### Acknowledgements.
The work of Y. L. M. is supported in part by the National Key R&D Program of China under Grant No. 2021YFC2202900 and the National Science Foundation of China (NSFC) under Grant No. 11875147 and No. 12147103.
|
2309.09468 | Thermodynamics of imbibition in capillaries of double conical
structures-Hourglass, diamond, and sawtooth shaped capillaries- | Thermodynamics of imbibition (intrusion and extrusion) in capillaries of
double conical structures is theoretically studied using the classical
capillary model. By extending the knowledge of the thermodynamics of a single
conical capillary, not only the nature of spontaneous imbibition but that of
forced imbibition under applied external pressure are clarified. Spontaneous
imbibition in capillaries of double conical structure can be predicted from the
Laplace pressure in a single conical capillary. To understand the forced
imbibition process, the free energy landscape along the imbibition pathway is
calculated. This landscape shows either a maximum or a minimum. The former acts
as the energy barrier and the latter acts as the trap for the liquid-vapor
meniscus so that the imbibition process can be either abrupt with a pressure
hysteresis or gradual and continuous. The landscape also predicts a completely
filled, a half-filled and a completely empty state as the thermodynamically
stable state. Furthermore, it also predicts a completely filled and a
half-filled state of metastable liquid which can be prepared by the combination
of the intrusion and the extrusion process. Our study could be useful for
understanding various natural fluidic systems and for designing functional
fluidic devices such as a diode, a switch etc. | Masao Iwamatsu | 2023-09-18T04:02:06Z | http://arxiv.org/abs/2309.09468v1 | Thermodynamics of imbibition in capillaries of double conical structures-Hourglass, diamond, and sawtooth shaped capillaries-
###### Abstract
Thermodynamics of imbibition (intrusion and extrusion) in capillaries of double conical structures is theoretically studied using the classical capillary model. By extending the knowledge of the thermodynamics of a single conical capillary, not only the nature of spontaneous imbibition but that of forced imbibition under applied external pressure are clarified. Spontaneous imbibition in capillaries of double conical structure can be predicted from the Laplace pressure in a single conical capillary. To understand the forced imbibition process, the free energy landscape along the imbibition pathway is calculated. This landscape shows either a maximum or a minimum. The former acts as the energy barrier and the latter acts as the trap for the liquid-vapor meniscus so that the imbibition process can be either abrupt with a pressure hysteresis or gradual and continuous. The landscape also predicts a completely filled, a half-filled and a completely empty state as the thermodynamically stable state. Furthermore, it also predicts a completely filled and a half-filled state of metastable liquid which can be prepared by the combination of the intrusion and the extrusion process. Our study could be useful for understanding various natural fluidic systems and for designing functional fluidic devices such as a diode, a switch etc.
## I Introduction
Imbibition (intrusion and extrusion) of liquid in microscale and nanoscale capillaries is one of the most fundamental problems of the thermodynamics of liquids in confined spaces, not only in various fields of natural science [1; 2; 3; 4; 5; 6] but also in various micro- and nano-scale engineering problems [7; 8; 9; 10; 11; 12; 13]. Recently, there has been growing interest in the problem of imbibition in asymmetric capillaries with a geometrical gradient [5] because it is relevant to the engineering [14; 15; 16] of various micro- and nano-fluidic functional devices.
Among various asymmetric capillaries, the conical and double-conical capillaries [17] illustrated in Fig. 1 are the basic elements of many natural as well as artificial systems. In particular, truncated conical capillaries have been extensively studied as the simplest model for the effect of a geometrical gradient and, in particular, as a model of imbibition into porous substrates [5]. They have also been studied for their potential applications as micro- and nano-fluidic devices [18; 19; 20; 21; 22; 12; 23] such as liquid diodes [12; 20; 21; 16], ionic current rectifiers [18], pumps [19], Janus paper [22], and water harvesting [23]. Carbon nanocones seem to be the most promising candidates for conical nanocapillaries [23; 24].
The capillaries in Fig. 1 with double conical structures, which consist of a converging and a diverging conical section, have also attracted intensive attention recently. For example, _converging-diverging_ hourglass shaped capillaries illustrated in Fig. 1(a) have been studied as the simplest model of the biological aquaporin [25] and of its biomimetic artificial counterparts such as filters [26], pumps [27], gates [28] and rectifiers [29]. To fabricate hourglass shaped capillaries, the mechanical deformation of carbon nanotubes has been considered [30; 31]. In addition to the flow physics in converging-diverging capillaries [32], the flow physics in the similar _diverging-converging_ diamond shaped capillaries (Fig. 1(b)) has attracted some attention as a model fluidic device [33] and as a theoretical conceptual tool [34].
In addition to the converging-diverging hourglass shaped and diverging-converging diamond shaped structures, the _converging-converging_ sawtooth shaped structure (sawtooth-1 in Fig. 1(c)) and the _diverging-diverging_ sawtooth shaped structure (sawtooth-2 in Fig. 1(d)) have attracted attention [1; 2; 14; 15; 16] for their ratchet-like structure, which is expected to realize unidirectional transport.
Although a large amount of literature on double conical capillaries has already been accumulated [5; 6; 7; 8; 9; 11; 12; 13], most of the theoretical works studied transport properties numerically using macroscopic fluid dynamic equations [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33] or atomistic molecular dynamics simulations [23; 24; 28; 29], which are limited to the sub-nanometer scale. Relatively few studies based on thermodynamics have been conducted to understand the quasi-static imbibition processes [35; 36; 37; 38; 39; 4]. In particular, pressure-controlled intrusion and extrusion, or infiltration and defiltration, and the associated infiltration pressure [40; 41; 42; 43] have been studied. In our previous studies [21; 39], we used the thermodynamic approach to determine the criterion for the appearance of a diode-like character (one-way transport) in a single conical capillary. In contrast, many researchers [44, 45, 20, 46] used a hydrodynamic approach [47] and identified the diode-like character of conical capillaries from the time scale of the flow. However, such hydrodynamic studies can be meaningful only when spontaneous imbibition (intrusion) without an external applied pressure is realized and a steady flow is established.
Furthermore, we found that the free energy landscape shows either a maximum, which acts as a barrier, or a minimum, which acts as a trap, so that the imbibition can be either an abrupt transition with a hysteresis or a gradual, continuous transition [21, 39]. This free energy maximum originates from the conical geometry and is not related to the nucleation barrier of capillary condensation [35, 36] because the free energy landscape is evaluated by assuming a continuous intrusion of liquid from one end of the capillary [21, 39].
In this paper, we extend our previous studies of a single conical capillary [21, 39] and consider the thermodynamics of imbibition in double conical capillaries. Here, the terminology "imbibition" is used collectively to mean the "intrusion" and "extrusion" of liquid. To this end, we consider not only the Laplace pressure [35, 38, 21], which is the main driving force of capillary flow [48, 49], but also the free energy landscape along the pathway of imbibition [39, 3, 21] under an applied external (infiltration) pressure. Our results will be useful for assessing the potential of various double conical capillaries as functional liquid devices operated by pressure-controlled imbibition under the action of the infiltration pressure [40, 41, 42, 43, 10].
## II Imbibition in a converging and a diverging single conical capillary
### Morphological thermodynamics of imbibition
In this section we reconsider and extend our previous studies [21, 39] of the classical capillary model of imbibition (intrusion and extrusion) in a single conical capillary. Though the classical capillary model, which is the simplest case of the morphological thermodynamic approach [50, 51], is macroscopic, it is believed to be valid down to the nanoscale [6] and gives useful information even for micro- and nano-scale phenomena. In this classical model, the surface free energy \(F\) comprises the free liquid-vapor surface energy \(F_{\rm{lv}}=\gamma_{\rm{lv}}S_{\rm{lv}}\) and the liquid-solid surface energy of the capillary wall wetted by the liquid, \(F_{\rm{sl}}=\gamma_{\rm{lv}}\cos\theta_{\rm{Y}}S_{\rm{sl}}\). The total surface free energy is given by
\[F=F_{\rm{lv}}-F_{\rm{sl}}=\gamma_{\rm{lv}}S_{\rm{lv}}-\gamma_{\rm{lv}}\cos \theta_{\rm{Y}}S_{\rm{sl}}, \tag{1}\]
where \(\gamma_{\rm{lv}}\) and \(S_{\rm{lv}}\) represent the liquid-vapor surface tension and the surface area, respectively, and \(S_{\rm{sl}}\) is the solid-liquid (wet) surface area (see Tab. 1 for the complete list of symbols and their description.). The angle \(\theta_{\rm{Y}}\) is Young's contact angle defined by Young's equation, which is expressed as:
\[\cos\theta_{\rm{Y}}=\frac{\gamma_{\rm{Sv}}-\gamma_{\rm{sl}}}{\gamma_{\rm{lv}}}, \tag{2}\]
where \(\gamma_{\rm{Sv}}\) and \(\gamma_{\rm{sl}}\) represent the solid-vapor and the solid-liquid surface tensions, respectively. This Young's contact angle characterizes the wettability of the capillary wall. Here, we neglect the effect of gravity since we consider capillaries whose diameters are smaller than the capillary length. We also neglect the contribution of the line tension, which could play some role at the nanoscale.
Even though we will not consider the nucleation of liquid droplets or vapor bubbles in the middle of capillary, we will use the terminology "vapor" instead of "gas" throughout this paper. In fact, our model system can be applicable to any binary system of immiscible fluids including non-volatile liquid and gas systems.
We consider an axially symmetric capillary around the \(z\) axis whose inlet is at \(z=0\). We borrow the concept of transient state of the classical nucleation theory [4, 6], and assume an imbibition pathway along the capillary with a constant Young's contact angle [6, 21, 39]. Then, the solid-liquid surface free energy
when the liquid-vapor interface reaches \(z\) is given by [39]
\[F_{\rm{sl}} = 2\pi\gamma_{\rm{lv}}\cos\theta_{\rm{Y}}\int_{0}^{z}R(z^{\prime}) \sqrt{1+\left(\frac{dR}{dz^{\prime}}\right)^{2}}dz^{\prime}, \tag{3}\]
where \(R(z)\) is the radius of the capillary at \(z\). The liquid-vapor surface free energy is given by
\[F_{\rm{lv}}=2\pi\gamma_{\rm{lv}}R(z)^{2}\frac{1-\cos\psi}{\sin^{2}\psi}\simeq \pi\gamma_{\rm{lv}}R(z)^{2}, \tag{4}\]
where the half-opening angle \(\psi\) defined in Fig. 2 is approximated by \(\psi=0\) to simplify mathematics. Therefore, a spherical interface is replaced by a flat one [39] because inclusion of the spherical interface gives only a small correction which can be included by regarding the capillary radius \(R(z)\) as an effective radius [21].
The total liquid volume inside the capillary is also approximately given by
\[V(z)=\pi\int_{0}^{z}R(z^{\prime})^{2}dz^{\prime}+\frac{\pi}{3}R(z)^{3}\nu \left(\psi\right)\simeq\pi\int_{0}^{z}R(z^{\prime})^{2}dz^{\prime} \tag{5}\]
where a small volume correction of a spherical cap [21] (Fig. 2)
\[\nu\left(\psi\right)=\frac{\left(1-\cos\psi\right)^{2}\left(2+\cos\psi\right) }{\sin^{3}\psi} \tag{6}\]
is neglected. Since the surface free energy \(F(z)\) in Eq. (1) is simply given as a function of \(z\) by combining Eqs. (3) and (4), and the liquid volume \(V(z)\) is given by Eq. (5), the liquid pressure \(p_{\rm{L}}(z)\) defined by
\[p_{\rm{L}}(z)=-\frac{\partial F}{\partial V}=-\frac{1}{dV/dz}\frac{\partial F (z)}{\partial z}, \tag{7}\]
can be analytically calculated as
\[p_{\rm{L}}(z)=\frac{2\gamma_{\rm{lv}}}{R(z)}\left(\cos\theta_{\rm{Y}}\sqrt{1+ \left(\frac{dR}{dz}\right)^{2}}-\frac{dR}{dz}\right) \tag{8}\]
which reduces to the standard Laplace pressure
\[p_{\rm{L}}(z)=\frac{2\gamma_{\rm{lv}}\cos\theta_{\rm{Y}}}{R} \tag{9}\]
in straight cylinders (\(R(z)=R\), \(dR/dz=0\)).
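As a minimal numerical illustration of Eq. (8) (not part of the original analysis), the sketch below evaluates the modified Laplace pressure along a converging conical capillary; the water-like surface tension and the geometric values are assumptions chosen only for illustration.

```python
# Illustrative evaluation of the modified Laplace pressure of Eq. (8).
import numpy as np

GAMMA_LV = 0.072      # liquid-vapor surface tension [J/m^2], water-like value

def laplace_pressure(R, dRdz, theta_Y):
    """Eq. (8): local radius R [m], wall slope dR/dz, Young's angle theta_Y [rad]."""
    return (2.0 * GAMMA_LV / R) * (np.cos(theta_Y) * np.sqrt(1.0 + dRdz**2) - dRdz)

# Converging capillary R_C(z) = R_C(0) - tan(phi) z, cf. Eq. (10)
phi = np.deg2rad(10.0)
R0, H = 1.0e-7, 4.0e-7                      # R_C(0) = 0.1 um, H = 0.4 um (eta_C = 4)
z = np.linspace(0.0, 0.99 * H, 5)
R_C = R0 - np.tan(phi) * z
p_C = laplace_pressure(R_C, -np.tan(phi), np.deg2rad(95.0))   # [Pa], positive here
```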
Now, we consider two conical capillaries of identical shape with either a converging or a diverging radius (Fig. 3) \(R_{\rm{C}}(z)\) or \(R_{\rm{D}}(z)\) given by:
\[R_{\rm{C}}(z) = R_{\rm{C}}(0)-\left(\tan\phi\right)z,\;\;\left(0\leq z\leq H \right), \tag{10}\] \[R_{\rm{D}}(z) = R_{\rm{D}}(0)+\left(\tan\phi\right)z,\;\;\left(0\leq z\leq H \right), \tag{11}\]
where \(\phi\) (\(0\leq\phi\leq 90^{\circ}\)) is the tilt angle of the wall, \(R_{\rm{C}}(0)=R_{\rm{C}}(z=0)\) and \(R_{\rm{D}}(0)=R_{\rm{D}}(z=0)\) are the radii at the inlet (\(R_{\rm{C}}(0)>R_{\rm{D}}(0)\)), and \(H\) is the length of the capillary (Fig. 3). Hence, the capillary parameters in Eqs. (10) and (11) are related by
\[R_{\rm{D}}(0)=R_{\rm{C}}(0)-\left(\tan\phi\right)H. \tag{12}\]
To study the imbibition, we have to specify the geometry of the conical capillaries. We select the tilt angle \(\phi\) and the aspect ratio
\[\eta_{\rm{C}}=\frac{H}{R_{\rm{C}}(0)} \tag{13}\]
as the two fundamental parameters to specify the geometry. Therefore, another aspect ratio
\[\eta_{\rm{D}}=\frac{H}{R_{\rm{D}}(0)} \tag{14}\]
Figure 2: Spherical meniscus of liquid in a capillary axially symmetric around \(z\) axis with a varying radius \(R(z)\). The half-opening angle \(\psi\) of the spherical interface is related to the tilt angle \(\phi\) and Young’s contact angle \(\theta_{\rm{Y}}\) of the capillary wall. Here, we show the convex meniscus with the contact angle \(\theta_{\rm{Y}}\) larger than \(90^{\circ}\). In fact, the meniscus must be concave to make the driving Laplace pressure positive and the spontaneous imbibition possible. We will neglect the spherical cap and approximate a spherical meniscus by a flat one so that small corrections by the spherical cap to the liquid volume and the liquid-vapor surface area are neglected.
Figure 3: Two axially symmetric conical capillaries with (a) a converging radius and (b) a diverging radius. The inlet radius of the converging capillary and that of the diverging capillary are \(R_{\rm{C}}(0)\) and \(R_{\rm{D}}(0)\), respectively. The length of the capillary is \(H\) and the tilt angle of the wall is \(\phi\). The inlet at the left is immersed in the liquid reservoir with the liquid pressure \(p_{\rm{l}}\) and the outlet at the right is immersed in the vapor reservoir with the vapor pressure \(p_{\rm{v}}\). The liquid intrudes from the left to the right. When the liquid intrusion occurs without an applied external pressure, \(p_{\rm{ext}}=p_{\rm{l}}-p_{\rm{v}}=0\), the imbibition (intrusion) is spontaneous. The forced liquid intrusion and extrusion occur by applying an external pressure \(p_{\rm{ext}}\neq 0\).
is determined from the two fundamental parameters by
\[\frac{1}{\eta_{\rm D}}=\frac{1}{\eta_{\rm C}}-\tan\phi \tag{15}\]
from Eq. (12). Furthermore, these two fundamental parameters are not independent owing to geometrical constraint \(R_{\rm D}(0)\geq 0\) and they satisfy
\[0<\eta_{\rm C}\leq\frac{1}{\tan\phi}, \tag{16}\]
where the equality holds when the capillary is a true cone with \(R_{\rm D}(0)=0\). Figure 4 presents the maximum aspect ratio \(\eta_{\rm C}=1/\tan\phi\) for given tilt angle \(\phi\). A large aspect ratio \(\eta_{\rm C}\) is possible only when the tilt angle \(\phi\) is low.
The capillary pressure in Eq. (8) becomes a modified Laplace pressure written as
\[p_{\rm L(i)}(z)=\frac{2\gamma_{\rm lv}\Pi_{i}(\theta_{\rm Y},\phi)}{R_{i}(z)}, \tag{17}\]
where the index \(i={\rm C}\), \({\rm D}\) distinguishes the "Converging" and the "Diverging" geometry, and the scaled pressure \(\Pi_{i}\) is given by
\[\Pi_{\rm C}\left(\theta_{\rm Y},\phi\right) = \frac{\cos\theta_{\rm Y}}{\cos\phi}+\tan\phi, \tag{18}\] \[\Pi_{\rm D}\left(\theta_{\rm Y},\phi\right) = \frac{\cos\theta_{\rm Y}}{\cos\phi}-\tan\phi \tag{19}\]
from Eqs. (10) and (11), which determine the sign and the magnitude of the modified Laplace pressure in conical capillaries.
A more accurate pressure formula, which takes into account the spherical liquid-vapor interface (Fig. 2), was derived in our previous paper [21], where the pore radius \(R(z)\) in Eq. (17) is replaced by an effective radius corrected by the small volume of the spherical cap and the scaled pressures in Eqs. (18) and (19) are replaced by [21]
\[\Pi_{\rm C}\left(\theta_{\rm Y},\phi\right) = \frac{\cos\theta_{\rm Y}}{\cos\phi}+\frac{2\tan\phi}{1+\sin(\theta _{\rm Y}-\phi)}, \tag{20}\] \[\Pi_{\rm D}\left(\theta_{\rm Y},\phi\right) = \frac{\cos\theta_{\rm Y}}{\cos\phi}-\frac{2\tan\phi}{1+\sin(\theta _{\rm Y}+\phi)}. \tag{21}\]
We can recover Eqs. (18) and (19) by setting \(\psi=0\) (Fig. 2), i.e., \(\theta_{\rm Y}-\phi=90^{\circ}\) in Eq. (20) and \(\theta_{\rm Y}+\phi=90^{\circ}\) in Eq. (21).
Figure 5 presents the exact (Eqs. (20) and (21)) and the approximate (Eqs. (18) and (19)) scaled pressures \(\Pi_{\rm C}\left(\theta_{\rm Y},\phi\right)\) and \(\Pi_{\rm D}\left(\theta_{\rm Y},\phi\right)\) as functions of Young's contact angle \(\theta_{\rm Y}\) for a low tilt angle \(\phi=10^{\circ}\) and a high tilt angle \(\phi=30^{\circ}\). They obey the symmetries \(\Pi_{\rm C}\left(\pi-\theta_{\rm Y},\phi\right)=-\Pi_{\rm D}\left(\theta_{\rm Y},\phi\right)\) and \(\Pi_{\rm C}\left(\theta_{\rm Y},-\phi\right)=\Pi_{\rm D}\left(\theta_{\rm Y},\phi\right)\). The exact and the approximate curves do not differ appreciably unless the tilt angle \(\phi\) and Young's contact angle \(\theta_{\rm Y}\) are large. In particular, the two curves cross zero at exactly the same critical Young's angles \(\theta_{\rm c(C)}\) and \(\theta_{\rm c(D)}\), where the capillary pressure in Eq. (17) vanishes. In order to keep our model as simple as possible, we will continue to use the approximate formulas in Eqs. (4) and (5), which lead to Eqs. (18) and (19).
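The comparison just described can be reproduced with the short sketch below (illustrative only, using the same tilt angle as in the text); it evaluates both the approximate and the exact scaled pressures and locates their common zero crossings numerically.

```python
# Approximate (Eqs. (18)-(19)) vs. exact (Eqs. (20)-(21)) scaled pressures.
import numpy as np

def Pi_C_approx(t, phi): return np.cos(t) / np.cos(phi) + np.tan(phi)
def Pi_D_approx(t, phi): return np.cos(t) / np.cos(phi) - np.tan(phi)
def Pi_C_exact(t, phi):  return np.cos(t) / np.cos(phi) + 2*np.tan(phi) / (1 + np.sin(t - phi))
def Pi_D_exact(t, phi):  return np.cos(t) / np.cos(phi) - 2*np.tan(phi) / (1 + np.sin(t + phi))

phi = np.deg2rad(10.0)
theta = np.deg2rad(np.linspace(0.0, 180.0, 3601))
# Both forms change sign at the same critical Young's angles,
# theta_c(C) = 90 deg + phi and theta_c(D) = 90 deg - phi.
i_C = np.argmin(np.abs(Pi_C_approx(theta, phi)))
i_D = np.argmin(np.abs(Pi_D_approx(theta, phi)))
print(np.rad2deg(theta[i_C]), np.rad2deg(theta[i_D]))   # ~100.0 and ~80.0 degrees
```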
Since the modified Laplace pressure \(p_{\rm L(i)}(z)\) acts as the driving force of liquid intrusion, spontaneous intrusion is possible when the scaled pressure \(\Pi_{i}\left(\theta_{\rm Y},\phi\right)\) is positive (see Fig. 5). This occurs when Young's angle \(\theta_{\rm Y}\) is smaller than the critical Young's angle \(\theta_{\rm c(i)}\) (\(\theta_{\rm Y}<\theta_{\rm c(i)}\)). Consequently, imbibition in a converging capillary is possible but that in a diverging capillary is prohibited when \(\theta_{\rm c(D)}<\theta_{\rm Y}<\theta_{\rm c(C)}\).
Figure 4: Maximum aspect ratio \(\eta_{\rm C}=1/\tan\phi\) as a function of the tilt angle \(\phi\). Two fundamental parameters \(\phi\) and \(\eta_{\rm C}\) are not independent. The region above the curve is not allowed. A long capillary with high aspect ratio \(\eta_{\rm C}=H/R_{\rm C}(0)\) is possible only when the tilt angle \(\phi\) is low.
The two critical angles, \(\theta_{\rm c(C)}=90^{\circ}+\phi\) in Eq. (22) and \(\theta_{\rm c(D)}=90^{\circ}-\phi\) in Eq. (23), divide the \((\phi,\theta_{\rm Y})\) space in Fig. 6 into three regions I, II and III. In region I, the spontaneous liquid intrusion occurs both in converging capillaries and in diverging capillaries because \(\theta_{\rm Y}<\theta_{\rm c(D)}<\theta_{\rm c(C)}\). In region II, the spontaneous intrusion is possible only in converging capillaries and that in diverging capillaries is prohibited because \(\theta_{\rm c(D)}<\theta_{\rm Y}<\theta_{\rm c(C)}\). Therefore, in region II, a single conical capillary functions as a liquid diode [20; 21]. From Fig. 6, a larger tilt angle \(\phi\) is advantageous because it expands region II. However, it requires a short capillary length \(H\) with a low aspect ratio \(\eta_{\rm C}\) from Fig. 4. In converging conical capillaries, furthermore, the spontaneous liquid intrusion can occur even if the wall is hydrophobic as long as \(90^{\circ}<\theta_{\rm Y}<\theta_{\rm c(C)}\), i.e., \(90^{\circ}<\theta_{\rm Y}<90^{\circ}+\phi\) from Eq. (22). In fact, the spontaneous intrusion (infiltration) in hydrophobic converging conical capillaries has been observed in molecular dynamics simulations [40].
In the region III, the spontaneous liquid intrusion is prohibited both in converging capillaries and in diverging capillaries because \(\theta_{\rm c(D)}<\theta_{\rm c(C)}<\theta_{\rm Y}\). Only the forced imbibition, which is realized by applying the external (infiltration) pressure (Fig. 3) to liquid or vapor, is possible. Further investigation of the free energy landscape [39] is necessary to understand the details of the forced imbibition process.
### Free energy landscape of forced imbibition
When the modified Laplace pressure in Eq. (17) is negative, i.e., \(p_{\rm L(i)}(z)<0\) or \(\Pi_{i}(\theta_{\rm Y},\phi)<0\), the spontaneous liquid intrusion is prohibited. It is necessary to apply a positive external pressure \(p_{\rm ext}=p_{\rm l}-p_{\rm v}>0\) (Fig. 3) to cancel this negative Laplace pressure and force the intrusion of liquid. On the other hand, when the capillary is completely filled by a positive Laplace pressure \(p_{\rm L(i)}(z)>0\), it is necessary to apply a negative external pressure \(p_{\rm ext}<0\) to cancel this positive Laplace pressure and force the extrusion of liquid from the capillary. To determine the magnitude of the applied pressure \(p_{\rm ext}\), we have to understand the free energy landscape of imbibition.
The thermodynamics of forced imbibition process is described by the free energy [4; 21; 39; 50; 51; 55]
\[\Omega_{i}=F_{i}-p_{\rm ext}V_{i}, \tag{24}\]
where \(F_{i}\) is the surface free energy in Eq. (1) and \(V_{i}\) is the liquid volume inside the capillary given by Eq. (5). Then, the driving capillary pressure becomes
\[p_{i}(z)=-\frac{\partial\Omega_{i}}{\partial V_{i}}=p_{\rm ext}+p_{\rm L(i)}(z), \tag{25}\]
where \(p_{\rm L(i)}(z)\) is the modified Laplace pressure in Eq. (17). If the driving pressure \(p_{i}(z)\) is always positive within the capillary (\(0\leq z\leq H\)), the intrusion of liquid into the whole capillary is realized. If the driving pressure is always negative within the capillary, extrusion of liquid from the whole capillary is achieved, and the capillary will be empty and filled by vapor.
The liquid intrusion starts at the inlet (\(z=0\)) when \(p_{i}(z=0)\geq 0\) in Eq. (25). The critical external pressure \(p_{\rm ext}=p_{\rm c(i)}\) is given by the condition \(p_{i}(z)=0\) at \(z=0\), which leads to
\[p_{\rm c(i)}=-p_{\rm L(i)}(0)=-\frac{2\gamma_{\rm lv}\Pi_{i}(\theta_{\rm Y},\phi)}{R_{i}(0)}. \tag{26}\]
Figure 6: Critical Young’s angles \(\theta_{\rm c(C)}\) and \(\theta_{\rm c(D)}\) given by Eqs. (22) and (23) as functions of the tilt angles \(\phi\). The \((\phi,\theta_{\rm Y})\) space is divided into three regions I, II and III by these two lines. In the region II, the spontaneous liquid intrusion is possible only in converging capillaries. In the region I, the spontaneous liquid intrusion is possible in both converging capillaries and diverging capillaries, while it is possible in neither converging capillaries nor diverging capillaries in the region III.
To visualize the pressure and the free energy landscape, we introduce the non-dimensional pressure \(\bar{p}\) and the free energy \(\omega_{i}\) through [39; 21]
\[\Omega_{i}=\gamma_{\rm lv}\pi R_{i}^{2}(0)\omega_{i}(\bar{z}), \tag{27}\]
and the non-dimensional quantities
\[\bar{z}=\frac{z}{H},\ \ \ (0\leq\bar{z}\leq 1) \tag{28}\]
\[\bar{p}=\frac{Hp_{\mathrm{ext}}}{\gamma_{\rm lv}},\ \ \ \alpha_{i}=\frac{H\tan\phi}{R_{i}(0)}=\eta_{i}\tan\phi. \tag{29}\]
Therefore, the non-dimensional critical pressures are given by
\[\bar{p}_{\mathrm{c(C)}} = \frac{Hp_{\mathrm{c(C)}}}{\gamma_{\rm lv}}=-2\eta_{\mathrm{C}}\Pi_{\mathrm{C}}(\theta_{\mathrm{Y}},\phi) \tag{30}\] \[\bar{p}_{\mathrm{c(D)}} = \frac{Hp_{\mathrm{c(D)}}}{\gamma_{\rm lv}}=-2\eta_{\mathrm{D}}\Pi_{\mathrm{D}}(\theta_{\mathrm{Y}},\phi) \tag{31}\]
from Eq. (26).
Figure 7 presents the non-dimensional critical pressure \(\bar{p}_{\mathrm{c(i)}}\) which corresponds to \(p_{\mathrm{c(i)}}\) as a function of Young's angle \(\theta_{\mathrm{Y}}\) when \(\phi=10^{\circ}\) and \(\eta_{\mathrm{C}}=4.0\). It also shows the two other characteristic pressures \(\bar{p}_{\mathrm{e(i)}}\) and \(\bar{p}_{\mathrm{s(i)}}\), whose meaning will be apparent soon. Note that this critical pressure \(p_{c(i)}\) does not represent the entrance barrier pressure due to the potential barrier from atomic interactions [42].
The free energy landscape \(\Omega_{i}\) in Eq. (24) can be analytically calculated from Eqs. (3)-(5) and the non-dimensional free energy \(\omega_{i}\) is given by cubic polynomials of \(\bar{z}\)[39; 21]:
\[\omega_{\mathrm{C}}\left(\bar{z}\right) = \left(\bar{p}_{\mathrm{c(C)}}-\bar{p}\right)\bar{z}-\alpha_{\mathrm{C}}\left(\frac{\bar{p}_{\mathrm{c(C)}}}{2}-\bar{p}\right)\bar{z}^{2}-\frac{1}{3}\alpha_{\mathrm{C}}^{2}\bar{p}\bar{z}^{3}, \tag{32}\] \[\omega_{\mathrm{D}}\left(\bar{z}\right) = \left(\bar{p}_{\mathrm{c(D)}}-\bar{p}\right)\bar{z}+\alpha_{\mathrm{D}}\left(\frac{\bar{p}_{\mathrm{c(D)}}}{2}-\bar{p}\right)\bar{z}^{2}-\frac{1}{3}\alpha_{\mathrm{D}}^{2}\bar{p}\bar{z}^{3}, \tag{33}\]
for the converging and the diverging capillary, where we have dropped the constant free energy from the liquid vapor surface tension when the meniscus is located at the inlet (\(\bar{z}=0\)). Therefore, the origin of the free energy is always zero at \(\bar{z}=0\)[39].
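Curves such as those discussed below (Fig. 8) can be generated from Eqs. (32) and (33) directly; the following sketch is an illustration (not the code used for the paper) with the geometric parameters quoted in the text and an arbitrarily chosen pressure.

```python
# Illustrative evaluation of the landscapes of Eqs. (32)-(33).
import numpy as np

def omega_C(zb, p, p_cC, alpha_C):
    return (p_cC - p)*zb - alpha_C*(p_cC/2 - p)*zb**2 - (alpha_C**2/3)*p*zb**3

def omega_D(zb, p, p_cD, alpha_D):
    return (p_cD - p)*zb + alpha_D*(p_cD/2 - p)*zb**2 - (alpha_D**2/3)*p*zb**3

phi, eta_C = np.deg2rad(10.0), 4.0
eta_D = 1.0 / (1.0/eta_C - np.tan(phi))                   # Eq. (15)
alpha_C, alpha_D = eta_C*np.tan(phi), eta_D*np.tan(phi)   # Eq. (29)

theta = np.deg2rad(120.0)                                 # hydrophobic example
Pi_C = np.cos(theta)/np.cos(phi) + np.tan(phi)            # Eq. (18)
Pi_D = np.cos(theta)/np.cos(phi) - np.tan(phi)            # Eq. (19)
p_cC, p_cD = -2*eta_C*Pi_C, -2*eta_D*Pi_D                 # Eqs. (30)-(31)

zb = np.linspace(0.0, 1.0, 201)
w_C = omega_C(zb, 0.8*p_cC, p_cC, alpha_C)   # below p_c(C): monotonically increasing,
                                             # so the meniscus stays at the inlet
```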
Figure 8 presents the free energy landscape of imbibition along the pathway from \(\bar{z}=0\) (inlet) to \(\bar{z}=1\) (outlet), where the free energies are further scaled as
\[\tilde{\omega}_{\mathrm{C}}\left(\bar{z}\right) = \omega_{\mathrm{C}}\left(\bar{z}\right) \tag{34}\] \[\tilde{\omega}_{\mathrm{D}}\left(\bar{z}\right) = \frac{R_{\mathrm{D}}^{2}(0)}{R_{\mathrm{C}}^{2}(0)}\omega_{ \mathrm{D}}\left(\bar{z}\right)=\left(1-\eta_{\mathrm{C}}\tan\phi\right)^{2} \omega_{\mathrm{D}}\left(\bar{z}\right) \tag{35}\]
to make the scale of vertical axis (energy) common to both the converging capillary and the diverging capillary (see Eq. (27)) since
\[\frac{R_{\mathrm{D}}^{2}(0)}{R_{\mathrm{C}}^{2}(0)}=\left(1-\eta_{\mathrm{C}} \tan\phi\right)^{2} \tag{36}\]
from Eq. (12).
In Fig. 8, we present the free energy landscapes at the selected pressures \(\bar{p}=0\) (spontaneous imbibition), \(\bar{p}_{\mathrm{c(i)}}\), \(\bar{p}_{\mathrm{e(i)}}\) and \(\bar{p}_{\mathrm{s(i)}}\). The first characteristic pressure \(\bar{p}_{\mathrm{c(i)}}\) given by Eqs. (30) and (31) is the critical pressure which characterizes the onset of imbibition at the inlet (\(d\omega_{i}/d\bar{z}|_{\bar{z}=0}=0\)). The second characteristic pressure \(\bar{p}_{\mathrm{e(i)}}\) is given by [39; 21]
\[\tilde{p}_{\mathrm{e(C)}} = \frac{3\left(2-\alpha_{\mathrm{C}}\right)}{2\left(3-3\alpha_{\mathrm{C}}+\alpha_{\mathrm{C}}^{2}\right)}\tilde{p}_{\mathrm{c(C)}}, \tag{37}\] \[\tilde{p}_{\mathrm{e(D)}} = \frac{3\left(2+\alpha_{\mathrm{D}}\right)}{2\left(3+3\alpha_{\mathrm{D}}+\alpha_{\mathrm{D}}^{2}\right)}\tilde{p}_{\mathrm{c(D)}}, \tag{38}\]
where the free energy of the completely empty state and that of the completely filled state become equal, \(\tilde{\omega}\left(\bar{z}=0\right)=\tilde{\omega}\left(\bar{z}=1\right)=0\) (see Fig. 8). This condition is similar to the two-phase coexistence of a first-order phase transition.
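As a short consistency check added here for clarity (written in the \(\bar{p}\) notation of Eqs. (30)-(33)), Eq. (37) follows directly from imposing the coexistence condition \(\omega_{\rm C}(\bar{z}=1)=0\) on Eq. (32),
\[\left(\bar{p}_{\mathrm{c(C)}}-\bar{p}\right)-\alpha_{\mathrm{C}}\left(\frac{\bar{p}_{\mathrm{c(C)}}}{2}-\bar{p}\right)-\frac{\alpha_{\mathrm{C}}^{2}}{3}\bar{p}=0\;\Longrightarrow\;\bar{p}=\frac{1-\alpha_{\mathrm{C}}/2}{1-\alpha_{\mathrm{C}}+\alpha_{\mathrm{C}}^{2}/3}\,\bar{p}_{\mathrm{c(C)}}=\frac{3\left(2-\alpha_{\mathrm{C}}\right)}{2\left(3-3\alpha_{\mathrm{C}}+\alpha_{\mathrm{C}}^{2}\right)}\,\bar{p}_{\mathrm{c(C)}},\]
and the analogous condition \(\omega_{\rm D}(\bar{z}=1)=0\) applied to Eq. (33) yields Eq. (38).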
The third characteristic pressure \(\tilde{p}_{\mathrm{s}(i)}\) given by [39; 21]
\[\tilde{p}_{\mathrm{s}(\mathrm{C})} = \frac{\tilde{p}_{\mathrm{c}(\mathrm{C})}}{1-\alpha_{\mathrm{C}}}, \tag{39}\] \[\tilde{p}_{\mathrm{s}(\mathrm{D})} = \frac{\tilde{p}_{\mathrm{c}(\mathrm{D})}}{1+\alpha_{\mathrm{D}}}, \tag{40}\]
characterizes the stability limit of the intruded liquid at the outlet (\(d\tilde{\omega}/d\tilde{z}|_{\tilde{z}=1}=0\), i.e., at \(z=H\)), where the liquid starts to flow out from the outlet. In fact, these pressures correspond simply to the modified Laplace pressures at the outlet (\(z=H\)):
\[p_{\mathrm{s}(\mathrm{C})} = -p_{\mathrm{L}(\mathrm{C})}(H)=-\frac{2\gamma_{\rm lv}\Pi_{\mathrm{C}}\left(\theta_{\mathrm{Y}},\phi\right)}{R_{\mathrm{C}}(H)}, \tag{41}\] \[p_{\mathrm{s}(\mathrm{D})} = -p_{\mathrm{L}(\mathrm{D})}(H)=-\frac{2\gamma_{\rm lv}\Pi_{\mathrm{D}}\left(\theta_{\mathrm{Y}},\phi\right)}{R_{\mathrm{D}}(H)}, \tag{42}\]
in the original units from Eqs. (10), (11) and (17). Therefore, the characteristic pressures \(\tilde{p}_{\mathrm{c}(i)}\) and \(\tilde{p}_{\mathrm{s}(i)}\) of the converging and the diverging capillary are related by
\[p_{\mathrm{s}(\mathrm{C})} = \frac{\Pi_{\mathrm{C}}\left(\theta_{\mathrm{Y}},\phi\right)}{\Pi_ {\mathrm{D}}\left(\theta_{\mathrm{Y}},\phi\right)}p_{\mathrm{c}(\mathrm{D})}. \tag{43}\] \[p_{\mathrm{s}(\mathrm{D})} = \frac{\Pi_{\mathrm{D}}\left(\theta_{\mathrm{Y}},\phi\right)}{\Pi _{\mathrm{C}}\left(\theta_{\mathrm{Y}},\phi\right)}p_{\mathrm{c}(\mathrm{C})}. \tag{44}\]
because \(R_{\mathrm{C}}(H)=R_{\mathrm{D}}(0)\) and \(R_{\mathrm{D}}(H)=R_{\mathrm{C}}(0)\) so that \(\tilde{p}_{\mathrm{s}(\mathrm{C})}\) and \(\tilde{p}_{\mathrm{c}(\mathrm{D})}\), and \(\tilde{p}_{\mathrm{s}(\mathrm{D})}\) and \(\tilde{p}_{\mathrm{c}(\mathrm{C})}\) run almost in parallel in Fig. 7.
When the external pressure \(\tilde{p}\) is between \(\tilde{p}_{\mathrm{c}(i)}\) and \(\tilde{p}_{\mathrm{s}(i)}\) (e.g. Fig. 8 for \(\tilde{p}=\tilde{p}_{\mathrm{e}(i)}\)), the free energy landscape exhibits an extremum at \(\tilde{z}_{\mathrm{ex}(i)}\) given by [39]
\[\tilde{z}_{\mathrm{ex}(\mathrm{C})} = \frac{1}{\alpha_{\mathrm{C}}}\left(1-\frac{\tilde{p}_{\mathrm{c} (\mathrm{C})}}{\tilde{p}}\right), \tag{45}\] \[\tilde{z}_{\mathrm{ex}(\mathrm{D})} = -\frac{1}{\alpha_{\mathrm{D}}}\left(1-\frac{\tilde{p}_{\mathrm{c} (\mathrm{D})}}{\tilde{p}}\right), \tag{46}\]
which move from \(\tilde{z}_{\mathrm{ex}(i)}=0\) at \(\tilde{p}=\tilde{p}_{\mathrm{c}(i)}\) to \(\tilde{z}_{\mathrm{ex}(i)}=1\) at \(\tilde{p}=\tilde{p}_{\mathrm{s}(i)}\) from Eqs. (39) and (40), and their free energies become [39]
\[\omega_{\mathrm{ex}(\mathrm{C})} = -\frac{\left(\tilde{p}-\tilde{p}_{\mathrm{c}(\mathrm{C})}\right)^ {2}\left(2\tilde{p}+\tilde{p}_{\mathrm{c}(\mathrm{C})}\right)}{6\alpha_{ \mathrm{C}}\tilde{p}^{2}}, \tag{47}\] \[\omega_{\mathrm{ex}(\mathrm{D})} = \frac{\left(\tilde{p}-\tilde{p}_{\mathrm{c}(\mathrm{D})}\right)^ {2}\left(2\tilde{p}+\tilde{p}_{\mathrm{c}(\mathrm{D})}\right)}{6\alpha_{ \mathrm{D}}\tilde{p}^{2}}, \tag{48}\]
from Eqs. (32) and (33), which correspond to a minimum when \(\omega_{\mathrm{ex}(i)}<0\) (Figs. 8(a) and (d)) and a maximum when \(\omega_{\mathrm{ex}(i)}>0\) (Figs. 8(b) and (c)). From Eqs. (27), (47) and (48), and noting \(\tilde{p}\sim O(1)\) from Tab. 2, we have roughly
\[\left|\Omega_{\mathrm{ex}(i)}\right|=\gamma_{\rm lv}\pi R_{i}^{2}(0)\left|\omega_{\mathrm{ex}(i)}\right|\simeq\frac{\gamma_{\rm lv}\pi R_{i}^{2}(0)}{\alpha_{i}}=\frac{\gamma_{\rm lv}\pi R_{i}^{3}(0)}{H\tan\phi}, \tag{49}\]
which gives, for example, \(\Omega_{\mathrm{ex}(i)}\simeq 1.3\times 10^{-15}\) J \(\gg kT\sim 4.0\times 10^{-21}\) J for water (\(\gamma_{\rm lv}=0.072\) J/m\({}^{2}\)) in a capillary with \(R_{i}(0)=0.1\)\(\mu\)m, \(H=1.0\)\(\mu\)m, and \(\phi=10^{\circ}\). Of course, the thermal fluctuation \(kT\) may not be negligible when the size of the capillary is reduced by a factor of 100 (\(R(0)=1\) nm and \(H=10\) nm).
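This order-of-magnitude estimate can be checked with a few lines (using only the illustrative values quoted above):

```python
# Quick arithmetic check of Eq. (49) with the illustrative values in the text.
import numpy as np

gamma_lv = 0.072          # J/m^2 (water)
R0, H = 0.1e-6, 1.0e-6    # m
phi = np.deg2rad(10.0)
Omega_ex = gamma_lv * np.pi * R0**3 / (H * np.tan(phi))
print(Omega_ex)           # ~1.3e-15 J, far above kT ~ 4e-21 J at room temperature
```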
In Fig. 8, the free energy landscapes of a converging capillary with \(\phi=10^{\circ}\) and \(\eta_{\mathrm{C}}=4.0\) are presented in Figs. 8(a) and (c), and those of a diverging capillary are presented in Figs. 8(b) and (d). The numerical values of \(\tilde{p}_{\mathrm{c}(i)}\), \(\tilde{p}_{\mathrm{e}(i)}\) and \(\tilde{p}_{\mathrm{s}(i)}\) used are tabulated in Tab. 2, and \(\alpha_{\mathrm{C}}\simeq 0.705\) and \(\alpha_{\mathrm{D}}\simeq 2.393\) from Eq. (29). In the original units, \(p_{\mathrm{ext}}=\gamma_{\rm lv}\tilde{p}/H\) from Eq. (29). So, for example, \(\tilde{p}=1.0\) corresponds to \(p_{\mathrm{ext}}=7.2\times 10^{4}\) Pa for water in a \(H=1.0\)\(\mu\)m capillary.
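The extremum analysis of Eqs. (45)-(48) can likewise be illustrated numerically; the sketch below (illustrative only) reuses the geometry \(\phi=10^{\circ}\), \(\eta_{\rm C}=4.0\) and the hydrophobic angle \(\theta_{\rm Y}=120^{\circ}\) of Fig. 8(a), with an arbitrary pressure chosen between the two characteristic values.

```python
# Locate the landscape extremum (Eqs. (45) and (47)) and classify it.
import numpy as np

def extremum_C(p, p_cC, alpha_C):
    z_ex = (1.0 - p_cC / p) / alpha_C                                # Eq. (45)
    w_ex = -((p - p_cC)**2) * (2*p + p_cC) / (6.0 * alpha_C * p**2)  # Eq. (47)
    kind = "MIN (trap)" if w_ex < 0 else "MAX (barrier)"
    return z_ex, w_ex, kind

phi, eta_C = np.deg2rad(10.0), 4.0
alpha_C = eta_C * np.tan(phi)
theta = np.deg2rad(120.0)
p_cC = -2.0 * eta_C * (np.cos(theta)/np.cos(phi) + np.tan(phi))      # Eq. (30)
p_sC = p_cC / (1.0 - alpha_C)                                        # Eq. (39)
p_mid = 0.5 * (p_cC + p_sC)                  # a pressure between the two spinodals
print(extremum_C(p_mid, p_cC, alpha_C))      # z_ex in (0, 1); w_ex < 0, i.e. a MIN
```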
The wettability of capillary in Figs. 8(a) and (b) is hydrophobic with \(\theta_{\mathrm{Y}}=120^{\circ}\) so that the spontaneous liquid intrusion is prohibited. Hence, the landscapes presented in Figs. 8(a) and (b) represent the pathway of _liquid intrusion_ by positive applied pressures which are termed intrusion or infiltration pressure [40; 41; 42]. The wettability of capillary in Figs. 8(c) and (d) is hydrophilic with \(\theta_{\mathrm{Y}}=60^{\circ}\) so that the capillary is filled by spontaneously intruded liquid. Hence, the landscapes
presented in Figs. 8(c) and (d) represent the pathway of the _liquid extrusion_ or the _vapor intrusion_ when the applied pressure is _negative_ [39], that is, when the liquid pressure is lower than the vapor pressure. The free energy landscape of the initial state at \(\bar{p}=0\) is that of the spontaneous imbibition and is a monotonically increasing (Figs. 8(a) and (b)) or decreasing (Figs. 8(c) and (d)) function, indicating complete liquid extrusion or intrusion.
The free energy landscapes of the intrusion and the extrusion in Fig. 8 can be interpreted as those of pressure-induced wetting and drying transitions [56; 57] toward complete wetting and complete drying in conical capillaries. Figure 9 presents the phase diagrams of a converging and a diverging conical capillary. The complete intrusion corresponds to complete wetting and the complete extrusion corresponds to complete drying. Therefore, the characteristic pressure \(\tilde{p}_{\rm e(\it i)}\) corresponds to the "binodal" and the two critical pressures \(\tilde{p}_{\rm c(\it i)}\) and \(\tilde{p}_{\rm s(\it i)}\) correspond to the upper (lower) and the lower (upper) "spinodals" in the language of wetting transitions. The free energy landscape with a maximum, which acts as a free energy barrier, indicates a first-order-like wetting or drying transition with a pressure hysteresis and meniscus jumps, while that with a minimum indicates a second-order-like transition with a continuous change of the meniscus position. In the former case, the meniscus trapped at the inlet (complete drying state) or at the outlet (complete wetting state), separated from the other state by the free energy barrier, can be in a "metastable" state, so that it can be destroyed and the transition (intrusion or extrusion) can be induced by any perturbation.
It is relatively easy to imagine the scenario of intrusion and extrusion from the phase diagram in Fig. 9. However, to understand more complex scenarios in double-conical capillaries in the next section, it is helpful to look into the details of the intrusion and the extrusion processes from the free energy landscapes in Fig. 8(a)-(d), which are summarized in Tab. 3(a)-(d) and are as follows:
1. Figure 8(a) presents the free energy landscape of the _liquid intrusion_ or _infiltration_[40; 42] in a hydrophobic (\(\theta_{\rm Y}>\theta_{\rm c(C)}\)) converging capillary. By increasing the (positive) applied pressure from \(\tilde{p}=0\) (a long thin solid down arrow in Fig. 8(a)), the intrusion starts at \(\tilde{p}_{\rm c(C)}\) (Eq. (30)) and is completed at \(\tilde{p}_{\rm s(C)}\) (Eq. (39)). When \(\tilde{p}_{\rm c(C)}\leq\tilde{p}\leq\tilde{p}_{\rm s(C)}\) (Fig. 9(a)), the landscape shows a _minimum_ (MIN), which acts as the _trap_ for the liquid-vapor meniscus. The location of MIN moves continuously from the inlet (\(\bar{z}=0\)) toward the outlet (\(\bar{z}=1\)). Therefore, the intrusion occurs _gradually_ (a short thick solid right arrow in Fig. 8(a)). In fact, such a gradual motion of meniscus is observed in the molecular dynamic simulation [40]. In addition, the reverse process of liquid extrusion (a long thin broken up arrow and a short thick broken left arrow in Fig. 8(a)) is also gradual and completed at \(\tilde{p}_{\rm c(C)}\) (see Tab. 3(a)).
2. Figure 8(b) presents the landscape of the _liquid intrusion_ in a hydrophobic (\(\theta_{\rm Y}>\theta_{\rm c(D)}\)) diverging capillary (Fig. 9(b)). The free energy landscape shows a _maximum_ (MAX), which acts as the _barrier_. As the pressure is increased from \(\tilde{p}_{\rm s(D)}\) towards \(\tilde{p}_{\rm c(D)}\) (a long thin solid down arrow in Fig. 8(b)), the meniscus is trapped at the inlet and is in metastable state, while the location of MAX moves from the outlet (\(\bar{z}=1\)) toward the inlet (\(\bar{z}=0\)). Therefore, the intrusion occurs _abruptly_ at \(\tilde{p}_{\rm c(D)}\) when the barrier reaches the inlet and disappears. The meniscus jumps from the inlet to the outlet at \(\tilde{p}_{\rm c(D)}\) (a short thick solid right arrow in Fig. 8(b)). Hence, this hydrophobic diverging capillary could act as a pneumatic switch [58]. Of course, any perturbations, such as mechanical vibration, thermal fluctuation may assist the meniscus to overcome the barrier estimated in Eq. (49). In the reverse process of depressing \(\tilde{p}\) from \(\tilde{p}_{\rm c(D)}\) to \(\tilde{p}_{\rm s(D)}\) (a long thin broken up arrow in Fig. 8(b)), the meniscus jumps from the outlet to the inlet at \(\tilde{p}_{\rm s(D)}\) (a short thick broken left arrow in Fig. 8(b)). Therefore, we will observe a pressure hysteresis between \(\tilde{p}_{\rm c(D)}\) and \(\tilde{p}_{\rm s(D)}\) (Tab. 3(b)).
Figure 9: Wetting phase diagrams based on Fig. 7 in (a) converging and (b) diverging conical capillaries with \(\phi=10^{\circ}\) and \(\eta_{\rm C}=4.0\). These geometrical parameters are the same as those used in Figs. 7 and 8. At high pressures, the liquid completely intrudes into the capillary (complete wetting), while at low pressures the liquid completely extrudes from the capillary (complete drying). The free energy maximum (MAX), which acts as the free energy barrier, or the minimum (MIN), which acts as the trap, appears between \(\tilde{p}_{\rm s(\it i)}\) and \(\tilde{p}_{\rm c(\it i)}\) (Fig. 8). The MAX suggests a pressure-induced first-order-like wetting transition, while the MIN suggests a second-order-like continuous transition.
3. Figure 8(c) presents the landscape of the _liquid extrusion_ or the _vapor intrusion_ in a hydrophilic (\(\theta_{\rm Y}<\theta_{\rm c(C)}\)) converging capillary. By increasing the absolute magnitude of the negative applied pressure from \(\bar{p}=0\) (a long thin solid up arrow in Fig. 8(c)), the complete extrusion occurs at \(\bar{p}_{\rm s(C)}\) (<0, Tab. II, Fig. 9(a)). The free energy landscape shows a _maximum_ (MAX). Therefore, the extrusion occurs _abruptly_. Since the extrusion of liquid from a converging capillary is the intrusion of vapor into a diverging capillary, this process in Fig. 8(c) is similar to that in Fig. 8(b). The reverse process of the complete liquid intrusion by compression (a long thin broken down arrow in Fig. 8(c)) also occurs _abruptly_ (a short thick broken right arrow in Fig. 8(c)) at \(\bar{p}_{\rm c(C)}\) (<0). Again, we will observe a pressure hysteresis between \(\bar{p}_{\rm c(C)}\) and \(\bar{p}_{\rm s(C)}\) (Tab. III(c)).
4. Figure 8(d) presents the landscape of the _liquid extrusion_ in a hydrophilic (\(\theta_{\rm Y}<\theta_{\rm c(D)}\)) diverging capillary. The extrusion occurs _gradually_ as the landscape shows a _minimum_ (MIN), and is completed at \(\bar{p}_{\rm c(D)}\). This process in Fig. 8(d) is similar to that in Fig. 8(a). The reverse process of the complete liquid intrusion is also gradual and is completed at \(\bar{p}_{\rm s(D)}\) (Tab. III(d)).
The external pressure necessary to complete the intrusion and the extrusion can be predicted simply from the highest Laplace pressure, which is achieved either at the inlet or at the outlet, wherever the cross section of the conical capillary is narrowest. However, the intrusion and the extrusion processes can be either _gradual_ or _abrupt_ depending on the shape of the free energy landscape. They occur _gradually_ if the free energy landscape has a _minimum_ (Figs. 8(a) and (d)), while they occur _abruptly_ if the landscape has a _maximum_ (Figs. 8(b) and (c)). We note again that this free energy maximum is not the nucleation barrier of a critical droplet or bubble nucleated in the middle of the capillary [35; 36].
The free energy maximum, which occurs in a hydrophobic diverging capillary (Fig. 8(b)) and a hydrophilic converging capillary (Fig. 8(c)), always accompanies a pressure hysteresis:
\[\Delta p_{\rm hyst} = p_{\rm c(C)}-p_{\rm s(C)},\ \ \ ({\rm converging}), \tag{50}\] \[= p_{\rm c(D)}-p_{\rm s(D)},\ \ \ ({\rm diverging}), \tag{51}\]
or
\[\Delta p_{\rm hyst}=2\gamma_{\rm lv}\left|\Pi_{i}\left(\theta_{\rm Y},\phi\right)\right|\left(\frac{1}{R_{\rm D}(0)}-\frac{1}{R_{\rm C}(0)}\right) \tag{52}\]
from Eqs. (26), (41) and (42). Therefore, the pressure hysteresis is simply the difference between the modified Laplace pressure at the inlet and that at the outlet, or between the highest and the lowest modified Laplace pressure. Based on these scenarios in Fig. 8 and Tab. III, we will discuss the intrusion and the extrusion in double conical capillaries in the next section.
## III Imbibition in double conical capillaries
### Spontaneous imbibition
Based on the knowledge of the imbibition in a single conical capillary, we consider the intrusion and the extrusion in double conical capillaries. Specifically, we consider four capillaries: hourglass, diamond, sawtooth-1, and sawtooth-2 shaped capillaries illustrated in Fig. 1.
Figure 10 summarizes the scenarios of spontaneous intrusion in double conical capillaries. The intrusion occurs from the left liquid reservoir to the right vapor reservoir. Since the spontaneous intrusion can occur only in converging capillaries and not in diverging capillaries when \(\theta_{\rm c(C)}>\theta_{\rm Y}>\theta_{\rm c(D)}\), there are three scenarios: completely filled, half-filled, and completely empty. As we consider the quasi-static thermodynamic transient state, dynamical effects at the junction and the entrance, such as viscous resistance, pinning, and vortices, are neglected.
Figure 10(a) illustrates the three scenarios in hourglass shaped capillaries. The capillary is completely empty when \(\theta_{\rm Y}>\theta_{\rm c(C)}\). It is half-filled when \(\theta_{\rm c(C)}>\theta_{\rm Y}>\theta_{\rm c(D)}\), and it is completely filled when \(\theta_{\rm c(D)}>\theta_{\rm Y}\). These three scenarios can be confirmed from the free energy landscape in the next subsection. Figure 10(b) illustrates the two scenarios in diamond shaped capillaries: the capillary is completely empty when \(\theta_{\rm Y}>\theta_{\rm c(D)}\), and it is completely filled when \(\theta_{\rm c(D)}>\theta_{\rm Y}\).
In Figs. 10(c) and (d), we illustrate the scenarios in two sawtooth shaped capillaries. In these cases, a vertical wall at the junction (a shaded pierced-coin shaped vertical wall in Figs. 1(c) and (d)) will affect hydrodynamics of flow. Here, we neglect various hydrodynamic effects and concentrate on the results obtained purely from the thermodynamic free energy consideration.
Figure 10(c) illustrates the three scenarios in sawtooth-1 shaped capillaries (Fig. 1), where the capillary is completely empty when \(\theta_{\rm Y}>\theta_{\rm c(C)}\), half-filled when \(\theta_{\rm c(C)}>\theta_{\rm Y}>90^{\circ}\), and completely filled when \(90^{\circ}>\theta_{\rm Y}\). The half-filling arises because the vertical wall at the junction acts as a free energy
\begin{table}
\begin{tabular}{l|c|c} \hline \hline (a) & \multicolumn{2}{c}{(b)} \\ Intrusion Reverse & Intrusion Reverse & Reverse \\ \hline \(\bar{p}=0\) [E] & \(\bar{p}_{\rm c(C)}\) [E] & \(\bar{p}=0\) [E] & \(\bar{p}_{\rm s(D)}\) [E] \\ \(\downarrow\)g & \(\uparrow\)g & \(\downarrow\)a & \(\uparrow\)a \\ \(\bar{p}_{\rm s(C)}\) [F] & \(\bar{p}_{\rm s(C)}\) [F] & \(\bar{p}_{\rm s(D)}\) [F] & \(\bar{p}_{\rm s(D)}\) [F] \\ \hline \hline (c) & \multicolumn{2}{c}{(d)} \\ Extrusion Reverse & Extrusion Reverse & Extrusion Reverse \\ \hline \(\bar{p}_{\rm s(C)}\) [E] & \(\bar{p}_{\rm s(C)}\) [E] & \(\bar{p}_{\rm c(D)}\) [E] & \(\bar{p}_{\rm c(D)}\) [E] \\ \(\uparrow\)a & \(\downarrow\)a & \(\uparrow\)g & \(\downarrow\)g \\ \(\bar{p}=0\) [F] & \(\bar{p}_{\rm c(C)}\) [F] & \(\bar{p}=0\) [F] & \(\bar{p}_{\rm s(D)}\) [F] \\ \hline \hline \end{tabular}
\end{table}
Table 3: Imbibition processes in a converging and a diverging conical capillary in Fig. 8, which connects an empty state [E] and a filled state [F], where the down arrow indicates the compression and the up arrow indicates the depression. Numerical values of the characteristic pressures \(\bar{p}_{\rm c(C)}\), \(\bar{p}_{\rm c(D)}\), \(\bar{p}_{\rm s(C)}\) and \(\bar{p}_{\rm s(D)}\) are tabulated in Tab. II. Two letters “a” and “g” beside the arrows indicate that imbibition occurs _abruptly_ (a) or _gradually_ (g). Abrupt changes always accompany pressure hysteresis.
barrier or a hydrophobic gate for liquid intrusion when the vertical wall is hydrophobic (\(\theta_{\rm Y}>90^{\circ}\)), which will be discussed more quantitatively in the next subsection using the free energy landscape. Figure 10(d) illustrates the two scenarios in sawtooth-2 shaped capillaries (Fig. 1), where the capillary is completely empty when \(\theta_{\rm Y}>\theta_{\rm c(D)}\) and is completely filled when \(\theta_{\rm c(D)}>\theta_{\rm Y}\). In contrast to the sawtooth-1 shaped capillary, the vertical wall at the junction does not act as a free energy barrier when \(\theta_{\rm c(D)}>\theta_{\rm Y}\) because the wall is hydrophilic, since \(\theta_{\rm c(D)}<90^{\circ}\). These scenarios serve as the initial states of the forced imbibition, which is the subject of the next subsection.
The scenarios illustrated in Figs. 10(c) and (d) are thermodynamic equilibrium states judged purely from the free energy minimum. There are also completely filled and half-filled _metastable_ states, whose free energies are higher than those of the equilibrium stable states. These metastable states can exist because they are separated from the equilibrium state by the free energy barrier at the junction. This can be clearly seen in the free energy landscape, which is the subject of the next subsection.
If a straight cylindrical capillary could be mechanically deformed [30; 31] into a converging-diverging hourglass or a diverging-converging diamond shaped capillary, we can imagine the mechanical switch illustrated in Fig. 11 when both ends (inlet and outlet) of the capillary are immersed in the liquid reservoir. When a hydrophobic straight cylinder is deformed into an hourglass shaped capillary and Young's contact angle satisfies \(\theta_{\rm c(C)}>\theta_{\rm Y}>90^{\circ}\), the liquid will intrude from both ends and fill the hourglass shaped capillary (Fig. 11(a)). When a hydrophilic straight cylinder completely filled by liquid is deformed into a diamond shaped capillary and Young's contact angle satisfies \(\theta_{\rm c(D)}<\theta_{\rm Y}<90^{\circ}\), the liquid will extrude from the capillary and the capillary will be completely empty (Fig. 11(b)). Of course, the effect of the vapor should be negligible, or the vapor should be dissolved into or released from the liquid inside the capillary.
### Forced imbibition in double conical capillaries
To study the forced imbibition in double conical capillaries, we have to combine the free energy landscape of a single conical capillary considered in section II.3. The free energy landscape of a double conical capillary is simply a combination of that of a single converging capillary \(\tilde{\omega}_{\rm C}(\tilde{z})\) and a diverging capillary \(\tilde{\omega}_{\rm D}(\tilde{z})\). We present the free energy landscapes in Figs. 12 to 15. The parameters \(\phi=10^{\circ}\) and \(\eta_{\rm C}=4.0\) which characterize the converging and the diverging capillary are the same as those used in section II so that the critical Young's angles are \(\theta_{\rm c(C)}=90+10=100^{\circ}\) and \(\theta_{\rm c(D)}=90-10=80^{\circ}\), and the non-dimensional characteristic pressures are given in Tab. 2. The double conical capillaries are twice as long as a single conical capillary considered in section II. So, we consider the imbibition pathway along \(0\leq z\leq 2H\) or \(0\leq\tilde{z}\leq 2\) in the non-dimensional unit.
It is possible to superimpose Fig. 9(a) on Fig. 9(b) to make a combined phase diagram. However, it is not straightforward to imagine the imbibition process in double conical capillaries from such a combined phase diagram. We will continue to use the free energy landscape to discuss the details of the imbibition process. Most of the symbols and arrows in Figs. 12 to 15 have the same meanings as those in Fig. 8.
Figure 11: Switching behavior by mechanical deformation of straight cylindrical capillary into (a) converging-diverging hourglass shaped capillary or (b) diverging-converging diamond shaped capillary.
Figure 10: Scenarios of the spontaneous intrusion in double conical capillaries. The dense shadow is liquid and the sparse shadow is vapor. Three scenarios are expected: completely filled, half-filled and completely empty.
#### iii.2.1 Converging-diverging hourglass shaped capillary
The free energy landscapes of a converging-diverging (CD) hourglass shaped capillary (Fig. 1(a)) in the regions I, II and III in Figs. 6 and 7 are presented in Figs. 12(a) to (d), where the non-dimensional free energy \(\tilde{\omega}_{\rm CD}\) consists of that of a single converging capillary \(\tilde{\omega}_{\rm C}\left(\tilde{z}\right)\) and a diverging capillary \(\tilde{\omega}_{\rm D}\left(\tilde{z}\right)\), and is simply given by
\[\tilde{\omega}_{\rm CD}\left(\tilde{z}\right) = \tilde{\omega}_{\rm C}\left(\tilde{z}\right),\ \ \tilde{z}<1, \tag{53}\] \[= \tilde{\omega}_{\rm C}\left(\tilde{z}=1\right)+\tilde{\omega}_{ \rm D}\left(\tilde{z}-1\right),\ \ 1\leq\tilde{z}\leq 2. \tag{54}\]
The scenarios of the liquid intrusion and the extrusion in an hourglass shaped capillary predicted from the free energy landscape \(\tilde{\omega}_{\rm CD}\left(\tilde{z}\right)\) are summarized as follows (see also Tab. 4).
1. Figure 12(a) presents the free energy landscape of the _liquid intrusion_ in the region III (\(\theta_{\rm Y}>\theta_{\rm c(C)}\)) of a converging-diverging hourglass shaped capillary with \(\theta_{\rm Y}=120^{\circ}\). Initially (\(\tilde{p}=0\)) the capillary is completely empty ([E], see Tab. 4), as presented in the top line of Fig. 10(a). By increasing the (positive) applied pressure from \(\tilde{p}=0\), the free energy landscape in the converging part (\(0\leq\tilde{z}\leq 1\)) is characterized by a minimum (MIN) and that in the diverging part (\(1\leq\tilde{z}\leq 2\)) by a maximum (MAX). Therefore, the intrusion occurs gradually in the converging part until \(\tilde{p}_{\rm s(C)}\) is reached. Then the meniscus is trapped by the free energy minimum at the junction (TRP) and the capillary is half-filled ([HF]). By further increasing the pressure, the meniscus jumps from the junction (\(\tilde{z}=1\)) to the outlet (\(\tilde{z}=2\)) at \(\tilde{p}_{\rm c(D)}\) and the capillary is completely filled ([F]). Therefore, the movement of the liquid-vapor meniscus is _gradual_ during the first half stage (\(0\leq\tilde{p}\leq\tilde{p}_{\rm s(C)}\)) and _abrupt_ in the second half stage (\(\tilde{p}_{\rm s(C)}<\tilde{p}\leq\tilde{p}_{\rm c(D)}\)) of the process. The half-filled state of this second half stage becomes thermodynamically metastable before reaching \(\tilde{p}_{\rm c(D)}\). In the reverse process of depression, the extrusion occurs _abruptly_ at \(\tilde{p}_{\rm s(D)}\), where the meniscus jumps from the outlet to the inside of the converging part so that the capillary is nearly half-filled ([nHF]). Again, the completely filled state becomes thermodynamically metastable before reaching \(\tilde{p}_{\rm s(D)}\). By further depression, the meniscus moves _gradually_ from the junction towards the inlet and reaches it at \(\tilde{p}_{\rm c(C)}\). Therefore, we will observe a pressure hysteresis only between \(\tilde{p}_{\rm s(D)}\) and \(\tilde{p}_{\rm c(D)}\), which is similar to that in a single diverging capillary in section II.3 (cf. Tab. 3(b) and 4(a)).
2. Figure 12(b) presents the _liquid intrusion_ into the diverging part of a converging-diverging double conical capillary in the region II (\(\theta_{\rm c(C)}>\theta_{\rm Y}>\theta_{\rm c(D)}\)) with \(\theta_{\rm Y}=95^{\circ}\). Initially, the capillary is half-filled ([HF]) (see middle line of Fig. 10(a)) because the liquid-vapor meniscus is trapped by the free energy minimum at the junction (TRP). When the applied pressure is increased from \(\tilde{p}=0\), the meniscus remains trapped at the junction because the free energy maximum (MAX) exists in the diverging part (\(1\leq\tilde{z}\leq 2\)). The intrusion into the diverging part occurs _abruptly_ at \(\tilde{p}_{\rm c(D)}\) when the MAX disappears. Then the meniscus jumps from the junction to the outlet and the whole capillary is filled by liquid ([F]). In the reverse process of depression, the extrusion occurs _abruptly_ at \(\tilde{p}_{\rm s(D)}\): the meniscus jumps from the outlet to the junction and the capillary becomes half-filled ([HF]). By further depression (by negative pressure, a vertical arrow with a symbol "(c)" in Fig. 12(b)), the extrusion occurs _abruptly_ again at \(\tilde{p}_{\rm s(C)}\) (<0, see Tab. 2) as presented in Fig. 12(c). We will observe a double pressure hysteresis ranging from the positive \(\tilde{p}_{\rm c(D)}\) to the negative \(\tilde{p}_{\rm s(C)}\) (Tab. 4(b)).
3. Figure 12(c) presents the _liquid extrusion_ from (vapor intrusion into) the converging part of a converging-diverging double conical capillary in the region II (\(\theta_{\rm c(C)}>\theta_{\rm Y}>\theta_{\rm c(D)}\)) with \(\theta_{\rm Y}=95^{\circ}\). The initial state at \(\tilde{p}=0\) is the same ([HF]) as that in Fig. 12(b). The meniscus is trapped by the free energy minimum at the junction (TRP). By increasing the absolute magnitude of the (negative) applied pressure from \(\tilde{p}=0\), the complete extrusion ([E]) occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) when the
Figure 12: The free energy landscapes of forced imbibition in a converging-diverging hourglass shaped capillary with \(\phi=10^{\circ}\) and \(\eta_{\rm C}=4.0\) for selected external pressures in Tab. 2. The meanings of the vertical (solid, broken, up and down) arrows in Fig. 12 to 15 are the same as those in Fig. 8. (a) Intrusion in the region III (\(\theta_{\rm Y}=120^{\circ}\)). (b) Intrusion in the region II (\(\theta_{\rm Y}=95^{\circ}\)). (c) Extrusion in the region II (\(\theta_{\rm Y}=95^{\circ}\)). (d) Extrusion in the region I (\(\theta_{\rm Y}=60^{\circ}\)).
free energy maximum (MAX) in the converging part (\(0\leq\tilde{z}\leq 1\)) disappears. In the reverse process of compression, the intrusion into the converging part occurs _abruptly_ at \(\tilde{p}_{\rm c(C)}\), and the capillary is half-filled ([HF]) again. By further compression (a vertical arrow with a symbol "(b)" in Fig. 12(c)), the complete intrusion occurs _abruptly_ again at \(\tilde{p}_{\rm c(D)}\) (>0, see Tab. 2) as presented in Fig. 12(b). Again, we will observe a double pressure hysteresis ranging from the positive \(\tilde{p}_{\rm c(D)}\) to the negative \(\tilde{p}_{\rm s(C)}\) (cf. Tab. 4(c) and 4(b)).
4. Figure 12(d) presents the _liquid extrusion_ from the whole capillary in the region I (\(\theta_{\rm c(D)}>\theta_{\rm Y}\)) with \(\theta_{\rm Y}=60^{\circ}\). Initially the whole capillary is completely filled by liquid ([F]) at \(\tilde{p}=0\) (see bottom line of Fig. 10(a)). The extrusion occurs _gradually_ by increasing the absolute magnitude of the (negative) pressure in the first half stage due to the existence of a minimum (MIN) in the diverging part. The extrusion stops at \(\tilde{p}_{\rm c(D)}\) and the meniscus is trapped by the free energy minimum (TRP) at the junction ([HF]). In the second half stage, the extrusion occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) when the maximum (MAX) in the converging part disappears, and the meniscus jumps from the junction to the inlet ([E]). In the reverse process of compression, the intrusion starts _abruptly_ at \(\tilde{p}_{\rm c(C)}\) and the meniscus jumps from the inlet to the inside of the diverging part so that the capillary is nearly half-filled ([nHF]). Then, the intrusion proceeds _gradually_ and the meniscus reaches the outlet at \(\tilde{p}_{\rm s(D)}\) ([F]). Similar to Fig. 12(a), we will observe a pressure hysteresis only between \(\tilde{p}_{\rm s(C)}\) and \(\tilde{p}_{\rm c(C)}\), which is similar to but slightly more complex than that in a single converging conical capillary in section II.3 (cf. Tab. 3(d) and 4(d)).
#### iii.2.2 Diverging-converging diamond shaped capillary
The free energy landscapes of a diverging-converging (DC) diamond shaped capillary (Fig. 1(b)) in the regions I, II and III are presented in Figs. 13(a) to (d), where the free energy \(\tilde{\omega}_{\rm DC}\) is given by
\[\tilde{\omega}_{\rm DC}\left(\tilde{z}\right) = \tilde{\omega}_{\rm D}\left(\tilde{z}\right),\ \ \tilde{z}<1, \tag{55}\] \[= \tilde{\omega}_{\rm D}\left(\tilde{z}=1\right)+\tilde{\omega}_{\rm C}\left(\tilde{z}-1\right),\ \ 1\leq\tilde{z}\leq 2. \tag{56}\]
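All of the scenarios discussed in this section are read off from where the composite landscape develops an interior maximum (MAX), an interior minimum (MIN), or a trap at the junction. A minimal numerical sketch of this bookkeeping is given below; it is not the author's code, and it assumes the single-cone landscapes \(\tilde{\omega}_{\rm C}\) and \(\tilde{\omega}_{\rm D}\) (at a given applied pressure) are available as vectorized callables on \(0\leq\tilde{z}\leq 1\), e.g., tabulated from the expressions earlier in the paper.

```python
# Assemble the diverging-converging (diamond) landscape of Eqs. (55)-(56) from the
# single-cone pieces and locate its interior extrema, which is how "gradual" (MIN),
# "abrupt" (MAX), and "trapped at the junction" behaviour is read off in the text.
import numpy as np

def omega_DC(z, omega_D, omega_C):
    """Piecewise free energy of the diamond (DC) capillary, Eqs. (55)-(56)."""
    z = np.asarray(z, dtype=float)
    first = omega_D(np.clip(z, 0.0, 1.0))
    second = omega_D(1.0) + omega_C(np.clip(z - 1.0, 0.0, 1.0))
    return np.where(z < 1.0, first, second)

def interior_extrema(w):
    """Indices of interior local maxima (MAX) and minima (MIN) of a sampled landscape."""
    dw = np.diff(w)
    turning = np.where(np.sign(dw[1:]) * np.sign(dw[:-1]) < 0)[0] + 1
    maxima = [i for i in turning if w[i] >= w[i - 1] and w[i] >= w[i + 1]]
    minima = [i for i in turning if w[i] <= w[i - 1] and w[i] <= w[i + 1]]
    return maxima, minima

# Usage sketch: for each applied pressure, rebuild the landscape and record whether the
# meniscus advances gradually (interior MIN), jumps (interior MAX), or is trapped by a
# minimum sitting at the junction z = 1.
# z = np.linspace(0.0, 2.0, 2001)
# maxima, minima = interior_extrema(omega_DC(z, omega_D, omega_C))
```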
The scenarios of liquid intrusion and extrusion in a diamond shaped capillary predicted from the free energy landscape \(\tilde{\omega}_{\rm DC}\left(\tilde{z}\right)\) are summarized in Tab. 5 and are as follows.
1. Figure 13(a) presents the free energy landscape of the _liquid intrusion_ in the region III of a diamond shaped capillary with \(\theta_{\rm Y}=120^{\circ}\). The intrusion occurs _abruptly_ as the free-energy maximum (MAX) exists in the diverging part (\(0\leq\tilde{z}\leq 1\)). The meniscus jumps from the inlet to the outlet at \(\tilde{p}_{\rm c(D)}\). In the reverse process of depression, the extrusion in the converging part (\(1\leq\tilde{z}\leq 2\)) occurs _gradually_ as the free energy minimum (MIN) exists in the converging part. As soon as the meniscus reaches the junction (\(\tilde{z}=1\)) at \(\tilde{p}_{\rm c(C)}\), the meniscus jumps from the junction to the inlet _abruptly_ and the extrusion is completed. Therefore, we will observe a more complex pressure hysteresis than that in an hourglass shaped capillary (cf. Tab. 4(a) and 5(a)).
\begin{table}
\begin{tabular}{c c|c c} \hline \hline (a) & & \multicolumn{2}{c}{(b)} \\ Intrusion & Reverse & Intrusion & Reverse \\ \hline \hline & & & \(\tilde{p}_{\rm s(C)}\) [E] \\ & & & \(\uparrow\)a \\ \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm c(C)}\) [E] & \(\tilde{p}=0\) [HF] & \(\tilde{p}_{\rm s(D)}\) [HF] \\ \(\downarrow g\) & \(\uparrow\)g & \(\downarrow\)a & \(\uparrow\)a \\ \(\tilde{p}_{\rm s(C)}\)[HF] & \(\tilde{p}_{\rm s(D)}\)[nHF] & \(\tilde{p}_{\rm c(D)}\) [F] & \(\tilde{p}_{\rm c(D)}\) [F] \\ \(\downarrow a\) & \(\uparrow\)a & \(\uparrow\)a & \\ \(\tilde{p}_{\rm c(D)}\) [F] & \(\tilde{p}_{\rm c(D)}\) [F] & & \\ \hline \hline (c) & & (d) & \\ Extrusion & Reverse & Extrusion & Reverse \\ \hline \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] \\ \(\uparrow\)a & \(\downarrow\)a & \(\uparrow\)a & \(\downarrow\)a \\ \(\tilde{p}=0\) [HF] & \(\tilde{p}_{\rm c(C)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] & \(\tilde{p}_{\rm c(C)}\) [nHF] \\ & \(\downarrow\)a & \(\uparrow\)g & \(\downarrow\)g \\ & \(\tilde{p}_{\rm c(D)}\) [F] & \(\tilde{p}=0\) [F] & \(\tilde{p}_{\rm s(D)}\) [F] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Imbibition processes in an hourglass shaped capillary in Fig. 12, which connects the empty [E], the half-filled [HF], the filled [F] state in Fig. 10(a) and the nearly half-filled [nHF] state. The meanings of the other symbols are the same as those in Tab. 3.
2. Figure 13(b) presents the _liquid intrusion_ in the region II with \(\theta_{\rm Y}=95^{\circ}\). Again, the intrusion occurs _abruptly_ at \(\tilde{p}_{\rm c(D)}\) as the free-energy maximum (MAX) appears in the diverging part (\(0\leq\tilde{z}\leq 1\)). In the reverse process of depression, the meniscus is pinned at the outlet (a black dot in Fig. 13(b)) as the free energy barrier (BRR) exists at the junction, and the whole capillary is filled by the metastable liquid ([MF]) even at \(\tilde{p}=0\). By further depression (negative pressure, a vertical arrow with a symbol "(c)" in Fig. 13(b)), the extrusion is completed ([E]) at \(\tilde{p}_{\rm s(C)}\) as presented in Fig. 13(c). Again, we will observe a complex and large pressure hysteresis involving a metastable state (Tab. 5(b)).
3. Figure 13(c) presents the _liquid extrusion_ (vapor intrusion) in the region II with \(\theta_{\rm Y}=95^{\circ}\). The whole capillary is empty in the thermodynamic equilibrium. However, the meniscus could be pinned at the outlet (a black dot in Fig. 13(c)), e.g., by the reverse process in (b), and the whole capillary could be filled by the metastable liquid ([MF]) as the free energy barrier (BRR) exists at the junction. Extrusion of this metastable liquid occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) as the free energy maximum (MAX) exists in the converging part. The reverse process of the intrusion occurs _abruptly_ not at \(\tilde{p}=0\) but at the higher pressure \(\tilde{p}_{\rm c(D)}\) (a vertical arrow with a symbol "(b)" in Fig. 13(c)) as presented in Fig. 13(b). We will observe a complex and large pressure hysteresis similar to that in Fig. 13(b) (Tab. 5(c)).
4. Figure 13(d) presents the _liquid extrusion_ in the region I with \(\theta_{\rm Y}=60^{\circ}\). Initially, the thermodynamically stable liquid occupies the whole capillary. Extrusion occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) as there exists a free energy maximum (MAX) in the converging part. In the reverse process of compression, the intrusion into the diverging part occurs _gradually_ as there exists a free energy minimum (MIN) in the diverging part. At \(\tilde{p}_{\rm s(D)}\) the meniscus reaches the junction. Then, it _abruptly_ jumps to the outlet. Therefore, we will observe a complex pressure hysteresis similar to that in Fig. 13(a) (Tab. 5(d)).
Therefore, a subtle difference in shape between the hourglass shaped and the diamond shaped capillaries leads to a dramatic change in the intrusion and extrusion behaviors.
#### iii.2.3 Converging-converging sawtooth-1 shaped capillary
The free energy landscapes of a converging-converging (CC) sawtooth-1 shaped capillary (Fig. 1(c)) in the regions I, II and III are presented in Figs. 14(a) to (d), where the free energy \(\tilde{\omega}_{\rm CC}\) is given by
\[\tilde{\omega}_{\rm CC}\left(\tilde{z}\right) = \tilde{\omega}_{\rm C}\left(\tilde{z}\right),\ \ \tilde{z}<1, \tag{57}\] \[= \Delta\tilde{\omega}_{\rm wl}+\tilde{\omega}_{\rm C}\left(\tilde{z}=1\right)+\tilde{\omega}_{\rm C}\left(\tilde{z}-1\right),\ \ 1\leq\tilde{z}\leq 2, \tag{58}\]
and we have added an extra wall-liquid interaction energy, written in the original units as
\[\Delta\bar{\omega}_{\rm wl}=-\pi\left(R_{\rm C}^{2}(0)-R_{\rm D}^{2}(0)\right) \gamma_{\rm w}\cos\theta_{\rm Y}, \tag{59}\]
which originates from the pierced-coin shaped vertical wall (the shaded wall at the junction in Figs. 1(c) and (d)) and is made dimensionless by the appropriate scaling in Eq. (27). This wall acts as an up step with \(\Delta\omega_{\rm wl}>0\) if the wall is hydrophobic (\(\theta_{\rm Y}>90^{\circ}\)) or a down step with \(\Delta\omega_{\rm wl}<0\) if the wall is hydrophilic (\(\theta_{\rm Y}<90^{\circ}\)). This step may act as a barrier (BRR) for imbibition. Note that this barrier is not the hydrophobic barrier associated with the heterogeneous nucleation of a bubble [3; 6; 36], but rather a simple free-energy (potential) barrier [28]. The scenarios of liquid intrusion and extrusion in a sawtooth-1 shaped capillary are summarized in Tab. 6 and are as follows.
1. Figure 14(a) presents the _liquid intrusion_ in the region III of a converging-converging sawtooth-1 shaped capillary with \(\theta_{\rm Y}=120^{\circ}\). The intrusion in the first converging part (\(0\leq\tilde{z}\leq 1\)) occurs _gradually_ as the free energy minimum (MIN) exists. Then, the intrusion stops at the junction (\(\tilde{z}=1\)) and the capillary is half-filled ([HF]) at \(\tilde{p}_{\rm s(C)}\) because of the free energy barrier (BRR) \(\Delta\bar{\omega}_{\rm wl}>0\) (Eq. (59)) by the hydrophobic vertical wall. In the reverse process, the extrusion from the first converging part occurs _gradually_ and is completed at \(\tilde{p}_{\rm c(C)}\) (Tab. 6(a)). Of course, a small perturbation such as a mechanical vibration or a thermal fluctuation would help the liquid overcome the barrier and spill out from the hole of the vertical wall. Then, the whole capillary would be filled by liquid.
2. Figure 14(b) presents the _liquid extrusion_ (vapor intrusion) in the region II with \(\theta_{\rm Y}=95^{\circ}\). Initially, only the first converging part is filled by liquid ([HF]) (see the middle line of Fig. 10(c)). The intrusion into the second converging part (\(1\leq\tilde{z}\leq 2\)) is prohibited due to the free-energy barrier (BRR) \(\Delta\bar{\omega}_{\rm wl}>0\) at the junction because \(\theta_{\rm c(C)}>\theta_{\rm Y}>90^{\circ}\). By increasing the magnitude of the negative applied pressure, the liquid extrusion from the first part occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) because of the free energy maximum (MAX), which disappears
\begin{table}
\begin{tabular}{c c|c c} \hline \hline (a) & & (b) & \\ Intrusion & Reverse & Intrusion & Reverse \\ \hline \multirow{4}{*}{\(\tilde{p}=0\) [E]} & \multirow{4}{*}{\(\tilde{p}_{\rm s(C)}\) [E]} & \multirow{4}{*}{\(\tilde{p}_{\rm s(C)}\) [E]} \\ & & & \(\bar{\rho}_{\rm s(C)}\) [E] \\ \cline{1-1} & & & \(\bar{\gamma}_{\rm a}\) \\ \cline{1-1} & & & \(\bar{\gamma}_{\rm a}\) \\ \cline{1-1} & & & \(\bar{\gamma}_{\rm a}\) \\ \cline{1-1} & & & \\ \cline{1-1} & & & \\ \hline \hline (c) & & (d) & \\ Extrusion & Reverse & Extrusion & Reverse \\ \hline \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] \\ \(\uparrow\)a & \(\downarrow\) & \(\uparrow\)a & \(\downarrow\)g+a \\ \(\bar{\rho}=0\) [MF] & \(\tilde{p}=0\) [E] & \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm s(D)}\) [F] \\ \(\downarrow\)a & & & \\ \cline{1-1} & & & \\ \cline{1-1} & & & \\ \cline{1-1} & & & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Imbibition processes in a diamond shaped capillary in Fig. 13, which connects the empty [E], the metastable filled [MF], and the filled [F] state. Letters “a”, “g”, and “g+a” beside the arrows indicate that the imbibition occurs _abruptly_ (a), _gradually_ (g), and _gradually_ then _abruptly_ (g+a).
at \(\tilde{p}_{\rm s(C)}\). Even if the second part is also filled by liquid, the extrusion from the whole capillary occurs _abruptly_ also at \(\tilde{p}_{\rm s(C)}\) because the barrier at the junction is simply a descending step. In the reverse process of compression, the intrusion into the first part occurs _abruptly_ at \(\tilde{p}_{\rm c(C)}\). Therefore, we will observe a pressure hysteresis which is similar to that in a single converging capillary in section II.3 (cf. Tab. 3(c) and 6(b)).
3. Figure 14(c) presents the _liquid extrusion_ in the region II with \(\theta_{\rm Y}=85^{\circ}\). Initially the whole capillary is filled ([F]) by liquid (see the bottom line of Fig. 10(c)) because the energy \(\Delta\omega_{\rm wl}<0\) is a descending step for \(90^{\circ}>\theta_{\rm Y}\). By increasing the magnitude of the negative applied pressure, the liquid extrusion in the second part (\(1\leq\tilde{z}\leq 2\)) occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) because the free energy maximum (MAX) exists. However, the meniscus is pinned at the junction by the energy barrier (BRR) \(\Delta\omega_{\rm wl}<0\), and the sawtooth-1 shaped capillary will be half-filled ([HF]) by metastable liquid. If the meniscus can be freed from the BRR by some perturbations, the extrusion from the second part would be followed by the extrusion from the first part. In the reverse process of compression, the intrusion into the second part occurs _abruptly_ at \(\tilde{p}_{\rm c(C)}\). Again, we will observe a pressure hysteresis which is similar to that in a single converging capillary in section II.3 (cf. Tab. 3(c) and 6(c)).
4. Figure 14(d) presents the _liquid extrusion_ in the region I with \(\theta_{\rm Y}=60^{\circ}\). Initially, the whole capillary is filled by liquid ([F]) (see the middle line of Fig. 10(c)). The extrusion in the second part occurs _abruptly_ at \(\tilde{p}_{\rm s(C)}\) as there exists a free-energy maximum (MAX). However, again, the meniscus is pinned at the junction due to the energy barrier (BRR) \(\Delta\omega_{\rm wl}<0\) and the capillary is half-filled ([HF]) by metastable liquid. In the reverse process of compression, the intrusion into the second part occurs _abruptly_ at \(\tilde{p}_{\rm c(C)}\) as the energy \(\Delta\omega_{\rm wl}<0\) acts as a descending step. Again, we will observe a pressure hysteresis (cf. Tab. 3(c) and 6(d)).
Therefore, a ratchet-like character appears in converging-converging sawtooth-1 shaped capillaries: full extrusion of liquid from the whole capillary (empty state) is possible but the reverse process of full intrusion (filled state) is not possible in Figs. 14(a) and (b) (Tabs. 6(a) and (b)), while full extrusion (empty state) is not possible but the reverse process of full intrusion (filled state) is possible in Figs. 14(c) and (d).
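For the sawtooth geometries, the only new ingredient in the landscape is the junction step of Eq. (59). The following minimal continuation of the earlier landscape sketch shows how this step enters; the numerical values of \(R_{\rm C}(0)\), \(R_{\rm D}(0)\), the surface tension symbol, and the conversion to reduced units are placeholders to be matched to Eqs. (27) and (59).

```python
# The pierced-coin wall at the junction enters the sawtooth-1 (CC) landscape of
# Eqs. (57)-(58) only as a constant step added to the second half of the landscape.
import numpy as np

def delta_omega_wl(R_C0, R_D0, gamma, theta_Y_deg):
    """Wall-liquid step of Eq. (59): positive (up step, a barrier for intrusion) for a
    hydrophobic wall and negative (down step) for a hydrophilic wall, since R_C(0) > R_D(0)."""
    return -np.pi * (R_C0**2 - R_D0**2) * gamma * np.cos(np.deg2rad(theta_Y_deg))

def omega_CC(z, omega_C, step):
    """Piecewise free energy of the sawtooth-1 (CC) capillary, Eqs. (57)-(58),
    with `step` the suitably scaled junction energy."""
    z = np.asarray(z, dtype=float)
    first = omega_C(np.clip(z, 0.0, 1.0))
    second = step + omega_C(1.0) + omega_C(np.clip(z - 1.0, 0.0, 1.0))
    return np.where(z < 1.0, first, second)
```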
#### iii.2.4 Diverging-diverging sawtooth-2 shaped capillary
The free energy landscapes of a diverging-diverging (DD) sawtooth-2 shaped capillary (Fig. 1(d)) in the regions I, II and III are presented in Figs. 15(a) to (d), where the free energy \(\tilde{\omega}_{\rm DD}\) is given by
\[\tilde{\omega}_{\rm DD}\left(\tilde{z}\right) = \tilde{\omega}_{\rm D}\left(\tilde{z}\right),\ \ \tilde{z}<1, \tag{60}\] \[= \Delta\tilde{\omega}_{\rm wl}+\tilde{\omega}_{\rm D}\left(\tilde{z}=1\right)+\tilde{\omega}_{\rm D}\left(\tilde{z}-1\right),\ \ 1\leq\tilde{z}\leq 2, \tag{61}\]
where \(\Delta\tilde{\omega}_{\rm wl}\) is the contribution from the vertical wall at the junction given by Eq. (59).
1. Figure 15(a) presents the free energy landscape of the _liquid intrusion_ in the region III of a diverging-diverging sawtooth-2 shaped capillary with \(\theta_{\rm Y}=120^{\circ}\). The intrusion in the first diverging part (\(0\leq\tilde{z}\leq 1\))
\begin{table}
\begin{tabular}{c c|c c} \hline \hline (a) & & (b) \\ Intrusion & Reverse & Extrusion & Reverse \\ \hline \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm c(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] & \(\tilde{p}_{\rm s(C)}\) [E] \\ \(\downarrow\)g & \(\uparrow\)g & \(\uparrow\)a & \(\downarrow\)a \\ \(\tilde{p}_{\rm s(C)}\) [HF] & \(\tilde{p}_{\rm s(C)}\) [HF] & \(\tilde{p}=0\) [HF] & \(\tilde{p}_{\rm c(C)}\) [HF] \\ \hline \hline (c) & & (d) & \\ Extrusion & Reverse & Extrusion & Reverse \\ \hline \(\tilde{p}_{\rm s(C)}\) [HF] & \(\tilde{p}_{\rm s(C)}\) [HF] & \(\tilde{p}_{\rm s(C)}\) [HF] & \(\tilde{p}_{\rm s(C)}\) [HF] \\ \(\uparrow\)a & \(\downarrow\)a & \(\uparrow\)a & \(\downarrow\)a \\ \(\tilde{p}=0\) [F] & \(\tilde{p}_{\rm c(C)}\) [F] & \(\tilde{p}=0\) [F] & \(\tilde{p}_{\rm c(C)}\) [F] \\ \hline \hline \end{tabular}
\end{table}
Table 6: Imbibition processes in a sawtooth-1 shaped capillary in Fig. 14, which connects the empty [E], the half-filled [HF], and the filled [F] state in Fig. 10(c).
occurs _abruptly_ at \(\tilde{p}_{\rm c(D)}\) as the free-energy maximum (MAX) exists in the first part. Then, the intrusion in the second diverging part (\(1\leq\tilde{z}\leq 2\)) could follow. However, the free energy barrier (BRR) \(\Delta\omega_{\rm wl}>0\) prohibits the intrusion into the second diverging part, and the capillary is half-filled ([HF]). In the reverse process of depression, the extrusion from the first part occurs _abruptly_ at \(\tilde{p}_{\rm s(D)}\). Therefore, we will observe a pressure hysteresis which is similar to that in a single diverging capillary in section II.3 (cf. Tab. 3(b) and 7(a)). Again, a small perturbation would help the liquid to spill out through the hole of the pierced-coin shaped wall at the junction.
2. Figure 15(b) presents the _liquid intrusion_ in the region II with \(\theta_{\rm Y}=95^{\circ}\). The landscape is very similar to that in Fig. 15(a) except for a smaller energy barrier \(\Delta\omega_{\rm wl}>0\) at the junction; the intrusion and the extrusion are almost the same as those in (a) (Tab. 7(b)).
3. Figure 15(c) presents the _liquid intrusion_ in the region II with \(\theta_{\rm Y}=85^{\circ}\). Initially, the whole capillary is empty ([E]) as illustrated in Fig. 10(d) for \(\theta_{\rm c(D)}<\theta_{\rm Y}\). By increasing the magnitude of the positive applied pressure, the liquid intrusion into the whole capillary ([F]) occurs _abruptly_ at \(\tilde{p}_{\rm c(D)}\) as the free energy maximum (MAX) exists. Now, the energy step \(\Delta\omega_{\rm wl}<0\) at the junction plays no role. In the reverse process of depression, the extrusion from the second part (\(1\leq\tilde{z}\leq 2\)) occurs _abruptly_ at \(\tilde{p}_{\rm s(D)}\), but the meniscus is pinned by the barrier (BRR) at the junction. The capillary is half-filled ([HF]) and the first part is filled by metastable liquid. Therefore, we will observe a complex pressure hysteresis which involves the completely empty ([E]), the completely filled ([F]), and the metastable half-filled ([HF]) states (Tab. 7(c)).
4. Figure 15(d) presents the _liquid extrusion_ in the region I with \(\theta_{\rm Y}=60^{\circ}\). Initially, the thermodynamically stable liquid occupies the whole capillary ([F]). The extrusion in the second part (\(1\leq\tilde{z}\leq 2\)) occurs _gradually_ as there exists a free-energy minimum (MIN). The extrusion in the second part is completed at \(\tilde{p}_{\rm c(D)}\). However, the subsequent extrusion from the first part (\(0\leq\tilde{z}\leq 1\)) cannot occur as the energy \(\Delta\omega_{\rm wl}<0\) acts as a barrier (BRR) and the meniscus is pinned at the junction. The first diverging part is filled by metastable liquid ([HF]). In the reverse process of compression, the intrusion into the second part occurs _gradually_ at \(\tilde{p}_{\rm s(D)}\). We will observe a complex pressure hysteresis (Tab. 7(d)).
Again, the ratchet-like character appears in a diverging-diverging sawtooth-2 shaped capillary: the full extrusion of liquid from the whole capillary (empty state) is possible but the reverse process of full intrusion (filled state) is not possible in Figs. 15(a) and (b) (Tabs. 7(a) and (b)), while the full extrusion (empty state) is not possible but the reverse full intrusion (filled state) is possible in Figs. 15(c) and (d) (Tabs. 7(c) and (d)). In these two sawtooth shaped capillaries (sawtooth-1 and sawtooth-2), the pierced-coin shaped vertical wall at the junction could act as a barrier in the free energy landscape because the wall would adsorb (hydrophilic) or repel (hydrophobic) liquid. Although we concentrated on the (static) thermodynamics and considered the imbibition process from the free energy landscape, this vertical wall will play a very complex role in hydrodynamics.
So far, we have concentrated on the thermodynamic aspect of imbibition in conical and double-conical capillaries, and have not considered the kinetic and hydrodynamic aspects. Once the imbibition starts and the steady flow is established,
Figure 15: Imbibition in a sawtooth-2 shaped capillary with \(\phi=10^{\circ}\) and \(\eta_{\rm C}=4.0\). (a) intrusion in the region III (\(\theta_{\rm Y}=120^{\circ}\)), (b) intrusion in the region II when the vertical wall at the junction is hydrophobic (\(\theta_{\rm Y}=95^{\circ}\)), (c) intrusion in the region II when the vertical wall is hydrophilic (\(\theta_{\rm Y}=85^{\circ}\)), and (d) extrusion in the region I (\(\theta_{\rm Y}=60^{\circ}\)).
\begin{table}
\begin{tabular}{c c|c c} \hline \hline (a) & & (b) \\ Intrusion & Reverse & Intrusion & Reverse \\ \hline \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm s(D)}\) [E] & \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm s(D)}\) [E] \\ \(\downarrow\)a & \(\uparrow\)a & \(\downarrow\)a & \(\uparrow\)a \\ \(\tilde{p}_{\rm c(D)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] \\ \hline \hline (c) & & & (d) \\ Intrusion & Reverse & Extrusion & Reverse \\ \hline \(\tilde{p}=0\) [E] & \(\tilde{p}_{\rm s(D)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] & \(\tilde{p}_{\rm c(D)}\) [HF] \\ \(\downarrow\)a & \(\uparrow\)a & \(\uparrow\)g & \(\downarrow\)g \\ \(\tilde{p}_{\rm c(D)}\) [F] & \(\tilde{p}_{\rm c(D)}\) [F] & \(\tilde{p}=0\) [F] & \(\tilde{p}_{\rm s(D)}\) [F] \\ \hline \hline \end{tabular}
\end{table}
Table 7: Imbibition process in a sawtooth-2 shaped capillary in Fig. 15, which connects the empty [E], the half-filled [HF], and the filled [F] state in Fig. 10(d).
we can consider the hydrodynamics. Our thermodynamic results predict the conditions for the onset of the spontaneous and the forced imbibition, and would therefore be the starting point for designing new experimental and numerical studies of hydrodynamics in realistic systems with structures similar to double conical ones, even though there have already been some reports [25; 26; 27; 28; 29; 30; 31; 32; 33].
To study the hydrodynamics in double conical capillaries, a theoretical approach assuming the fully-developed laminar flow following the Hagen-Poiseuille law, which has been used to study the steady flow in conical capillaries [20; 21; 44; 45; 46; 47; 59], would be possible. However, such an approach might not be reliable for double conical capillaries because the steady laminar flow may not be established [27] owing to the junction of the two conical capillaries. Furthermore, even in cylindrical and conical capillaries, the standard no-slip boundary condition [49] may not be applicable [25; 60; 61; 62] when the capillary radius becomes nanoscale. In addition, the dissipation at the inlet (entrance) or outlet (exit) [63; 64; 65; 66] may not be negligible. Nevertheless, our thermodynamic results would be the basis for the hydrodynamic studies of double conical capillaries.
## IV Conclusion
In this study, we considered the thermodynamics of spontaneous as well as forced imbibition of liquid in capillaries of double conical structures with hourglass, diamond, and sawtooth shapes, which are the prototypes of various natural as well as artificial micro- and nano-fluidic systems. We found that the spontaneous intrusion of liquid can occur when Young's contact angle is smaller than the critical Young's contact angle determined from the modified Laplace pressure. The critical contact angles for the onset of spontaneous imbibition of the converging and the diverging capillary belong to the hydrophobic and the hydrophilic regions, respectively, and they are determined by the tilt angle of the capillary wall. This asymmetry between the converging and the diverging capillary gives functionality not only to the single conical capillaries [20; 21] but also to the double conical capillaries.
The free energy landscape of forced imbibition is studied by assuming an imbibition pathway with a constant Young's contact angle. Even though the onset of forced imbibition is simply determined by the condition that the applied pressure overcomes the highest Laplace pressure at the inlet or the outlet, where the capillary is narrowest, the free energy landscape is complex and exhibits either a maximum or a minimum, which suggests either an abrupt imbibition with a pressure hysteresis or a gradual and continuous imbibition. Furthermore, because of the four combinations of the converging and the diverging capillary in the double conical structures, various scenarios of liquid intrusion and liquid extrusion, including the appearance of metastable filled and half-filled states, are suggested from the free energy landscapes. These findings would be beneficial in elucidating various imbibition processes in nature and in developing functional micro- and nano-capillaries with artificial double conical structures.
## Author Declaration
### Conflict of interest
The author declares no conflict of interest.
## Data availability statement
The data that support the findings of this study are available from the author upon reasonable request.
|
2309.16548 | Band mixing in the quantum anomalous Hall regime of twisted
semiconductor bilayers | Remarkable recent experiments have observed fractional quantum anomalous Hall
(FQAH) effects at zero field and unusually high temperatures in twisted
semiconductor bilayer $t$MoTe$_2$. Intriguing observations in these experiments
such as the absence of integer Hall effects at twist angles where a fractional
Hall effect is observed, do however remain unexplained. The experimental phase
diagram as a function of twist angle remains to be established. By
comprehensive numerical study, we show that band mixing has large qualitative
and quantitative effects on the energetics of competing states and their energy
gaps throughout the twist angle range $\theta\leq 4^\circ$. This lays the
ground for the detailed realistic study of a rich variety of strongly
correlated twisted semiconductor multilayers and an understanding of the phase
diagram of these fascinating systems. | Ahmed Abouelkomsan, Aidan P. Reddy, Liang Fu, Emil J. Bergholtz | 2023-09-28T15:59:36Z | http://arxiv.org/abs/2309.16548v2 | # Band mixing in the quantum anomalous Hall regime of twisted semiconductor bilayers
###### Abstract
Remarkable recent experiments have observed fractional quantum anomalous Hall (FQAH) effects at zero field and unusually high temperatures in the twisted semiconductor bilayer \(t\)MoTe\({}_{2}\). Intriguing observations in these experiments, such as the absence of integer Hall effects at twist angles where a fractional Hall effect is observed, do however remain unexplained. The experimental phase diagram as a function of twist angle remains to be established. By a comprehensive numerical study, we show that band mixing has large qualitative and quantitative effects on the energetics of competing states and their energy gaps throughout the twist angle range \(\theta\leq 4^{\circ}\). This lays the ground for the detailed realistic study of a rich variety of strongly correlated twisted semiconductor multilayers and an understanding of the phase diagram of these fascinating systems.
_Introduction._ Recent years have witnessed the advent of moire superlattices, made of atomically-thin transition metal dichalcogenides (TMDs), as highly tunable platforms where experiments have established a plethora of strongly correlated phases and phenomena, such as correlated insulators, ferromagnetism and quantum criticality [1; 2; 3; 4; 5; 6; 7].
Due to the emergence of narrow flat topological bands, moire superlattices have become an arena for an intricate interplay between topology and correlations. The hallmark example of topological phases is the quantum Hall effect [8; 9], both integer and fractional, occurring in Landau levels which arise when two-dimensional electrons are subject to strong magnetic fields [10]. There have been numerous theoretical and experimental studies of lattice analogues of the quantum Hall effect in the absence of external magnetic fields, known as the quantum anomalous Hall effect, as these might provide an experimentally feasible way to realize high temperature topological phases [11; 12; 13; 14; 15; 16; 17; 18; 19].
The integer quantum anomalous Hall effect (QAH) has already been experimentally observed in different moire systems [20; 21; 22]. In addition, local incompressibility measurements [23] on twisted bilayer graphene point towards the existence of fractional quantum Hall states that survive down to small magnetic fields (\(\sim 5\)T), below which topologically trivial charge density wave (CDW) states are found instead.
In a very recent exciting development, the first evidence of fractional quantum anomalous Hall (FQAH) states has been observed in twisted MoTe\({}_{2}\) based on both thermodynamic and transport measurements [24; 25; 26; 27]. Twisted transition metal dichalcogenide bilayers host topological moire bands with spin/valley contrasting Chern numbers [28]. At small twist angles, strong exchange and correlation effects in topological narrow bands [29] drive spontaneous spin/valley ferromagnetism and FQAH states [30; 31].
Motivated by the current state of the art, we provide a detailed study of the many-body interacting problem of AA stacked twisted TMD homobilayers at both integer (\(n=1\)) and fractional (\(n=1/3\) and \(n=2/3\)) hole doping of the underlying moire valence bands. Using unbiased exact diagonalization (ED) techniques that include multiple bands, we uncover significant new effects of band mixing on various topological and correlated phases; band mixing also enables a new electronic phase.
For integer filling \(n=1\), we demonstrate robust QAH states over a wide range of twist angles. However, for realistic interaction strength at dielectric constant \(\epsilon=5\), we find the interacting QAH energy gap as a function of twist angle to be strongly renormalized compared to the non-interacting case, highlighting ubiquitous correlation effects due to band mixing. Remarkably, we also find that strong interactions could drive spontaneous layer polarization (i.e. ferroelectricity) that is only enabled by multiband effects.
Moving to fractional fillings \(n=1/3\) and \(n=2/3\), where previous single-band projected ED studies [31; 32; 33; 34; 35] have predicted various correlated phases such as FQAH and CDW states, we further show that band mixing plays a crucial role in the stability of the aforementioned phases. In particular, band mixing weakens the FQAH state at \(n=1/3\) in favor of the competing CDW state, while at \(n=2/3\), FQAH states remain robustly present over a wide range of twist angles. We perform an entanglement spectrum analysis to characterize and distinguish FQAH and CDW states in finite-size systems.
_Model._ To model the \(\mathbf{K}\) valence bands of twisted bilayer TMDs, we use a continuum description [28]. The non-interacting Hamiltonian around a single valley (or spin due to spin-valley locking [36]) written in the basis of the two layers is given by
\[H_{\uparrow}=\begin{pmatrix}\frac{\hbar^{2}(-i\nabla-\kappa_{+})^{2}}{2m^{*}}+V_{+}(\mathbf{r})&T(\mathbf{r})\\ T^{\dagger}(\mathbf{r})&\frac{\hbar^{2}(-i\nabla-\kappa_{-})^{2}}{2m^{*}}+V_{-}(\mathbf{r})\end{pmatrix} \tag{1}\]
where \(V_{\pm}(\mathbf{r})\) captures the intralayer moire potential and \(T(\mathbf{r})\) represents the interlayer tunneling. Fourier expanding both functions to the lowest harmonics and taking
into account symmetry constraints restricts their forms to
\[\begin{split} V_{\pm}(\mathbf{r})&=2V\sum_{i=1,3,5}\cos(\mathbf{G}_{i}\cdot\mathbf{r}\pm\phi),\\ T(\mathbf{r})&=w(1+e^{-i\mathbf{G}_{2}\cdot\mathbf{r}}+e^{-i\mathbf{G}_{3}\cdot\mathbf{r}}).\end{split} \tag{2}\]
In the Hamiltonian \(H_{\uparrow}\), the corners of the moire Brillouin zone are chosen to be \(\mathbf{\kappa}_{\pm}=\frac{4\pi}{3a_{M}}(-\sqrt{3}/2,\mp 1/2)\) and the moire reciprocal lattice vectors are given by \(\mathbf{G}_{i}=\frac{4\pi}{\sqrt{3}a_{M}}(\cos[(i-1)\pi/3],\sin[(i-1)\pi/3])\) for \(i=1,\cdots,6\), where \(a_{M}=a_{0}/(2\sin(\theta/2))\) is the moire lattice constant for twist angle \(\theta\) and \(a_{0}\) is the lattice constant of the monolayer. The Hamiltonian around the opposite valley \(H_{\downarrow}\) is related to \(H_{\uparrow}\) by a time-reversal transformation.
Focusing on twisted MoTe\({}_{2}\) (tMoTe\({}_{2}\)), we take \(a_{0}=3.52\) Å and use the following parameters obtained from fitting the continuum model to DFT calculations [32],
\[(V,w,\phi,m^{*})=(11.2\;\mathrm{meV},-13.3\;\mathrm{meV},-91^{\circ},0.62m_{e}). \tag{3}\]
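To make the single-particle input of the many-body calculation concrete, a minimal plane-wave sketch of the single-valley continuum model of Eqs. (1)-(3) is given below. It is not taken from the paper: the twist angle, the plane-wave cutoff, and the sign convention of the kinetic term for the valence (hole) bands are assumptions to be checked against the original model, while the numerical parameters are those quoted in Eq. (3).

```python
# Minimal plane-wave diagonalization of the single-valley continuum model, Eqs. (1)-(2),
# with the fitted parameters of Eq. (3). Twist angle and basis cutoff are illustrative.
import numpy as np

HBAR2_OVER_2ME = 3.81  # hbar^2 / (2 m_e) in eV * Angstrom^2

V, w, phi = 11.2e-3, -13.3e-3, np.deg2rad(-91.0)  # eV, eV, rad (Eq. 3)
mstar = 0.62                                      # effective mass in units of m_e
a0, theta = 3.52, np.deg2rad(3.0)                 # monolayer lattice constant (A), twist angle
aM = a0 / (2 * np.sin(theta / 2))                 # moire lattice constant

# moire reciprocal lattice vectors G_1..G_6 and BZ corners kappa_+/-
G = np.array([[np.cos(i * np.pi / 3), np.sin(i * np.pi / 3)] for i in range(6)]) \
    * 4 * np.pi / (np.sqrt(3) * aM)
kappa = {+1: (4 * np.pi / (3 * aM)) * np.array([-np.sqrt(3) / 2, -0.5]),
         -1: (4 * np.pi / (3 * aM)) * np.array([-np.sqrt(3) / 2, +0.5])}

nmax = 4  # plane-wave basis g = m G_1 + n G_2 within a cutoff
gvecs = [m * G[0] + n * G[1] for m in range(-nmax, nmax + 1)
         for n in range(-nmax, nmax + 1)]
gvecs = [g for g in gvecs if np.linalg.norm(g) < nmax * np.linalg.norm(G[0])]
Ng = len(gvecs)

def bloch_hamiltonian(k):
    """(2 Ng) x (2 Ng) Hamiltonian at moire momentum k in the (layer x plane-wave) basis."""
    H = np.zeros((2 * Ng, 2 * Ng), dtype=complex)
    for a, ga in enumerate(gvecs):
        for l, sgn in enumerate((+1, -1)):        # layer 0 = top (+), layer 1 = bottom (-)
            # kinetic term of Eq. (1)
            H[l * Ng + a, l * Ng + a] = (HBAR2_OVER_2ME / mstar) * np.sum((k + ga - kappa[sgn]) ** 2)
        for b, gb in enumerate(gvecs):
            d = ga - gb
            for l, sgn in enumerate((+1, -1)):
                # intralayer potential V_+/- of Eq. (2): couples g -> g +/- G_i, i = 1, 3, 5
                for i in (0, 2, 4):
                    if np.allclose(d, G[i]):
                        H[l * Ng + a, l * Ng + b] += V * np.exp(1j * sgn * phi)
                    elif np.allclose(d, -G[i]):
                        H[l * Ng + a, l * Ng + b] += V * np.exp(-1j * sgn * phi)
            # interlayer tunneling T(r) of Eq. (2): momentum transfers 0, -G_2, -G_3
            if any(np.allclose(d, q) for q in (np.zeros(2), -G[1], -G[2])):
                H[a, Ng + b] += w                 # top-bottom block; conjugate block set below
    H[Ng:, :Ng] = H[:Ng, Ng:].conj().T
    return H

moire_bands = np.linalg.eigvalsh(bloch_hamiltonian(kappa[+1]))  # energies at kappa_+
```

Diagonalizing this matrix on a momentum grid yields the moire bands and Bloch states that enter the many-body Hamiltonian discussed next.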
We are concerned with the problem of interacting holes in the moire valence bands of \(H_{\uparrow(\downarrow)}\). The full momentum space many-body Hamiltonian reads
\[H=H_{0}+H_{\mathrm{int}}\]
with
\[\begin{split} H_{0}&=\sum_{\mathbf{k}\alpha\sigma}\epsilon_{\alpha\sigma}(\mathbf{k})c^{\dagger}_{\mathbf{k}\alpha\sigma}c_{\mathbf{k}\alpha\sigma}\\ H_{\mathrm{int}}&=\sum_{\begin{subarray}{c}\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}\mathbf{k}_{4}\\ \alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}\\ \sigma_{1}\sigma_{2}\end{subarray}}V^{\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}}_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}\mathbf{k}_{4}\sigma_{1}\sigma_{2}}c^{\dagger}_{\mathbf{k}_{1}\alpha_{1}\sigma_{1}}c^{\dagger}_{\mathbf{k}_{2}\alpha_{2}\sigma_{2}}c_{\mathbf{k}_{3}\alpha_{3}\sigma_{2}}c_{\mathbf{k}_{4}\alpha_{4}\sigma_{1}}\end{split} \tag{4}\]
where \(c^{\dagger}_{\mathbf{k}\alpha\sigma}(c_{\mathbf{k}\alpha\sigma})\) are creation (annihilation) operators of holes in a Bloch state \(\ket{\mathbf{k}\alpha\sigma}\), with \(\alpha\) and \(\sigma=\uparrow,\downarrow\) the band and spin indices, respectively. \(\epsilon_{\alpha\sigma}(\mathbf{k})\) are the single-particle energies and \(V_{\mathbf{k}_{1}\mathbf{k}_{2}\mathbf{k}_{3}\mathbf{k}_{4}\sigma_{1}\sigma_{2}}^{\alpha_{1}\alpha_{2}\alpha_{3}\alpha_{4}}=\langle\mathbf{k}_{1}\alpha_{1}\sigma_{1};\mathbf{k}_{2}\alpha_{2}\sigma_{2}|V|\mathbf{k}_{4}\alpha_{4}\sigma_{1};\mathbf{k}_{3}\alpha_{3}\sigma_{2}\rangle\) are the two-body interaction matrix elements between the different Bloch states. Our two-body interaction is the dual-gated screened Coulomb interaction \(V(\mathbf{q})=2\pi e^{2}\tanh(d_{g}|\mathbf{q}|)/(\epsilon|\mathbf{q}|)\) for dielectric constant \(\epsilon\), which controls the interaction strength. Unless stated otherwise, we choose the distance \(d_{g}\) from the sample to the gates to be \(d_{g}=5\,\mathrm{nm}\).
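For reference, a minimal helper for this gate-screened interaction is sketched below. The choice of units (eV and Angstrom) and the Coulomb constant are assumptions; in the actual ED the matrix elements of Eq. (4) additionally involve the Bloch-state form factors and a factor of the inverse system area.

```python
# Dual-gate screened Coulomb interaction entering Eq. (4); units are eV and Angstrom.
import numpy as np

E2 = 14.40            # e^2 / (4 pi eps_0) in eV * Angstrom (assumed unit convention)
EPS, D_G = 5.0, 50.0  # dielectric constant and gate distance (5 nm = 50 Angstrom)

def coulomb_q(q):
    """V(q) = 2 pi e^2 tanh(d_g |q|) / (eps |q|), with the finite q -> 0 limit 2 pi e^2 d_g / eps."""
    q = np.asarray(q, dtype=float)
    qs = np.maximum(q, 1e-12)
    return np.where(q < 1e-12,
                    2 * np.pi * E2 * D_G / EPS,
                    2 * np.pi * E2 * np.tanh(D_G * qs) / (EPS * qs))
```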
We diagonalize the Hamiltonian (4) on different equal-aspect-ratio clusters [37], keeping a finite number \(N_{b}\) of valence bands. By the variational principle, our calculations provide an upper bound on the exact ground state energy of Eq. 4 that becomes tighter upon increasing \(N_{b}\). We utilize generic twisted boundary conditions, parameterized by \((\theta_{1}\in[0,2\pi),\theta_{2}\in[0,2\pi))\) along the two axes of the cluster. Robust ferromagnetism across a broad range of filling factors \(n\leq 1\) has been observed experimentally [24; 25] and in previous numerical studies [30; 32]. With this motivation, we perform all calculations in the fully spin/valley polarized sector. However, the presence of full spin/valley polarization in ground and low-lying excited states throughout the entire twist angle range studied in this work should not be taken for granted and requires further study.
_Results_. We begin our analysis by investigating integer hole filling \(n=1\), where previous studies have mainly used self-consistent Hartree-Fock methods [38; 39]. For weak interaction strength \(\epsilon^{-1}=0.1\), we find the many-body ground state to be a QAH state for twist angles spanning from \(\theta=2.0^{\circ}\) to \(\theta=4.0^{\circ}\). The interacting QAH state is smoothly connected to the non-interacting limit with holes completely filling the first valence band, which has a Chern number \(|C|=1\). The many-body spectrum shown in Fig. 1(c) at \(\theta=2.5^{\circ}\) features a single many-body ground state at total momentum \(K=[0,0]\), as expected from a Slater determinant state \(\ket{\psi}_{\mathrm{GS}}=\prod_{\mathbf{k}_{i}}c^{\dagger}_{\mathbf{k}_{i}\alpha\sigma}\ket{0}\) for a certain band \(\alpha\) and spin \(\sigma\). However, as shown in Fig. 1(a), the energy gap of the interacting QAH state \(E_{\mathrm{gap}}=E_{2}-E_{1}\) generically differs from the non-interacting energy gap and can be either smaller or larger depending on the twist angle.
As the interaction strength is increased to the realistic
value \(\epsilon^{-1}=0.2\), we observe the appearance of a ferroelectric phase which exhibits strong layer polarization. The signature of ferroelectricity is the existence of two degenerate many-body ground states (Fig. 1(d)), which are a manifestation of spontaneous \(C_{2y}\) symmetry breaking and realize two layer-polarized states on either the top or the bottom layer.
The ferroelectric phase dominates over the QAH for smaller twist angles. In Fig. 1(b), we plot \(E_{\rm gap}=E_{3}-E_{2}\) as a function of twist angle for realistic interaction strength \(\epsilon^{-1}=0.2\) and find that it is non-vanishing up until \(\theta\approx 2.8^{\circ}\), after which, the system undergoes a transition to a QAH state.
Similar to [40], which focused on smaller twist angles, the emergence of layer polarization can be intuitively understood from a real-space picture also in the regime that we consider. Here, the first three valence bands have a total Chern number \(C=0\) and admit a real-space description in terms of three Wannier orbitals which are maximally localized on the high symmetry positions, \(R_{X}^{M}\), \(R_{M}^{X}\) and \(R_{M}^{M}\), forming a triangular lattice of three sublattices [39]. \(R_{\beta}^{\alpha}\) denotes the atomic position in the moire unit cell where the \(\alpha\) atom (metal \(M\) or chalcogen \(X\)) of the top layer is aligned with the \(\beta\) atom of the bottom layer. The two Wannier orbitals centered on the \(R_{X}^{M}\) and \(R_{M}^{X}\) sites are mainly localized in the top and bottom layers respectively, while the orbital at the \(R_{M}^{M}\) site carries equal weight in both layers. When the interaction strength is strong, it becomes energetically favorable for the holes to minimize repulsion by localizing on one of the two layer-polarized sublattices.
The emergence of the ferroelectric phase strongly affects the phase space of the QAH state. For realistic interaction strength \(\epsilon^{-1}=0.2\), the QAH state now appears at \(\theta>2.8^{\circ}\). Interestingly, its topological gap (within the fully polarized sector) increases monotonically (see Fig. 1(b)) with the twist angle, at least up to \(\theta=4^{\circ}\). This contrasts sharply with the noninteracting case where the band gap \(\Delta_{12}\) decreases in this angle range. For comparison, we also performed ED calculations using a different set of continuum model parameters from the literature [33], and found that the QAH phase at \(n=1\) only appears in a narrow twist angle range for \(\epsilon^{-1}=0.1\), and is entirely absent for \(\epsilon^{-1}=0.2\) [37].
Next, we consider fractional hole fillings \(n=1/3\) and \(n=2/3\). For \(n=1/3\), single-band exact diagonalization studies [34; 32; 35] have revealed an intriguing interplay between FQAH and CDW states as a function of twist angle while more robust FQAH states for a wider range of twist angles were found at \(n=2/3\).
Figure 2: ED results at twist angle \(\theta=2.5^{\circ}\) and \(\epsilon^{-1}=0.2\). (a) and (c) show the many-body spectrum in the CDW and FQAH phases respectively while (b) and (d) show the PES and the projected (onto the first valence band) PES respectively obtained by keeping \(N_{A}=2\) particles. There are 18 states and 42 states below the line in (b) and (d), consistent with a CDW and a FQAH state respectively. Calculations were done on the 12 site cluster [37] using periodic boundary conditions \((\theta_{1},\theta_{2})=(0,0)\) in (a)-(b) and anti-periodic boundary conditions, \((\theta_{1},\theta_{2})=(\pi,\pi)\) in (c)-(d). The index \(\kappa\) labels the different momentum points.
Figure 3: \(E_{\rm gap}=E_{4}-E_{3}\) as a function of twist angle \(\theta\) at \(n=1/3\) and \(n=2/3\) for \(\epsilon^{-1}=0.1\) and \(\epsilon^{-1}=0.2\). Calculations were done on the 12 site cluster [37] using periodic boundary conditions \((\theta_{1},\theta_{2})=(0,0)\) in (a)-(b) and anti-periodic boundary conditions, \((\theta_{1},\theta_{2})=(\pi,\pi)\) in (c)-(d).
To contrast the two fillings, we focus first on twist angle \(\theta=2.5^{\circ}\) and \(\epsilon^{-1}=0.2\). As shown in Fig. 2(a) and 2(c), the many-body spectrum in both cases displays three quasi-degenerate ground states at three distinct total momentum sectors. In this cluster geometry, the ground state momentum sectors expected of FQAH and CDW states happen to be identical and, therefore, distinguishing between the two candidate phases requires further analysis.
In order to pin-point the underlying phase, we calculate the particle entanglement spectrum (PES) [41; 42] of the quasi-degenerate states. The density matrix is defined as \(\rho=\frac{1}{3}\sum_{i=1,2,3}\left|\Psi_{i}\right\rangle\left\langle\Psi_{i}\right|\) where \(\left|\Psi_{i}\right\rangle\) (\(i=1,2,3\)) denotes the three quasi-degenerate ground states. We perform a cut in the particle space corresponding to _minority_ particles in a single band, holes for \(n=1/3\) and electrons for \(n=2/3\). By tracing out \(N_{B}\) particles and keeping \(N_{A}\) ones, the entanglement spectrum consists of the levels \(\{\xi_{i}\}\) defined through the eigenvalues \(e^{-\xi_{i}}\) of the reduced density matrix \(\rho_{A}=\text{Tr}_{N_{B}}\rho\). In order to trace out electrons from the first valence band and avoid additional entanglement structure from the filled higher bands, we calculate the entanglement spectrum \(\{\widetilde{\xi}_{i}\}\) obtained from projecting the density matrix \(\rho\) onto the first valence band.
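In practice, once the reduced density matrix is available, reading off the PES amounts to the few lines sketched below; the construction of the particle-cut reduced density matrix itself depends on the second-quantized ED basis and is treated here as a given input, and the helper names are illustrative only.

```python
# Reading off particle-entanglement-spectrum levels from a reduced density matrix rho_A
# obtained by tracing out N_B particles of rho = (1/3) sum_i |Psi_i><Psi_i|.
import numpy as np

def pes_levels(rho_A, tol=1e-12):
    """Entanglement levels xi = -ln(lambda), with lambda the eigenvalues of rho_A."""
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > tol]              # drop numerically zero weights
    return np.sort(-np.log(lam))

def count_below_gap(xi, gap_threshold):
    """Number of PES levels below a chosen entanglement gap; this count is compared with
    the (1,3)-admissible counting expected for a Laughlin-like FQAH state or with the
    smaller counting of a CDW."""
    return int(np.sum(xi < gap_threshold))
```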
As evident from Fig. 2(b) and 2(d), the PES exhibits a well-separated low-lying spectrum at both fillings. For \(n=2/3\), we find the number of low-lying states to be consistent with a Laughlin-like FQAH state, which satisfies the (1,3) counting rule for the number of admissible configurations (at most 1 particle in any 3 consecutive orbitals [43; 44; 45]). In contrast, the counting of the fewer low-lying states at \(n=1/3\) is consistent with a CDW state [46].
Moreover, we find the CDW state at \(n=1/3\) and the FQAH state at \(n=2/3\) to exist for a wide range of twist angles. In Fig. 3, we plot the energy gap \(E_{\text{gap}}=E_{4}-E_{3}\) as a function of the twist angle \(\theta\) for both \(\epsilon^{-1}=0.1\) and \(\epsilon^{-1}=0.2\). In addition to signatures of a weak FQAH state at \(n=1/3\) for \(\theta<2.0^{\circ}\)[37], we observe robust CDW and FQAH states at fillings \(n=1/3\) and \(n=2/3\) respectively for \(\theta\geq 2.0^{\circ}\), using the realistic interaction strength \(\epsilon^{-1}=0.2\). For weak interaction \(\epsilon^{-1}=0.1\), we find the system at \(n=2/3\) to become metallic for \(\theta\geq 3.5^{\circ}\).
We also study the stability of the discussed phases for various twisted boundary conditions. While we find the phases at \(n=1\) and \(n=1/3\) to be insensitive to the choice of boundary conditions, we observe significant effects of twisting the boundary conditions at \(n=2/3\); see Fig. 4, where we fix \(\theta=3.6^{\circ}\). Although FQAH states should be insensitive to changes in the boundary conditions in the thermodynamic limit, these states may exhibit a strong dependence in the small systems available to exact diagonalization. At zero flux (standard periodic boundary conditions) we find an apparent two-fold ground state degeneracy, seemingly at odds with FQAH expectations, see Fig. 4(a). Crucially, however, we notice that a three-fold set of states originating from the predicted FQAH momenta transform into each other under twisting of the boundary conditions while evolving essentially separately from the rest of the spectrum, as shown in Fig. 4(b). Moreover, with anti-periodic boundary conditions in one direction (\(\theta_{2}=\pi\)), these three states are separated from the rest of the spectrum and their corresponding PES shows a well-developed entanglement gap with the predicted FQAH counting below the gap, as shown in Fig. 4(d). In fact, this non-trivial counting persists if we consider the PES resulting from only the two low-lying states at \((\theta_{1},\theta_{2})=(0,0)\), as evident from Fig. 4(c). We take this as strong evidence for an \(n=2/3\) FQAH state in the large system limit.
_Discussion_. Inspired by recent experiments we have shown that multiband effects are crucial for understanding integer and fractional QAH states as well as their competitors at \(n\leq 1\) in twisted semiconductor bilayer \(t\)MoTe\({}_{2}\).
Our results have several important implications. First, it follows that the optimum twist angle for QAH states is filling dependent, i.e. there is no unique magic angle for all fillings. For instance, at twist angles \(\theta\lesssim 2.8^{\circ}\)
Figure 4: Effect of twisting the boundary conditions for \(\theta=3.6^{\circ}\), \(n=2/3\), \(N_{b}=2\) and \(\epsilon^{-1}=0.2\). (a) The many-body spectrum with periodic boundary conditions, \((\theta_{1},\theta_{2})=(0,0)\). (b) The evolution of the many-body spectrum upon twisting the boundary condition in one direction (\(\theta_{2}\)) and keeping the other direction unchanged (\(\theta_{1}=0\)). The colors blue, green and orange label the three different momentum sectors where FQAH are expected while the color red labels the rest of the sectors. (c) The projected particle entanglement spectrum at \((\theta_{1},\theta_{2})=(0,0)\) calculated from the two quasi-degenerate ground states (see (a)). (d) The projected particle entanglement spectrum at \((\theta_{1},\theta_{2})=(0,\pi)\) calculated from the three quasi-degenerate ground states. \(N_{A}=2\) particles are kept in (c) and (d). There are 42 states below the line in (c) and (d), consistent with a FQAH state.
an integer QAH state is missing at \(n=1\) (Fig. 1(b)) while a fractional FQAH state at \(n=2/3\) prevails (Fig. 3(d)). Second, a new intriguing phase is enabled by band mixing: a spontaneously layer-polarized state at \(n=1\) (Fig. 1(b),(d)). Third, in addition to the particle-hole symmetry breaking within a band, multiband effects provide a second key ingredient in understanding why twisted bilayer \(t\)MoTe\({}_{2}\) may exhibit the FQAH effect at \(n=2/3\) but not at \(n=1/3\) (Figs. 2 and 3). Fourth, the theoretical multiband effects uncovered here, and their relation to experimental results, provide important means of distinguishing between the greatly varying available sets of model parameters obtained via first principles calculations.
We expect that these effects of multiband mixing will carry over _mutatis mutandis_ to related setups. This thus provides key input for future studies of a rich variety of strongly correlated twisted semiconductor multilayers.
_Note added_. During the preparation of this work, a related study of twisted bilayer MoTe\({}_{2}\) reported two-band ED results at \(n=2/3\)[47]. Upon submission a related preprint appeared [48].
_Acknowledgements_. A.A. and E.J.B. were supported by the Swedish Research Council (VR, grant 2018-00313), the Wallenberg Academy Fellows program (2018.0460) and the Goran Gustafsson Foundation for Research in Natural Sciences and Medicine. A.A. is also supported by the Wallenberg scholarship program (2022.0348). The work at Massachusetts Institute of Technology is supported by the U.S. Army DEVCOM ARL Army Research Office through the MIT Institute for Soldier Nanotechnologies under Cooperative Agreement number W911NF-23-2-0121 and the Simons Foundation.
|
2310.20199 | In Search of Lost Online Test-time Adaptation: A Survey | This article presents a comprehensive survey of online test-time adaptation
(OTTA), focusing on effectively adapting machine learning models to
distributionally different target data upon batch arrival. Despite the recent
proliferation of OTTA methods, conclusions from previous studies are
inconsistent due to ambiguous settings, outdated backbones, and inconsistent
hyperparameter tuning, which obscure core challenges and hinder
reproducibility. To enhance clarity and enable rigorous comparison, we classify
OTTA techniques into three primary categories and benchmark them using a modern
backbone, the Vision Transformer (ViT). Our benchmarks cover conventional
corrupted datasets such as CIFAR-10/100-C and ImageNet-C, as well as real-world
shifts represented by CIFAR-10.1, OfficeHome, and CIFAR-10-Warehouse. The
CIFAR-10-Warehouse dataset includes a variety of variations from different
search engines and synthesized data generated through diffusion models. To
measure efficiency in online scenarios, we introduce novel evaluation metrics,
including GFLOPs, wall clock time, and GPU memory usage, providing a clearer
picture of the trade-offs between adaptation accuracy and computational
overhead. Our findings diverge from existing literature, revealing that (1)
transformers demonstrate heightened resilience to diverse domain shifts, (2)
the efficacy of many OTTA methods relies on large batch sizes, and (3)
stability in optimization and resistance to perturbations are crucial during
adaptation, particularly when the batch size is 1. Based on these insights, we
highlight promising directions for future research. Our benchmarking toolkit
and source code are available at https://github.com/Jo-wang/OTTA_ViT_survey. | Zixin Wang, Yadan Luo, Liang Zheng, Zhuoxiao Chen, Sen Wang, Zi Huang | 2023-10-31T05:47:33Z | http://arxiv.org/abs/2310.20199v3 | # In Search of Lost Online Test-time Adaptation: A Survey
###### Abstract
In this paper, we present a comprehensive survey on online test-time adaptation (OTTA), a paradigm focused on adapting machine learning models to novel data distributions upon batch arrival. Despite the proliferation of OTTA methods recently, the field is mired in issues like ambiguous settings, antiquated backbones, and inconsistent hyperparameter tuning, obfuscating the real challenges and making reproducibility elusive. For clarity and a rigorous comparison, we classify OTTA techniques into three primary categories and subject them to benchmarks using the potent Vision Transformer (ViT) backbone to discover genuinely effective strategies. Our benchmarks span not only conventional corrupted datasets such as CIFAR-10/100-C and ImageNet-C but also real-world shifts embodied in CIFAR-10.1 and CIFAR-10-Warehouse, encapsulating variations across search engines and synthesized data by diffusion models. To gauge efficiency in online scenarios, we introduce novel evaluation metrics, inclusive of FLOPs, shedding light on the trade-offs between adaptation accuracy and computational overhead. Our findings diverge from existing literature, indicating: (1) transformers exhibit heightened resilience to diverse domain shifts, (2) the efficacy of many OTTA methods hinges on ample batch sizes, and (3) stability in optimization and resistance to perturbations are critical during adaptation, especially when the batch size is 1. Motivated by these insights, we pointed out promising directions for future research. The source code will be made available.
Keywords: Online Test-time Adaptation, Transfer Learning
## 1 Introduction
The presence of dataset shift (Quinonero-Candela et al. 2008) poses a notable challenge for machine learning. Models often experience a significant performance drop when they confront test data characterized by conspicuous _distribution differences_ from training. Such differences might come from changes in style and lighting conditions, and various forms of corruption, making test data deviate from the data upon which these models were initially trained. To mitigate the performance degradation during inference, test-time
Figure 1: An illustrative figure of our survey. We comprehensively summarized the advanced OTTA techniques under the categories of optimization, model, and data. The main research questions are: Is OTTA still effective when (1) integrated with Vision Transformer? (2) encountered with real-world domain shift?
tion (TTA) has emerged as a promising solution. TTA aims to rectify the dataset shift issue by acclimating the model to novel distributions using unlabeled test data (Liang et al., 2023). Different from the traditional paradigm of domain adaptation (Ganin and Lempitsky, 2015; Wang and Deng, 2018), TTA does not require access to source data for distribution alignment. Commonly used strategies in TTA include unsupervised proxy objectives, spanning techniques such as pseudo-labeling Liang et al. (2020), graph-based learning (Luo et al., 2023), and contrastive learning Chen et al. (2022), applied across the **entire expanse** of test data through multiple training epochs to enhance model accuracy. Nevertheless, requiring access to the complete test set may not always align with practical use. In many applications such as object detection, adaptation is restricted to using only the **current test batch** processed in a streaming manner. Such operational restrictions make it untenable for TTA to require the full test set tenable.
In this study, our focus is on a specific line of TTA methods, _i.e._, online test-time adaptation (OTTA), which aims to accommodate _real-time changes_ in the test data distribution. We provide a comprehensive overview of existing OTTA studies and evaluate the efficiency and effectiveness of its individual components. To facilitate a structured comprehension of the OTTA landscape, below we categorize existing approaches into three groups: data-centric OTTA, model-based OTTA, and optimization-oriented OTTA.
* **Data-centric OTTA** maximizes prediction consistency across diversified test data. Diversification strategies include leveraging auxiliary data, improved data augmentation methods, diffusion techniques, and maintaining a memory queue of test samples.
* **Model-based OTTA** makes changes to the original backbone, such as modifying specific layers or their mechanisms, adding supplementary branches, and incorporating prompts.
* **Optimization-oriented OTTA** focuses on the optimization recipe itself. Examples are designing new loss functions, updating BatchNorm layers during testing, pseudo-labeling strategies, teacher-student frameworks, and contrastive learning-based approaches (a minimal sketch of this category is given after this list).
It is potentially useful to combine methods from different categories for further improvement. An in-depth analysis of this strategy is presented in Section 3.
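As a concrete illustration of the optimization-oriented category, the sketch below adapts a model online by minimizing prediction entropy on each incoming batch while updating only the affine parameters of normalization layers. This is a generic sketch rather than any specific published method; the model, the test stream, and the choice of layers and hyperparameters are placeholders.

```python
# Minimal online test-time adaptation loop: entropy minimization on each test batch,
# updating only the scale/shift parameters of normalization layers.
import torch
import torch.nn as nn

def collect_norm_params(model):
    """Enable gradients only for normalization-layer affine parameters."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.LayerNorm)):
            for p in (m.weight, m.bias):
                if p is not None:
                    p.requires_grad_(True)
                    params.append(p)
    return params

def entropy(logits):
    """Mean Shannon entropy of the softmax predictions."""
    log_p = logits.log_softmax(dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def adapt_step(model, x, optimizer):
    """One online step: forward on the current batch only, update, return predictions."""
    logits = model(x)
    loss = entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()

# Usage sketch (test_stream yields one unlabeled batch at a time):
# model.requires_grad_(False)
# optimizer = torch.optim.SGD(collect_norm_params(model), lr=1e-3, momentum=0.9)
# for x in test_stream:
#     preds = adapt_step(model, x, optimizer).argmax(dim=1)
```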
**Differences from an existing survey.** Liang et al. (2023) provides a comprehensive overview of the vast topic of test-time adaptation (TTA), discussing TTAs in diverse configurations and their applicability in vision, natural language processing (NLP), and graph data analysis. One limitation is that the survey does not provide experimental comparisons of existing methods. In comparison, our survey focuses on online TTA methods and provides interesting insights from experimental comparisons with consideration in hyperparameter selection and backbone influence (Zhao et al., 2023).
**Contributions.** In particular, with the ascent of vision transformer (ViT) architectures in various machine learning domains, this survey studies _whether OTTA strategies developed for ResNet structures maintain their effectiveness after being integrated into ViT instead_. To this end, we benchmark seven state-of-the-art OTTA algorithms on a wide range of distribution shifts under a new set of evaluation metrics. Below, we summarize the key contributions of this survey.
* **[A focused OTTA survey]** To the best of our knowledge, this is the first focused survey on online test-time adaptation, which provides a thorough understanding of three main working mechanisms. Wide experimental investigations are conducted in a fair comparison setting.
* **[Benchmarking OTTA strategies with ViT]** We reimplemented representative OTTA baselines with the ViT architecture and evaluated their performance on five benchmark datasets. We derive a set of replacement rules that adapt existing OTTA methods to accommodate the new backbone.
* **[Both accuracy and efficiency as evaluation metrics]** Apart from the traditional recognition accuracy metric, we further provide insights into various facets of computational efficiency via giga floating-point operations (GFLOPs). These metrics are important in real-time streaming applications.
* **[Real-world testbeds]** While existing literature extensively explores OTTA methods on corruption datasets like CIFAR-10-C, CIFAR-100-C, and ImageNet-C, our interest lies more in their capability to navigate real-world dataset shifts. Specifically, we assess OTTA performance on CIFAR-10-Warehouse, a newly introduced, expansive test set of CIFAR-10. Our empirical analysis and evaluation lead to conclusions that differ from the findings in the existing survey (Liang et al., 2023).
This work aims to summarize the numerous OTTA methods with the aforementioned three categorization criteria and analyze each approach using empirical results. Moreover, to assess real-world potential, we conduct comparative experiments to explore the portability, robustness, and environment sensitivity of the OTTA components. We expect this survey to offer a systematic perspective in navigating OTTA's intricate and diverse settings, enabling a clear identification of
effective components. We also present new challenges as potential future research directions.
**Organization of the survey.** The rest of this survey will be organized as follows. Section 2 presents the problem definition and introduces widely used datasets, metrics and applications. Using the taxonomy shown in Fig. 2, Section 3 provides a comprehensive review of existing OTTA methods. Then, using transformer backbones, Section 4 empirically analyzes seven state-of-the-art methods based on a set of new evaluation metrics on both corrupted and real-world distribution shifts. We conclude the survey in Section 6.
## 2 Problem Overview
Online Test-time Adaptation (OTTA), with its online and on-time characteristics, represents a critical line of methods in test-time adaptation. This section provides a formal definition of OTTA and delves into its fundamental attributes. Further, we explore widely used datasets and evaluation methods and examine the potential application scenarios of OTTA. To ensure a clear understanding, a comparative analysis is undertaken to differentiate OTTA from other settings that bear resemblance.
### Problem Definition
In OTTA, we assume access to a trained source model and adapt the model at test time over the test input before making the final prediction. The given source model \(f_{\theta^{S}}\) parameterized by \(\theta^{S}\) is pre-trained on a labeled source domain \(\mathcal{D}_{S}=\{(\mathbf{x}^{S},\mathbf{y}^{S})\}\), which is formed by i.i.d. sampling from the source distribution \(p_{S}\). Unlabeled test data come in batches: \(\mathcal{D}_{T}=\{\mathbf{x}_{1}^{T},\mathbf{x}_{2}^{T},\ldots,\mathbf{x}_{t}^{T},\ldots,\mathbf{x}_{n}^{T}\}\), where \(t\) indicates the time step. Test data often come from one or multiple different distributions \((\mathbf{x}_{t}^{T},\mathbf{y}_{t}^{T})\sim p_{T}\), where \(p_{S}(\mathbf{x},y)\neq p_{T}(\mathbf{x},y)\) under the covariate shift assumption (Huang et al., 2006). During TTA, we update the model parameters at each time step, resulting in an adapted model \(f_{\theta^{t}}\). The pre-trained model is expected to retain its original architecture, including the backbone, without modifying its layers or introducing new model branches during training. Additionally, the model is restricted to observing the test data only once and must produce predictions promptly. By refining the definition of OTTA in this manner, we aim to minimize limitations associated with its application in real-world settings. Note that following adaptation to a specific domain, the model is reset to its original pre-trained state, _i.e._, \(f_{\theta^{S}}\rightarrow f_{\theta^{0}}\rightarrow f_{\theta^{S}}\rightarrow f_{\theta^{1}}\rightarrow f_{\theta^{S}}\rightarrow\cdots\rightarrow f_{\theta^{t}}\).
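To make the protocol concrete, the following is a minimal, PyTorch-style sketch of the OTTA loop described above; the function names (`run_otta`, `adapt_and_predict`) are illustrative placeholders rather than part of any specific method.

```python
import copy

def run_otta(source_model, domain_loaders, adapt_and_predict):
    """Illustrative OTTA protocol: adapt online on each incoming batch, predict
    once per batch, and reset to the source weights before the next domain."""
    source_state = copy.deepcopy(source_model.state_dict())
    predictions = []
    for loader in domain_loaders:                     # one loader per test domain
        source_model.load_state_dict(source_state)    # reset back to f_theta^S
        for x_t in loader:                            # test batches arrive in order
            predictions.append(adapt_and_predict(source_model, x_t))
    return predictions
```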
Due to the covariate shift between the source and test data, adapting the source model in the absence of the source data poses a significant challenge. Since there is no way to align these two sets directly, one may ask: what kind of optimization objective could work under such a limited environment? Meanwhile, as the test data arrive at a fixed pace, how many samples are needed for effective test-time adaptation? Will adaptation still work in the new era of backbones (e.g., ViTs), or does "test-time adaptation" become a false proposition as backbones are upgraded? With these concerns, we unfold the OTTA methods by their datasets, evaluations, and applications, and decouple their strategies, aiming to discover which components still work, and why, when existing backbones are updated.
### Datasets
This survey mainly summarizes datasets in image classification, a fundamental problem in computer vision, while recognizing that OTTA has been applied to many downstream tasks (Ma et al., 2022; Ding et al., 2023; Saltori et al., 2022). Testbeds in OTTA usually seek to facilitate adaptation from natural images to corrupted ones. The latter are created by perturbations such as Gaussian noise and defocus blur. Despite the inclusion of corruptions at varying severities, these synthetically induced corruptions may not sufficiently mirror the authentic domain shift encountered in real-world scenarios. In our work, we use both corruption and real-world shift datasets, summarized in Table 1. Details of each testbed are described below.
* **CIFAR-10-C** is derived from CIFAR-10, a standard benchmark for image classification containing 60,000 color images, each of 32x32 pixels, spanning 10 distinct classes.
Figure 2: Taxonomy of existing OTTA methods. The categories, i.e., optimization-based, data-based, and model-based, inform three mainstream working mechanisms. Additional smaller categories are based on prompt tuning and input adaptation.
CIFAR-10-C retains the class structure of CIFAR-10 but incorporates 15 diverse corruption styles, with severities ranging from levels 1 to 5. This corrupted variant aims to simulate realistic image distortions or corruptions that might arise during processes like image acquisition, storage, or transmission.
* **CIFAR-100-C** has 60,000 colored images with dimensions 32x32 pixels, uniformly distributed across 100 unique classes, resulting in 600 images per class. The CIFAR100 Corrupted dataset, analogous to CIFAR10 Corrupted, integrates artificial corruptions into the canonical CIFAR100 images.
* **ImageNet-C** is a corrupted version of ImageNet 1k (Krizhevsky et al., 2012).
While testbeds with corruptions have been widely used, they represent artificially created domain differences that may not fully capture the complexities of real-world scenarios. In fact, experimental benchmarking in this context is still lacking. Hence, this paper also evaluates OTTA on real-world test sets, including CIFAR-10-Warehouse and CIFAR-10.1, to address this limitation.
* **CIFAR-10.1**(Recht et al., 2018) is an out-of-distribution test set with the same label space as CIFAR10. It contains roughly 2,000 images sampled from Tiny Image dataset (Yang et al., 2016).
* **CIFAR-10-Warehouse** (CIFAR-10-W) integrates images from both diffusion models, specifically Stable-diffusion-2-1 (Rombach et al., 2022), and targeted keyword searches across seven popular search engines. Comprising 37 generated and 143 real-world datasets, each subset has 300 to 8,000 images, revealing noticeable within-class variations across different search criteria.
### Evaluation
For online test-time adaptation, efficiency is an important consideration besides accuracy. This survey uses the following evaluation metrics.
**Mean error (mErr)** is one of the most commonly used metrics to assess model accuracy. It computes the average error rate across all corruption types or domains, irrespective of the class to which the data points belong. The formula for calculating the mean error is given as:
\[\mathrm{mErr}=\frac{1}{n}\sum_{i=1}^{n}\left(1-\mathrm{acc}_{i}\right), \tag{1}\]
where \(n\) is the number of test domains, and \(\mathrm{acc}_{i}\) represents the model accuracy on the \(i\)-th domain. While useful in most cases, this metric does not provide class-specific insights, which might be important in some applications.
**Floating-point operations (FLOPs)** quantifies the number of floating-point calculations a model performs to process the test data. A model with lower FLOPs is more computationally efficient.
**Number of updated parameters** provides insights into the complexity of the adaptation process. A model that requires a large number of parameters may not be practical for online adaptation.
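As a small illustration of how these metrics can be computed in practice, the sketch below assumes per-domain accuracies are already available and that the adapted model is a standard PyTorch module; the helper names are ours.

```python
def mean_error(domain_accuracies):
    """Mean error over n test domains, Eq. (1): mErr = (1/n) * sum_i (1 - acc_i)."""
    n = len(domain_accuracies)
    return sum(1.0 - acc for acc in domain_accuracies) / n

def count_updated_params(model):
    """Number of parameters that are actually optimized during adaptation."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)
```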
### Applications
OTTA is applied in a wide range of tasks such as autonomous vehicle detection (Hegde et al., 2021), pose estimation (Lee et al., 2023), video depth prediction (Liu et al., 2023), and frame interpolation (Choi et al., 2021). Regarding medical diagnosis, there is also a high demand for diagnostic models trained on a specific group of patient data to be adapted to new data (Ma et al., 2022; Wang et al., 2022; Saltori et al., 2022).
### Relationship with Other Tasks
**Offline test-time adaptation (TTA)**, also called source-free domain adaptation, is a technique to adapt a source pre-trained model to the target (_i.e._, test) set. This task assumes that the model can access the entire dataset. This differs from online test-time adaptation, where the test data is given in batches.
**Continual TTA** While OTTA requires resetting the adapted model back to the source pre-trained one for every distinct corruption type, continual TTA (_e.g._,
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Datasets & \# domains & \# test images & \# classes & corrupted? & image size \\ \hline \hline CIFAR-10-C (Hendrycks and Dietterich, 2018) & 19 & 950,000 & 10 & Yes & \(32\times 32\) \\ CIFAR-100-C (Hendrycks and Dietterich, 2018) & 19 & 950,000 & 100 & Yes & \(32\times 32\) \\ ImageNet-C (Hendrycks and Dietterich, 2018) & 75 & 3,750,000 & 1000 & Yes & \(224\times 224\) \\ \hline CIFAR-10.1 (Recht et al., 2018) & 1 & 2,000 & 10 & No & \(32\times 32\) \\ CIFAR-10-Warehouse (Sun et al., 2023) & 180 & 608,691 & 10 & No & \(224\times 224\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Datasets used in this survey**. We list their key statistics. Note that only a subset of each dataset below is used.
(Wang et al., 2022a)) does not allow any reset mechanism based on prior domain information.
**Gradual TTA** tackles real-world scenarios where domain shifts are gradually introduced through incoming test samples (Marsden et al., 2022). An example is the gradual and continuous change in weather conditions. For corruption datasets, existing gradual TTA approaches assume that test data transition from severity level 1 to level 2, and then progress slowly towards the highest level. Note that both continual and gradual TTA methods could also support online episodic learning.
**Test-time Training (TTT)** introduces an auxiliary task for both training and adaptation (Sun et al. 2020; Gandelsman et al. 2022). In the training phase, the original architecture, such as ResNet101, is modified into a "Y"-shaped structure, where one task is image classification, and the other could be rotation prediction. During adaptation, the auxiliary task continues to be trained in a supervised manner so that model parameters are updated. The classification head output serves as the final prediction for each test sample.
**Test-time augmentation (TTAug)** applies data augmentations to input data during inference, resulting in multiple variations of the same test sample, from which predictions are obtained (Shanmugam et al. 2021; Kimura 2021). The final prediction typically aggregates predictions of these augmented samples through averaging or majority voting. TTAug enhances model robustness and generalization by providing a range of data views. This technique can be applied to various tasks, including domain adaptation, offline TTA, and even OTTA. TTAug and TTA are fundamentally incomparable as they pertain to distinct methodologies.
**Domain generalization** aims to train models that can perform effectively across multiple distinct domains without specific adaptation to any domain (Zhou et al. 2023). It assumes the model learns domain-invariant features that are applicable across diverse datasets. While OTTA emphasizes dynamic adaptation to specific domains over time, domain generalization seeks to establish domain-agnostic representations. The choice between these strategies depends on the specific problem considered.
## 3 Online Test-time Adaptation
Given the divergence of online data from the distribution of source training data, OTTA techniques are broadly classified into three categories. These categorizations hinge on their responses to two primary concerns: managing online data and mitigating performance drops due to distribution shifts. **Optimization-centric** methods, anchored in designing unsupervised objectives, typically lean towards adjusting or enhancing pre-trained models. **Model-centric** approaches look to modify particular layers or even overhaul the architecture. On the other hand, **data-centric** methods aim to expand data diversity, either to amplify model generalization or to harmonize consistency across varying data views. According to this taxonomy, we sort out existing approaches in Fig. 2 and review them in detail as follows.
### Optimization-based OTTA
Optimization-based OTTA methods consist of three subcategories: (1) recalibrating statistics in normalization layers, (2) enhancing optimization stability with the mean-teacher model, and (3) designing unsupervised loss functions. A timeline is illustrated in Fig. 3.
#### 3.1.1 Normalization Calibration
In deep learning, a normalization layer refers to a specialized architectural component integrated within neural network architectures. Its primary function is to improve the training process and enhance the generalization capacity of deep neural networks by regulating the statistical properties of activations within a given layer. These layers, which include Batch Normalization (BatchNorm) (Ioffe and Szegedy, 2015), Layer Normalization (LayerNorm) (Ba et al., 2016), and Instance Normalization (Ulyanov et al., 2016), among others, operate by standardizing the mean and variance of activations, thereby diminishing the likelihood of vanishing or exploding gradients during the training process. A related idea is feature whitening, which also adjusts features right after the activation layer. Both are commonly used in domain adaptation tasks to alleviate domain shift (Roy et al., 2019; Carlucci et al., 2017).
**Example.** Take the most commonly used BatchNorm as an example. Let \(\mathbf{x}_{i}\) represent the activation for feature channel \(i\) in a mini-batch. The BatchNorm layer will first calculate the batch-level mean \(\mathbf{\mu}\) and variance \(\mathbf{\sigma}^{2}\) by:
\[\mathbf{\mu}=\frac{1}{m}\sum_{i=1}^{m}\mathbf{x}_{i},\quad\mathbf{\sigma}^{2}=\frac{1}{m} \sum_{i=1}^{m}(\mathbf{x}_{i}-\mathbf{\mu})^{2}, \tag{2}\]
where \(m\) is the mini-batch size. Then, the calculated statistics will be applied to standardize the inputs:
\[\hat{\mathbf{x}}_{i}=\frac{\mathbf{x}_{i}-\mathbf{\mu}}{\sqrt{\mathbf{\sigma}^{2}+\epsilon}}, \quad\mathbf{y}_{i}=\gamma\hat{\mathbf{x}}_{i}+\beta, \tag{3}\]
where \(\mathbf{y}_{i}\) is the final output of the \(i\)-th channel from this batch normalization layer, with the adjustment of two learnable affine parameters, \(\gamma\) and \(\beta\). For the update, the running mean \(\mathbf{\mu}^{\mathrm{run}}\) and variance \(\mathbf{\sigma}^{\mathrm{run}}\) are computed as an exponential moving average (EMA) of the mean and variance over all batches seen during training, with a momentum factor \(\alpha\):
\[\mathbf{\mu}^{\mathrm{run}}=\alpha\mathbf{\mu}+(1-\alpha)\mathbf{\mu}^{\mathrm{run}},\quad \mathbf{\sigma}^{\mathrm{run}}=\alpha\mathbf{\sigma}+(1-\alpha)\mathbf{\sigma}^{\mathrm{ run}} \tag{4}\]
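As an illustration of Eqs. (2)-(4) at test time, the following is a minimal sketch that recalibrates BatchNorm running statistics with a single test batch; it assumes a convolutional backbone with `BatchNorm2d` layers and is not tied to any particular published method.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def recalibrate_bn_statistics(model, test_batch, momentum=0.1):
    """Refresh BatchNorm running statistics with the current test batch.

    Switching BN layers to training mode makes them recompute the batch
    statistics of Eq. (2)-(3) and blend them into the running estimates of
    Eq. (4) with the chosen momentum (alpha); no gradient step is taken.
    """
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                 # use and update batch-level statistics
            m.momentum = momentum     # alpha in Eq. (4)
    model(test_batch)                 # one forward pass refreshes mu_run / sigma_run
    return model
```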
**Motivation.** In the case of domain adaptation, aligning batch normalization statistics has been proven to mitigate the performance degradation brought by covariate shifts. The hypothesis suggests that information pertaining to labels is encoded within the weight matrices of each layer, while knowledge related to specific domains is conveyed through the statistics of the BatchNorm layers. Consequently, we can seamlessly adapt the pre-trained model to a different domain by adjusting the statistics within the BN layers (Li et al., 2017). Similar ideas can be borrowed in online test-time adaptation.
**Assumption 1** (Normalization Calibration OTTA): _Given a neural network \(f\) trained on a source dataset \(\mathcal{D}_{S}\) with **normalization parameters \(\beta\)** and \(\gamma\), updating \(\{\gamma,\beta\}\) based on the test data \(\mathbf{x}_{i}^{T}\) at each time step \(t\) will bolster \(f\)'s robustness on the corresponding domain._
Building upon Assumption 1, initial investigations in OTTA predominantly revolved around fine-tuning a designated loss function, focusing on updating the normalization layers exclusively. This deliberate approach was employed to mitigate the performance degradation resulting from shifts in data distributions, thereby ensuring the preservation of the fundamental learned representations of the pre-trained model. Within this over-arching strategy, several discernible variations can be observed. A common practice entails the adjustment of statistics and affine parameters within the BatchNorm layer while leaving other model parameters unchanged. Additionally, the choice of normalization techniques, such as LayerNorm or GroupNorm, may vary based on architectural intricacies and specific optimization objectives. Furthermore, selecting appropriate loss functions may differ, reflecting distinct optimization goals.
Tent (Wang et al., 2021) and its subsequent endeavors, such as (Niu et al., 2022; Jang et al., 2023), are representative approaches within this paradigm. They expedite model adaptation by concentrating on the statistics of the batch at hand and exclusively updating the affine parameters of Batch Normalization. However, the effectiveness of batch-level updates, as seen in Tent, is contingent upon data quality within each batch, introducing potential performance fluctuations.
Figure 4: Comparisons among different normalization layers (Wu and He, 2020).
Figure 3: Timeline of optimization-based OTTA methods.
For example, noisy or poisoned data with extremely biased statistics will significantly influence the BatchNorm statistics. To mitigate potential performance instability arising from batch-level statistics, a line of advanced techniques has been proposed:
**Stabilization via Dataset-level Estimates.** Gradient-preserving Batch Normalization (GpreBN) (Yang et al., 2022) is proposed to allow cross-instance gradient back-propagation by modifying the BatchNorm standardization:
\[\hat{\mathbf{y}}_{i}=\frac{\frac{\mathbf{x}_{i}-\mathbf{\mu}_{c}}{\mathbf{\sigma}_{c}}\,\bar{\mathbf{\sigma}}_{c}+\hat{\mathbf{\mu}}_{c}-\mathbf{\mu}}{\mathbf{\sigma}}\gamma+\beta \tag{5}\]
where \(\frac{\mathbf{x}_{i}-\mathbf{\mu}_{c}}{\mathbf{\sigma}_{c}}\) is the standardized input feature \(\hat{\mathbf{x}}_{i}\) as in Eq. (3). GpreBN stops gradient backpropagation through \(\hat{\mathbf{\mu}}_{c}\) and \(\bar{\mathbf{\sigma}}_{c}\) and renormalizes with arbitrary non-learnable parameters \(\mathbf{\mu}\) and \(\mathbf{\sigma}\). MixNorm (Hu et al., 2021) mixes the statistics of the current batch (produced by augmented sample inputs) with global-level statistics via EMA. The combined information from historical and augmented batch-level statistics can effectively bridge the gap between historical context and real-time fluctuations, enhancing model performance. Exploring this line of research has led to alternative proposals, such as RBN (Yuan et al., 2023), which draws robust global statistics from a well-maintained memory bank with a fixed EMA momentum when updating statistics, ensuring good statistic quality. Core (You et al., 2021) incorporates a mixup factor and introduces strategies to fuse source and test-set statistics, reinforcing model discriminability.
**Stabilization via Dynamic Momentum.** The following works center on the ratio at which historical and current statistics are combined. Instead of adhering to a fixed momentum factor for EMA, DUA (Mirza et al., 2022) adopts a dynamic approach that determines the momentum based on a decay factor. Operating under the assumption that the model's performance deteriorates over time, the decay factor progressively incorporates more information from the current batch as time advances, avoiding severely biased learning caused by low-quality BatchNorm updates. Consequently, this mitigates noise accumulation and its adverse impact on adaptation.
**Stabilization via Renormalization.** Nonetheless, an exclusive emphasis on moving averages might undermine the inherent characteristics of gradient optimization and normalization when it comes to updating BatchNorm layers. As previously noted by (Huang et al., 2018), BatchNorm primarily standardizes activations, centering and scaling them without addressing their correlations. In contrast, the Test-time Batch Renormalization (TBR) in DELTA (Zhao et al., 2023) addresses this limitation through a renormalization process involving the adjustment of standardized outputs using two newly introduced parameters, denoted as \(r\) and \(d\): \(r=\frac{sg\left(\hat{\mathbf{\sigma}}^{\text{batch}}\right)}{\hat{\mathbf{\sigma}}^{\text{ema}}}\) and \(d=\frac{sg\left(\hat{\mathbf{\mu}}^{\text{batch}}\right)-\hat{\mathbf{\mu}}^{\text{ema}}}{\hat{\mathbf{\sigma}}^{\text{ema}}}\), where \(sg(\cdot)\) is the stop-gradient operation. Then \(\hat{\mathbf{x}}_{i}\) is further normalized by \(\hat{\mathbf{x}}_{i}=\hat{\mathbf{x}}_{i}\cdot r+d\). These parameters are computed using batch and global moving statistics, ushering in a novel approach to maintaining stable batch statistics updates (inspired by (Ioffe, 2017)). While the above methods assume OTTA is allowed to reset the model for each domain, this limits their applicability to real-world scenarios that commonly provide no domain information. NOTE (Gong et al., 2022), focusing on continual OTTA under temporal correlation (i.e., the distribution changes over time \(t\): \((\mathbf{x}_{t},\mathbf{y}_{t})\sim P_{\mathcal{T}}(\mathbf{x},\mathbf{y}\mid t)\)), proposes instance-level BatchNorm to avoid potential instance-wise variations in a domain non-identifiable paradigm.
**Stabilization via Enlarged Batches.** To ensure the stability of adaptation, an essential factor is the use of large batch sizes: a larger batch contributes more statistical information and thus more robust overall estimates. Consequently, nearly all BatchNorm-based methods employ substantial batch sizes, for instance, setting the batch size to 200 when dealing with the CIFAR-10 corrupted dataset. However, this practice can impose limitations on OTTA, particularly when data arrive in smaller quantities due to hardware constraints, such as GPU memory limitations, especially on edge devices.
**Alternatives to BatchNorm.** To avoid relying on large batch sizes, updating GroupNorm (Mummadi et al., 2021) can benefit adaptation. LayerNorm is another option, especially in transformer-based tasks (Kojima et al., 2022). Moreover, the batch-agnostic LayerNorm (see Fig. 4) has been shown to stabilize adaptation when combined with sharpness-aware entropy minimization (Niu et al., 2023). For scenarios where computational resources are limited, MECTA (Hong et al., 2023) introduced an innovative approach that replaces the conventional BatchNorm layer with a customized MECTA norm. This strategic change effectively mitigates memory usage during adaptation, reducing the overhead typically associated with large batch sizes, extensive channel dimensions, and numerous layers requiring updates. Taking a different tack, EcoTTA (Song et al., 2023) incorporated and exclusively updated meta networks, including BatchNorm layers.
This approach also effectively curtailed computational expenses while upholding robust test-time performance. Furthermore, to address the performance challenges associated with smaller batch sizes, TIPI (Nguyen et al., 2023) introduced additional BatchNorm layers in conjunction with the existing ones. This configuration inherently maintains two distinct sets of data statistics and leverages shared affine parameters to enhance consistency across different views of test data.
#### 3.1.2 Mean Teacher Optimization
As mentioned in the previous subsection, stability in optimization is a core question to study. In the realm of OTTA, the Mean Teacher Model (Tarvainen and Valpola, 2017) stands out as a formidable strategy. Central to the Mean Teacher framework is the incorporation of a consistency loss. This loss narrows the gap between the predictions of two intertwined models: the student and the teacher. Notably, the parameters of the teacher model \(\theta_{t}\) are derived as an exponential moving average (EMA) of those of the student model \(\theta_{s}\), where the student model is initialized by source pretrained weights \(\theta^{S}\). A foundational assumption underpinning this approach is the principle of **consistency regularization**. It posits that the resultant outputs should manifest congruence when two perturbed versions of the same input -- whether modified through data augmentation or noise introduction -- are processed through the model.
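For concreteness, the following is a minimal sketch of a mean-teacher adaptation step; the EMA decay, the KL-based consistency loss, and the function names are illustrative choices rather than the exact formulation of any specific OTTA method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.999):
    """theta_teacher <- decay * theta_teacher + (1 - decay) * theta_student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def mean_teacher_step(student, teacher, weak_view, strong_view, optimizer):
    """One consistency step: the teacher's prediction on the weak view supervises
    the student's prediction on the strong view of the same test batch."""
    with torch.no_grad():
        target = teacher(weak_view).softmax(dim=1)            # soft pseudo-label
    log_pred = student(strong_view).log_softmax(dim=1)
    loss = F.kl_div(log_pred, target, reduction="batchmean")  # consistency loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```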
In light of the foregoing discussion, integrating the mean-teacher model within the context of OTTA, coupled with incorporating data-driven (Section 3.2) or model-driven (Section 3.3) methodologies, holds the promise of enhancing the stability of predictions. This heightened stability arises from the model's reduced sensitivity to diverse augmented perspectives of unlabeled test data, potentially engendering favorable characteristics derived from other constituent components within the comprehensive framework.
**Divergence in Updating Strategies.**CoTTA (Wang et al., 2022) is a representative method under the mean-teacher framework, which employs a conventional updating strategy that hinges on consistency loss between strongly and weakly augmented data samples drawn from the teacher and student networks. To determine which layers to update and avoid catastrophic forgetting, CoTTA incorporates a **stochastic parameter reset** strategy. This strategy involves utilizing a predetermined reset factor, whereby the model selectively and randomly resets a fixed number of parameters to their source pre-trained states in each iteration. This precautionary measure aids in preserving the model's learned knowledge while guarding against the potentially detrimental consequences of misinformed updates. Furthermore, in terms of generating different views of test data, CoTTA employs commonly utilized data augmentation techniques, including adjustments in contrast, brightness, and other relevant transformations, to create strongly augmented views. In contrast to CoTTA's updating rules, RoTTA (Yuan et al., 2023) adopts a distinct updating strategy. Specifically, RoTTA focuses its model updates solely on the customized Batch Normalization layer (RBN) within the student model rather than modifying all parameters of the student model indiscriminately. This strategy serves a dual purpose: firstly, it allows RoTTA to harness the advantages of consistency regularization, and secondly, it also bolsters the model's capacity to effectively manage out-of-distribution data by integrating distribution statistics into its learning process.
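The stochastic parameter reset can be sketched as follows; the reset probability and helper name are illustrative, and the snapshot `source_state` is assumed to be taken from the source model before adaptation.

```python
import torch

@torch.no_grad()
def stochastic_restore(model, source_state, reset_prob=0.01):
    """Randomly reset a small fraction of weights to their source values,
    limiting error accumulation and catastrophic forgetting.

    `source_state` is a snapshot of the source model taken before adaptation,
    e.g. copy.deepcopy(model.state_dict()).
    """
    for name, param in model.named_parameters():
        mask = (torch.rand_like(param) < reset_prob).to(param.dtype)
        param.copy_(mask * source_state[name] + (1.0 - mask) * param)
```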
**Divergence in Augmentations.** Drawing inspiration from the teacher-student knowledge distillation, TeSLA (Tomar et al., 2023) aims to improve the student model's performance on challenging (high entropy) test images. To achieve this, they advocate for the use of augmentations adversarial to the current teacher model, which act as a means to emulate images in the feature space's uncertain areas. The model is subsequently refined by the student's update to bridge the consistency gap between predictions on these high entropy augmented images and their corresponding soft-pseudo labels from non-augmented counterparts. As a result, the model undergoes self-distillation on "Easy" test images possessing confident soft-pseudo labels. Conversely, updates based on "Hard" test images are omitted, which enhances the separation of features on a class-by-class basis.
#### 3.1.3 Optimization Objective
Designing a proper optimization objective is half the success of learning a good machine learning model. However, this becomes difficult in test-time adaptation, where information availability is limited. Commonly used objectives for OTTA are summarized in Fig. 5. Previous papers tend to solve the OTTA problem by considering three main challenges.
**Uncertainty Reduction**: A covariate shift can significantly impair the performance of source models on test samples. When a model yields highly uncertain predictions, it signals a lack of confidence, reflecting a potential misalignment with the novel data distribution. Continuous high uncertainty across successive test data indicates that the model cannot capture the nuances of the new data representation. In light of these considerations,
the fundamental aim of Online Test-time Adaptation (OTTA) becomes evident: to instill confidence in the model's predictions for test data in an online and unsupervised manner.
Entropy-based uncertainty: Shannon entropy (Shannon, 1948), in the form of
\[H(\hat{y})=-\sum_{c}p\left(\hat{y}_{c}\right)\log p\left(\hat{y}_{c}\right), \tag{6}\]
is originally designed for information theory. Its basic idea is that the "informational value" of a communicated message depends on the degree to which the content of the message is surprising. If a highly likely event occurs, the message carries very little information. On the other hand, if a highly unlikely event occurs, the message is much more informative. With this intuition, entropy minimization is borrowed in OTTA: while no labeled data exist during adaptation, reducing the prediction entropy makes the model more certain when predicting the current data sample.
Tent (Wang et al., 2021), as the representative of this stream, uses the soft entropy in Eq. (6) to update the BatchNorm layers (Sec. 3.1.1) in an online manner. Its overall objective fits the model to the test data by (1) reducing uncertainty (i.e., encouraging high prediction confidence) and (2) updating the BatchNorm layers to suit the test data distribution without label information. Follow-up works adopt a similar strategy, including EATA (Niu et al., 2022), which uses a sample selection score to weigh whether a sample is reliable and not redundant; TTPR (Sivaprasad et al., 2021), which combines entropy minimization with a consistency loss across different views of the test image; and SAR (Niu et al., 2023), which explicitly seeks a flat minimum, as in Eq. (8), when updating the model with the soft entropy.
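A minimal sketch of this Tent-style recipe, assuming a convolutional backbone with affine `BatchNorm2d` layers, is given below; the helper names are ours and details differ across the methods cited above.

```python
import torch
import torch.nn as nn

def collect_bn_affine_params(model):
    """Freeze all weights except the affine parameters (gamma, beta) of BatchNorm
    layers, which are the only parameters this style of method updates."""
    model.train()                  # BN layers then use current-batch statistics
    for p in model.parameters():
        p.requires_grad = False
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            m.weight.requires_grad = True
            m.bias.requires_grad = True
            params += [m.weight, m.bias]
    return params

def entropy_minimization_step(model, x, optimizer):
    """Minimize the Shannon entropy of Eq. (6) on the current test batch."""
    log_p = model(x).log_softmax(dim=1)
    loss = -(log_p.exp() * log_p).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```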
While entropy minimization seems a satisfactory solution for OTTA, it raises some issues, namely instability and early convergence, which may bias predictions. SLR (Mummadi et al., 2021) proposes to tackle these issues by replacing the entropy with a non-saturating surrogate and adding a diversity regularizer based on batch-wise entropy maximization that prevents convergence to trivial collapsed solutions. Specifically, SLR minimizes the negative log-likelihood ratio between classes rather than the entropy loss. To encourage diverse predictions, the diversity regularizer (i.e., the KL divergence between the model's prediction distribution and a uniform distribution) is then combined with the non-saturating surrogate.
The cooperation between a teacher and a student offers yet another realm for exploration. PETAL, CoTTA, and RoTTA (Yuan et al., 2023) employ the cross-entropy loss, where the teacher's outputs supervise the student's predictions. Meanwhile, RoTTA further considers the age of the samples in the memory bank as a weight of the loss, making the model prefer to learn more from newly added samples in the memory bank. This further prevents the model from (biasedly) overfitting to old samples in the memory bank. Similarly, PETAL uses cross-entropy in its mean-teacher framework.
Bayesian Active Learning by Disagreements (BALD):
Bayesian Active Learning (Bayesian AL) offers a comprehensive probabilistic perspective on estimating model uncertainty (Cao and Tsang, 2021). In essence, Bayesian AL aims to maximize information gain by assessing predictive entropy, leading to the concept of Bayesian active learning by disagreements (BALD). BALD focuses on two primary aspects: estimating model uncertainty and constructing coresets (Campbell and Broderick, 2018). A greedy approach is often employed to evaluate a model's uncertainty, targeting data points that highlight discrepancies between a model's current parameters and subsequent updates. BALD can identify high-uncertainty (or high-disagreement) samples in online test-time adaptation. By focusing on these samples, the model is likely to make more meaningful updates that address the underlying shift in the test data distribution. One classical method that falls into this category is MEMO (Zhang et al., 2022). It proposes to adapt the model using the entropy of its marginal output distribution over augmentations, matching the individual sample distribution with the expected distribution estimated by Monte Carlo dropout. By integrating the concept of BALD with multiple data augmentations, MEMO sets out to accomplish two primary objectives: first, to make the model focus on uncertain yet informative samples by aligning predictions with the expected marginal distribution, and second, to bolster the
Figure 5: Common optimization objectives in OTTA.
model's confidence in its predictions by reducing the uncertainty.
Similarly, TeSLA borrows the concept of BALD to design its objective: the soft pseudo-label loss, wherein the student's output provides the supervision feedback. Minimizing this loss between the student model's predictions and the soft pseudo-labels -- combined with the negative entropy of its marginalized predictions over test images -- can be interpreted as mutual information maximization. The overall objective could be seen as follows:
\[\begin{split}&\mathcal{L}_{\text{pl}}(\mathbf{X},\hat{\mathbf{Y}})=- \frac{1}{B}\sum_{i=1}^{B}\sum_{k=1}^{K}f_{s}\left(\mathbf{x}_{i}\right)_{k}\log \left(\left(\hat{\mathbf{y}}_{i}\right)_{k}\right)\\ &+\sum_{k=1}^{K}\hat{f}_{s}(\mathbf{X})_{k}\log\left(\hat{f}_{s}(\mathbf{X })\right)_{k},\end{split} \tag{7}\]
where \(\hat{f}_{s}(X)=\frac{1}{B}\sum_{i=1}^{B}f_{s}\left(\mathbf{x}_{i}\right)\) is the marginal class distribution over the batch. This also enables the model to focus more on the "useful" data while enhancing prediction confidence at the same time.
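A direct transcription of Eq. (7) might look as follows, assuming the student probabilities and soft pseudo-labels are given as (B, K) tensors; the function name is ours.

```python
import torch

def pl_with_marginal_entropy(student_probs, soft_pseudo_labels, eps=1e-8):
    """Eq. (7): soft pseudo-label cross-entropy plus the negative entropy of the
    batch-level marginal class distribution; `student_probs` and
    `soft_pseudo_labels` are (B, K) probability tensors."""
    ce = -(student_probs * soft_pseudo_labels.clamp_min(eps).log()).sum(dim=1).mean()
    marginal = student_probs.mean(dim=0)                        # \bar{f}_s(X)
    neg_marginal_entropy = (marginal * marginal.clamp_min(eps).log()).sum()
    return ce + neg_marginal_entropy
```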
Prototype-based Optimization: In OTTA, where the absence of test labels poses a challenge, prototype-based learning emerges as a commonly employed strategy to mitigate the impact of potentially noisy predictions. While common methods rely solely on the source-trained linear classifier, this strategy instead shifts the focus to predicting labels in accordance with class prototypes. In recent developments, exemplified by TSD (Wang et al., 2023), the emphasis lies in guiding test data features towards their closest prototypes while preserving the uniformity of predictions. To achieve this, the prediction for test data is determined by measuring the cosine similarity between the feature representation of the test data and the corresponding prototype. A Shannon entropy-based filter is introduced to counteract noisy predictions during the initial model adaptation phase. In addition, since the predictions from different classifiers should be consistent, a new filter is designed. This consistency filter allows model optimization only when predictions exhibit consistency, thereby reducing the influence of noisy labels.
**Maximize Generalizability**: Generalizability, in the context of machine learning, refers to a model's capability to perform effectively on new, previously unobserved data, regardless of its distribution. In the context of OTTA, this concept can be extended to entail the adaptation of the model, not only to perform well on the current data samples but also to ensure its readiness for forthcoming test samples.
To enhance the generalization capacity, inspired by Sharpness-aware Minimization (SAM) (Foret et al., 2021), SAR (Niu et al., 2023) formulates its objective to find flat minima and avoid violent fluctuations during optimization:
\[\min_{\Theta}S(\mathbf{x})E^{SA}(\mathbf{x};\Theta) \tag{8}\]
Here, \(S(\mathbf{x})\) represents an entropy-based indicator function that filters out unreliable predictions based on a predefined threshold. Additionally, \(E^{SA}(\mathbf{x};\Theta)\) is defined as:
\[E^{SA}(\mathbf{x};\Theta)\triangleq\max_{\|\epsilon\|_{2}\leq\rho}E(\mathbf{x };\Theta+\mathbf{\epsilon}) \tag{9}\]
This term aims to identify a weight perturbation \(\mathbf{\epsilon}\) within a Euclidean ball of radius \(\rho\) that maximizes entropy. It quantifies sharpness by measuring the maximal change in entropy between \(\Theta\) and \(\Theta+\mathbf{\epsilon}\). The ultimate objective involves the joint minimization of entropy and the sharpness of the entropy loss, creating a bi-level optimization problem that encourages the discovery of flat minima and bolsters the model's generalization capabilities.
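A sharpness-aware entropy minimization step in the spirit of Eqs. (8)-(9) can be sketched as follows; the two-pass procedure follows the standard SAM recipe, and the radius `rho` and helper names are illustrative.

```python
import torch

def batch_entropy(logits):
    """Average Shannon entropy of the softmax predictions (Eq. 6)."""
    log_p = logits.log_softmax(dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def sharpness_aware_entropy_step(model, x, params, optimizer, rho=0.05):
    """Perturb the adapted weights towards the entropy-ascent direction inside a
    ball of radius rho, then descend using the gradient at the perturbed point."""
    optimizer.zero_grad()
    batch_entropy(model(x)).backward()                    # gradient at theta
    with torch.no_grad():
        grads = [p.grad for p in params if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        eps = []
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12) if p.grad is not None else None
            eps.append(e)
            if e is not None:
                p.add_(e)                                 # theta + epsilon
    optimizer.zero_grad()
    batch_entropy(model(x)).backward()                    # gradient at theta + epsilon
    with torch.no_grad():
        for p, e in zip(params, eps):
            if e is not None:
                p.sub_(e)                                 # restore theta
    optimizer.step()                                      # descend with the SAM gradient
```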
From the perspective of the data, a biased test set could significantly degrade the generalizability of the adaptation model (Liang et al., 2020). Derived from class-wise re-weighting (Cui et al., 2019), Dynamic online re-weighting (DOT) in DELTA(Zhao et al., 2023) offers a dynamic approach to handle fluctuating and previously unknown class frequencies. Instead of directly determining class frequencies, DOT uses a momentum-updated class-frequency vector. Initialized with equal weights, the vector is updated at every inference step based on the current sample's pseudo-label and existing weight. A significant weight (or frequency) for a particular class prompts DOT to diminish its contribution during subsequent adaptations. This methodology is pivotal in countering biased optimizations that might arise from inherent model training processes or even from situations like high inter-class similarities, thus maximizing the model's generalizability.
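A batch-level sketch of this re-weighting idea is shown below; it is a simplified variant of DOT (the original updates the frequency vector per sample), with illustrative parameter names.

```python
import torch

def dot_reweight(class_freq, probs, momentum=0.95):
    """Maintain a momentum-updated class-frequency vector from pseudo-labels and
    down-weight samples whose predicted class is already frequent."""
    pseudo = probs.argmax(dim=1)
    batch_freq = torch.bincount(pseudo, minlength=class_freq.numel()).float()
    batch_freq = batch_freq / batch_freq.sum().clamp_min(1.0)
    class_freq = momentum * class_freq + (1.0 - momentum) * batch_freq
    weights = 1.0 / class_freq[pseudo].clamp_min(1e-8)
    weights = weights / weights.mean()           # per-sample loss weights
    return class_freq, weights
```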
Conjugate to Source: Previous OTTA methods, e.g., Tent, focus on entropy minimization to adapt the model in the absence of test labels. This naturally raises the question: what makes soft entropy a preferred choice? Conjugate PL (Goyal et al., 2022) has been proposed to shed light on this subject. To unearth the underlying mechanisms of the loss function, it begins by designing a meta-network to parameterize the loss. Observations indicate that the meta-output may echo the temperature-scaled softmax output of a given model. The authors further prove, via a conjugate adaptation loss, that if the cross-entropy loss is applied in the source model, the soft entropy loss is the most appropriate loss during adaptation. This finding resonates with the decisions made by Tent.
**Feature representation learning**: During adaptation in OTTA, no data labels are available. Therefore, the objective can be treated as a self-supervised learning task. AdaContrast is based on the concept of contrastive learning, which leverages positive pairs (different views of the same image) and negative pairs (features from different images). The idea is to bring positive pairs closer while pushing negative pairs away. Furthermore, AdaContrast modifies the InfoNCE loss (He et al., 2020) with the help of a memory queue to prevent samples from the same class from being pushed away.
In OTTA models, the choice of an optimization objective holds significant importance. There is a prevailing trend toward increasingly complex loss functions, as indicated by the growing number of baseline works. However, the set of loss functions with superior adaptability and insensitivity to architectural variations remains somewhat limited, and such losses are often characterized by their simplicity in logical structure. These select loss functions, such as soft entropy, play a valuable role when constructing more intricate loss functions tailored to complex application scenarios.
#### 3.1.4 Pseudo-labeling
As the mainstream form of self-training, pseudo-labeling has shown its significance in domain adaptation and semi-supervised learning tasks. However, it becomes harder in OTTA due to the absence of the whole source set.
While only the current batch of test data is available, **batch-level** PL emerges as a pragmatic approach. In alignment with standard BatchNorm optimization frameworks, MuSLA (Kingetsu et al., 2022) employs pseudo-labeling as a post-optimization step on the current batch to refine the decision boundary. While promising, this strategy introduces computational overhead, necessitating the processing of data batches multiple times, thereby prolonging adaptation and increasing memory requirements. Besides, the commonly used pseudo-labeling strategy in the teacher-student framework (e.g., CoTTA (Wang et al., 2022), RoTTA (Yuan et al., 2023)) identifies the student outputs of weakly augmented views as soft pseudo-labels, which brings a stable optimization process during adaptation. Another category assumes the source pre-trained model is discriminative for confident test samples; therefore, a confidence threshold can be applied to pseudo-labeling for every batch of data (Kingetsu et al., 2022).
**Reliable PL.** A noteworthy challenge in obtaining reliable pseudo-labels is the continuous data stream, which offers no opportunity for review. Furthermore, the covariate shift between the source and test sets can significantly degrade pseudo-label reliability. Under these circumstances, establishing dependable pseudo-labels becomes paramount. An avenue to address this challenge is presented by TAST, as introduced in (Jang et al., 2023). Unlike conventional thresholding methods that rely on the output of the currently updated model, TAST adopts a prototype-based pseudo-labeling strategy. This approach derives pseudo-labels through the cosine distance between the model's predictions and prototype representations. Importantly, these prototypes are constructed based on confident support samples in proximity, rendering them trustworthy compared to predefined thresholding approaches based solely on the linear layer output. This enhances the reliability of pseudo-labels in the face of dynamic model updates during online test-time adaptation. A similar idea is AdaContrast (Chen et al., 2022), which borrows the idea from (Mitchell and Schaefer, 2001): it proposes soft K-nearest-neighbor voting in the feature space to generate significantly more correct pseudo-labels for each target sample.
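The soft nearest-neighbor voting idea can be sketched as follows, assuming a feature bank and the corresponding stored probability vectors; names and the choice of cosine similarity with k neighbors are illustrative.

```python
import torch
import torch.nn.functional as F

def soft_knn_pseudo_labels(query_feats, bank_feats, bank_probs, k=10):
    """Soft k-nearest-neighbour voting: each test feature inherits the average
    predicted distribution of its k closest features in the memory bank, and
    the argmax of that average serves as the pseudo-label."""
    q = F.normalize(query_feats, dim=1)
    b = F.normalize(bank_feats, dim=1)
    sims = q @ b.t()                              # cosine similarities (B, M)
    _, idx = sims.topk(k, dim=1)                  # indices of k nearest neighbours
    soft_labels = bank_probs[idx].mean(dim=1)     # (B, K) averaged distributions
    return soft_labels, soft_labels.argmax(dim=1)
```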
To obtain reliable pseudo-labels, another idea is to utilize multiple augmentations. For example, majority voting (Wu et al., 2021) can directly produce reliable pseudo-labels with multi-augmentation assistance.
**Complementary PL.** Yet, as the discourse extends, an underlying challenge surfaces: one-hot pseudo-labels that focus singularly on the class with maximal confidence often incur significant information loss, particularly under domain shifts. An alternative, ECL (Han et al., 2023), promotes a more nuanced stance. This method does not merely rely on maximum predictions for pseudo-labeling but also factors in predictions beneath a set confidence threshold, thus introducing complementary labels, i.e., classes to which the prediction does not belong. This intricate approach aspires to diminish the pitfalls linked with negative learning (Kim et al., 2019).
**Discussion.** In summary, the challenges associated with pseudo-labeling extend beyond label selection criteria and encompass broader considerations, including environmental constraints and operational efficiency. While pseudo-labeling shows promise in specific contexts like domain adaptation and offline test-time adaptation, its suitability diminishes in real-time prediction tasks. This highlights the need for further research and innovative solutions to address these operational challenges effectively.
#### 3.1.5 Other Approaches
OTTA methods are developed for adapting models to new scenarios, but applying them in uncertain real-world situations requires careful consideration.
Venturing from the conventional path of adapting pre-trained model parameters, LAME(Boudiaf et al., 2022) presents an innovative approach. Instead of updating the model, LAME focuses on refining the model's output. This is achieved by identifying latent assignments that maximize the manifold-regularized likelihood of the dataset. In a bid to further refine the output, Laplacian regularization is incorporated into the framework. Subsequently, a concave-convex procedure (CCCP) is formulated, augmenting the overall optimization process.
#### 3.1.6 Summary
The optimization-based approach represents the most comprehensive category within online test-time adaptation. This category is underpinned by the fundamental notion that every machine learning model necessitates optimization, making it a universally applicable concept transcending architectural specifics. In the context of OTTA, the adaptation model's optimization primarily revolves around attaining optimization consistency, stability, and robustness. However, it is crucial to highlight that the effectiveness of optimization hinges on data availability. Given that test data can only be adapted at the batch level, whether the model can be optimized to encapsulate the characteristics of the global test set remains an unaddressed aspect. To this end, the subsequent section will delve into data-driven methodologies, elucidating how data can play a pivotal role in OTTA.
### Data-based OTTA
Data-based OTTA is motivated by the intricacies introduced by streaming data. While there's a prevailing notion of fine-tuning models, it has become evident that mere optimization may not sufficiently address the complexities of diverse test datasets due to their batch-level updates. With the limited number of samples in each batch, encountering test samples with unexpected changes is inevitable. Recognizing the **pivotal role of data** for a reliable test data prediction could potentially fill this gap:
* Enhancing the model's generalizability.
* Tailoring the model's discriminative capacity to the current data batch.
While the broader landscape of OTTA methodologies encompasses a multitude of solutions, this section focuses specifically on data-centric strategies within OTTA approaches, highlighting the indispensable role of data by either diversifying the batch data (Sec 3.2.1) or preserving high-quality global-level data information (Sec 3.2.2) during test-time adaptation.
#### 3.2.1 Data Augmentation
Data augmentation is crucial in domain adaptation (Wang and Deng, 2018) and generalization (Zhou et al., 2023), primarily amplifying model transferability and generalizability. These traits are indispensable for test-time adaptation, especially when dealing with online test data.
Random Augmentation Strategies
**Predefined augmentations.** Prevailing data augmentation techniques such as cropping, blurring, and flipping are seamlessly integrated into certain OTTA methodologies, demonstrating their capacity to emulate specific data shifts. Aiming to obtain reliable predictions, a paradigmatic instance of this integration can be observed in OTTA techniques rooted in the mean-teacher framework. These methods employ a weak-strong augmentation consistency approach. By ensuring prediction consistency across augmented views, these models facilitate extracting a generalized sample representation within the current batch. Notably, while PAD and TTPR (Sivaprasad et al., 2021) do not strictly adhere to the mean-teacher framework, both use common augmentation strategies (32 and 2 augmentations, respectively) to support consistent predictions, which bolsters the model's generalizability while maintaining its discriminability. A distinct approach is delineated in MEMO, employing a comprehensive augmentation strategy, AugMix (Hendrycks et al., 2020), for every test image. For each individual data point, an array (typically encompassing 32 or 64) of augmentations from the AugMix set \(\mathcal{A}\) is produced to cultivate a resilient model.
**Selective Augmentations.** The OTTA approaches above often predetermine augmentation policies without addressing potential distribution shifts during inference. Given that test distributions can undergo substantial variations in continuously evolving environments, there exists a risk that such fixed augmentation policies may become ineffectual. In CoTTA (Wang et al., 2022), rather than augmenting every test sample by a fixed strategy, augmentations are judiciously applied only when pronounced domain differences are detected, mitigating the risk of accumulating errors. PETAL (Brahma and Rai, 2023) follows the idea of CoTTA
and applies augmentations to test samples for teacher input only if the domain difference is substantial.
Adversarial Augmentation
Traditional augmentation methodologies offer partial remedies but often fall short due to intra-domain disparities in test data, such as distribution variations and the data's intrinsic learning value. TeSLA, as referenced by (Tomar et al., 2023), diverges from relying on pre-determined or randomly chosen augmentations. Instead, it capitalizes on adversarial data augmentation to ascertain the optimal augmentation strategy. This approach establishes a policy search space \(\mathcal{O}\), comprising a gamut of augmentation strategies, each associated with a specific magnitude parameter, denoted as \(m\). TeSLA then introduces a sub-policy, \(\rho\), constituted of a pre-determined number of augmentations paired with their pertinent magnitudes. The teacher model then undergoes entropy maximization loss to assimilate the policy \(\rho\). The underlying rationale is that entropy maximization discerns the augmentation policy that exhibits the most pronounced discrepancy in teacher model predictions. Refining the model with such pronounced perturbations facilitates an enhanced comprehension of the consistency inherent to individual test data.
#### 3.2.2 Memory Bank
Surpassing the utilization of random, selective, or adversarial augmentations to diversify test data within the current batch, a memory bank emerges as an efficacious mechanism to select representative samples and preserve them for memory replay. This strategy can maintain the global statistics of the test data without imposing additional computational burden on OTTA. The establishment of a memory bank entails addressing two fundamental considerations:
* **Selection Criteria for the Memory Bank**: Deciding what should be stored in the memory bank is the first crucial factor. This involves identifying which instances should be preserved for potential replay during testing.
* **Memory Bank Management**: The second factor pertains to how to maintain the memory bank. Specifically, it raises questions about the design of strategies for inserting new instances into the memory bank and deleting existing ones. These decisions are guided by considerations of effectiveness and necessity.
Considering these factors, memory bank strategies can be classified into two discrete categories: the time-uniform memory bank and the class-balanced memory bank. It's worth noting that numerous approaches opt to include both types simultaneously.
In addressing the challenges posed by both temporally correlated distributions and class-imbalanced issues, NOTE (Gong et al., 2022) introduces Prediction-Balanced Reservoir Sampling (PBRS) to save sample-prediction pairs. The ingenuity of PBRS lies in its fusion of two distinct sampling strategies: time-uniform and prediction-uniform. The time-uniform approach, reservoir sampling (RS), aims to obtain data uniformly over a temporal stream. In detail, for a sample \(x\) predicted as class \(k\), we randomly draw a value \(p\) from a uniform distribution over \([0,1]\); if \(p\) is smaller than the proportion of class \(k\) among all samples in the memory bank, a randomly chosen sample of the same class is replaced with the new sample \(x\). At the same time, the prediction-uniform part prioritizes the predicted labels to ascertain
Figure 6: Timeline of Data-based OTTA methods.
the majority class within the memory. Upon identification, it supplants a randomly selected instance from the majority class with a fresh data sample, thereby ensuring a balanced representation. The design of PBRS ensures a more harmonized distribution of samples across both time and class dimensions, fortifying the model's adaptation capabilities.
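A simplified sketch of a prediction-balanced memory is given below; it only captures the balancing idea (evict from the current majority class when full) and omits the time-uniform reservoir component of the original PBRS.

```python
import random
from collections import defaultdict

class PredictionBalancedMemory:
    """Keep roughly the same number of stored samples per predicted class;
    when full, evict a random sample from the current majority class."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []                      # list of (sample, predicted_class)

    def add(self, sample, pred_class):
        if len(self.items) < self.capacity:
            self.items.append((sample, pred_class))
            return
        counts = defaultdict(int)
        for _, c in self.items:
            counts[c] += 1
        majority = max(counts, key=counts.get)
        candidates = [i for i, (_, c) in enumerate(self.items) if c == majority]
        self.items[random.choice(candidates)] = (sample, pred_class)
```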
Similarly, RoTTA (Yuan et al., 2023) offers a category-balanced sampling with timeliness and uncertainty (CSTU) module to deal with shifted label distributions. In CSTU, the authors propose a category-balanced memory bank \(M\) with a capacity of \(N\), considering the timeliness and uncertainty of samples when updating. Data samples \(x\) in CSTU are stored alongside their predicted labels \(\hat{y}\), a heuristic score \(\mathcal{H}\), and uncertainty metrics \(\mathcal{U}\). The heuristic score is calculated by:
\[\mathcal{H}=\lambda_{t}\frac{1}{1+\exp(-\mathcal{A}/\mathcal{N})}+\lambda_{u} \frac{\mathcal{U}}{\log\mathcal{C}} \tag{10}\]
where \(\lambda_{t}\) and \(\lambda_{u}\) are the trade-off weights between timeliness and uncertainty, \(\mathcal{A}\) is the age of a sample stored in the memory bank (i.e., how many iterations the sample has been stored), \(\mathcal{C}\) is the number of classes, \(\mathcal{N}\) is the capacity of the memory bank, and \(\mathcal{U}\) is the uncertainty measurement, implemented as the entropy of the sample prediction. This heuristic score \(\mathcal{H}\) is then used to decide whether a test sample should be saved into the memory bank for each class. Since a lower heuristic score is always preferred, the intuition is that CSTU aims to maintain fresh (i.e., lower age \(\mathcal{A}\)), balanced, and confident (i.e., lower \(\mathcal{U}\)) test samples, thereby enhancing adaptability during online operations.
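Eq. (10) translates directly into a small scoring function, sketched below with illustrative argument names; lower scores indicate samples that are fresher and more confidently predicted.

```python
import math

def cstu_score(age, uncertainty, num_classes, capacity,
               lambda_t=1.0, lambda_u=1.0):
    """Heuristic score of Eq. (10): lower is better, favouring fresh
    (small age) and confident (small entropy) samples in the memory bank."""
    timeliness = 1.0 / (1.0 + math.exp(-age / capacity))
    normalized_uncertainty = uncertainty / math.log(num_classes)
    return lambda_t * timeliness + lambda_u * normalized_uncertainty
```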
Considering class-balanced data preservation, TeSLA (Tomar et al., 2023) introduces a fixed-size online queue. The underlying concept involves the preservation of pairs comprising weakly augmented refined sample features and their corresponding pseudo-labels. This preservation is conducted considering their predicted class assignments and adheres to the principle of first-in, first-out. Subsequently, these preserved pairs enhance pseudo-label predictions through interactions with their nearest neighbors within the queue.
TSD (Wang et al., 2023) distinguishes itself as an outlier among memory bank-based methodologies, as it does not neatly fit into any existing category. Its objective centers on preserving complete information about test samples in a way that suits the online setting. In TSD, as new test samples are encountered, each sample's embedding and associated logits are saved in a memory bank. These stored representations are subsequently utilized for prototype-based classification. Notably, TSD adopts a strategy inspired by T3A (Iwasawa and Matsuo, 2021) to initialize its memory bank with the weights of the source pre-trained linear classifier.
An additional departure from the conventional approaches is found in the work of ECL (Han et al., 2023). ECL introduces a fixed-length memory bank to store output distributions. This memory bank serves as prior information for thresholding complementary labels, which, in turn, are utilized to refine the model's updates. Subsequently, the memory bank is refreshed using the updated model parameters.
AdaContrast also uses a memory queue, in this case to avoid pushing apart pairs from the same class during contrastive learning. The memory queue is designed to save all previous key features together with their corresponding pseudo-labels.
#### 3.2.3 Summary
Utilizing data-based techniques can prove advantageous in scenarios where a test set exhibits bias or is constrained by specific stylistic limitations. Nonetheless, this approach introduces an additional computational burden, which can be undesirable in online settings. In the subsequent section, we explore an alternative perspective, examining how architectural changes can yield benefits in OTTA.
### Model-based OTTA
In contrast to data-centric OTTA, which emphasizes predictive consistency regularization over diversified or restored samples, model-centric OTTA approaches alter the model architecture to mitigate distribution shifts. Architectural modifications generally fall into two predominant categories: i) the incorporation of novel branches, and ii) the substitution of selected layers. Recently, a novel paradigm of test-time prompting has been introduced, particularly tailored for transformer-based models, which learns a concise set of parameters to accommodate the emergent distribution.
#### 3.3.1 Module Addition
**Input Transformation.** Unlike commonly used optimization-based methods, SLR (Mummadi et al., 2021) introduces an input transformation module (i.e., \(g=f\circ d\), where \(f\) is the source model) to partially cancel test-time distribution shifts. This configuration prepends a trainable network, \(d\), to the source model \(f\), enhancing its capacity to mitigate domain shifts. The transformation function, \(d(x)\), is designed as a convex combination of the original input and a transformed variant,
ensuring both versatility and maintainability. Interestingly, at initialization, this ensures \(d(x)=x\) and \(g=f\), providing a stable starting point. Crucially, the design objective of \(d\) is to partially address domain discrepancies without relying on source domain samples during the full test-time adaptation phase. As the adaptation targets only a subset of parameters, the method retains the innate features of the pre-trained model, especially the convolutional kernels.
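A minimal sketch of how such an input transformation could be parameterized is given below; the tiny convolutional stack and the learnable blending coefficient are illustrative assumptions, chosen only so that \(d(x)=x\) holds exactly at initialization as described above, and do not reproduce the SLR architecture.

```python
import torch
import torch.nn as nn

class InputTransform(nn.Module):
    """d(x): a convex combination of the input and a learned transform, so that
    d(x) = x exactly at initialization. The small conv stack is illustrative."""

    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.alpha = nn.Parameter(torch.zeros(1))   # 0 at init -> d(x) = x

    def forward(self, x):
        a = self.alpha.clamp(0.0, 1.0)              # keep the combination convex
        return (1.0 - a) * x + a * self.net(x)

# g = f o d: transform the input with d, then apply the (frozen) source model f,
# e.g.  g = nn.Sequential(InputTransform(), source_model)
```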
**Adaptation Module.** Rather than directly inserting a specific layer, TAST (Jang et al., 2023) incorporates a uniquely initialized adaptation module atop the feature extractor. The primary task of this module during test time is to train using prototype-based class distributions to predict labels for the test data. Despite the potential benefits, one intrinsic challenge is the potential performance degradation arising from the random initialization of the adaptation module. To navigate around this issue, ensemble strategies are proposed, where the adaptation modules are trained individually. Specifically, BatchEnsemble (Wen et al., 2020) is applied as the adaptation module to enable information exchange between modules. By amalgamating predictions from these independently trained modules, the objective is to achieve more accurate and robust label predictions, presenting a promising frontier for enhancing test-time adaptation performance.
**Classifier.** Cosine similarity-based classifiers (Chen et al., 2009) have emerged as a popular choice, notably outperforming traditional linear classifiers. By prioritizing the angle between vectors, cosine similarity ensures the accurate classification of data samples and delivers a deeper understanding of the inherent semantic relationships. Building on this robust foundation, TAST (Jang et al., 2023) introduces a cosine distance-based classifier; by averaging across all adaptation module predictions, TAST achieves a more reliable prediction.
Similarly, TSD (Wang et al., 2023) opts for a cosine similarity-based classifier, deriving predictions from the features of the current sample in conjunction with its K-nearest neighboring features in the memory bank. Like TAST and TSD, PAD (Wu et al., 2021) also employs a cosine similarity-based classifier. However, there is a key difference in how each approach maintains its prototype information. While TAST and TSD necessitate the upkeep of a queue to save test data information, PAD takes a more streamlined approach and saves source information instead. According to its formulation, the weights of the trained cosine classifier, updated through backpropagation, can naturally function as the prototypes for each source class. This eliminates the need for maintaining a separate set of class prototypes or features, offering a more efficient way to capture the essence of each class.
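The following sketch shows the shared core of these classifiers: prediction by cosine similarity between a test feature and per-class prototypes. The temperature value and the source of the prototypes (classifier weights or stored feature means) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def cosine_classify(features, prototypes, temperature=0.1):
    """Cosine-similarity classification against per-class prototypes.

    features:   (B, D) test-time embeddings (e.g., the ViT class token)
    prototypes: (C, D) one vector per class (e.g., classifier weights or
                running means of stored features).
    """
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = f @ p.t() / temperature      # (B, C) scaled cosine similarities
    return logits.argmax(dim=1), logits
```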
#### 3.3.2 Module Substitution
Layer substitution typically refers to swapping out an existing layer for a new one. Yet, when viewed through the lens of updating BatchNorm statistics, any method related to BatchNorm (barring those straightforwardly optimized by a set loss) can be considered under this umbrella. Examples include the MECTA norm (Hong et al., 2023), MixNorm (Hu et al., 2021), RBN (Yuan et al., 2023), and GpreBN (Yang et al., 2022). We will not revisit these methods in this section to prevent redundancy.
#### 3.3.3 Prompt-based Method
Figure 7: Timeline of Model-based OTTA methods.

Another new category for addressing test data drift is the so-called **test-time prompt tuning**, originating from CLIP (Radford et al., 2021). Its basic idea is that instead of fine-tuning the vision representation (which may cause a loss of model generalizability), it fine-tunes the prompt, which modifies the context of the model input and thus does not distort pre-trained features. One representative of test-time prompt tuning is TPT (Shu et al., 2022), which aims to promote the consistency of the model's predictions across different test image augmentations. Specifically, for each test image, \(N\) randomly augmented views from an augmentation set are generated, and the prompt parameters are updated by minimizing the entropy of the averaged prediction probability distribution across these augmented views. It is worth noting that a confidence selection strategy is proposed to filter out outputs with high entropy, avoiding the noisy updates brought by augmentations.
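A minimal sketch of such a prompt-tuning step is shown below: augmented views of one test image are filtered by confidence, and the entropy of their averaged prediction is minimized, updating only the prompt parameters held by the optimizer. The keep ratio and the abstract `model` interface are illustrative assumptions rather than the exact TPT implementation.

```python
import torch

def marginal_entropy(logits):
    """Entropy of the prediction averaged over the augmented views."""
    probs = logits.softmax(dim=-1)           # (N, C)
    avg = probs.mean(dim=0)
    return -(avg * (avg + 1e-12).log()).sum()

def prompt_tuning_step(model, views, optimizer, keep_ratio=0.1):
    """views: (N, 3, H, W) random augmentations of one test image.  Only the
    prompt parameters should be registered in `optimizer`."""
    logits = model(views)                                   # (N, C)
    view_entropy = -(logits.softmax(-1) * logits.log_softmax(-1)).sum(-1)
    k = max(1, int(keep_ratio * views.size(0)))
    keep = view_entropy.topk(k, largest=False).indices      # most confident views
    loss = marginal_entropy(logits[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```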
The usage of prompts can also benefit test-time adaptation performance. A prime instance is "Decorate the Newcomers" (DN) (Gan et al., 2023). Rooted in prompt learning, DN leverages the mean-teacher model in conjunction with the frozen source pre-trained model to acquire domain-specific and domain-agnostic prompts. It adheres to the mean-teacher framework, receiving weakly and strongly augmented images as input. To decouple the invariance and the specifics of the test image, DN employs two distinct loss functions. Domain-specific information is acquired through the cross-entropy loss between teacher and student model outputs. Furthermore, a parameter insensitivity loss is introduced to penalize parameters susceptible to domain shifts; this, in turn, ensures that the updated domain-insensitive parameters can effectively consolidate domain-agnostic knowledge. The learned prompt is appended to the test image during testing, and predictions are finally obtained from the frozen source model.
Beyond designing a new module for learning the prompt, DePT (Gao et al., 2022) proposes to prepend prompts into the transformer architecture for test-time adaptation. The transformer model is first split into multiple stages, and a learnable prompt is added to each stage, which makes prompt learning a natural fit for a pure test-time adaptation task. During adaptation, only the learnable prompts and the classifier are updated. Similar to DN, DePT applies a mean-teacher model with weak-strong augmentations, with the difference that the student model receives both weak and strong augmentations. With the help of a preserved memory bank, a cross-entropy loss is applied between the memory-bank-guided pseudo label and the student model's output for the strongly augmented test sample. The DINO (Caron et al., 2021) loss with prompt regularization is then added to the final objective to fine-tune the prompt.
#### 3.3.4 Summary
While model-based OTTA methods exhibit effectiveness, their prevalence is overshadowed by the former two groups, primarily due to their inherent dependence on specific backbone architectures. Notably, during layer substitution and addition, even incremental modifications can yield substantial improvements in test-time performance, often obviating the need for alterations in original training paradigms. Such adaptability mechanisms, reminiscent of the "hot-swapping" concept, offer a novel viewpoint for tackling diverse data distributions. A cornerstone of the model-based paradigm is its synergistic relationship with prompting, a strategy empirically validated for enabling zero-shot inference. This synergy affords the model an ability to adapt to test instances in a coherent and reliable fashion, positioning it as a promising route for augmenting adaptability and trustworthiness in OTTA contexts.
## 4 Empirical Study
In this empirical study, we focus on upgrading existing OTTA methods to accommodate the ViT backbone of the current era. Our primary objective is to investigate whether the ideas proposed in these methods can be migrated to transformers. Additionally, we provide solutions for substituting components that cannot.
To comprehensively evaluate the performance of OTTA methods, we conducted a rigorous assessment of seven well-established OTTA algorithms, following a standardized testing protocol to ensure fairness and impartiality. We selected a diverse set of datasets, including four artificially shifted and two real-world shifted datasets. The CIFAR-10-Warehouse repository played a central role in our evaluation, offering a wide range of subsets, including real-world variations sourced from different search engines and images generated through diffusion-based processes. Specifically, we focused on two subsets of CIFAR-10-Warehouse: the Google image subset and the diffusion image subset. These subsets were chosen to facilitate simulations of both real-world and artificial data shifts, allowing for a comprehensive assessment of OTTA methods.
### Implementation Details
**Optimization Details.** In our experimental setup, we employed the PyTorch framework for implementation. The foundational architecture for all approaches is ViT base patch 16 (\(224\times 224\)) (Dosovitskiy et al., 2021),
serving as the network backbone. This backbone architecture is sourced from the Timm repository 1. Our training regimen for the source model on the CIFAR-10 dataset comprised 8,000 iterations, including a warm-up phase spanning 1,600 iterations. The training was conducted with a batch size of 64, employing the Stochastic Gradient Descent (SGD) optimization algorithm with a learning rate of 0.03. For the CIFAR-100 dataset, we retained an identical configuration, with an extended training duration of \(16,000\) iterations and a warm-up period spanning 4,000 iterations. The source model for the ImageNet-1k dataset 2 was directly acquired from the Timm repository. Additionally, we applied basic data augmentation techniques, namely random resizing and cropping to 224 pixels, consistently across all methods. To ensure consistency in the adaptation approach, specific settings are uniformly applied. The Adam optimizer with a momentum term \(\beta\) of 0.9 and a learning rate of 1e-3 is employed to optimize the model. For all datasets, resizing and cropping techniques are applied as a default preprocessing step. During testing, a uniform normalization setting of (0.5, 0.5, 0.5) is adopted to mitigate potential performance fluctuations that may arise from external factors beyond the algorithm's core operations.
Footnote 1: [https://github.com/huggingface/pytorch-image-models](https://github.com/huggingface/pytorch-image-models)
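For reference, a minimal sketch of the adaptation-time setup described above might look as follows; the exact transform composition and any Adam settings beyond those quoted in the text are assumptions.

```python
import timm
import torch
from torchvision import transforms

# ViT-B/16 backbone from timm, Adam with lr 1e-3 and beta 0.9, random
# resize/crop to 224, and a uniform 0.5 normalization (std assumed equal).
model = timm.create_model("vit_base_patch16_224", pretrained=True)

preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

# In practice the parameter list passed here would be restricted to the layers
# being adapted (e.g., LayerNorm affine parameters; see the strategies below).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
```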
**Component substitution.** To facilitate the adaptation of core methods for application on the Vision Transformer (ViT), we have devised a set of fundamental strategies:
* **Transition to LayerNorm**: In light of the absence of a Batch Normalization (BatchNorm) layer in ViT, one key approach involves substituting all BatchNorm updates with LayerNorm updates.
* **Mixup Factors Adjustment**: It is imperative to eliminate any mixup factors applied to original methods based on BatchNorm, as LayerNorm does not consider statistics derived from other data samples.
* **Exclusion of Source Statistics**: Given that LayerNorm does not maintain running statistics, we exclude any components responsible for updating statistics related to the source data.
* **Feature Representation Adjustment**: In scenarios where a feature representation is a prerequisite of the OTTA method, a viable alternative is to use the class embedding (i.e., the first token of the feature sequence from the ViT feature extractor), as this feature is the basis on which the final prediction is made.
* **Pruning of Incompatible Components**: Lastly, it is imperative to disregard any components that cannot be effectively implemented on the ViT backbone, ensuring compatibility with the model's architecture.
These strategies serve as the groundwork for enabling the seamless integration of core methods with ViT, thereby extending the applicability of these methods to the Vision Transformer framework.
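A minimal sketch of the first and fourth strategies, assuming a standard timm ViT, is given below; the helper names are our own, and the `forward_features` behaviour should be checked against the installed timm version.

```python
import torch.nn as nn

def collect_layernorm_params(model):
    """Strategy 1: gather only the LayerNorm affine parameters of a ViT so that
    adaptation touches nothing else."""
    params, names = [], []
    for mod_name, module in model.named_modules():
        if isinstance(module, nn.LayerNorm):
            for p_name, param in module.named_parameters():
                params.append(param)
                names.append(f"{mod_name}.{p_name}")
    return params, names

def freeze_all_but(model, params_to_update):
    """Freeze every parameter, then re-enable only the selected ones."""
    for p in model.parameters():
        p.requires_grad_(False)
    for p in params_to_update:
        p.requires_grad_(True)

# Strategy 4 (assumed timm behaviour -- verify with the installed version):
#   tokens = model.forward_features(x)   # (B, num_tokens, D) token sequence
#   feat   = tokens[:, 0]                # class embedding used as the feature
```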
**Baselines**: To systematically assess and investigate the adaptability of OTTA methods to new backbone architectures, we have judiciously curated a set of seven general and versatile methods for our study. These methods include:
1. **Tent**: A classic OTTA method rooted in BatchNorm updates. To reproduce it on ViTs, we replace its BatchNorm updates with a LayerNorm updating strategy, utilizing soft entropy as the optimization objective (a minimal sketch of this update step is given after this list).

Figure 8: Exemplars of the adopted datasets. The dataset shifts include variations in color, synthetic data, and types of corruption.
2. **CoTTA**: A representative approach that employs the mean-teacher model, parameter reset, and a selective augmentation strategy. While it typically necessitates updating the entire student network, we further assess the LayerNorm updating strategy on the student model in our experimental design. Additionally, we deconstruct CoTTA based on its parameter reset strategy, resulting in four variants: (1) parameter reset with LayerNorm updating, (2) parameter reset with full parameter updating, (3) LayerNorm updating without parameter reset, and (4) full network updating without parameter reset.
3. **SAR**: This method utilizes Sharpness-Aware Minimization (SAM) to seek flat minima, employing the soft entropy loss. We investigate its effectiveness on the ViT model, incorporating the Adam optimizer defined above.
4. **Conjugate PL**: Since we train the source model with the cross-entropy loss, this method is similar to Tent but with the distinction of allowing the model to interact with the data twice: once for updating and once for prediction.
5. **MEMO**: Two versions of MEMO are considered for benchmarking OTTA on ViT: one with full model updating and another with LayerNorm updating. We removed all data normalizations from its augmentation set to maintain consistency and prevent unexpected performance variations.
6. **RoTTA**: RoTTA encompasses ideas from all three OTTA categories. During our experiments, we exclude its RBN module, as LayerNorm in ViT is designed to handle data at the sample level, rendering the RBN module inapplicable.
Figure 9: Comparison of the OTTA performance on CIFAR-10-C and CIFAR-10.1. The upper/bottom plots show the experiments conducted with batch size 16 / 1, respectively, based on the LayerNorm updating strategy.

7. **TAST**: TAST spans three OTTA categories and necessitates a feature representation for distance-based classification. In our evaluation, we employ the class embedding (the first token) as each sample's feature representation, aligning with ViT's use of the classifier's input.
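For concreteness, the sketch below shows one Tent-style adaptation step as re-implemented for ViT in the spirit described above: a soft-entropy loss optimized over only the parameters held by the optimizer (i.e., the LayerNorm affine parameters collected earlier). Function names are illustrative.

```python
import torch

def soft_entropy(logits):
    """Shannon entropy of the softmax prediction, averaged over the batch."""
    log_probs = logits.log_softmax(dim=-1)
    return -(log_probs.exp() * log_probs).sum(dim=-1).mean()

@torch.enable_grad()
def tent_style_step(model, x, optimizer):
    """One online step: predict, minimize soft entropy, and update only the
    parameters registered in `optimizer` (here, the LayerNorm affines)."""
    logits = model(x)
    loss = soft_entropy(logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()
```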
Despite the many available OTTA methods, thoroughly evaluating this selected subset will yield valuable insights commensurate with our expectations. In this survey, we answer the following research questions:
### Is OTTA still working with ViT?
For an assessment of the chosen OTTA methods' efficacy, we compare them against the source-only baseline, which denotes inference devoid of adaptation. We address this question individually for each dataset in the subsequent sections.
#### 4.2.1 On CIFAR-10-C and CIFAR-10.1 Benchmarks
As illustrated in Fig. 9, we fully evaluate the CIFAR-10-C and CIFAR-10.1 datasets, with batch sizes 1 and 16, to simulate streaming data and a limited computational environment. To clearly understand the prediction patterns, we organize our observations into three aspects: 1) variation in types of corruption, 2) variation in batch sizes, and 3) variation in adaptation strategies.
* **Noise-based corruptions.** As depicted in Fig. 16, noise corruptions pose a significant issue: most methods, including direct inference, experience high error rates regardless of batch size. This can be attributed to a substantial domain shift that most methods are unable to handle. Adapting to noise corruption is challenging for entropy-based methods and MEMO, regardless of batch size, which again points to the significant domain gap discussed earlier. While uncertainty-based optimization aims at increasing the model's confidence, it cannot directly correct wrong predictions. MEMO also relies on predictions to a certain extent and therefore exhibits similar patterns. In contrast, consistent results are obtained irrespective of batch size in the defocus blur (defocus), pixelate (pixel), and brightness (bright) domains, which may be due to the small gap between these corruptions and the source images. However, the performance of Tent is significantly inferior to baseline direct inference, consistent with the other domains when the batch size is 1.
* **Batch size.** Changing the batch size does not change the mean error much, except for the soft entropy-based methods. As discussed in Sec. 4.4, a large batch size can stabilize the optimization process. A similar observation can be made on CIFAR-10.1, where the two soft entropy-based methods (Tent and Conj-CE) fail at small batch sizes.
* **Adaptation strategy.** SAR and RoTTA perform stably regardless of the domain and batch size. RoTTA is insensitive to batch size because it maintains a memory bank of fresh and reliable data; this additional high-quality information mitigates the performance degradation caused by small batch sizes. SAR, on the other hand, seeks flat minima, ensuring that the model is optimized for stability and preventing biased learning during adaptation. MEMO performs well in certain domains even when the batch size is as small as 1; this is a deliberate feature of MEMO, which was designed to support extremely small batch sizes. However, the difference in MEMO's performance across domains highlights that some marginal distributions are difficult to estimate accurately or are flawed, resulting in significant disparities among domains.
#### 4.2.2 On CIFAR-100-C Benchmark
The pattern of errors in CIFAR-100-C is similar to that of CIFAR-10-C. However, the performance of OTTA methods is worse when the batch size is set to 16. To avoid redundancy, we only discuss the different patterns observed in CIFAR-100-C.
* CoTTA performs worse for some corruption domains when the batch size is set to 16. This could be due to the fact that as the number of classes increases (from 10 to 100), the adaptation becomes harder. Additionally, the parameter reset strategy aims to select a fixed ratio of model parameters and reset them to the source pre-trained one, leading to an unlearning process. In simpler terms, this means that the model has not learned much knowledge on the new domains and its existing knowledge has already been partially deleted.
* In certain domains, SAR begins to exhibit limitations, particularly in the presence of noise distortions such as Gaussian noise, shot noise, and impulse noise. One possible explanation is that, rather than reaching a flat minimum within a fixed perturbation radius as intended by SAM, the optimization on CIFAR-100-C may not move in the correct direction.
* Regardless of the batch size, the degradation of the contrast corruption domain worsens over time. As indicated in Figure 10, only RoTTA can consistently exceed direct inference in the contrast column. This
implies that preserving valuable sample information is crucial, particularly for challenging domains.
#### 4.2.3 On Imagenet-C Benchmark
Compared with CIFAR-10-C and CIFAR-100-C, ImageNet-C shows more distinct patterns. As illustrated in Figure 11, when the batch size is 16, only SAR, Conj-CE, and RoTTA perform better than source-model inference in terms of the mean error, while Tent, MEMO, and CoTTA exhibit significantly poor results. The error rate within each domain follows a similar pattern. There are several potential reasons for this. First, there might be an intra-class diversity gap within the ImageNet dataset, so that two gaps exist simultaneously, hindering the adaptation potential. Second, the model may struggle to learn certain knowledge, similar to the phenomenon described in Section 4.2.2; in this case, the parameter reset in CoTTA could further deteriorate the model's discriminability.
With a batch size of 1, only RoTTA survives. This emphasizes the importance of saving informative data, indicating that when facing severe shifts, some dedicated design is necessary.
#### 4.2.4 On CIFAR-Warehouse
We assess OTTA techniques on the CIFAR-10-warehouse dataset, encapsulating real-world shifts across 10 classes consistent with CIFAR. For clarity, we choose two representative domains to highlight real-world shift and diffusion synthesis shift.
Figure 10: Comparison of the OTTA performance on CIFAR-100-C. The upper/bottom plots show the experiments conducted with batch size 16 / 1, respectively, based on the LayerNorm updating strategy.

The **Google split** within CIFAR-10-Warehouse comprises images sourced from the Google search engine, serving as a test of contemporary OTTA methods' aptitude for managing real-world distributional deviations.
Based on the results presented in Figure 12, we have the following observations regarding the Google split.
* When the batch size is 16, most of the examined OTTA methods perform similarly to or even better than direct inference. The top panel of Fig. 12 shows that the baseline performance (represented by the dotted line) has a higher error rate than the OTTA methods in most subdomains, suggesting that existing OTTA methods are, at the very least, unlikely to harm the source pre-trained model.
* In certain domains, uncertainty reduction techniques may not work as expected. Specifically, in domain G-09, most uncertainty-based methods tend to fail. This could be due to an unexpectedly large gap in those domains. However, other methods, such as RoTTA and SAR, still manage to perform well by either preserving high-quality data information or seeking flat optimization to mitigate domain shift.
When the batch size is 1, the pattern is clearer, as shown in Fig. 12.
* Uncertainty-based methods cause performance degradation in nearly all domains, which may be due to instability with extremely small batches of data. For Tent, this degradation occurs in essentially every domain.
* RoTTA and SAR exhibit exceptional stability regardless of batch sizes, as observed in Sec. 4.2.1
Figure 11: Comparison of the OTTA performance on ImageNet-C. The upper/bottom plots show the experiments conducted with batch size 16 / 1, respectively, based on the LayerNorm updating strategy.

Regardless of the batch size, some observations can be made. First, for G-07, all OTTA methods and direct inference perform similarly, irrespective of batch size. Interestingly, even when the batch size is just 1, most methods achieve better results than with a batch size of 16. This is probably because each test sample is informative on its own yet diverse from the others; when mixed together, they confuse the model rather than stabilize the adaptation, as the model does not have a clear direction to follow.
Second, G-09 fails in most cases, regardless of batch size. Unlike the above observation, this is potentially because of a large domain gap that few methods can mitigate.
**Diffusion split.** We also evaluated OTTA methods within the diffusion domain of the CIFAR-warehouse dataset. The results, as depicted in Fig. 13, reveal a consistent pattern in the performance of all methods across 12 subdomains, except for domain DM-05.
* Tent and Conjugate PL methods, which follow the uncertainty reduction objective described in Section 3.1.3, show worse performance on DM-05 than pure inference. This anomaly can be attributed to a significant domain shift, resulting in a biased understanding of the information. The incorrect indicator learning in the uncertainty-based loss function could further lead to model biases.
* On the contrary, these uncertainty-based algorithms show superior performance in the other domains, further indicating the varying difficulty of the diffusion-based subdomains.
Figure 12: Comparison of the OTTA performance on the **Google split** of CIFAR-10-Warehouse. The upper/bottom plots show the experiments conducted with batch size 16 / 1, respectively, based on the LayerNorm updating strategy.

* Beyond these intuitions, the stable performance of CoTTA, SAR, and RoTTA is particularly noteworthy. Since SAR borrows the idea of SAM, it seeks a flat minimum: the model is optimized toward a data-insensitive region, which yields not only confident predictions via its soft entropy loss but, most importantly, stable predictions. With its parameter reset strategy, CoTTA can control biased adaptation, allowing partial recovery of knowledge from the source domain; this can be seen as an important factor in CoTTA's stable performance, even in the DM-05 subdomain. RoTTA, by maintaining a well-designed memory bank, achieves similar results to CoTTA. Both algorithms can be viewed as solutions that preserve useful information, albeit with different strategies.
**Conclusion.** Based on the experiments conducted, we find that most methods follow a similar pattern across different datasets, demonstrating that current OTTA methods have the potential to tackle various covariate shifts. Notably, RoTTA and SAR consistently performed well and remained stable across different domains. This is particularly significant since we observed that the domain type can significantly impact the model adaptation process. The characteristics exhibited by these two methods could inform future research in this area.
### Is OTTA efficient?
In order to evaluate the effectiveness of certain OTTA algorithms in practical applications, especially when there are hardware limitations, we propose using GFLOPs as the measure of efficiency. In Fig. 14, we can observe the relationship between the performance of OTTA models and their corresponding GFLOPs for batch size 1 and 16 cases. Our main objective is to investigate how the computational expense of a model is related to its performance.
Figure 14: Mean error vs Giga Floating Point Operations Per Second (GFLOPs). We plot the correlation between them under batch size 16/1 settings for the CIFAR-10-C dataset.
Figure 13: Comparison of the OTTA performance on the **diffusion split** of CIFAR-10-Warehouse. All the experiments are conducted with batch size 16 based on the LayerNorm updating strategy. The dotted line indicates the baseline performance without any adaptation.
When it comes to evaluating performance, lower GFLOPs and lower mean error are always preferred. Based on this criterion, we find that MEMO prioritizes high performance at the cost of computational efficiency, owing to its optimization over the averaged marginal distribution of augmented views. However, some OTTA methods can balance the trade-off between these two evaluation metrics; specifically, when the batch size is 16, RoTTA can achieve low error rates while also providing effective model updates.
As shown on the right-hand side of the figure, with a batch size of 1, most OTTA methods offer satisfactory performance while requiring a low computational cost. RoTTA, in particular, demonstrates lower hardware requirements and better performance, even when compared to Tent with a batch size of 16. This indicates, first, that choosing a small batch size can help reduce hardware requirements; however, an extremely small batch size, while beneficial for hardware capacity, may also reduce the accuracy of the model to some extent, particularly when compared to a batch size of 16. Second, choosing a proper method can improve the balance between the two evaluation metrics regardless of batch size, even if that method is not the best on either metric evaluated separately.
In conclusion, reducing the batch size is a straightforward and practical solution to manage the hardware capacity. However, it is important to keep in mind that this may impact the model's accuracy. A careful balance should be struck between these two factors based on the particular deployment scenario.
### Is OTTA Sensitive to Hyperparameter Selection?
**Batch size still matters... but not too much.** One non-negligible factor in the OTTA setting is the batch size. Since the data arrive and are processed at the batch level, the sensitivity of performance to the batch size is an important factor to examine.
As shown in Fig. 16, we study the impact of varying batch sizes on Tent's performance on the CIFAR-10-C dataset. The results indicate that, for most corruptions, Tent's performance is heavily influenced by the batch size within the range of 1 to 16. However, the dependence on batch size is significantly reduced for batch sizes of 16, 32, 64, and 128, indicating that LayerNorm updating is less batch-size dependent than the original BatchNorm-based settings with ResNet backbones. Similar patterns are observed on the other datasets, as shown in Fig. 15.
However, one question may arise: why does the batch size matter between 1 and 16? It turns out that although LayerNorm can eliminate the influence of batch statistics, a **stable loss** still requires a proper batch size. This is also evident from Fig. 12 on the Google split of CIFAR-10-Warehouse, where the results for batch size 16 surpass those for batch size 1 by a large gap for uncertainty-based optimization methods (since these methods try to improve test-time performance purely through loss optimization).
Moreover, once the batch size is viewed as a means of stabilizing the loss, some interesting patterns emerge. For example, as shown in Fig. 15, test sets such as CIFAR-10.1 and CIFAR-10-W show stable performance when the batch size varies between 16 and 64; as the source-only performance is already satisfactory, these datasets can be treated as relatively simple tasks. However, a large batch size still matters in harder cases, as shown by the performance on CIFAR-100-C and ImageNet-C. This supports the above idea and suggests a way to choose a proper batch size for different data domains.
As shown in Fig. 16, there is another interesting trend where large batch sizes cannot mitigate the knowledge gap when faced with hard-to-learn corruption domains like Gaussian noise and Shot noise. This highlights the bottleneck in setting batch sizes.
**Optimization layer matters.** As the above experiments were all conducted on top of the LayerNorm updating strategy, in this section we examine whether LayerNorm updating is indispensable. For better illustration, we summarize the LayerNorm-based and full-model optimization results in Table 2.
Specifically, we conduct this ablation study mainly on CoTTA and MEMO. While both originally require a full model update in their papers, we examine whether updating only LayerNorm leads to better adaptation results.
Figure 15: Impact of varying batch sizes on CIFAR-10.1, CIFAR-10-W, CIFAR-100-C, ImageNet-C. The base model is Tent optimized on LayerNorm.
In our empirical study, we conducted a comprehensive analysis across multiple datasets, including CIFAR-10-C, CIFAR-100-C, ImageNet-C, CIFAR-Warehouse, and CIFAR10.1, to assess the impact of the LayerNorm updating strategy in CoTTA and MEMO models. Notably, for the CIFAR-10-C dataset, we observed a substantial performance improvement of 52.91% when utilizing LayerNorm update compared to updating the entire set of student model parameters. This result underscores the pivotal role of the LayerNorm updating strategy. Similar trends of improved performance were consistently observed in all other datasets, manifesting as \(\Delta\) Err improvements of 35.06%, 27.62%, 15.02%, and 34.99%, respectively. Remarkably, when examining CoTTA* (CoTTA without parameter reset), we found that updating LayerNorm alone can lead to significant \(\Delta\) Err improvements.
Furthermore, in the MEMO model, the LayerNorm update strategy yielded remarkable enhancements, with the most notable improvement recorded on CIFAR-10.1, where the \(\Delta\) Err reached an impressive 91.33%.
In summary, our empirical findings unequivocally demonstrate that the LayerNorm update strategy plays a pivotal and highly beneficial role in ViT-based OTTA methods, showcasing its significance as a critical component in enhancing model performance across various datasets.
## 5 Future Directions
While our initial evaluations shed light on the performance of current OTTA methods using the Vision Transformer (ViT) backbone, it's evident that many were not tailored explicitly for ViT, resulting in suboptimal adaptation outcomes. Below, we delineate the attributes of an ideal OTTA for forthcoming research.
* **OTTA for Large-scale Models**: There is a pressing need to explore this avenue further. It necessitates the inception and empirical validation of novel algorithms under standardized benchmarks, constrained batch sizes, and contemporary backbones or expansive pre-trained models. With the emergence of substantial Visual Language Models, such as CLIP, the demand for adapting multi-modal models to specialized datasets exhibiting dynamic shifts may intensify.
* **Hot-swappable OTTA**: Given the rapid evolution of backbone architectures, forthcoming OTTA methods should emphasize extensibility and generalizability in the face of architectural alterations. The crux of the challenge will be devising unsupervised proxies that can aptly simulate prospective, unseen shifts.

Table 2: Comparison of model update strategies across five benchmarks: LayerNorm only (LN) vs. full model (ALL). \({}^{*}\) indicates the variant of CoTTA with no parameter reset mechanism.

| Method | CIFAR-10-C (LN / ALL / \(\Delta\) Err) | CIFAR-10.1 (LN / ALL / \(\Delta\) Err) | CIFAR-100-C (LN / ALL / \(\Delta\) Err) | ImageNet-C (LN / ALL / \(\Delta\) Err) | CIFAR-Warehouse (LN / ALL / \(\Delta\) Err) |
| --- | --- | --- | --- | --- | --- |
| CoTTA | 0.2471 / 0.5247 / -52.91% | 0.0750 / 0.1155 / -35.06% | 0.4663 / 0.6442 / -27.62% | 0.7237 / 0.8516 / -15.02% | 0.1823 / 0.2804 / -34.99% |
| CoTTA\({}^{*}\) | 0.2468 / 0.7708 / -67.98% | 0.0750 / 0.1640 / -54.27% | 0.4682 / 0.8961 / -47.75% | 0.7229 / 0.9054 / -20.16% | 0.1820 / 0.5643 / -66.68% |
| MEMO | 0.2116 / 0.8999 / -76.49% | 0.0780 / 0.9000 / -91.33% | 0.4338 / 0.9900 / -56.18% | 0.9032 / 0.9995 / -96.35% | 0.1746 / 0.8955 / -80.50% |

Figure 16: Impact of varying batch sizes on CIFAR-10-C. The base model is Tent optimized on LayerNorm.
* **Stable and Robust Optimization for OTTA**: In alignment with our empirical findings, the top-performing methods predominantly leverage the mean teacher architecture or employ a flatness-aware optimizer to effectively tackle a spectrum of distribution shifts. While increasing the batch size offers a partial remedy, this solution isn't always feasible. Hence, it becomes imperative to formulate strategies that enable a stable and robust gradient descent even with a limited set of unlabeled test samples.
## 6 Conclusion
Online Test-Time Adaptation (OTTA) has emerged as a crucial area of research in the realm of deep learning. In this paper, we present a comprehensive review of OTTA, offering a detailed explanation of existing methodologies, available datasets, evaluation benchmarks, and practical applications. Additionally, we conduct extensive experiments on the various OTTA methods currently in use, assessing both effectiveness and efficiency. Drawing from this analysis, we identify several key research challenges that have the potential to shape the future of OTTA research. We anticipate that this survey will serve as a valuable resource for further exploration.
**Data Availability Statement:**
The data that support the findings of this study are openly available as summarized in Table 1.
An open-source repository including all the implementation code and experimental configurations of this work will be released.
## 7 Appendix
Below, we include comprehensive tables for ImageNet-C and CIFAR-10, reporting parameter sizes and the number of trainable parameters for reference.
|
2309.05252 | A severe challenge to the MOND phenomenology in our Galaxy | Modified Newtonian Dynamics (MOND) is one of the most popular alternative
theories of dark matter to explain the missing mass problem in galaxies.
Although it remains controversial regarding MOND as a fundamental theory, MOND
phenomenology has been shown to widely apply in different galaxies, which gives
challenges to the standard $\Lambda$ cold dark matter model. In this article,
we derive analytically the galactic rotation curve gradient in the MOND
framework and present a rigorous analysis to examine the MOND phenomenology in
our Galaxy. By assuming a benchmark baryonic disk density profile and two
popular families of MOND interpolating functions, we show for the first time
that the recent discovery of the declining Galactic rotation curve in the outer
region ($R \approx 17-23$ kpc) can almost rule out the MOND phenomenology at
more than $5\sigma$. This strongly supports some of the previous studies
claiming that MOND is neither a fundamental theory nor a universal description
of galactic properties. | Man Ho Chan, Ka Chung Law | 2023-09-11T06:04:18Z | http://arxiv.org/abs/2309.05252v1 | # A severe challenge to the MOND phenomenology in our Galaxy
###### Abstract
Modified Newtonian Dynamics (MOND) is one of the most popular alternative theories of dark matter to explain the missing mass problem in galaxies. Although it remains controversial regarding MOND as a fundamental theory, MOND phenomenology has been shown to widely apply in different galaxies, which gives challenges to the standard \(\Lambda\) cold dark matter model. In this article, we derive analytically the galactic rotation curve gradient in the MOND framework and present a rigorous analysis to examine the MOND phenomenology in our Galaxy. By assuming a benchmark baryonic disk density profile and two popular families of MOND interpolating functions, we show for the first time that the recent discovery of the declining Galactic rotation curve in the outer region (\(R\approx 17-23\) kpc) can almost rule out the MOND phenomenology at more than \(5\sigma\). This strongly supports some of the previous studies claiming that MOND is neither a fundamental theory nor a universal description of galactic properties.
Modified Gravity, Galaxy
## 1 Introduction
Dark matter theory and modified gravity theory are two competing classes of theories proposed to explain the missing mass problem in galaxies and galaxy clusters (Bertone & Tait, 2018). The controversy between these two theories is still ongoing (Martens et al., 2022). The successful predictions based on the standard \(\Lambda\) cold dark matter (\(\Lambda\)CDM) model in large-scale structures have lent important credence to dark matter theory (Croft et al., 1999; Spergel & Steinhardt, 2000; Peacock et al., 2001), while galactic relations (e.g., the baryonic Tully-Fisher relation and the radial acceleration relation) lend a certain support to modified gravity theory (McGaugh, 2020; Islam & Dutta, 2020).
As one of the earliest versions of modified gravity theory, Modified Newtonian Dynamics (MOND) has been examined for four decades. It is well-known that MOND works very well in galaxies but it gives poor fits with the data of galaxy clusters (Sanders & McGaugh, 2002; Chan & Del Popolo, 2020). Although it is dubious for MOND to be a fundamental theory (Chan, 2013), the phenomenological description using MOND agrees with the data in galaxies. For example, the baryonic Tully-Fisher relation (Lelli et al., 2019) and the radial acceleration relation (McGaugh, Lelli & Schombert, 2016) in galaxies give excellent agreement with the MOND's predictions (McGaugh, 2020; Islam & Dutta, 2020). Also, the apparent flat rotation curves (RCs) in most of the galaxies match the MOND behavior at large distance from the galactic centres (Sanders & McGaugh, 2002; Gentile, 2008). Therefore, these potentially suggest that MOND can be regarded as a universal phenomenological description in galaxies (the MOND phenomenology) (Gentile, 2008; Dutton et al., 2019). Many studies have used the MOND phenomenology to study galactic dark matter and galactic scaling relations (Gentile et al., 2009; Ho, Minic & Ng, 2010). However, most of the galactic data have large uncertainties. For example, the high-quality rotation curve data (SPARC) used in supporting MOND have an average of 7.1% observational uncertainty (not including systematic uncertainties) (Lelli, McGaugh & Schombert, 2016). Therefore, it is very difficult for us to rigorously verify or falsify MOND as a universal phenomenological description on galactic scale.
On the other hand, recent high-quality observations of the Milky Way rotation curve (MWRC) data in the outer region (\(R>17\) kpc) give very small marginal errors (with an average of 2-3% observational uncertainty only) in the rotation curve fittings (Eilers et al., 2019; Wang, Hammer & Yang, 2022; Labini et al., 2023; Ou et al., 2023). The MWRC shows a very clear decreasing trend from \(V\approx 220\) km/s to \(V\approx 170\) km/s in the range of \(R=12-27\) kpc (Labini et al., 2023; Ou et al., 2023). The decreasing RC may even extend to 200 kpc based on the data of the Milky Way satellites, although the decreasing rate is relatively smaller than that in the range of \(R=12-27\) kpc (Vasiliev, Belokurov & Erkal, 2021; Wang, Hammer & Yang, 2022). Since MOND generally suggests a flat rotation curve for large \(R\), such a decline in MWRC is not trivial for MOND. However, statistical margins might still allow an almost flat rotation curve to fit the decline in MWRC due to the measurement uncertainties.
In view of this problem, a more precise way is to focus particularly on the RC gradient \(dV/dR\) to investigate MOND. Earlier measurements give \(dV/dR=-(1.7\pm 0.1)\) km/s/kpc at large \(R\)(Eilers et al., 2019), which can still be explained by the MOND behavior (McGaugh, 2018, 2023). However, more recent measurements and analyses give \(dV/dR=-(2.3\pm 0.2)\) km/s/kpc (Wang et al., 2023) and even \(dV/dR<-3\) km/s/kpc (Labini et al., 2023; Ou et al., 2023) at large \(R\), which might give challenges to the MOND phenomenological
description.
In this article, we present a rigorous analysis to understand how steep a decline in \(dV/dR\) MOND can tolerate in galaxies. We specifically use the MWRC data to examine the MOND phenomenology using two major families of MOND interpolating functions. By assuming a benchmark baryonic disk density profile, we show that the MWRC data rule out the MOND phenomenology for the two major families of interpolating functions, which almost exhaustively represent all possible variations of MOND, at more than \(5\sigma\). This suggests that there is likely no room for the MOND phenomenology in our Galaxy, which provides the first severe challenge to the alleged universal nature of MOND phenomenology on galactic scales.
## 2 The rotation curve behavior in the MOND framework
MOND suggests a modification of gravity if the test particle is moving with an acceleration smaller than a critical value \(a_{0}\). When the normal Newtonian gravitational acceleration \(g_{N}\ll a_{0}\) (i.e. the deep-MOND regime), the gravity would be modified to (Sanders & McGaugh, 2002)
\[g=\sqrt{g_{N}a_{0}}. \tag{1}\]
When \(g_{N}\gg a_{0}\) (i.e. the Newtonian regime), the gravity would restore back to the Newtonian description: \(g=g_{N}\). One can imagine that there is a transition from \(g=\sqrt{g_{N}a_{0}}\) to \(g=g_{N}\) when \(g_{N}\) is changing from a very small value to a large value. In the MOND framework, the transition is described by the interpolating function \(\nu\)(Famaey & McGaugh, 2012). Therefore, the modification of gravity can be generally re-written as
\[g=\nu(g_{N}/a_{0})g_{N}, \tag{2}\]
where \(\nu(z)=1\) when \(z\rightarrow\infty\) and \(\nu(z)=\sqrt{1/z}\) when \(z\to 0\). Although there is no theoretical prediction of \(\nu\) based on MOND, there are some suggested families of functions which can satisfy the required transition (Famaey & McGaugh, 2012):
\[\nu(z)=\left[\frac{1+(1+4z^{-p})^{1/2}}{2}\right]^{1/p}, \tag{3}\]
\[\nu(z)=[1-\exp(-z^{\delta/2})]^{-1/\delta}. \tag{4}\]
Here, the parameters \(p\) and \(\delta\) are positive real numbers. For the \(p\)-family, \(p=1\) and \(p=2\) are commonly known as the simple interpolating function and the standard interpolating function respectively (Famaey & McGaugh, 2012; Dutton et al., 2019). For the \(\delta\)-family, \(\delta=1\) is the well-known function which provides the best description of the radial acceleration
relation in galaxies (McGaugh, Lelli & Schombert, 2016; Dutton et al., 2019). In any case, different possible values of \(p\) and \(\delta\) can almost exhaustively represent all possible ways of transition between the deep-MOND and Newtonian regimes (Dutton et al., 2019). Although most of the previous studies took \(p\geq 1\) and \(\delta\geq 1\)(McGaugh, 2008; Dutton et al., 2019), we generally allow that \(p\) and \(\delta\) can be any positive real numbers first.
If dark matter does not exist, the gravity is fully contributed by the baryonic matter. In a typical galaxy, the baryonic matter is dominated by the bulge component and the disk component. Since we are concerning about the outer region of a galaxy (i.e. large \(R\)), the bulge component can be well-approximated by a point-mass function at the Galactic centre with the bulge mass \(M_{b}\). For the disk component, early observations have shown that the disk surface brightness of many galaxies can be well-approximated by an exponential function, including our Galaxy (Freeman, 1970; Juric et al., 2008; de Jong et al., 2010; McGaugh, 2016). Therefore, most of the past studies and simulations have used an exponential function to model the surface mass density of the disk component in our Galaxy (Misiriotis et al., 2006; Sofue, 2012; Bovy & Rix, 2013; Licquia & Newman, 2015):
\[\Sigma=\Sigma_{0}e^{-R/a_{d}}, \tag{5}\]
where \(a_{d}\) is the disk scale length and \(\Sigma_{0}\) is the central surface mass density. The total disk mass can be given by \(M_{d}=2\pi a_{d}^{2}\Sigma_{0}\). The Newtonian baryonic RC (without MOND transformation) is given by (Freeman, 1970; Sofue, 2012)
\[V_{\rm bar}(x)=\sqrt{Ax^{2}\left[I_{0}(x)K_{0}(x)-I_{1}(x)K_{1}(x)\right]+ \frac{B}{x}}, \tag{6}\]
where \(A=2GM_{d}/a_{d}\), \(B=GM_{b}/(2a_{d})\), \(x=R/(2a_{d})\), \(I_{n}\) and \(K_{n}\) are the modified Bessel functions of the \(n^{\rm th}\) kind.
To make the analysis more comprehensive, apart from following the benchmark exponential function to model the disk component, we will consider an extreme case in which all of the baryonic disk mass is concentrated in \(R<17\) kpc (i.e. the concentrated baryonic disk model). This would represent a Keplerian decrease in baryonic RC contribution and hence contribute to a more negative RC slope. For this extreme scenario, the Newtonian baryonic RC without MOND transformation is
\[V_{\rm bar}(x)=\sqrt{\frac{A}{8x}+\frac{B}{x}}. \tag{7}\]
Under the framework of MOND, we can transform the Newtonian baryonic RC to the apparent RC \(V(R)\) (i.e. the actual rotation curve observed) by using Eq. (2) as
\[V(x)=\sqrt{\nu\left(\frac{V_{\rm bar}^{2}(x)}{2a_{0}a_{d}x}\right)}V_{\rm bar }(x). \tag{8}\]
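As a worked example, the following sketch (assuming standard unit conversions and the fiducial Galactic parameters quoted in Section 3) evaluates the baryonic RC of Eq. (6) and the MOND apparent RC of Eq. (8) with the simple interpolating function (\(p=1\)); it is meant only to illustrate the near-flat behavior of the MOND prediction at large \(R\), not to reproduce our full analysis.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

# Units: kpc, km/s, solar masses (assumed standard conversions).
G = 4.30091e-6                       # kpc (km/s)^2 / M_sun
a0 = 1.2e-8 * 1e-5 * 3.0857e16       # 1.2e-8 cm/s^2 -> ~3.70e3 (km/s)^2 / kpc
Mb, Md, ad = 1.55e10, 3.65e10, 2.35  # fiducial bulge/disk mass and scale length

def v_bar(R):
    """Newtonian baryonic rotation curve, Eq. (6), with R in kpc."""
    x = R / (2.0 * ad)
    A = 2.0 * G * Md / ad
    B = G * Mb / (2.0 * ad)
    return np.sqrt(A * x**2 * (i0(x) * k0(x) - i1(x) * k1(x)) + B / x)

def nu_simple(z):
    """p-family interpolating function, Eq. (3), with p = 1."""
    return 0.5 * (1.0 + np.sqrt(1.0 + 4.0 / z))

def v_mond(R):
    """Apparent rotation curve under MOND, Eq. (8)."""
    vb = v_bar(R)
    gN = vb**2 / R                    # Newtonian acceleration, (km/s)^2 / kpc
    return np.sqrt(nu_simple(gN / a0)) * vb

R = np.linspace(17.0, 23.0, 61)
slope = np.gradient(v_mond(R), R)     # km/s/kpc
print(v_mond(R)[[0, -1]], slope.mean())
# The predicted decline is far shallower than the observed dV/dR ~ -5 km/s/kpc.
```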
To understand the behavior of the RC, we take the derivative on \(V(x)\) to get the analytic forms of the RC gradient. For the \(p\)-family, by substituting Eq. (3) into Eq. (8), we can get
\[V(x)=2^{-1/2p}V_{\rm bar}\left(1+\sqrt{1+4y^{p}}\right)^{1/2p}, \tag{9}\]
where \(y=2a_{0}a_{d}x/V_{\rm bar}^{2}\). Taking the derivative on both sides, we can get the RC gradient for the \(p\)-family:
\[\frac{dV}{dx}=\frac{(\sqrt{1+4y^{p}}+1+2y^{p})V_{\rm bar}^{\prime}+2y^{p-1}a_{ 0}a_{d}V_{\rm bar}^{-1}}{\sqrt{2^{1/p}(1+\sqrt{1+4y^{p}})^{2-1/p}(1+4y^{p})}}. \tag{10}\]
For the \(\delta\)-family, we can write the apparent RC explicitly:
\[V(x)=\sqrt{\left[1-\exp\left(-y^{-\delta/2}\right)\right]^{-1/\delta}}V_{\rm bar}. \tag{11}\]
Similarly, by taking the derivative on both sides, we get
\[\frac{dV}{dx}=\left[1-\exp\left(-y^{-\delta/2}\right)\right]^{-1/2\delta} \left[V_{\rm bar}^{\prime}+\frac{V_{\rm bar}^{2}y^{1-\delta/2}}{4\exp(y^{- \delta/2})-4}\times\frac{V_{\rm bar}-2xV_{\rm bar}^{\prime}}{2a_{0}a_{d}x^{2}} \right]. \tag{12}\]
Here, \(V_{\rm bar}^{\prime}\) is the derivative on the Newtonian baryonic RC. For the benchmark and the concentrated baryonic disk models, we have
\[V_{\rm bar}^{\prime} = \frac{1}{2V_{\rm bar}(x)}\left[2Ax\left(I_{0}(x)K_{0}(x)+xI_{1}( x)K_{0}(x)\right.\right. \tag{13}\] \[\left.\left.-xI_{0}(x)K_{1}(x)\right)-\frac{B}{x^{2}}\right],\]
and
\[V_{\rm bar}^{\prime}=-\frac{1}{2V_{\rm bar}}\left(\frac{A}{8x^{2}}+\frac{B}{x^ {2}}\right) \tag{14}\]
respectively. After grouping the appropriate terms in Eq. (12), we get the RC gradient for the \(\delta\)-family:
\[\frac{dV}{dx}=\frac{[\exp(y^{-\delta/2})-1-0.5y^{-\delta/2}]V_{\rm bar}^{ \prime}+0.25y^{-\delta/2}V_{\rm bar}x^{-1}}{[1-\exp(-y^{-\delta/2})]^{1/2 \delta}[\exp(y^{-\delta/2})-1]}. \tag{15}\]
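The analytic gradient of Eq. (10), together with \(V_{\rm bar}^{\prime}\) from Eq. (13), can be cross-checked numerically; the short sketch below does so with a central finite difference of Eq. (9) at the reference radius, again assuming the fiducial parameters.

```python
import numpy as np
from scipy.special import i0, i1, k0, k1

G, a0 = 4.30091e-6, 3.70e3            # kpc (km/s)^2/M_sun and (km/s)^2/kpc
Mb, Md, ad = 1.55e10, 3.65e10, 2.35   # same fiducial parameters as above
A, B = 2 * G * Md / ad, G * Mb / (2 * ad)

def v_bar(x):                          # Eq. (6), with x = R / (2 a_d)
    return np.sqrt(A * x**2 * (i0(x) * k0(x) - i1(x) * k1(x)) + B / x)

def v_bar_prime(x):                    # Eq. (13)
    disk = 2 * A * x * (i0(x) * k0(x) + x * i1(x) * k0(x) - x * i0(x) * k1(x))
    return (disk - B / x**2) / (2 * v_bar(x))

def v_mond(x, p=1.0):                  # Eq. (9)
    y = 2 * a0 * ad * x / v_bar(x) ** 2
    return 2 ** (-0.5 / p) * v_bar(x) * (1 + np.sqrt(1 + 4 * y**p)) ** (0.5 / p)

def dv_dx(x, p=1.0):                   # Eq. (10)
    vb, vbp = v_bar(x), v_bar_prime(x)
    y = 2 * a0 * ad * x / vb**2
    s = np.sqrt(1 + 4 * y**p)
    num = (s + 1 + 2 * y**p) * vbp + 2 * y ** (p - 1) * a0 * ad / vb
    return num / np.sqrt(2 ** (1 / p) * (1 + s) ** (2 - 1 / p) * (1 + 4 * y**p))

x = 19.71 / (2 * ad)                   # central reference radius in units of 2 a_d
h = 1e-5
numeric = (v_mond(x + h) - v_mond(x - h)) / (2 * h)
print(dv_dx(x), numeric)               # the two values should agree closely
# Divide dV/dx by 2 a_d to convert to dV/dR in km/s/kpc.
```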
## 3 Data analysis
We test the MOND RC behavior by using the MWRC data. We focus on the outer region \(R\sim 20\) kpc of our Galaxy because MOND effect is significant when \(R\) is large. Recent accurate measurements of the MWRC at \(R=17-27\) kpc can give a rigorous
examination of MOND. The most recent robust analysis of the MWRC combining the data of APOGEE DR17, GAIA, 2MASS and WISE gives a large decline in the RC gradient (Ou et al., 2023). However, since the systematic uncertainties of the data are very large (\(\sim 15\%\)) for \(R>22.5\) kpc (Ou et al., 2023), we will only analyze the data within the region \(R=17.21-22.27\) kpc (systematic uncertainties \(\sim 1-5\%\) only). Simple regression of the RC data points gives \(dV/dR=(-4.567\pm 0.532)\) km/s/kpc for \(R=17.21-22.27\) kpc. If we include the consideration of the small observational uncertainties for each RC data point, the best-fit RC slope with \(5\sigma\) margins is \(dV/dR=-5.07^{+2.51}_{-2.49}\) km/s/kpc (99.99994% C.L.) for \(R=17.21-22.27\) kpc (see Fig. 1). The \(5\sigma\) margins are calculated using the \(\chi^{2}\) method. By examining different values of \(dV/dR\) in the fits, the values of the \(5\sigma\) margins can be determined when the \(\chi^{2}\) value is larger than the \(5\sigma\) critical value \(\chi^{2}_{\rm crit}=48\) (degrees of freedom = 10). Note that we did not include the systematic uncertainties of the RC data points in determining the RC gradient. The actual effects of the systematic uncertainties on the RC gradient depend on the involved factors. We will discuss these effects in the discussion section.
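For transparency, the following sketch illustrates the \(\chi^{2}\) scan used to bracket the \(5\sigma\) range of the RC slope; the data arrays are hypothetical placeholders and must be replaced by the actual measurements of Ou et al. (2023) to reproduce the quoted numbers.

```python
import numpy as np

# Placeholder arrays -- substitute the actual MWRC measurements in the
# range R = 17.21-22.27 kpc.
R = np.linspace(17.21, 22.27, 12)                  # kpc (hypothetical grid)
V = 220.0 - 5.0 * (R - 17.21)                      # km/s (hypothetical values)
sigma = np.full_like(R, 4.0)                       # km/s (hypothetical errors)

chi2_crit = 48.0                                   # 5-sigma value for 10 d.o.f.

def chi2_at_slope(m):
    """Best-fit chi^2 with the slope m fixed (intercept fitted analytically)."""
    w = 1.0 / sigma**2
    b = np.sum(w * (V - m * R)) / np.sum(w)        # weighted least-squares intercept
    return np.sum(((V - (m * R + b)) / sigma) ** 2)

slopes = np.linspace(-10.0, 2.0, 2401)
chi2 = np.array([chi2_at_slope(m) for m in slopes])
allowed = slopes[chi2 < chi2_crit]
print(f"best slope = {slopes[chi2.argmin()]:.2f} km/s/kpc")
print(f"5-sigma range = [{allowed.min():.2f}, {allowed.max():.2f}] km/s/kpc")
```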
From Eq. (10) and Eq. (15), we can see that the RC gradient depends on the baryonic parameters (\(M_{b}\), \(M_{d}\) and \(a_{d}\)) and the MOND parameters (\(a_{0}\), \(p\) and \(\delta\)). In the followings, we take the standard value of \(a_{0}=1.2\times 10^{-8}\) cm/s\({}^{2}\) to perform our analysis (McGaugh, Lelli & Schombert, 2016). For the baryonic parameters, we first assume the fiducial values adopted in Ou et al. (2023): \(M_{b}=1.55\times 10^{10}M_{\odot}\), \(M_{d}=3.65\times 10^{10}M_{\odot}\) and \(a_{d}=2.35\) kpc. These values are determined by the direct baryonic observations (Misiriotis et al., 2006), which are somewhat representative and good for performing the preliminary analysis. We will also test a reasonable range of each parameter in the next step.
In Fig. 2, we plot \(dV/dR\) as a function of \(p\) and \(\delta\) for the \(p\)-family and \(\delta\)-family interpolating functions respectively at \(R=19.71\) kpc. We have taken the data point at \(R=19.71\) as the central reference position because it is the median \(R\) and the mean value of \(R\) among the data points within the range \(R=17.21-22.27\) kpc. Generally speaking, our results do not depend on the choice of the central reference position. We can see that larger values of \(dV/dR\) (more positive or less negative) would result when \(p\) and \(\delta\) are larger. For the benchmark baryonic disk model, the \(5\sigma\) allowed ranges of \(p\) and \(\delta\) are \(p=0.19-0.46\) and \(\delta=0.17-0.43\) respectively. However, the Mercury precession data have constrained \(p\geq 1.22\) and \(\delta\geq 0.33\)(Chan & Lee, 2023). Therefore, a large range of \(\delta\) and all values of \(p\) are ruled out based on the Mercury precession data and MWRC data. In particular, the cases with \(p=1\), \(p=2\) and \(\delta=1\) can give good agreements with the data of galactic rotation curves (McGaugh, Lelli & Schombert, 2016; McGaugh, 2008; Dutton et al., 2019) and velocity dispersion profiles of elliptical galaxies (Chae et al., 2020). However, these benchmark values
are ruled out at \(5\sigma\) based on the MWRC slope with the fiducial values. Besides, even if we consider the concentrated baryonic disk model, the values for \(p\geq 1\) and \(\delta\geq 1\) are still ruled out at more than \(5\sigma\) (see Fig. 2).
Apart from using the fiducial values of the baryonic parameters, we consider conservative ranges of \(M_{d}\) and \(a_{d}\) for a more rigorous investigation. Note that the room for varying the value of \(M_{b}\) is very small because it is mainly determined by the very inner RC data, which is almost independent of MOND or any particular dark matter model. Therefore, we will still adopt \(M_{b}=1.55\times 10^{10}M_{\odot}\). Besides, previous studies have shown that MOND favors small values of \(a_{d}\)(Gerhard, 2002; McGaugh, 2008). The RC data suggest \(a_{d}\sim 2-4\) kpc and \(M_{d}\sim(2.9-5.4)\times 10^{10}M_{\odot}\) under the MOND framework (McGaugh, 2008). These ranges are generally consistent with the mass model predictions in many studies (Sofue, 2012; McGaugh, 2016; Ou et al., 2023). To perform a conservative test, we allow wider ranges of \(a_{d}\) and \(M_{d}\): \(a_{d}=1-10\) kpc and \(M_{d}=(1-10)\times 10^{10}M_{\odot}\).
In Fig. 3, we plot the RC gradient \(dV/dR\) as a function of \(a_{d}\) for different \(M_{d}\). Since most of the previous studies assume \(p\geq 1\) and \(\delta\geq 1\) (Dutton et al., 2019; McGaugh, 2008), we only consider the simplest form (the benchmark values) of \(p=1\) and \(\delta=1\) for both interpolating families, because larger values of \(p\) and \(\delta\) would make the RC gradient larger. We find that almost all RC gradients are larger than the \(5\sigma\) upper bound of the observed RC gradient, except for the \(p\)-family with the extreme parameter \(M_{d}=10^{11}M_{\odot}\). This indicates that a large parameter space of \(a_{d}\) and \(M_{d}\) within the conservative ranges is ruled out at \(5\sigma\) for \(p\geq 1\) and \(\delta\geq 1\).
## 4 Discussion
In this article, we present the RC gradient analysis for MOND using the MWRC data at large \(R\). Thanks to the high-quality measurements, the MWRC data have very small error bars at large \(R\), which provide an excellent platform to test modified gravity theories or constrain dark matter models. The MWRC data show a very clear declining trend at large \(R\), which potentially challenges MOND because it predicts an almost flat RC at large \(R\). The \(5\sigma\) range of the RC gradient is \(dV/dR=-5.07^{+2.51}_{-2.49}\) km/s/kpc (without considering the systematic uncertainties). On the other hand, by considering two major families of the interpolating functions, we have derived the RC gradient formulas explicitly for the MOND framework. We first follow the benchmark baryonic disk density profile and use the fiducial values of \(a_{d}\), \(M_{d}\) and \(M_{b}\) to calculate the RC gradient. There exist \(5\sigma\) allowed ranges of \(p=0.19-0.46\) and \(\delta=0.17-0.43\). However, when these are combined with the Mercury precession constraints \(p\geq 1.22\) and \(\delta\geq 0.33\) (Chan & Lee, 2023), only a narrow range of
\(\delta\) remains. This suggests that almost all possible variations of MOND are ruled out. Even in the extreme case (concentrated baryonic disk model), the benchmark MOND parameters \(p\geq 1\) and \(\delta\geq 1\) are still ruled out beyond \(5\sigma\).
Besides, we also consider possible conservative ranges of \(M_{d}=(1-10)\times 10^{10}M_{\odot}\) and \(a_{d}=1-10\) kpc. For the benchmark cases \(p=1\) and \(\delta=1\), almost all of the calculated RC gradients are above the \(5\sigma\) upper bound of the observed RC gradient. As a larger \(p\) or \(\delta\) gives a less negative RC gradient, this poses a severe challenge to the MOND phenomenology in our Galaxy. Nevertheless, we did not consider the systematic uncertainties of the RC data points shown in Ou et al. (2023). Some of the systematic factors (e.g. the solar position) would systematically underestimate or overestimate the values of the RC without affecting the value of the RC gradient. However, it is not clear how the other factors discussed in Ou et al. (2023) would affect the RC gradient calculation. If we assume 1% systematic uncertainties and combine them with the observational uncertainties, the \(5\sigma\) range of the RC gradient would widen to \(dV/dR=-5.07^{+4.14}_{-3.77}\) km/s/kpc. However, if we focus on the \(2\sigma\) range instead, a similar region of the MOND parameter space is still ruled out at \(2\sigma\) (95.45% C.L.) (see Fig. 3).
This is the first time we can rule out a large parameter space of galactic MOND phenomenology at more than \(5\sigma\). In fact, many previous studies, like the SPARC analysis, have claimed that MOND works very well in galaxies (McGaugh, 2020; Islam & Dutta, 2020). The radial acceleration relation shown in SPARC shows excellent agreement with MOND's prediction (McGaugh, 2020; McGaugh, Lelli & Schombert, 2016). However, the rotation curve data for the galaxies outside our Local Group have large uncertainties. Moreover, the fits also involve some unknown parameters like the mass-to-light ratio. Therefore, there is plenty of room for MOND to fit the data. In fact, some later studies using better analysis tools have challenged the existence of a universal acceleration scale \(a_{0}\) (Rodrigues et al., 2018; Marra, Rodrigues & de Almeida, 2020; Chan, Desai & Del Popolo, 2022) and the core-cusp nature in galaxies (Eriksen, Frandsen & From, 2021) predicted by the MOND phenomenology.
Fortunately, the very small error bars of the MWRC generated from the analysis combining the data of APOGEE DR17, GAIA, 2MASS and WISE provide a robust test of the galactic MOND phenomenology. We have tested two major families of interpolating functions, which can represent almost all possible transition functions between the deep-MOND and Newtonian regimes. We have also considered wide conservative ranges of \(a_{d}\) and \(M_{d}\). Our analysis poses a severe challenge to the MOND phenomenology in our Galaxy, unless the effect of the systematic uncertainties is larger than expected. The phenomenological MOND might be just a rough approximation, and it is definitely not universal
for all galaxies. Besides, the small uncertainties of the MWRC can also be used to analyze the radial acceleration relation of our Galaxy, which can further examine the MOND phenomenology and the alleged universal acceleration scale observed in galaxies.
## 5 Acknowledgements
The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. EdUHK 18300922).
|
2309.13430 | Resolving References in Visually-Grounded Dialogue via Text Generation | Vision-language models (VLMs) have shown to be effective at image retrieval
based on simple text queries, but text-image retrieval based on conversational
input remains a challenge. Consequently, if we want to use VLMs for reference
resolution in visually-grounded dialogue, the discourse processing capabilities
of these models need to be augmented. To address this issue, we propose
fine-tuning a causal large language model (LLM) to generate definite
descriptions that summarize coreferential information found in the linguistic
context of references. We then use a pretrained VLM to identify referents based
on the generated descriptions, zero-shot. We evaluate our approach on a
manually annotated dataset of visually-grounded dialogues and achieve results
that, on average, exceed the performance of the baselines we compare against.
Furthermore, we find that using referent descriptions based on larger context
windows has the potential to yield higher returns. | Bram Willemsen, Livia Qian, Gabriel Skantze | 2023-09-23T17:07:54Z | http://arxiv.org/abs/2309.13430v1 | # Resolving References in Visually-Grounded Dialogue via Text Generation
###### Abstract
Vision-language models (VLMs) have shown to be effective at image retrieval based on simple text queries, but text-image retrieval based on conversational input remains a challenge. Consequently, if we want to use VLMs for reference resolution in visually-grounded dialogue, the discourse processing capabilities of these models need to be augmented. To address this issue, we propose fine-tuning a causal large language model (LLM) to generate definite descriptions that summarize coreferential information found in the linguistic context of references. We then use a pretrained VLM to identify referents based on the generated descriptions, zero-shot. We evaluate our approach on a manually annotated dataset of visually-grounded dialogues and achieve results that, on average, exceed the performance of the baselines we compare against. Furthermore, we find that using referent descriptions based on larger context windows has the potential to yield higher returns.
## 1 Introduction
Visually-grounded dialogues are conversations in which participants make references to the visual world. Referring in conversation is understood to be a collaborative process, with shared responsibility for ensuring the successful identification of the referent Clark and Wilkes-Gibbs (1986). It is not uncommon for a definite reference to be established over multiple turns, with each separate contribution unlikely to be a minimally distinguishable description of the referent. Taken out of their use context, these referring expressions may be difficult, if not impossible, to resolve. Consider the example dialogue in Figure 1. The underspecified description _"the shiny one"_ leads to a clarification question, _"Do you mean that red one?"_. To resolve the expression _"that red one"_ to its referent, we need information from earlier in the conversation to understand that _"one"_ is a proform of _"apple"_. Without this linguistic context, the red strawberry and the red apple are equally likely referents.
We can break the problem of reference resolution in visually-grounded dialogue down into three subproblems: (1) mention detection, or finding the expressions that can be grounded in the visual context (_"that red one"_); (2) aggregation of referent-specific information (linking _"apple"_, _"the shiny one"_, and _"that red one"_); and (3) referent identification, or the grounding of language (finding the referent that is best described by the three expressions from among a set of candidate referents). This final step requires bridging the gap between vision and language. For this purpose, we can turn to pretrained vision-language models (VLMs), which have shown to be effective at zero-shot text-image retrieval when given a description of an image (e.g., Radford et al., 2021; Jia et al., 2021; Li et al., 2023). However, current VLMs lack the discourse processing capabilities necessary for reference resolution in visually-grounded dialogue. Although some VLMs may correctly identify the red apple as the referent given the entire dialogue of Figure 1, dialogues are often vastly more complex than this hypothetical exchange. Take, for instance, the dialogue in Appendix A: with multiple mentions of different referents within the same utterance, such a brute-force method would immediately fail. It is clear that if we want VLMs to be effective for this purpose, their discourse processing capabilities need to be augmented.
Figure 1: Example dialogue in which two participants discuss fruits. Expressions that denote one or more images are underlined.
To this end, we propose fine-tuning a causal large language model (LLM) for the task of _referent description generation_. Referent description generation can be regarded as a special case of referring expression generation with the goal of always generating the most complete expression possible. For a given mention, the model is trained to generate a definite description that summarizes all information that has been explicitly disclosed about the referent during a conversation. For example, for the mention _"that red one"_ in Figure 1 we would want the model to generate the description _"the shiny red apple"_. We will refer to the fine-tuned model as the _conversational referent description generator_ (CRDG). The description generated by the CRDG is then used by a pretrained VLM to identify the referent, zero-shot. Our approach can be seen as an exploration of the limits of depending on linguistic context alone for generating referent descriptions, as the discourse processing and eventual grounding of the descriptions are entirely disjoint.
For the experiments presented in this paper we use data from the collaborative image ranking task A Game Of Sorts (Willemsen et al., 2022). Referents are represented by separate, but visually similar images from a shared entity category. Due to their largely unrestricted nature and with a focus on the collaborative referential process, the collected dialogues form a challenging test bed for visually-grounded language understanding in conversation. We manually annotate the dialogues by marking mention spans and aligning the spans with the images they denote, and provide both manually constructed and automatically derived "ground truth" referent descriptions based on our manual annotations for all marked mentions.
Our main contributions are as follows:
* We present a generative approach to reference resolution in visually-grounded dialogue that frames the discourse processing side of the task as a causal language modeling problem;
* We show that it is possible to fine-tune a causal LLM to generate referent descriptions from dialogue to be used by a pretrained VLM for referent identification, zero-shot;
* We release the discussed materials, including our annotations for A Game Of Sorts (Willemsen et al., 2022)1.
Footnote 1: [https://github.com/willemsenbram/reference-resolution-via-text-generation](https://github.com/willemsenbram/reference-resolution-via-text-generation), doi:10.5281/zenodo.8176114
## 2 Background
Visually-grounded language understanding is fundamental for conversational agents that engage in dialogue involving references to the visual world. Researchers have introduced a variety of tasks that provide data for development and frameworks for evaluation of visually-grounded dialogue models. These tasks often take the form of goal-oriented, dyadic interactions but differ in terms of, for example, the visual stimuli used, e.g. abstract figures or realistic photos; the roles assigned to participants, e.g. whether symmetric or asymmetric; constraints on message content, e.g. a fixed vocabulary; and the nature of the task, e.g. navigation, identification, ranking, and multi-turn visual question answering (e.g. Das et al., 2017; De Vries et al., 2017; Shore et al., 2018; Ilinykh et al., 2019; Haber et al., 2019; Udagawa and Aizawa, 2019; Willemsen et al., 2022). It has been noted that the task configuration can significantly impact the extent to which certain dialogue phenomena, such as coreferences and clarification requests, are represented in the collected data, if at all (Agarwal et al., 2020; Haber et al., 2019; Ilinykh et al., 2019; Schlangen, 2019; Willemsen et al., 2022). Tasks that heavily constrain the interactions do not reflect the complex nature of dialogue to the same degree as tasks that have been designed for these phenomena to naturally emerge as part of the discourse, such as A Game Of Sorts (Willemsen et al., 2022), which we use in this paper.
The terms referring expression comprehension (e.g. Yu et al., 2016), referring expression grounding (e.g. Zhang et al., 2018), referring expression recognition (e.g. Cirik et al., 2018), and reference resolution (e.g. Kennington et al., 2015) have been used interchangeably to describe the problem of mapping the language that denotes a referent to a representation of that referent in the visual modality. Prior work noted the importance of referring expressions to conversation, but often modeled the problem independent of the dialogue (e.g. Cirik et al., 2018; Schlangen et al., 2016; Yu et al., 2016; Zhang et al., 2018). The granularity at which grounding occurs may differ between works, as the language may be mapped to bounding boxes of individual objects (Cirik et al., 2018; Schlangen et al., 2016; Yu et al., 2016; Zhang et al., 2018), objects or larger image regions represented by
segmentation masks Liu et al. (2017), or entire images altogether Haber et al. (2019); Takmaz et al. (2020).
To address the problem computationally, both modalities must in some way be encoded. Engineered visual feature representations and simple language models such as those based on n-grams (e.g. Kennington et al., 2015; Kennington and Schlangen, 2017; Shore and Skantze, 2018) have been mostly replaced with more powerful learned representations that embed the images and text in high-dimensional vector spaces Haber et al. (2019); Takmaz et al. (2020). This has made it possible to resolve references by computing representational similarity between an encoding of the text that contains a mention and the embeddings of the candidate referents, where the candidate that has the highest matching score is assumed to be the referent Haber et al. (2019); Takmaz et al. (2020).
Recent work on multimodal representation learning has shown that jointly embedding text and images can work at scale. Trained using a contrastive objective, maximizing representational similarity between true pairings of images and text while simultaneously minimizing similarity of false pairs, vision-language models (VLMs) such as CLIP Radford et al. (2021), ALIGN Jia et al. (2021), BLIP Li et al. (2022), and BLIP-2 Li et al. (2023), have shown to be effective zero-shot classifiers, outperforming the previous state-of-the-art on various benchmarks without the need for further fine-tuning on specific tasks. However, despite their noteworthy image-text matching performance based on simple text queries, these VLMs lack the discourse processing capabilities required for reference resolution in visually-grounded dialogue. Even a simplified example, such as shown in Figure 1, illustrates a fundamental challenge, namely that of coreference resolution. The interpretation of anaphoric pronouns, such as _"it"_, is dependent on their antecedents. Without resolving its coreferences first, identifying the referent based on the pronoun alone leads to a random guess.
To improve downstream performance on discourse processing tasks involving coreference, prior work has approached the problem as one of transforming the original input based on linguistic context. This was done either via substitution, such as in Bhattacharjee et al. (2020) where pronouns were substituted for more descriptive mentions of the same referent, or via generation, such as in Quan et al. (2019) where entire utterances were reconstructed in a pragmatically complete manner with coreferences and ellipses resolved. To the best of our knowledge, this approach has not yet been applied to reference resolution in visually-grounded dialogue.
Most contemporary natural language processing (NLP) works use Transformer-based language models Vaswani et al. (2017). For text generation tasks, it is common to use (unidirectional) autoregressive, or _causal_, language models such as GPT Radford et al. (2018). While processing sequences, causal language models mask the future, allowing the model to only attend to the current and previous tokens while predicting the next token. A persistent trend has been to scale up language models, both in terms of their parameter count and the size of their training datasets. These increasingly larger models, such as GPT-3 Brown et al. (2020), OPT Zhang et al. (2022), PaLM Chowdhery et al. (2022), and LLaMa Touvron et al. (2023), have been dubbed _large language models_ (LLMs). The current leading paradigm to modeling downstream NLP tasks is based on transfer learning, where a pretrained LLM is fine-tuned for a specific task on a smaller, domain-specific dataset.
## 3 Method
We treat visually-grounded reference resolution as a text-image retrieval task, where referents are represented by images. We leave finer-grained grounding of words and phrases to image regions or individual entities or parts thereof for future work.
### Proposed Framework
We frame the discourse processing side of the task as a causal language modeling problem. Figure 2 shows a visualization of the proposed framework.
**Task Definition** We denote the dialogue as \(D=(u_{1},u_{2},...,u_{n})\), where each \(u_{i}\) represents an utterance. Each utterance consists of an ordered sequence of tokens. An utterance may contain one or more mentions, denoted as \(M\). A mention is an ordered subsequence of tokens from an utterance. A mention has an exophoric referent, denoted as \(R\). A mention is embedded in what we call its linguistic context, denoted as \(L\). As an ordered subsequence of \(D\), the linguistic context of a given mention consists of the utterance in which it is contained and all preceding utterances. The number of preceding utterances, hereafter referred to as the dialogue history, may be capped if a finite size context window
is defined. The aim of visually-grounded reference resolution is to resolve a reference to its referent, i.e. to identify \(R\) for a given \(M\), from a set of candidate referents, denoted as \(C\), such that \(R\subseteq C\); \(|R|=1\) for single-image referents, \(|R|>1\) for multi-image referents, and \(R=C\) if \(M\) refers to all members in \(C\).
**Referent Description Generation** We propose to generate a definite description, denoted as \(Y\), for a given mention \(M\) that summarizes all that has been disclosed in \(L\) about the referent \(R\). For this purpose, we fine-tune a causal LLM that learns to generate \(Y\) conditioned on \(L\). \(Y\) is a sequence of tokens expected to be largely constructed from tokens that appear, or are some derivative of tokens that appear, in the coreference chain of \(R\), which is contained in \(L\). We refer to the fine-tuned model as the _conversational referent description generator_ (CRDG). For an example of the context dependency of referent description content, see Figure 4 in Appendix B.
**LLM Input** We mark \(M\) in \(u_{i}\) by inserting positional markers as special tokens to indicate the beginning and end of the mention span. We prepend each utterance in \(L\) with a speaker token to indicate the source of the contribution. When \(D\) is task-oriented, we update \(L\) by prepending task instructions, i.e. a special token followed by a sequence of tokens describing the task performed by the dialogue participants. For an example of the input to the LLM, see Figure 5 in Appendix B.
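A minimal sketch of how such an input could be assembled is given below; the special-token strings (`<TASK>`, `<A>`, `<B>`, `<M>`, `</M>`) and the line-based layout are illustrative assumptions rather than the exact tokens used for fine-tuning.

```python
def build_crdg_input(task_instruction, history, speaker, utterance, mention_span):
    """Assemble the linguistic context L with the current mention marked.

    history:      list of (speaker, utterance) pairs, oldest first, already
                  truncated to the desired dialogue-history window
    mention_span: (start, end) character offsets of the mention in `utterance`
    """
    start, end = mention_span
    marked = utterance[:start] + "<M> " + utterance[start:end] + " </M>" + utterance[end:]
    lines = ["<TASK> " + task_instruction]
    lines += [f"<{spk}> {utt}" for spk, utt in history]
    lines.append(f"<{speaker}> {marked}")
    return "\n".join(lines)

# Toy example loosely following the dialogue in Figure 1.
context = build_crdg_input(
    "Rank the fruit images from most to least appealing.",
    [("A", "I like the apple."), ("A", "The shiny one looks good.")],
    "B", "Do you mean that red one?", (12, 24))
```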
**Text-Image Retrieval** We use a pretrained VLM to identify \(R\) from \(C\) based on \(Y\), zero-shot. We use the text encoder of the VLM to encode \(Y\) into an \(n\)-dimensional feature vector, denoted as \(\mathbf{v}\). We use the image encoder of the VLM to encode each candidate referent of \(C\) into an \(n\)-dimensional feature vector, which gives a \(|C|\times n\) matrix, denoted as \(\mathbf{A}\). We then compute their matrix-vector product. For single-image referents, i.e. when \(|R|=1\), we take the referent to be \(R=argmax(\mathbf{A}\mathbf{v})\).
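A minimal sketch of this identification step is shown below, with `encode_text` and `encode_image` standing in for the frozen VLM's text and image encoders (assumed to return L2-normalized feature vectors); these callables are placeholders, not an actual library API.

```python
import numpy as np

def identify_referent(description, candidates, encode_text, encode_image):
    """Return the index of the candidate image best matching the description,
    along with the full score vector A @ v."""
    v = encode_text(description)                             # shape (n,)
    A = np.stack([encode_image(img) for img in candidates])  # shape (|C|, n)
    scores = A @ v                                           # cosine similarities
    return int(np.argmax(scores)), scores

# Toy usage with random unit vectors standing in for the VLM encoders.
rng = np.random.default_rng(0)
def fake_encode(_):
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)

idx, scores = identify_referent("the shiny red apple",
                                ["img_0", "img_1", "img_2"],
                                fake_encode, fake_encode)
```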
In order to produce accurate referent descriptions, the CRDG must implicitly learn to perform coreference resolution, as we do not provide explicit supervision for this subtask. In each sample, only the current mention for which we want the model to generate a description is marked; none of its coreferences are in any way indicated. A principal advantage of our approach is that it can resolve multiple mentions appearing in the same utterance, including nested mentions, even when they have different referents. Note that for the purpose of this study, we assume mention detection to be solved. As it stands, using this framework in production requires a separate model to propose candidate mentions at the span level.
### Baseline Models
As a lower bound, we report random chance performance. In addition, we compare performance of our approach to baselines based on simple heuristics and a coreference resolution model.
#### 3.2.1 Heuristics
**Mention** We evaluate the image retrieval performance when the VLMs are presented with just the marked mentions.
**Substitution** We improve upon the mention-only baseline by substituting proforms, e.g. pronouns such as _"it"_, and mentions without descriptive content, e.g. phrases such as _"the one you mentioned"_, with the most recent mention that does not belong to either category. This is expected to be a relatively strong baseline when mentions are specific and anaphora have mostly local antecedents.
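A rough sketch of this heuristic is shown below; the word lists used to flag proforms and content-free phrases are illustrative assumptions, not the actual rules of the baseline.

```python
PROFORMS = {"it", "that", "this", "one", "them", "those", "these"}
NON_DESCRIPTIVE = {"the one you mentioned", "that one", "the same one"}

def is_descriptive(mention):
    text = mention.lower().strip()
    if text in NON_DESCRIPTIVE:
        return False
    return not all(tok in PROFORMS for tok in text.split())

def substitute(mentions):
    """Replace each non-descriptive mention with the most recent descriptive one.
    `mentions` are given in order of appearance in the dialogue."""
    output, last_descriptive = [], None
    for m in mentions:
        if is_descriptive(m):
            last_descriptive = m
            output.append(m)
        else:
            output.append(last_descriptive if last_descriptive is not None else m)
    return output

# substitute(["the apple", "it", "the shiny one", "that one"])
# -> ["the apple", "the apple", "the shiny one", "the shiny one"]
```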
#### 3.2.2 Coreference Resolution
We opt for an off-the-shelf2 span-based coreference resolution model (**coref**) originally presented in Lee et al. (2018), which has since been updated to use SpanBERT Joshi et al. (2020) instead of the original GloVe embeddings Pennington et al. (2014). For each mention, we use the model to resolve its coreference links and aggregate all coreferential information in its cluster based on the given context window.

Figure 2: The proposed visually-grounded reference resolution framework. With the CRDG we generate a referent description for a marked mention, to be used by a (frozen) pretrained VLM for referent identification.
We experiment with two different representations of the referent descriptions from this model, those being (1) a concatenation of all of the mention's coreferences and (2) an ordered _set-of-words_ representation that contains only the unique lexical items in the cluster. To compensate for the fact that this model was not specifically trained to handle coreference in conversation, we provide it with the contents of the mention span when it fails to detect the mention itself and, consequently, does not connect it to any of its coreferences. For partial matches, in addition to adding all tokens from the cluster associated with the match, we also add the missing tokens from the span to the description.
## 4 Experiments
### Data
We use the dialogues from the collaborative image ranking task **A Game Of Sorts** (AGOS; Willemsen et al., 2022) for our experiments. In AGOS, two players are asked to rank a set of images based on a given sorting criterion. They see the same set of images, but the position of the images on the screen is randomized for each player. Through a largely unrestricted conversation, and without being able to see the perspective of the other player, the players need to agree on how to rank the images given the sorting criterion. Sorting criteria are embedded in scenarios that are intended to create a discussion, leading to mixed-initiative interactions with both parties contributing to the discourse. Each interaction takes place over four rounds with the same set of nine images, effectively guaranteeing repeated references. The image sets used for the game cover five different image categories. Each set contains nine images, each representing an entity from one of these categories as its main subject. Willemsen et al. (2022) collected three interactions per image set for a total of 15 dialogues.
**Ground Truth** Our formulation of the visually-grounded reference resolution problem requires span-based annotations of mentions aligned with the image(s) they denote. These annotations are the basis of what we will refer to as our "ground truth" references used for both training and evaluation. We follow Willemsen et al. (2022) regarding the marking of mentions in AGOS, in that we only annotate those that are either singletons or are part of an identity relation with other mentions that have an exophoric referent that is part of the visual context, i.e. regardless of form, any referring expression that is meant to denote one or more of the images. During the game, players were asked to provide self-annotations: for each message they sent they were asked to indicate which image(s), if any, they were referring to. We use these self-annotations, post-edited where necessary, to manually mark the spans of mentions that can be grounded in the visual context.
We create three different representations of the "ground truth" referent descriptions. Two are automatically extracted from the marked mentions and are similar in structure to the labels of the **coref** baseline, i.e. (1) an incremental concatenation of the reference chain and (2) an incremental ordered set of words consisting of the unique lexical items in the cluster. The third are manually constructed labels that summarize reference chains as definite descriptions. For each representation, the context window dictates which references are considered for the label.
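A small sketch of the two automatically derived representations is given below, assuming the reference chain of a referent (restricted to the given context window) is available as an ordered list of mention strings; the joining conventions are assumptions.

```python
def chain_label(reference_chain):
    """Concatenation of all mentions of the referent, in order of appearance."""
    return ", ".join(reference_chain)

def set_of_words_label(reference_chain):
    """Ordered set of the unique lexical items occurring in the chain."""
    seen, words = set(), []
    for mention in reference_chain:
        for token in mention.lower().split():
            if token not in seen:
                seen.add(token)
                words.append(token)
    return " ".join(words)

chain = ["the apple", "the shiny one", "that red one"]
# chain_label(chain)        -> "the apple, the shiny one, that red one"
# set_of_words_label(chain) -> "the apple shiny one that red"
```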
### Model Specifications
For pointers to implementations, we refer the reader to our repository1.
Footnote 1: [https://github.com/google-research/](https://github.com/google-research/)
#### 4.2.1 LLMs
We fine-tune two LLMs, GPT-2 Radford et al. (2019) and GPT-3 Brown et al. (2020), for conversational referent description generation. For hyperparameters, see our Supplementary Material.
**GPT-2** We fine-tune the 1.5 billion parameter GPT-2 model.
**GPT-3** We fine-tune the 175 billion parameter davinci base model using the OpenAI API.
#### 4.2.2 VLMs
We evaluate the zero-shot text-image retrieval performance of several pretrained VLMs for our task, those being CLIP Radford et al. (2021), ALIGN Jia et al. (2021), BLIP Li et al. (2022), and BLIP-2 Li et al. (2023).
**CLIP** We evaluate two variants of CLIP, CLIP ViT-B/32 and CLIP ViT-L/14.
**ALIGN** We use the COYO-ALIGN implementation trained from scratch on COYO-700M.
**BLIP** We use the BLIP base model.
**BLIP-2** We use the BLIP-2 model that was fine-tuned on the Karpathy and Fei-Fei (2015) training set split of MS COCO Lin et al. (2014).
### Evaluation
We perform (nested) five-fold cross-validation by partitioning the AGOS dataset along the five image sets. To avoid leakage, for each run we use the three dialogues from one image set as the held out test set and train on the twelve dialogues from the four other image sets. To evaluate how dialogue history affects results, we report performance of the different methods for two context windows, **3** and **7**. In addition, we examine whether increasing the size of the context window further would, in principle, lead to greater returns, by assessing ground-truth performance for windows of **13** and the **full** dialogue context. Finally, we conduct an error analysis of the generated descriptions.
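A small sketch of this partitioning, assuming each dialogue is tagged with the identifier of its image set; the identifiers and data layout are placeholders.

```python
def leave_one_image_set_out(dialogues):
    """Yield (held_out_set, train_dialogues, test_dialogues) splits.

    dialogues: list of (image_set_id, dialogue) pairs, three dialogues per set.
    """
    image_sets = sorted({s for s, _ in dialogues})
    for held_out in image_sets:
        train = [d for s, d in dialogues if s != held_out]
        test = [d for s, d in dialogues if s == held_out]
        yield held_out, train, test

# With 5 image sets x 3 dialogues, each fold trains on 12 dialogues and tests on 3.
```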
Note that because we do not incorporate game state information with respect to the visual context during training, we make a simplifying assumption with regard to the images and reduce the candidate set, at test time, as the game progresses. A successfully ranked image is no longer considered part of the visual context for that round. Although this does mean that the models will not be able to identify the referent for references to ranked images, as they will not be part of the candidate set, such references are an extremely rare occurrence, as players must discuss the unranked images to progress with the task. For the sake of completeness, we will also report results for the unchanged candidate set.
#### 4.3.1 Metrics
We measure task success for visually-grounded reference resolution in terms of text-image retrieval performance. In addition, we estimate the quality of the generated referent descriptions by comparing them to the manually constructed ground truth labels using text similarity metrics.
**Text-Image Retrieval** We estimate the image retrieval performance based on accuracy \([0,1]\), mean reciprocal rank (MRR) \([0,1]\), and normalized discounted cumulative gain (NDCG) \([0,1]\). We limit our evaluation to single-image referents. Accuracy is top-1 accuracy.
For our random lower bound, we can calculate the expected values for accuracy and MRR. For top-1 accuracy we take 1 over the size of the set of candidate images per item, averaging over all items. For MRR we take 1 over the size of the set of candidate images, divided by two per item, averaging over all items. Calculating an expected value for NDCG of a random model is intractable due to its dependence on relevancy scores.
**Text Generation** We evaluate the output from the CRDGs by comparing the generated descriptions to the manually constructed ground truth labels using metrics to quantify similarity. We use the Jaccard index \([0,1]\) to assess vocabulary overlap. We use BLEU \([0,1]\)Papineni et al. (2002) to assess similarity based on n-gram overlap (unigrams to four-grams). We use the longest common subsequence variant of ROUGE \([0,1]\)Lin (2004), i.e. ROUGE-L, as a further indication of the preservation of word order. In addition, we opt for an embedding-based metric as a proxy for semantic equivalence between the predicted and reference sequences. For this purpose, we compute cosine similarity \([0,1]\) between text embeddings.
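BLEU and ROUGE-L are computed with standard implementations; the two simpler measures can be sketched directly, assuming whitespace tokenization for the Jaccard index and precomputed sentence embeddings for the cosine similarity.

```python
import numpy as np

def jaccard(predicted, reference):
    """Vocabulary overlap between a predicted and a reference description."""
    a, b = set(predicted.lower().split()), set(reference.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 1.0

def cosine_similarity(u, v):
    """Cosine similarity between two text embeddings."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# jaccard("the shiny red apple", "the red apple")  -> 0.75
```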
#### 4.3.2 Human
We conduct two different human subject experiments to assess human performance for this task. We provide additional details about the experimental setup in the Supplementary Material.
**Independent** We conduct an experiment aimed at comparing VLM and human performance on the task where every trial is independent. Participants are given a referent description and are asked to select from a set of candidate images the image they believe is best described by the label. The images and labels are presented to the participants independent of the dialogue. Note that we evaluate with the reduced candidate set. The referent descriptions are the manually constructed ground truth labels based on the **full** dialogue context. To collect data for all labels, ensuring independence of observations, we recruited 354 participants via crowdsourcing. The crowdworkers were financially compensated for their contributions.
**Holistic** We conduct an experiment in which mentions are shown to participants within the context of the dialogue. For each mention, the participants are presented with the dialogue leading up to and including the message which contains the reference. The start and end of the span of the mention that the participant is asked to resolve are visually indicated. For each marked mention, the participant is asked to select which image or images are referenced. As they progress with the task, participants will have access to increasingly more of the dialogue history. For each mention the participants
are presented with all images, but with a visual indication of their status, i.e. for each image whether the players had managed to successfully rank it at that point in the interaction. We recruited 23 participants via crowdsourcing. For each of the 15 AGOS dialogues we collected data from two different participants. Each participant was allowed to provide data for at most one dialogue per image set. The crowdworkers were financially compensated for their contributions.
## 5 Results
### Text-Image Retrieval
Table 1 shows, for context windows **3** and **7**, the zero-shot text-image retrieval performance results for the VLM that averaged best performance over the five folds, which was BLIP-2. For the text-image retrieval accuracy achieved by the other VLMs, performance on the not reduced candidate set, and accuracy per fold for BLIP-2, see Appendix C.
As can be seen from the results presented in Table 1, we achieve best performance with a fine-tuned GPT-3 as the CRDG and BLIP-2 for zero-shot text-image retrieval. In addition to outperforming the baselines, we find that GPT-3 is a more performant discourse processor for this task than GPT-2. This result is consistent between the VLMs.
Results generally show a slight increase in performance when increasing the context window from **3** to **7**. Performance on the ground truth reference descriptions for context windows **13** and the **full** dialogue shows this trend persists, with BLIP-2 achieving approximately \(75\%\) and \(83\%\) accuracy, respectively. A plot of the performance for the four context windows is shown in Figure 6 in Appendix C. This result suggests that the size of the context window may have a significant impact on performance, with an \(11\%\) increase in accuracy from **3** to **full**. Furthermore, the VLMs do not seem overly sensitive to the composition of the referent descriptions, as performance is largely comparable between the automatically generated and the manually constructed ground truth labels.
We find that BLIP-2 is on par with human text-image retrieval performance in terms of top-1 accuracy for the manually constructed ground truth referent descriptions based on the full dialogue history for single-image referents, as our human participants averaged roughly \(80\%\) accuracy in the independent setup. However, when we compare these results with the single-image referent text-image retrieval performance in the holistic setup, we see that the upper bound for this task when references are resolved within the combined linguistic and extralinguistic dialogue context is likely considerably higher as our human participants averaged approximately \(91\%\) accuracy (average of best performance per dialogue is roughly \(93\%\)).
### Text Generation
Table 2 shows the text generation metric results averaged over the five folds, providing an indication of the extent to which the fine-tuned LLMs managed to generate referent descriptions that approximate the manually constructed ground truth labels. We observe that an increase in context window size results in a decrease in scores, which is consistent across metrics. Interestingly, we did not find such a decrease with respect to text-image retrieval performance. We do again find GPT-3 to be more performant than GPT-2, here in terms of approximating the ground truth.
### Error Analysis
Examining the output from the fine-tuned GPT-3 model, we observe a number of recurring errors.
| Method | Acc. (3) | Acc. (7) | MRR (3) | MRR (7) | NDCG (3) | NDCG (7) |
| --- | --- | --- | --- | --- | --- | --- |
| Random | .22 | .22 | .43 | .43 | – | – |
| Mention | .59 | .59 | .73 | .73 | .79 | .79 |
| Substitution | .68 | .68 | .80 | .80 | .85 | .85 |
| coref, chain | .65 | .66 | .78 | .79 | .83 | .84 |
| coref, set | .66 | .66 | .78 | .79 | .84 | .84 |
| GT, chain | .73 | .74 | .83 | .85 | .87 | .88 |
| GT, set | .73 | .75 | .84 | .85 | .87 | .89 |
| GT, manual | .72 | .74 | .83 | .84 | .87 | .88 |
| GPT-2 | .64 | .60 | .77 | .74 | .83 | .80 |
| GPT-3 | .69 | .71 | .81 | .82 | .86 | .86 |

Table 1: Cross-validated image retrieval performance averaged over five folds for single-image referents, for context windows 3 and 7. _Note_. Scores shown are of the VLM that averaged best performance (BLIP-2). Scores are rounded to the nearest hundredth. GT = ground truth.
| Metric | GPT-2 (3) | GPT-2 (7) | GPT-3 (3) | GPT-3 (7) |
| --- | --- | --- | --- | --- |
| BLEU | .55 | .47 | .75 | .70 |
| ROUGE-L | .71 | .65 | .86 | .83 |
| Jaccard | .44 | .35 | .70 | .63 |
| Cosine | .88 | .85 | .96 | .95 |

Table 2: Text generation metrics evaluation results averaged over five folds for single-image referents, for context windows 3 and 7. _Note_. Scores are rounded to the nearest hundredth.
The most notable errors are those where the model fails to link a mention to (all of) its coreferences that are present in the dialogue segment, or links mentions that denote different referents. For example, for one mention the ground truth label is _"the sheep dog"_, but the generated label was _"the sheep dog with a leash"_; the model incorrectly attributed the prepositional phrase to the mention as it was actually a descriptor for a different referent. Relatedly, since the CRDGs function at the message level, a mention can have both anaphoric and cataphoric coreferences when there are multiple mentions of the same referent in an utterance. An example of such an utterance is _"Good question. I think the angry one also looks a little wild. So that could be an option as well. I mean the one with white nose and forehead"_, where _"the angry one"_, _"that"_, and _"the one with white nose and forehead"_ are all mentions of the same referent with the same ground truth label _"the angry dog with a white nose and forehead"_. The model generates this correctly for the latter two, but not for the first, for which only _"the angry dog"_ was generated, meaning it correctly substituted the proform but did not link the mention with its cataphoric coreferences.
Finally, some generated referent descriptions differ from the ground truth in terms of lexical choice or syntax, but not in terms of information content. This negatively affects scores of text generation metrics based on overlapping content in particular, but these are otherwise not meaningful errors as there are multiple ways to construct semantically similar descriptions, e.g., _"the big dog which looks scary"_ versus _"the big scary-looking dog"_.
## 6 Discussion
We have presented an approach to visually-grounded reference resolution that frames the discourse processing side of the task as a causal language modeling problem. By fine-tuning an LLM to generate referent descriptions for marked mentions in dialogue segments from the collaborative image ranking task A Game Of Sorts (Willemsen et al., 2022), we demonstrate the possibility of treating referent identification as a zero-shot text-image retrieval problem by using pretrained VLMs for the grounding of the generated labels. As we have not in any way indicated coreferential relations in the fine-tuning training data, our results imply that certain pretrained LLMs, here GPT-3, may learn to resolve coreferences implicitly without the need for explicit supervision for this fundamental subtask.
In this work, we have treated the processing of the discourse as entirely disjoint of the visual modality. As such, it has inherent limitations. The mentions we find in the dialogues have not been produced void of the extralinguistic context. The dialogue participants could rely on co-observed visual stimuli to help resolve otherwise ambiguous language use. From linguistic context alone, some ambiguities, such as prepositional phrase attachment, may be impossible to resolve. It is, therefore, noteworthy that the downstream zero-shot text-image retrieval performance using the generated descriptions from our unimodal approach far exceeds chance level accuracy, with the potential for results to improve further given access to the full dialogue history, as we found that the ground truth labels based on larger context windows achieve greater text-image retrieval performance. However, the results from our holistic human evaluation support the notion that a multimodal approach should ultimately prove even more effective.
We found that a decrease in text generation metric scores did not necessarily indicate a similar decrease in text-image retrieval performance, suggesting that the generated descriptions captured sufficiently discriminative information about the referents and achieved similar grounding accuracy despite not approximating the ground truth labels to the same extent. It is also important to note that mentions may not have a single, canonical ground truth referent description due to lexical and syntactic variations between referring attempts.
Despite the relatively small size of the dataset collected by Willemsen et al. (2022), we were still able to fine-tune GPT-3 to perform the task with greater accuracy than the baselines, which speaks to the sample efficiency of (certain) pre-trained LLMs. In comparison, we find that the much smaller GPT-2 is prone to intrusions from the fine-tuning training data and more often fails to resolve the coreferences correctly. Although the complexity of the discourse warrants the use of more powerful models, it is, nevertheless, likely that any LLM used for the task would benefit from a larger fine-tuning dataset. Related, benchmarking performance on other visually-grounded dialogue tasks would provide insights into the generalizability of the method.
In addition to pursuing a multimodal approach, finer-grained grounding, and evaluating our method
on other datasets, possible avenues for future work include expanding the annotations to include coreferential relations other than identity relations, addressing multi-image referents, and unifying the method with a mention proposal system.
## Acknowledgements
This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The authors would like to thank Erik Ekstedt, Dmytro Kalpakchi, Rajmund Nagy, Jim O'Regan, Ambika Kirkland, Chris Emmery, Chris van der Lee, and the anonymous reviewers for their helpful comments.
|
2309.13356 | Probing the Moral Development of Large Language Models through Defining
Issues Test | In this study, we measure the moral reasoning ability of LLMs using the
Defining Issues Test - a psychometric instrument developed for measuring the
moral development stage of a person according to the Kohlberg's Cognitive Moral
Development Model. DIT uses moral dilemmas followed by a set of ethical
considerations that the respondent has to judge for importance in resolving the
dilemma, and then rank-order them by importance. A moral development stage
score of the respondent is then computed based on the relevance rating and
ranking.
Our study shows that early LLMs such as GPT-3 exhibit a moral reasoning
ability no better than that of a random baseline, while ChatGPT, Llama2-Chat,
PaLM-2 and GPT-4 show significantly better performance on this task, comparable
to adult humans. GPT-4, in fact, has the highest post-conventional moral
reasoning score, equivalent to that of typical graduate school students.
However, we also observe that the models do not perform consistently across all
dilemmas, pointing to important gaps in their understanding and reasoning
abilities. | Kumar Tanmay, Aditi Khandelwal, Utkarsh Agarwal, Monojit Choudhury | 2023-09-23T12:17:10Z | http://arxiv.org/abs/2309.13356v2 | # Probing the Moral Development of Large Language Models through Defining Issues Test
###### Abstract
In this study, we measure the moral reasoning ability of LLMs using the Defining Issues Test [1]- a psychometric instrument developed for measuring the moral development stage of a person according to the Kohlberg's Cognitive Moral Development Model [2]. DIT uses moral dilemmas followed by a set of ethical considerations that the respondent has to judge for importance in resolving the dilemma, and then rank-order them by importance. A moral development stage score of the respondent is then computed based on the relevance rating and ranking. Our study shows that early LLMs such as GPT-3 exhibit a moral reasoning ability no better than that of a random baseline, while ChatGPT, Llama2-Chat, PaLM-2 and GPT-4 show significantly better performance on this task, comparable to adult humans. GPT-4, in fact, has the highest post-conventional moral reasoning score, equivalent to that of typical graduate school students. However, we also observe that the models do not perform consistently across all dilemmas, pointing to important gaps in their understanding and reasoning abilities.
## 1 Introduction
The rapid pace of development and adoption of Large Language Models (LLMs) has led to fierce debates on the ethical concerns and potential harms that these models pose [3; 4; 5; 6], which include but are not limited to copyright, data and user privacy violations [7], linguistic inequality [8], hallucination [9; 10; 11], and toxic content generation [12]. The mainstream and most popular approaches to mitigate the harms related to LLM-generated content, such as toxic, offensive [13], stereotyping, and exclusionary statements [14], and hate speech [15], have mainly involved alignment of model output to certain pre-determined values through techniques such as RLHF [16; 17], fair decoding [18], or post-processing/editing of the outputs [15; 19]. While these techniques are effective in achieving the underlying alignment goals [20], the goals themselves are often difficult, if not impossible, to define. This is because the ethical or moral values that must be upheld by a model or an AI system depend on the specific application, the user, the usage context, the cultural and geographical context, language and many other factors. In other words, it is impossible to design a _universally-aligned_ LLM.
The problem of alignment becomes further complicated due to _value pluralism_ - a condition where different moral values are in conflict with each other and any choice made by the model will have to jeopardize one value in favor of another [21; 22]. Philosophers capture this idea through "moral
dilemmas" - situations that require one to choose one value over another to arrive at a resolution [23]. In fact, it would not be an overstatement to say that most real world situations involve some kind of value pluralism that requires one to chose between conflicting values. Thus, as LLMs become more ubiquitous and power various everyday applications, they have to face and resolve moral dilemmas arising from value pluralism [21]. Many have argued, therefore, that LLMs should ideally be trained as generic ethical reasoners rather than aligned for certain specific values [24].
To what extent can LLMs carry out deep ethical reasoning, and how can we systematically probe this? In this paper, we borrow ideas from the field of _moral psychology_ to test the ethical or moral understanding and reasoning abilities of several popular LLMs. More specifically, we use the Defining Issues Test (DIT) [25], which is based on Kohlberg's Cognitive Moral Development Model [26], to assess the moral development stage of the LLMs. In this test, a moral dilemma is presented along with 12 different statements on ethical considerations; the respondent (in our case, the LLM) is asked to rank these statements in the order of importance for resolving the dilemma. The outcome of the test is a set of scores that indicates the respondent's moral development stage.
We study seven prominent models: GPT-3 [27], GPT-3.5, GPT-4 [28], ChatGPTv1, ChatGPTv2, PaLM-2 [29] and Llama2-Chat (70B version) [30], with 5 moral dilemmas from DIT and 4 newly designed dilemmas that extend the cultural context and diversity of the probes and preclude the possibility of training data contamination. We observe that GPT-4 achieves the highest moral development score, in the range of that of a graduate school student, which according to Kohlberg's model of cognitive moral development indicates a _post-conventional_ moral understanding. GPT-3, on the other hand, performs no better than a random baseline. The performance of the other models lies between these two extremes, roughly corresponding to the score range of adult humans and college students on DIT, and indicates a _conventional_ moral understanding (as dictated by the moral norms and conventions of the society). Interestingly, for 2 of the 9 dilemmas, no model performs better than the random baseline, and for one of the newly designed dilemmas, GPT-4 performs worse than most other models. This shows that there is a lack of consistency in ethical reasoning across these models, implying the need for deeper investigation, understanding and improvement of LLMs' moral reasoning abilities. This work also leads to several interesting technical, practical and philosophical questions, which are discussed in the last section.
## 2 Background and Related Work
In this section, we provide an overview of Morality, Moral Psychology and models of Cognitive Moral Development, from which we draw inspirations and materials to design this study. We also discuss current treatment of ethics in NLP literature, with a particular focus on LLMs.
### Morality and Moral Development
_Morality_ is the study of what is right and wrong, and has been a central concern in philosophy [31]. Over the years, numerous theories have been proposed to explain how individuals develop their moral reasoning and judgments. Of these, the Cognitive Moral Development (CMD) model [2] proposed by Lawrence Kohlberg in 1969 remains one of the most influential accounts of moral development. Building upon Piaget's work [32], Kohlberg developed a comprehensive theory that consists of six stages divided into three main levels: _pre-conventional_, _conventional_, and _post-conventional_ morality.
At Stage 1, individuals are concerned with avoiding punishment and make moral decisions based on fear of consequences and self-interest. At Stage 2, individuals focus on their own needs and interests but recognize that others have similar needs. Moral judgments are influenced by reciprocity, such as "You scratch my back, I'll scratch yours". Stages 1 and 2 are pre-conventional morality. At Stage 3, individuals seek approval and conform to social (and religious) norms. Moral decisions are made to maintain positive relationships and avoid disapproval. At Stage 4, individuals are concerned with law, rules, and authority figures, and their moral reasoning revolves around maintaining social order and upholding the greater good. These two stages fall under the realm of conventional morality. At Stage 5, individuals recognize that different groups may have different moral perspectives and base their decisions on principles of fairness, justice, and individual rights, even if these principles conflict with social norms or laws. This stage is further divided into sub-stages - 5A and 5B. Stage 5A suggests that moral obligation derives from voluntary commitments of society's members to cooperate, whereas Stage 5B is more concerned with procedures that exist for selecting laws that maximize welfare
as discerned in the majority will. At Stage 6, individuals develop their moral principles based on universal ethical values. They act according to a personal ethical code that transcends societal rules and laws. These principles often align with the concepts of justice, equality, and human rights. Stages 5A, 5B and 6 are, thus, called post-conventional morality.
The CMD model emphasizes the importance of moral reasoning and the development of an individual's moral principles. It posits that as individuals mature, their moral reasoning becomes more sophisticated and abstract, allowing them to make ethical decisions based on principles rather than mere rules. It may be noted that this theory has been criticized for bias towards individualistic and self-expressionistic cultures (mostly prevalent in the Global North), overlooking the diversity of moral development across cultures [33; 34], for having gender bias [35], and for ignoring the role of intuitions and emotions in moral decision making [36]. Despite these criticisms, Kohlberg's theory has played a vital role in advancing our understanding of moral development and remains influential in the field of moral psychology.
### Rest's Defining Issues Test
In line with Kohlberg's framework, James Rest introduced the Defining Issues Test (DIT) [1] as a way to measure an individual's moral development. In this test the respondents are presented with moral dilemmas, and their moral reasoning abilities are assessed by analyzing the justifications provided by them for their decisions. Rest's DIT draws upon Kohlberg's stages to categorize individuals into stages of moral development, offering insights into ethical decision-making processes. For over three decades, the DIT has remained the most popular tool for assessing CMD.2. It includes either three (short-form DIT) or six (original DIT) moral dilemmas, each followed by 12 ethical considerations corresponding to different stages of CMD. The respondent has to first provide a resolution to the dilemma (it has three options: two horns of the dilemma and "can't decide") and then rate the significance ("great", "much", "some", "little" and "no importance") of each item in resolving the moral dilemma, and then select and rank the four most important items.
Footnote 2: Between 1974 and 1988, an estimated 400 studies have used DIT. It has been used in over 40 countries, across various professions and with about 150 new studies each year [37]
The ethical consideration statements can also belong to \(A\) or \(M\) categories instead of the stages of CMD [25]. The \(A\)_items_ are intended to typify an "anti-establishment" orientation, a point of view which condemns tradition and the existing social order. The \(M\)_items_ are meant to be meaningless nonsense statements. The "\(M\)" statements were added as a reliability check as any valid respondent would be expected to rate the statement quite low, while for the purposes of any study, the "\(A\)" statements and it's score are simply disregarded.
The Post Conventional Morality Score (abbreviated as P-score) stands as the most widely utilized metric, serving as an indicator of the "relative significance an individual places on principled moral considerations, specifically those associated with Stages 5 and 6, when deliberating moral dilemmas" [25]. If the most vital (top-ranked) statement corresponds to either Stage 5 or 6, four points are added to the P-score. Similarly, if the second, third and fourth ranked statements belong to these post-conventional stages, three, two and one points are added respectively to the P-score. Thus, the higher the P-score of a respondent, the more importance they place on universal ethical values and human rights while making moral judgments.
Apart from P-score, DIT also measures _Personal Interest Schema Score_ which reflects choices influenced by personal interests (Stages 2 and 3 in Kohlberg's model), and _Maintaining Norms Schema Score_ that indicates choices driven by societal norms, including legal systems, established roles, and organizational structures. The percentage of "can't decide" choices measures the respondent's decisiveness, reflecting the ease of processing moral information.
The Moral Judgment Test (MJT) [38], developed by Georg Lind to assess one's moral judgment competencies, is also based on Kohlberg's CMD. However, it measures the degree to which one can consistently employ the same moral value across moral dilemmas rather than the stage of moral development.
### Recent Theories in Moral Philosophy
In recent years, moral philosophy has seen the emergence of innovative theories developed by social psychologists, that expand our understanding of moral decision-making. Moral Foundations Theory [39], proposed by Jonathan Haidt and Jesse Graham, posits that human morality is shaped by a set of innate moral foundations or intuitions. These foundations include care/harm, fairness/cheating, loyalty/betrayal, authority/subversion, and sanctity/degradation. According to this theory, individuals vary in the extent to which they prioritize these moral foundations, leading to differences in moral judgments and values. Dual Process Theory [40], rooted in psychology and neuroscience, posits that moral decision-making involves two cognitive processes: System 1 (intuitive) and System 2 (reflective). System 1 operates quickly and automatically, relying on gut feelings and emotions, while System 2 involves deliberate reasoning and critical thinking. This theory suggests that moral judgments often result from the interplay between these two systems, and the balance can vary among individuals and situations. Though beyond the scope of our current study, these theories can provide novel frameworks for assessing the ethical reasoning abilities of LLMs.
### Current Approaches to Ethics of LLMs
_AI alignment_ is a research field that aims to ensure that AI systems advance the intended goals, preferences, or ethical principles of humans [41]. Numerous scholarly works have contributed significantly to the development of ethical frameworks, principles, guidelines, methodologies, and tools essential for the responsible and ethical design, evaluation, and deployment of LLMs. Additionally, some datasets have been curated for the explicit purpose of training and assessing LLMs in their comprehension of ethical considerations, societal contexts, and norms, as well as their capacity to analyze these complex scenarios [42; 43; 44; 45; 46]. These studies have shed light on the notable ability of LLMs to understand and elucidate toxic content. However, it is important to underscore a salient limitation within these investigations, namely, the inherent bias embedded within the collected data. This bias stems from the geographical locations, cultural backgrounds, and political orientations of the annotators, casting a shadow on the universality of the findings [47].
Some recent works demonstrate how in-context learning [24] and supervised tuning [48; 49] can help align LLMs with moral instructions. These works aim to ensure that LLMs respect human values and norms, such as fairness, accountability, transparency, privacy, and safety. They also suggest ways to identify, measure, mitigate, and prevent the potential harms of LLMs to individuals and society. Some of these works propose ethical datasets [49] and guidelines [50; 51] to help researchers and practitioners assess and improve the ethical capabilities of LLMs.
However, ethics is not a monolithic or universal concept. Different people may have different ethical views, beliefs, values, preferences, etc. depending on their cultural, social, religious, and political backgrounds [52; 53; 21]. Therefore, it is important to acknowledge and respect the diversity and pluralism of human ethics and values when developing and using LLMs. This means that LLMs should not impose or favor a single or dominant ethical perspective or value system over others but rather allow for multiple and diverse ethical perspectives and value systems to coexist and interact.
Ethical issues often involve shades of gray and require nuanced reasoning that cannot be adequately captured with a binary decision. Most current approaches to AI alignment fail to capture the multifaceted nature of ethical reasoning. Ethical decisions often involve multiple dimensions, including fairness, justice, harm, and cultural context, which may not be fully addressed in a binary setup. Binary choices also lack explanatory power: they do not provide insights into why a model made a particular ethical decision, making it challenging to assess the quality of its ethical reasoning. Nor do they adequately capture the complexities of ethical trade-offs; in real-world scenarios, ethical decisions often involve weighing competing values, which binary tasks may not address effectively.
## 3 Data and Method
In this section, we describe our experimental setup, the datasets, LLMs tested, prompt structure and metrics. We present the LLMs with a prompt that contains the moral dilemma along with the 12 ethical considerations followed by three questions. Based on the responses to these questions, we compute the P-score and individual stage scores for each LLM.
### Dataset
We used five dilemmas from DIT-1\({}^{3}\) and constructed four novel moral dilemmas. Each author designed one dilemma (story and the ethical consideration statements) similar in structure to the original DIT dilemmas. The statements of each dilemma were then independently annotated by all the authors for the Kohlberg CMD stages that they represent. Cases of disagreement were discussed, and if no clear consensus was reached for a statement, it was edited or redesigned to avoid ambiguity. A brief summary of the dilemmas is given below, and Appendix A presents the four new dilemmas.
Footnote 3: DIT-1 dilemmas are not freely available; we purchased the dataset from The University of Alabama through the official website: [https://ethicaldevelopment.ua.edu/ordering-information.html](https://ethicaldevelopment.ua.edu/ordering-information.html)
The complete DIT-1 consists of six dilemmas: (1) **Heinz dilemma** - Should Heinz steal a drug from an inventor in town to save his wife who is dying and needs the drug?, (2) **Newspaper dilemma** - Should a student newspaper be stopped by a Principal of a high school when the newspaper stirs controversy in the community?, (3) **Student dilemma** - Should students take over an administration building in protest of the Vietnam war?, (4) **Webster dilemma** - Should a minority member be hired for a job when the community is biased?, (5) **Prisoner dilemma** - Should a man who escaped from prison but has since been leading an exemplary life be reported to authorities? and (6) **Doctor dilemma** - Should a doctor give an overdose of pain-killer to a suffering patient?
The four novel moral dilemmas are: (1) **Monica's Dilemma** - Should Monica give the first authorship to Aisha despite having the major contribution?, (2) **Timmy's Dilemma** - Should Timmy attend his friend's wedding instead of fixing an urgent bug that could put customers' privacy at risk?, (3) **Rajesh's Dilemma** - Should Rajesh rent a house by hiding the secret of his non-vegetarian consumption at home from the vegetarian neighborhood? and (4) **Auroria Dilemma** - Should the country Aurora share its innovations and resources with its poor neighbor or profit off its huge investments in research?
The dilemmas are associated with conflicting values such as interpersonal vs. societal (_Heinz dilemma_), interpersonal vs. professional (_Timmy's and Monica's dilemmas_), and community vs. personal values placed in diverse cultural and situational contexts. We exclude the _Doctor's dilemma_ from all experiments as most LLMs do not generate a response for it, presumably due to their content filtering policies.
### Experimental Setup
We study seven popular LLMs: GPT-4 (size undisclosed), PaLM-2 (size undisclosed), ChatGPT (July 2023) (henceforth referred to as ChatGPTv2, 175B params), ChatGPT (December 2022) (henceforth referred to as ChatGPTv1, 175B params), GPT-3.5 (text-davinci-003)(175B params), GPT-3 (175B params) and Llama2-Chat (70B params). All these models are trained on massive amounts of text data from various sources and domains and have different training methods and capabilities.
Figure 1 shows the prompt structure. The text in black is fixed, whereas the text in blue is dilemma specific. Since LLMs might have positional bias while ranking the ethical consideration statements for a dilemma, or in choosing one of the three options (O1, O2 and O3) as a resolution for the dilemma, we consider 8 different predefined permutations of the 12 statements (out of 12! possibilities) and all, i.e., 6, permutations of the options. This amounts to 48 distinct prompts per dilemma. For all experiments, we set temperature to 0, presence penalty to 1, top_p to 0.95, and max_tokens to 2000 (except GPT-3, where it is set to 1000 due to its smaller context length).
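To make the permutation scheme concrete, the sketch below shows one way the 48 prompt variants for a dilemma could be assembled. The helper names, the prompt wording, and the use of seeded pseudo-random orderings as a stand-in for the 8 predefined statement permutations are illustrative assumptions, not the exact implementation.

```python
import itertools
import random

def build_prompt_variants(dilemma_text, statements, options, n_statement_orders=8, seed=0):
    """Assemble prompt variants for one dilemma: 8 orderings of the 12 ethical
    consideration statements x all 3! = 6 orderings of the resolution options
    = 48 prompts (illustrative sketch only)."""
    rng = random.Random(seed)
    # stand-in for the 8 predefined permutations of the 12 statements
    statement_orders = [rng.sample(range(len(statements)), len(statements))
                        for _ in range(n_statement_orders)]
    prompts = []
    for order in statement_orders:
        numbered = "\n".join(f"{i + 1}. {statements[j]}" for i, j in enumerate(order))
        for opts in itertools.permutations(options):  # all 6 orderings of (O1, O2, O3)
            prompts.append(
                f"{dilemma_text}\n\nOptions: {' / '.join(opts)}\n\n"
                f"Considerations:\n{numbered}\n\n"
                "Q1. Which option do you choose? "
                "Q2. Rate the importance of each consideration. "
                "Q3. Select and rank the four most important considerations."
            )
    return prompts

# Decoding settings used in our experiments (max_tokens = 1000 for GPT-3).
GENERATION_KWARGS = dict(temperature=0, presence_penalty=1, top_p=0.95, max_tokens=2000)
```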
### Metrics
We used the metric P-score, henceforth \(p_{score}\), as proposed by the DIT authors, which indicates the "relative importance a subject gives to principled moral considerations (Stages 5 and 6)". \(p_{score}\) is calculated by assigning points to the four most important statements the respondent (the LLM in our case) has selected that correspond to the post-conventional stages. 4, 3, 2 and 1 points are added to the score if the first, second, third and fourth ranked statements belong to Stage 5 or 6, respectively. The final score is obtained by multiplying the sum by 10. As an illustration, suppose that the model predicts 12, 7, 3, 9 as the most important statements of consideration in descending order, of which only items 12 and 3 belong to the post-conventional stages. Then, the \(p_{score}\) will be \(10\cdot(4+2)=60\).
Similarly, we also calculate stage-wise scores, \(score_{\theta}\), as
\[score_{\theta}=10\cdot\sum_{i=1}^{4}((5-i)\cdot S_{i,\theta})\quad\text{where }S_{i,\theta}=\begin{cases}1&\text{if }i^{th} \text{ ranked statement is from Stage-}\theta\\ 0&\text{otherwise}\end{cases} \tag{1}\]
Thus, \(p_{score}=score_{5}+score_{6}\). We also compute the random baseline scores for each dilemma, i.e., the score a respondent would receive on average if they ranked the items randomly. These baseline numbers depend only on the number of items that belong to a certain stage for a dilemma. Heinz, Prisoner and Newspaper dilemmas have 3 items in Stages 5 and 6, giving a random baseline \(p_{score}\) of 25. All other dilemmas have 4 items in Stages 5 and 6, and a random baseline \(p_{score}\) of 33.33. Thus, the average random \(p_{score}\) over all dilemmas is 30.56.
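For concreteness, the scoring rule can be implemented in a few lines; the sketch below assumes the stage label of each statement is available as a dictionary, which is how the annotation described in the Dataset subsection could be stored.

```python
def stage_scores(top4, statement_stages):
    """Stage-wise DIT scores from the four top-ranked statements.

    top4: indices of the four most important statements, most important first.
    statement_stages: dict mapping statement index -> stage label (2, 3, 4, 5, 6, 'A' or 'M').
    The i-th ranked statement (i = 1..4) contributes (5 - i) points to its stage,
    and the sums are multiplied by 10, as in equation (1).
    """
    scores = {}
    for rank, idx in enumerate(top4, start=1):
        stage = statement_stages[idx]
        scores[stage] = scores.get(stage, 0) + 10 * (5 - rank)
    return scores

def p_score(top4, statement_stages):
    """P-score = score_5 + score_6."""
    s = stage_scores(top4, statement_stages)
    return s.get(5, 0) + s.get(6, 0)

# Worked example from the text: statements 12, 7, 3, 9 ranked most important,
# with only items 12 and 3 post-conventional, gives 10 * (4 + 2) = 60.
assert p_score([12, 7, 3, 9], {12: 6, 7: 4, 3: 5, 9: 3}) == 60
```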
The maximum possible \(p_{score}\) is 90 for the Heinz, Prisoner and Newspaper dilemmas and 100 for the others. Thus, the \(p_{score}\) averaged over all dilemmas ranges from 0 to 96.67. The higher the \(p_{score}\), the deeper the moral understanding and the better the moral reasoning ability of a model (or equivalently, of a human respondent). Various surveys conducted on human subjects using DIT [25] report a \(p_{score}\) of around 20 and 30 for junior and senior high school children respectively (mostly pre-conventional stage), between 40 and 46 for college students as well as average adults (mostly at the conventional stage), and between 53 and 63 for graduate school students (early post-conventional stage).
## 4 Results and Observations
The results of our experiments are summarized in two plots: Fig. 2 shows the \(p_{score}\) for each LLM as violin plots grouped by dilemmas. Fig. 3(a) shows the stage-wise scores for the LLMs averaged over all dilemmas; this provides insight into the overall performance and staging of the models. The three key observations from these results are as follows: (a) Overall, GPT-3 has the lowest and close-to-random \(p_{score}\), while GPT-4 has the highest \(p_{score}\); the other models in ascending order of \(p_{score}\) are: GPT-3.5, ChatGPTv2, PaLM-2, Llama2-Chat, ChatGPTv1. Our study shows that except for GPT-3, all models investigated have a \(p_{score}\) equivalent to an average adult human or college student; only GPT-4 achieves a \(p_{score}\) (= 55.68) in the range of a graduate student and shows post-conventional moral reasoning abilities. (b) All models perform poorly on the Prisoner and Webster dilemmas, while most models perform well on the Timmy and Newspaper dilemmas; and (c) There is significant variability in the responses of all the models over different runs (as shown by the violin plots), as well as specific dilemmas where they perform exceptionally well (e.g., GPT-4 on the Newspaper dilemma) or poorly (e.g., GPT-4 on Rajesh's dilemma).
Figure 1: Prompt structure illustrated for Monica's Dilemma.
Fig. 3(b) shows the resolutions proposed by the models for each dilemma. Two interesting observations emerge from it: (a) All models agree perfectly for the Webster dilemma, and a majority of models agree for the Heinz, Newspaper, Rajesh and Aurora dilemmas. (b) In contrast to the other models, ChatGPTv2 does not favor any particular resolution (except in Webster). In the subsequent paragraphs, we present model-specific observations.
**GPT-3.** The prompt structure described in Fig. 1 did not work with GPT-3, as the model failed to generate any cogent response. Through trial and error, we constructed a prompt where only the resolution of the moral dilemma and the selection of the top four statements (out of 12) were asked for, which seemed to work for the model. Even then, we observed that it frequently ranks the statements at positions 1, 3, 5 and 7 as the most significant, irrespective of the stages the statements belong to. This explains why the average \(p_{\text{score}}\) for GPT-3, 29.84, is close to that of the random baseline. In conclusion, GPT-3 is incapable of moral reasoning and also of following complex multistage instructions. Incidentally, we also tested text-davinci-002, but could not make it generate cogent responses. Therefore, the model is excluded from the study.
**GPT-3.5, ChatGPT** (both v1 & v2) and **GPT-4** demonstrate a greater ability to understand the instructions, presumably due to RLHF training. These models respond consistently to the prompt questions and also perform significantly better than the random baseline. We observe a general trend that the bigger and newer models have higher \(p_{score}\), except for ChatGPTv2, which has a slightly lower \(p_{score}\) than its previous version ChatGPTv1. Incidentally, there are anecdotal (but contested) claims [54] that the performance of ChatGPT is degrading over time as newer versions are released, which is consistent with our observation. With a \(p_{score}\) of 55.68, GPT-4 is the only model that clearly shows post-conventional moral reasoning abilities, equivalent to those of graduate students.
**Llama2-Chat**, even though a much smaller model compared to the GPT-3.x series, achieves an unexpectedly high \(p_{score}\), lower only than those of GPT-4 and ChatGPTv1. This points to the possibility of
Figure 2: Dilemma-wise \(p_{score}\) comparison across LLMs. The dotted line shows the random baseline \(p_{score}\) for the dilemma.
building smaller models with strong moral reasoning capabilities. **PaLM-2** exhibited superior moral reasoning capability with a \(p_{score}\) of 52.24. However, it did not generate a response to the Prisoner dilemma. Therefore, the total \(p_{score}\) is averaged over 8 instead of 9 dilemmas. When averaged over the same 8 dilemmas, the \(p_{score}\) of the other models are (in descending order): GPT-4 - 58.81, ChatGPTv1 - 56.44, Llama2-Chat - 52.85, ChatGPTv2 - 51.55, GPT-3.5 - 49.48 and GPT-3 - 31.20. Thus, PaLM-2 performs worse than GPT-4 and ChatGPTv1, but is comparable to Llama2-Chat and ChatGPTv2. Note that the average \(p_{score}\) is significantly higher for all the models when Prisoner dilemma is removed from the set because all models perform poorly on this dilemma.
## 5 Discussion and Conclusion
In this study, we propose an effective evaluation framework to measure the ethical reasoning capability of LLMs based on Kohlberg's Cognitive Moral Development model and Defining Issues Test. Apart from the 6 moral dilemmas included in DIT-1, we propose 4 novel dilemmas partly to expand the socio-cultural contexts covered by the dilemmas, and partly to ensure that the LLMs were not already exposed to them. Our study shows that GPT-4 exhibits post-conventional moral reasoning abilities at the level of human graduate students, while other models like ChatGPT, LLama2-Chat and PaLM-2 exhibit conventional moral reasoning ability equivalent to that of an average adult human being or college student.
We are aware of several limitations of this study, including the known criticisms of the DIT framework [55; 56], which give us enough reason not to take the numbers at face value. More investigation is necessary to firmly establish the moral reasoning abilities and limitations of LLMs. Nevertheless, it is interesting to ponder some of the repercussions of these findings. While one could explain the conventional moral reasoning abilities observed in the LLMs as an effect of the training data [57] at the pre-training, instruction fine-tuning and RLHF phases, which certainly contains several instances of conventionalized and codified ethical values, one wonders how an LLM (e.g., GPT-4) could exhibit post-conventional moral reasoning abilities. Since the training data and the architectural details of GPT-4 are undisclosed, one can only speculate about the reasons. Either the data (most likely the one used during RLHF) consisted of many examples of post-conventional moral reasoning, or it is an emergent property of the model. In the latter case, a deeper philosophical question that arises is whether moral reasoning can emerge in LLMs, and if so, whether it is just a special case of general reasoning ability.
There are other open problems around the dilemmas and types of moral questions where the current models are lagging (e.g., Prisoner and Webster dilemma), what makes these dilemmas difficult, and
Figure 3: Model-wise scores and their dilemma-wise resolutions. PaLM-2 results are from 8 dilemmas (Sec. 4). In Fig-(b), the colors’ RGB components depict the fraction of runs with corresponding resolutions (Green - O1(Should do), Blue - O2(Can’t Decide), Red - O3(Shouldn’t do))
how we can train models with the specific objective of improving their moral reasoning capability. One might also ask: since many of the models, especially GPT-4, are as good as or better than an average adult human in terms of their moral development stage scores, does it then make sense to leave everyday moral decision-making tasks to LLMs? In the future, if and when we are able to design LLMs with \(p_{score}\) higher than expert humans (e.g., lawyers and justices), should we replace judges and jury members with LLMs?
|
2309.12048 | Cosmology with multiple galaxies | Recent works have discovered a relatively tight correlation between
$\Omega_{\rm m}$ and properties of individual simulated galaxies. Because of
this, it has been shown that constraints on $\Omega_{\rm m}$ can be placed
using the properties of individual galaxies while accounting for uncertainties
on astrophysical processes such as feedback from supernova and active galactic
nuclei. In this work, we quantify whether using the properties of multiple
galaxies simultaneously can tighten those constraints. For this, we train
neural networks to perform likelihood-free inference on the value of two
cosmological parameters ($\Omega_{\rm m}$ and $\sigma_8$) and four
astrophysical parameters using the properties of several galaxies from
thousands of hydrodynamic simulations of the CAMELS project. We find that using
properties of more than one galaxy increases the precision of the $\Omega_{\rm
m}$ inference. Furthermore, using multiple galaxies enables the inference of
other parameters that were poorly constrained with one single galaxy. We show
that the same subset of galaxy properties are responsible for the constraints
on $\Omega_{\rm m}$ from one and multiple galaxies. Finally, we quantify the
robustness of the model and find that without identifying the model range of
validity, the model does not perform well when tested on galaxies from other
galaxy formation models. | Chaitanya Chawak, Francisco Villaescusa-Navarro, Nicolas Echeverri Rojas, Yueying Ni, ChangHoon Hahn, Daniel Angles-Alcazar | 2023-09-21T13:15:57Z | http://arxiv.org/abs/2309.12048v1 | # Cosmology with multiple galaxies
###### Abstract
Recent works have discovered a relatively tight correlation between \(\Omega_{\rm m}\) and properties of individual simulated galaxies. Because of this, it has been shown that constraints on \(\Omega_{\rm m}\) can be placed using the properties of individual galaxies while accounting for uncertainties on astrophysical processes such as feedback from supernova and active galactic nuclei. In this work, we quantify whether using the properties of multiple galaxies simultaneously can tighten those constraints. For this, we train neural networks to perform likelihood-free inference on the value of two cosmological parameters (\(\Omega_{\rm m}\) and \(\sigma_{8}\)) and four astrophysical parameters using the properties of several galaxies from thousands of hydrodynamic simulations of the CAMELS project. We find that using properties of more than one galaxy increases the precision of the \(\Omega_{\rm m}\) inference. Furthermore, using multiple galaxies enables the inference of other parameters that were poorly constrained with one single galaxy. We show that the same subset of galaxy properties are responsible for the constraints on \(\Omega_{\rm m}\) from one and multiple galaxies. Finally, we quantify the robustness of the model and find that without identifying the model range of validity, the model does not perform well when tested on galaxies from other galaxy formation models.
Keywords: Cosmological parameters -- Machine learning techniques -- Galaxy processes -- Computational methods -- Astronomy data analysis

Chaitanya Chawak, Francisco Villaescusa-Navarro, Nicolas Echeverri Rojas, Yueying Ni, ChangHoon Hahn, Daniel Angles-Alcazar
## 1 Introduction
Some of the most fundamental questions we can ask in cosmology are: What are the components that make up the Universe? How much does each component contribute? We now know that the Universe should be made up of at least three main components: 1) baryons, representing all the substances and materials we know, 2) dark matter, some fundamental particle that interacts with baryons mostly (perhaps uniquely) through gravity, and 3) dark energy, a mysterious substance (perhaps a property of the vacuum) responsible of the recent acceleration of the Universe. From cosmological data, we believe these three components represent roughly 5%, 25%, and 70% of the current energy content of the Universe.
Parameters such as \(\Omega_{\rm b}\) and \(\Omega_{\rm m}\) represent the fraction of the Universe's energy content in terms of baryons and baryons plus dark matter, respectively. Determining them is important to learn about the nature and properties of dark matter and also to learn about the growth rate of the Universe (Huterer, 2023). There are many different methods to infer these parameters, from studying the properties of the cosmic microwave background anisotropies to the spatial distribution of galaxies. Recently, Villaescusa-Navarro et al. (2022) claimed that a tight relation between \(\Omega_{\rm m}\) and the properties of individual galaxies is present in galaxies from state-of-the-art hydrodynamic simulations. The relationship is present even when varying the value of astrophysical parameters controlling the efficiency of supernova and |
2309.10887 | Provable Advantage in Quantum PAC Learning | We revisit the problem of characterising the complexity of Quantum PAC
learning, as introduced by Bshouty and Jackson [SIAM J. Comput. 1998, 28,
1136-1153]. Several quantum advantages have been demonstrated in this setting,
however, none are generic: they apply to particular concept classes and
typically only work when the distribution that generates the data is known. In
the general case, it was recently shown by Arunachalam and de Wolf [JMLR, 19
(2018) 1-36] that quantum PAC learners can only achieve constant factor
advantages over classical PAC learners.
We show that with a natural extension of the definition of quantum PAC
learning used by Arunachalam and de Wolf, we can achieve a generic advantage in
quantum learning. To be precise, for any concept class $\mathcal{C}$ of VC
dimension $d$, we show there is an $(\epsilon, \delta)$-quantum PAC learner
with sample complexity \[ O\left(\frac{1}{\sqrt{\epsilon}}\left[d+
\log(\frac{1}{\delta})\right]\log^9(1/\epsilon)\right). \] Up to
polylogarithmic factors, this is a square root improvement over the classical
learning sample complexity. We show the tightness of our result by proving an
$\Omega(d/\sqrt{\epsilon})$ lower bound that matches our upper bound up to
polylogarithmic factors. | Wilfred Salmon, Sergii Strelchuk, Tom Gur | 2023-09-19T19:26:20Z | http://arxiv.org/abs/2309.10887v1 | # Provable Advantage in Quantum PAC Learning
###### Abstract
We revisit the problem of characterising the complexity of Quantum PAC learning, as introduced by Bshouty and Jackson [SIAM J. Comput. 1998, 28, 1136-1153]. Several quantum advantages have been demonstrated in this setting, however, none are generic: they apply to particular concept classes and typically only work when the distribution that generates the data is known. In the general case, it was recently shown by Arunachalam and de Wolf [JMLR, 19 (2018) 1-36] that quantum PAC learners can only achieve constant factor advantages over classical PAC learners.
We show that with a natural extension of the definition of quantum PAC learning used by Arunachalam and de Wolf, we can achieve a generic advantage in quantum learning. To be precise, for any concept class \(\mathcal{C}\) of VC dimension \(d\), we show there is an \((\epsilon,\delta)\)-quantum PAC learner with sample complexity
\[O\left(\frac{1}{\sqrt{\epsilon}}\left[d+\log\!\left(\frac{1}{\delta}\right) \right]\log^{9}(1/\epsilon)\right).\]
Up to polylogarithmic factors, this is a square root improvement over the classical learning sample complexity. We show the tightness of our result by proving an \(\Omega(d/\sqrt{\epsilon})\) lower bound that matches our upper bound up to polylogarithmic factors.
## 1 Introduction
Probably approximately correct (PAC) learning [1] is a fundamental model of machine learning. One is given a set of functions \(\mathcal{C}\subseteq\{0,1\}^{\mathcal{X}}=\{f:\mathcal{X}\rightarrow\{0,1\}\}\), called a concept class, that encodes the structure of a learning problem (for example, functions that only depend on the Hamming weight of their input). Given labelled examples from an unknown concept \(c\in\mathcal{C}\), we are tasked with learning an approximation to \(c\).
We model the data that the learning algorithm receives by an unknown probability distribution \(\mathcal{D}\) on \(\mathcal{X}\), and say that a hypothesis \(h:\mathcal{X}\rightarrow\{0,1\}\) is \(\epsilon\)-approximately correct if the probability that it differs from \(c\) is at most \(\epsilon\). To be precise, a hypothesis \(h\in\{0,1\}^{\mathcal{X}}\) is said to be \(\epsilon\)-approximately correct if
\[\mathbb{P}_{X\sim\mathcal{D}}\left[h(X)\neq c(X)\right]\leq\epsilon. \tag{1}\]
A learning algorithm \(\mathcal{A}\) draws independent samples \((X,c(X))\), where \(X\) is distributed according to \(\mathcal{D}\), and then outputs a hypothesis \(h\). The algorithm \(\mathcal{A}\) is an \((\epsilon,\delta)\)-learner if, with probability at least \(1-\delta\) over the random samples, it outputs a \(\epsilon\)-approximately correct hypothesis.
The amount of "structure" possessed by \(\mathcal{C}\) is characterised by its Valiant-Chapernikis (VC) dimension [2], denoted \(d\). For a subset \(Y\subseteq X\), we define \(\mathcal{C}|_{Y}:=\{c|_{Y}:c\in\mathcal{C}\}\) as the restriction of the concept class to \(Y\). We say that \(\mathcal{C}\) shatters \(Y\) if \(\mathcal{C}|_{Y}=\{0,1\}^{Y}\), i.e., if all possible labellings of \(Y\) appear in concepts in \(\mathcal{C}\). Then, \(d\) is the maximum size of a shattered set, that is
\[d=\max\{|Y|:Y\text{ is shattered by }\mathcal{C}\}. \tag{2}\]
Over a period of 27 years [3, 4], the exact asymptotic scaling of the minimum number of samples required by an \((\epsilon,\delta)\)-learner was found to be
\[\Theta\left[\frac{1}{\epsilon}\left(d+\log\!\left(\frac{1}{\delta}\right) \right)\right], \tag{3}\]
thereby characterising the complexity of classical PAC learning.
In 1995, Bshouty and Jackson [5] considered a generalisation of PAC learning to the quantum setting [6]. Here, instead of receiving independent identically distributed samples \((X,C(X))\), one receives independent copies of a quantum state
\[|\psi_{c}\rangle=\sum_{x\in\mathcal{X}}\sqrt{\mathcal{D}(x)}\,|x\;c(x)\rangle\,, \tag{4}\]
known as a _quantum sample_. In particular, measuring such a state in the computational basis gives a sample \((X,C(X))\). In turn, instead of counting the number of samples, the quantum sample complexity is the number of copies of the state given to the quantum learning algorithm.
The Quantum PAC model is instrumental in understanding the limits of other quantum cryptographic and computational tasks. For instance, in [7], a connection between differential privacy and PAC learnability of quantum states was established, and recently [8] used the PAC framework to investigate the complexity of learning parameterised quantum circuits, which are ubiquitous in variational quantum algorithms where they are used for quantum state preparation.
In the special case of quantum PAC learning under the uniform distribution, it has been shown that one can obtain quantum sample complexity advantages in specific learning tasks, such as learning Fourier basis functions [9], DNF formulae [5], and \(k\)-juntas [10]. These advantages rely on Fourier sampling, in which one applies the Hadamard transform on every qubit followed by a measurement of the resulting state in the computational basis. One observes a bit string \(s\) with probability given by its squared Fourier coefficient \(|\hat{c}_{s}|^{2}\) and can thus directly infer properties of the Fourier spectrum of the unknown function. However, such advantages rely on the distributions \(\mathcal{D}\) being (approximately) uniform.
The general quantum PAC learning model, with an arbitrary and unknown distribution \(\mathcal{D}\), was studied by Arunachalam and de Wolf [6, 11], who showed that the quantum sample complexity has exactly the same asymptotic scaling as the classical learning complexity, ruling out everything but constant factor prospective advantages.
Thus, most recent literature has focused on identifying advantages only in suitably restricted versions of the quantum PAC model [10, 12]. Nevertheless, such models have demonstrated remarkable utility when assessing the complexity of learning quantum states, channels [13, 14, 15], and measurements [16, 17] in quantum theory, with lower bounds on query complexity established in [18].
Here, we consider a natural and less restrictive version of the quantum PAC learning model. Instead of access to copies of the state \(|\psi_{c}\rangle\), we assume that we have access to the quantum circuit that generates it, similar in spirit to [19, 20]. That is, we assume one has access to a quantum circuit \(Q_{c}\) that generates a quantum sample \(|\psi_{c}\rangle\) (for example, as a decomposition into one and two-qubit gates) and thus can implement \(Q_{c}\) and \(Q_{c}^{\dagger}\). Given this natural adjustment to the input access of quantum PAC learning algorithms, we can revisit the question of whether strong generic (beyond constant-factor) quantum advantages are possible for quantum PAC learning.
### Our results
In this paper, we show that there is a square root advantage (up to polylogarithmic factors) for quantum PAC learning over classical PAC learning in the full, general model. Our main result (see Section 5) is summarised by the following theorem.
**Theorem 1.1**_Let \(\mathcal{C}\) be a concept class with VC dimension \(d\). Then, for every \(\epsilon,\delta>0\), there exists a \((\epsilon,\delta)\)-quantum PAC learner for \(\mathcal{C}\) that makes at most_
\[O\left(\frac{1}{\sqrt{\epsilon}}\left[d+\log\!\left(\frac{1}{\delta}\right) \right]\log^{9}\!\left(1/\epsilon\right)\right), \tag{5}\]
_calls to an oracle that generates a quantum sample (\(Q_{c}\)) or its inverse (\(Q_{c}^{\dagger}\))._
In comparison, the optimal classical PAC learning complexity (and quantum PAC complexity given access to copies of \(|\psi_{c}\rangle\)[11]) is given in equation (8). Thus, our upper bound is a square root improvement (up to polylogarithmic factors) over the best possible classical learning algorithm. In fact, we show that this upper bound is essentially tight, up to polylogarithmic factors, as captured by the following theorem.
**Theorem 1.2**: _Let \(\mathcal{C}\) be a concept class with VC dimension \(d\). Then, for a sufficiently small constant \(\delta>0\) and for all \(\epsilon>0\), any quantum \((\epsilon,\delta)\)-learner for \(\mathcal{C}\) makes at least_
\[\Omega\left(\frac{d}{\sqrt{\epsilon}}\right) \tag{6}\]
_calls to an oracle that generates a quantum sample (\(Q_{c}\)) or its inverse (\(Q_{c}^{\dagger}\))._
### Technical overview
Our starting point is the observation that the lower bound of Arunachalam and de Wolf [11] implicitly rests on the assumption that a quantum learning algorithm must not depend on the underlying concept, and it can thus be represented by a (concept independent) POVM. They then reduce the problem of PAC learning to that of state discrimination (where the POVM is state-independent). However, if we allow for the common assumption that the algorithm has access to an oracle \(Q_{c}\) generating \(|\psi_{c}\rangle\), the proof of the lower bound no longer holds1. If the POVM describing the algorithm calls the oracle, it, _by definition_, depends on the underlying concept. Thus, one cannot reduce the problem to that of state discrimination, where it is assumed that the POVM is independent of the input state.
Footnote 1: Since the state \(|\psi_{c}\rangle\) must be produced by some process, this assumption is quite minimal.
If one implements \(Q_{c}\) on some physical device (for example, as a series of one and two-qubit gates), it is natural to assume that one can also implement the inverse process \(Q_{c}^{\dagger}\) (for example, by reversing the order of the gates and replacing each by its inverse). Thus, we argue that if one has access to the state \(|\psi_{c}\rangle\) it is natural to also consider the situation in which one also has access to \(Q_{c}\) and \(Q_{c}^{\dagger}\). Indeed, this setting has recently received significant attention [20, 21].
Given access to \(Q_{c}\) and \(Q_{c}^{\dagger}\), it is tempting to attempt techniques such as Grover search and amplitude amplification, which often achieve quadratic quantum advantages. Consider, for example, the simplest possible concept class \(\mathcal{C}=\{0,1\}^{\mathcal{X}}\): the set of all possible classifiers. It is known that a classical worst-case distribution for this class is a "perturbed" delta-function [11], where there is a marked element \(x_{0}\in\mathcal{X}\) with probability \(\mathcal{D}(x_{0})=1-4\epsilon\), and all other elements have equal probability. Roughly speaking, to \((\epsilon,\delta)\)-learn \(\mathcal{C}\), one must learn a fraction of \(3/4\) of the values of \(c\). However, it takes on average \(O(1/\epsilon)\) samples to return an \(x\) that _is not_ \(x_{0}\) and thus the classical learning query complexity is \(\Omega(|\mathcal{X}|/\epsilon)\). In this case, one could repeatedly run Grover's search, marking any state \(|x\ b\rangle\) as good if we have not yet learnt \(c(x)\). With Grover search, it only takes \(O(1/\sqrt{\epsilon})\) oracle calls to return an \(x\) that is not \(x_{0}\) and thus we see the quantum query complexity is \(O(|\mathcal{X}|/\sqrt{\epsilon})\), the desired quadratic improvement. Therefore, we already outperform the lower bound of Arunachalam and de Wolf [11].
Note that the method above does not immediately generalise to other concept classes. For example, consider the concept class
\[\mathcal{C}=\{c\in\{0,1\}^{\mathcal{X}}:|c^{-1}(\{1\})|=d\}\,\]
the class of classifiers with exactly \(d\) inputs that map to \(1\), and take \(\mathcal{D}\) to be the uniform distribution on \(\mathcal{X}\). If \(|\mathcal{X}|\) is very large, then most unseen \(x\)'s will have \(c(x)=0\) and thus the above approach is uninformative. Instead, one should mark a state \(|x\ b\rangle\) as good if \(b=1\). In this way, one can search for the inputs \(x\in\mathcal{X}\) that have \(c(x)=1\) and hence deduce \(c\). This will also lead to a quadratic quantum advantage.
However, for general concept classes, it is less clear what to search for. One could run the Halving algorithm, where we mark a state \(|x\ b\rangle\) as good if the majority of the concepts \(h\in\mathcal{C}\) that are consistent with the data so far have \(h(x)=1-y\). In this case, every time the Grover algorithm succeeds, one would eliminate at least half of the concepts in \(\mathcal{C}\). However, this leads to a \(\log|\mathcal{C}|\) factor in the learning complexity, which can be as large as \(d\log|\mathcal{X}|\), i.e., arbitrarily larger than \(d\) (the VC dimension of \(\mathcal{C}\)). Thus, even under the simplifying assumption of the uniform distribution, it is unclear how to attempt to use Grover's search to obtain a quantum advantage.
Nevertheless, we show that one can achieve a square root quantum advantage in the general case. As a first step, we use the technique of equivalence queries [22] (also known as random counterexamples). An equivalence query is an alternative learning oracle to the traditional PAC oracle, in which one submits a candidate hypothesis \(h\in\{0,1\}^{\mathcal{X}}\). If \(h=c\), then the oracle outputs \(YES\), otherwise it produces a labelled counterexample \((X,c(X))\) where
1. \(h(X)\neq c(X)\).
2. \(X\) is distributed according to \(\mathbb{P}(y)=\mathcal{D}(y)/\mathcal{D}(\{x:c(x)\neq h(x)\})\).
Observe that by marking a state \(|x\ y\rangle\) as good if \(h(x)=1-y\), we can see how to implement an equivalence query using Grover search, and thus one can hope to use this tool from classical learning theory to achieve an advantage. However, when one removes the simplifying assumption of a known distribution, further problems arise.
For a generic distribution, we do not know \(\mathcal{D}(x)\) for any \(x\in\mathcal{X}\) and therefore one cannot run exact Grover search. Instead, we consider a well-studied technique [23], in which one makes a random number of \(M\) queries to the Grover oracle, where \(M\) is uniformly distributed between \(0\) and a chosen threshold \(T_{G}\). This search succeeds with non-negligible probability if the amplitude of the projection of the initial state onto the subspace spanned by the "good" states (the "good" subspace) is \(\Omega(1/T_{G})\). For an equivalence query \(h\), this amplitude is \(\sqrt{\mathcal{D}(\{x:c(x)\neq h(x)\})}\), which could be arbitrarily small (as \(\mathcal{D}\) is arbitrary). Hence, it may take an arbitrarily large (expected) number of iterations of Grover's search (and hence oracle calls) to run a classical equivalence query learning algorithm.
To solve this issue, we show how to use equivalence queries that succeed only with a constant probability, called imperfect equivalence queries, to PAC learn a concept. We can then run these imperfect equivalence queries using Grover search. We use a classical (ideal) equivalence query algorithm, replacing equivalence queries with repeated imperfect equivalence queries, but with a maximum imperfect equivalence query budget \(R\). Suppose that the algorithm requires equivalence queries to hypotheses \(h_{1},\ldots h_{k}\). If we successfully run an equivalence query for every hypothesis, then the classical algorithm succeeds, and we use its output. Otherwise, we hit the imperfect equivalence query budget \(R\) and must terminate the classical algorithm early. By choosing \(R\) sufficiently large, we can be sure that if we hit the budget, most of the imperfect equivalence queries were spent on hypotheses \(h_{i}\) that are "close" to \(c\) (and hence have a low chance of the Grover search succeeding). Thus if we take the "average" of the hypotheses \(h_{i}\) weighted by the number of imperfect equivalence queries spent on each hypothesis, we also output a classifier close to \(c\).
To conclude the section, we sketch a proof of our lower bound. We consider an arbitrary concept class \(\mathcal{C}\) of VC dimension \(d\). We note that there is a shattered set \(Y\subseteq\mathcal{X}\) of size \(d\), and take \(\mathcal{D}\) to be a "perturbed" delta-function distribution on \(Y\). We can thus think of concepts \(c\) in \(\mathcal{C}\) as bit strings of length \(d\), where the bit string describes \(c\)'s action on \(Y\). Since \(Y\) is shattered by \(\mathcal{C}\), all possible bit strings will appear. Any candidate PAC algorithm must be able to recover most of the bit string with high probability. We reduce to a known problem by introducing a weak phase-kickback oracle for the bit string, which we use to implement the PAC oracle. We can then use a standard lower bound [20] on recovering a bit string with high probability using a weak phase kickback oracle.
### Open problems
This work leaves several interesting avenues for further research. Firstly, one could attempt to tighten the upper bound (5) to remove polylogarithmic factors and prove a tight matching lower bound. The removal of a \(\log(1/\epsilon)\) factor in the query complexity for classical PAC learning took 27 years [3, 4]; we hope that the quantum case will be simpler. Moreover, in order to achieve \(1/\sqrt{\epsilon}\) scaling with our method, one would require the optimal classical equivalence query learning complexity to have no \(\epsilon\) dependence and thus, a different approach is likely to be required.
It is interesting to consider the power of quantum learning algorithms with access to the oracle \(Q_{c}\), but not its inverse \(Q_{c}^{\dagger}\). The inverse oracle seems necessary for Grover's search, and thus it is unclear if a quantum advantage is possible. The lack of such an advantage would have interesting implications for understanding what makes quantum computing more powerful than classical computation.
Finally, one could consider the implications of this work to generic advantages in more practical models of quantum machine learning, such as quantum neural networks.
### Organisation
We first cover all required technical preliminaries in Section 2. In Section 3, we cover our Grover subroutine that leads to the quadratic advantage. Equivalence queries and how to use imperfect equivalence queries in a classical learning algorithm are both described in Section 4. Using the results of these two sections, we derive the upper bound (5) in Section 5; we prove an almost matching lower bound on our quantum model in Section 6, using a reduction to a phase oracle problem. Finally, we consider the application of our algorithm to learning \(k-\)juntas in Section 7.
## 2 Preliminaries
We will only consider functions defined on finite sets. We first introduce the standard, classical model of PAC learning [1]. For a finite set \(\mathcal{X}\), let \(\{0,1\}^{\mathcal{X}}=\{f:\mathcal{X}\rightarrow\{0,1\}\}\), an element \(f\in\{0,1\}^{\mathcal{X}}\) is called a classifier. We wish to _approximately_ learn an unknown classifier \(c\) from a known subset of classifiers \(\mathcal{C}\subseteq\{0,1\}^{\mathcal{X}}\), where \(\mathcal{C}\) is called a concept class.
There is an unknown distribution \(\mathcal{D}\) on \(\mathcal{X}\), where \(\mathcal{D}(x)\) denotes the probability of drawing \(x\) from \(\mathcal{X}\). The distance between two classifiers is defined as the probability they disagree: \(d(h_{1},h_{2}):=\mathbb{P}_{X\sim\mathcal{D}}\left[h_{1}(X)\neq h_{2}(X)\right]\). For a fixed tolerance \(\epsilon>0\) we say a classifier \(h\in\{0,1\}^{\mathcal{X}}\) is \(\epsilon\)-_approximately correct_ if \(d(h,c)\leq\epsilon\).
A learning algorithm \(\mathcal{A}\) has access to some oracle that gives information about \(c\). Traditionally, one assumes that the oracle generates a labelled example \((X,c(X))\) at random, where \(X\) is distributed according to \(\mathcal{D}\). We will consider an additional type of oracle in section 4. The sample complexity of \(\mathcal{A}\) is the number of labelled examples it receives.
For a fixed error probability \(\delta\), we say that an algorithm \(\mathcal{A}\) is an \((\epsilon,\delta)\) learner if, with probability at least \(1-\delta\) (over the randomness of the algorithm), the algorithm outputs an \(\epsilon\)-approximately correct hypothesis, _for every possible \(c\) and \(\mathcal{D}\)_.
For a fixed concept class \(\mathcal{C}\) and \(\epsilon,\delta>0\), one wishes to find an \((\epsilon,\delta)\)-learner with minimum sample complexity. The optimal sample complexity will depend on \(\epsilon,\delta\) and some measure of complexity of the class \(\mathcal{C}\), which we now define. For a subset \(Y\subseteq\mathcal{X}\), we define \(\mathcal{C}|_{Y}:=\{c|_{Y}:c\in\mathcal{C}\}\) as the restriction of the concept class to \(Y\). We say that \(\mathcal{C}\) shatters \(Y\) if \(\mathcal{C}|_{Y}=\{0,1\}^{Y}\), i.e., if all possible labellings of \(Y\) appear in concepts in \(\mathcal{C}\). The Vapnik-Chervonenkis (VC) dimension [2] of \(\mathcal{C}\), denoted \(d\), is the maximum size of a shattered set, that is
\[d=\max\{|Y|:Y\text{ is shattered by }\mathcal{C}\}. \tag{7}\]
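For intuition, the VC dimension of a small, explicitly enumerated concept class can be computed by brute force directly from this definition; the sketch below is exponential in \(|\mathcal{X}|\) and intended purely as a toy illustration.

```python
from itertools import combinations

def vc_dimension(X, concepts):
    """Brute-force VC dimension of a concept class over a finite domain X.

    X: list of domain points; concepts: list of functions x -> {0, 1}.
    A set Y is shattered if restricting the concepts to Y yields all 2^|Y| labellings.
    """
    d = 0
    for size in range(1, len(X) + 1):
        if any(len({tuple(c(x) for x in Y) for c in concepts}) == 2 ** size
               for Y in combinations(X, size)):
            d = size          # some set of this size is shattered
        else:
            break             # subsets of shattered sets are shattered, so we can stop
    return d

# Toy check: threshold classifiers on {0, ..., 9} have VC dimension 1.
X = list(range(10))
thresholds = [lambda x, t=t: int(x >= t) for t in range(11)]
print(vc_dimension(X, thresholds))   # -> 1
```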
In [3, 4], it was shown that the optimal sample complexity using labelled examples, denoted \(T_{C}(\epsilon,\delta,d)\) scales as
\[T_{C}=\Theta\left[\frac{1}{\epsilon}\left(d+\log\left(\frac{1}{\delta}\right) \right)\right]. \tag{8}\]
In the quantum PAC setting [5], one assumes that the data is stored coherently, i.e., one considers the state
\[|\psi_{c}\rangle:=\sum_{x\in\mathcal{X}}\sqrt{\mathcal{D}(x)}\left|x\ c(x) \right\rangle, \tag{9}\]
chosen so that measuring \(|\psi_{c}\rangle\) in the computational basis gives a random labelled example. Instead of the classical sample complexity, one considers the minimum number of copies \(T_{S}(\epsilon,\delta,d)\) of \(|\psi_{c}\rangle\) required to PAC learn \(\mathcal{C}\). Since one can always measure the state in place of a call to a classical oracle, \(T_{S}\) is, at worst, the optimal sample complexity of a classical algorithm. In fact, Arunachalam and de Wolf [11] showed that there is no (asymptotic) quantum advantage from using states instead of oracle calls - the optimal \(T_{S}\) grows exactly as in equation (8).
We assume a stronger model, in which one has access to an oracle \(Q_{c}\) (which depends on the underlying concept), defined by its action on a fixed known input state \(|\mathrm{IN}\rangle\) (independent of the underlying concept):
\[Q_{c}\left|\mathrm{IN}\right\rangle=|\psi_{c}\rangle=\sum_{x\in\mathcal{X}} \sqrt{\mathcal{D}(x)}\left|x\ c(x)\right\rangle. \tag{10}\]
This is similar in spirit to the recent work [20], which deals with state tomography with a state preparation unitary. We also assume that the algorithm has access to the inverse of the oracle, \(Q_{c}^{\dagger}\). This is relevant if, for example, \(Q_{c}\) is given as a quantum circuit of one or two-qubit gates; in this case, \(Q_{c}^{\dagger}\) may be constructed by reversing the order of the gates and replacing each with its inverse. We define the learning complexity of any algorithm as the total number of queries to \(Q_{c}\) or \(Q_{c}^{\dagger}\). The minimum learning complexity of any \((\epsilon,\delta)\)-learner is denoted \(T_{O}(\epsilon,\delta,\mathcal{C})\).
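For intuition, on a toy domain such an oracle can be realised classically as an explicit unitary matrix whose first column is \(|\psi_{c}\rangle\), taking \(|\mathrm{IN}\rangle\) to be the first computational basis state. The basis ordering \(|x\ b\rangle\mapsto 2x+b\) and the QR-based completion are choices made only for this sketch.

```python
import numpy as np

def make_oracle(D, c):
    """Return a unitary matrix Q_c with Q_c |IN> = sum_x sqrt(D(x)) |x, c(x)>.

    D: dict mapping x in {0, ..., n-1} to probabilities; c: dict mapping x to {0, 1}.
    |IN> is the first computational basis state; the basis index of |x, b> is 2x + b.
    """
    n = len(D)
    psi = np.zeros(2 * n)
    for x, p in D.items():
        psi[2 * x + c[x]] = np.sqrt(p)
    A = np.eye(2 * n)
    A[:, 0] = psi                      # complete psi to an orthonormal basis via QR
    Q, _ = np.linalg.qr(A)
    Q[:, 0] *= np.sign(Q[:, 0] @ psi)  # fix the sign so that the first column is psi
    return Q

D = {0: 0.5, 1: 0.25, 2: 0.25}
c = {0: 1, 1: 0, 2: 1}
Qc = make_oracle(D, c)
IN = np.eye(6)[0]
print(np.round(Qc @ IN, 3))            # amplitudes sqrt(D(x)) at the positions |x, c(x)>
print(np.allclose(Qc.T @ Qc, np.eye(6)))  # Q_c is unitary, so Q_c^dagger is also available
```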
The lower bound of [11] does not apply to a model with access to \(Q_{c}\), as it assumes the quantum algorithm is described by a POVM that is _independent of the underlying concept_\(c\). However, \(Q_{c}\) explicitly depends on \(c\) and thus, any algorithm (or POVM) that calls \(Q_{c}\) will violate the assumptions in [11]. Hence, one can hope for quantum advantage in this setting.
We recap all of the different learning models considered in Table 1.
We end the preliminaries section with a recap of Grover's algorithm. For a subspace \(\mathcal{V}\) of a Hilbert space \(\mathcal{H}\), let \(\Pi_{\mathcal{V}}\) be the orthogonal projection map onto \(\mathcal{V}\). Furthermore, let \(I_{\mathcal{V}}\) be the reflection operator in \(\mathcal{V}^{\perp}\), given by
\[I_{\mathcal{V}}=\mathbb{1}-2\Pi_{\mathcal{V}}. \tag{11}\]
For a state \(\ket{\psi}\), let \(I_{\ket{\psi}}\) be the reflection operator when \(\mathcal{V}=\text{span}\{\ket{\psi}\}\).
Grover search takes as its input a "good" subspace \(\mathcal{G}\subseteq\mathcal{H}\), and an input state \(\ket{\psi}\). One then implements the Grover operator:
\[D=-I_{\ket{\psi}}I_{\mathcal{G}}. \tag{12}\]
The state \(\ket{\psi}\) can be decomposed as
\[\ket{\psi}=\sin(\theta)\ket{g}+\cos(\theta)\ket{b}, \tag{13}\]
where \(\ket{g},\ket{b}\) are orthonormal, \(\theta\in[0,\pi/2]\), \(\ket{g}\in\mathcal{G},\ket{b}\in\mathcal{G}^{\perp}\). It is well-known [24] that
\[D^{n}\ket{\psi}=\sin((2n+1)\theta)\ket{g}+\cos((2n+1)\theta)\ket{b}. \tag{14}\]
and thus if one knows \(\theta\) exactly, one can apply \(D^{n}\) such that \(\sin((2n+1)\theta)\approx 1\).
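As a numerical sanity check of equation (14), the sketch below builds both reflections as explicit matrices on a toy 8-dimensional space and verifies the rotation of the amplitude in the good subspace; this is a purely classical simulation and not an implementation of any oracle.

```python
import numpy as np

def grover_operator(psi, good, dim):
    """D = -I_psi I_G with I_V = 1 - 2 Pi_V, as in equations (11) and (12)."""
    Pi_G = np.zeros((dim, dim))
    for i in good:
        Pi_G[i, i] = 1.0
    I_G = np.eye(dim) - 2 * Pi_G
    I_psi = np.eye(dim) - 2 * np.outer(psi, psi)
    return -I_psi @ I_G

dim, good = 8, [0, 2]                        # toy good subspace spanned by |0> and |2>
psi = np.ones(dim) / np.sqrt(dim)            # uniform superposition as the input state
D = grover_operator(psi, good, dim)
theta = np.arcsin(np.linalg.norm(psi[good]))  # sin(theta) = norm of the good component

state = psi.copy()
for n in range(1, 4):
    state = D @ state
    # the amplitude in the good subspace after n applications is sin((2n + 1) theta)
    assert np.isclose(np.linalg.norm(state[good]), abs(np.sin((2 * n + 1) * theta)))
print("rotation formula of equation (14) verified")
```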
## 3 Grover Subroutine
An essential subroutine for our quantum advantage is to use calls to \(Q_{c}\) and \(Q_{c}^{\dagger}\) to run a Grover search [24, 25]. This leads to a quadratic improvement in learning complexity (up to polylogarithmic factors) over classical PAC learning. In this section, we describe our Grover subroutine.
Our Grover subroutine takes as an input a "good" subset \(G\subseteq\{(x,b):x\in\mathcal{X},b\in\{0,1\}\}\), where we wish to find an \(x\) such that \((x,c(x))\in G\). We define a corresponding "good" subspace by
\[\mathcal{G}=\text{span}\{\ket{x\ b}:(x,b)\in G\}. \tag{15}\]
In order to implement Grover's search, we need to implement the Grover operator, as defined in equation (12). We show that implementing \(D\) requires a constant number of queries.
**Lemma 3.1**: _One can implement the Grover operator \(D\) with one call to \(Q_{c}\) and one to \(Q_{c}^{\dagger}\)._
_Proof:_ Note that \(I_{\mathcal{G}}\) is independent of \(c\) and, therefore, may be implemented by a (possibly exponentially sized circuit) without any queries. To implement \(I_{\ket{\psi_{c}}}\), note that
| Model | Quantum or Classical | Learning resource | Optimal \((\epsilon,\delta)\) learner complexity | Bounds on optimal learner complexity |
| --- | --- | --- | --- | --- |
| Labelled examples | Classical | Sample \((X,C(X))\) where \(X\sim\mathcal{D}\) | \(T_{C}\) | \(\Theta\left[\frac{1}{\epsilon}\left(d+\log\left(\frac{1}{\delta}\right)\right)\right]\) |
| Equivalence queries | Classical | See Section 4 | \(T_{E}\) | \(O\left(\left[d+\log\left(\frac{1}{\delta}\right)\right]\log^{9}\left(\frac{1}{\epsilon}\right)\right)\) |
| Imperfect equivalence queries | Classical | See Section 4 | \(T_{IE}\) | \(O(T_{E})\) |
| Quantum samples | Quantum | Copy of \(\ket{\psi_{c}}\) | \(T_{S}\) | \(\Theta(T_{C})\) |
| Quantum oracle calls | Quantum | Application of \(Q_{c}\) or \(Q_{c}^{\dagger}\) | \(T_{O}\) | \(O(\frac{1}{\sqrt{\epsilon}}T_{IE})\), \(\Omega(\frac{d}{\sqrt{\epsilon}})\) |

Table 1: Different learning models considered in our work. \(T_{M}\) corresponds to the minimum number of resources needed by any \((\epsilon,\delta)\)-learner in model \(M\).
\[I_{\left|\psi_{c}\right\rangle}=\mathbb{1}-2\left|\psi_{c}\right\rangle\!\left\langle\psi_{c}\right|, \tag{16}\]
\[=Q_{c}(\mathbb{1}-2\left|\mathrm{IN}\right\rangle\!\left\langle\mathrm{IN}\right|)Q_{c}^{\dagger}, \tag{17}\]
\[=Q_{c}\,I_{\left|\mathrm{IN}\right\rangle}\,Q_{c}^{\dagger}. \tag{18}\]
Note that \(I_{\left|\mathrm{IN}\right\rangle}\) is independent of \(c\) and, therefore, may be implemented by a (possibly exponentially sized circuit) without any queries.
We decompose
\[\left|\psi_{c}\right\rangle=\sin(\theta)\left|g\right\rangle+\cos(\theta)\left| b\right\rangle, \tag{19}\]
where \(\left|g\right\rangle,\left|b\right\rangle\) are orthonormal, \(\theta\in[0,\pi/2]\), \(\left|g\right\rangle\in\mathcal{G},\left|b\right\rangle\in\mathcal{G}^{\perp}\). If we knew \(\theta\) exactly, we could apply \(D^{n}\) such that \(\sin((2n+1)\theta)\approx 1\). However, since \(\theta\) depends on \(\mathcal{D}\), which is unknown, this is impossible. Instead, we use the well-established [23] version of Grover's search for an unknown number of items. Our exact subroutine is given below; Algorithm 1.
```
Algorithm 1
Input: \(G\subseteq\{(x,b):x\in\mathcal{X},b\in\{0,1\}\}\) a good subset, \(\epsilon>0\) a tolerance
Output: a labelled example \((x,c(x))\); succeeds if \((x,c(x))\in G\)
1. Produce \(\left|\psi_{c}\right\rangle=Q_{c}\left|\mathrm{IN}\right\rangle\)
2. Pick \(N\) from \(0,1\ldots,\lceil 2/\sqrt{\epsilon}\rceil-1\) uniformly at random
3. Apply \(D\), the Grover operator, \(N\) times to \(\left|\psi_{c}\right\rangle\)
4. Measure the resulting state in the computational basis
```
The properties of our algorithm are summarised in the following theorem.
**Theorem 3.2**_Let \(G\subseteq\{(x,b):x\in\mathcal{X},b\in\{0,1\}\}\) be a good subset, \(\epsilon>0\) be a fixed tolerance. Suppose that we run Algorithm 1 with these inputs, then_
1. _In the worst case, the algorithm makes_ \(O(1/\sqrt{\epsilon})\) _oracle (or inverse oracle) calls_
2. _If_ \(\mathbb{P}_{X\sim\mathcal{D}}\left[(X,c(X))\in G\right]\geq\epsilon\) _then the algorithm succeeds, i.e., returns_ \((x,c(x))\in G\)_, with probability at least_ \(p=0.09\)_._
3. _Conditional on succeeding, the output of the algorithm_ \((X,c(X))\) _is distributed according to_ \[\mathbb{P}\left[(X,c(X))|\text{algorithm succeeds}\right]=\frac{\mathbb{P}_{X \sim\mathcal{D}}\left[X\right]}{\mathbb{P}_{X\sim\mathcal{D}}\left[(X,c(X)) \in G\right]}.\] (20)
_Proof:_
Part \((i)\): From the definition of the algorithm and Lemma 3.1, the worst case number of oracle calls is \(1+2(\lceil 2/\sqrt{\epsilon}\rceil-1)=O(1/\sqrt{\epsilon})\).
Part \((ii)\): Let \(M=\lceil 2/\sqrt{\epsilon}\rceil\), let \(\theta\) be as in equation (19) and let \(p_{s}(\theta)\) be the probability that the algorithm succeeds. Note that \(\mathbb{P}_{X\sim\mathcal{D}}\left[(X,c(X))\in G\right]\geq\epsilon\Leftrightarrow \sin(\theta)\geq\sqrt{\epsilon}\). We use Lemma 2 (section 6) from [23], which claims
\[p_{s}(\theta)=\frac{1}{2}-\frac{1}{4M}\frac{\sin(4M\theta)}{\sin(2\theta)}. \tag{21}\]
For \(\sin(\theta)\in[\sqrt{\epsilon},1/\sqrt{2}]\):
\[M \geq\frac{2}{\sin(\theta)}, \tag{22}\] \[\geq\frac{1}{\sin(2\theta)}, \tag{23}\]
and thus
\[p_{s}(\theta)\geq\frac{1}{2}-\frac{1}{4}=\frac{1}{4}>0.09. \tag{24}\]
Note that for \(\theta\in[\pi/4,\pi/2]\),
\[\sin(2\theta)\geq\frac{\pi/2-\theta}{\pi/4}, \tag{25}\]
Thus for \(\theta\in[\pi/4,(1/2-1/4M)\pi]\), we have that
\[p_{s}(\theta) \geq\frac{1}{2}-\frac{1}{4M}\cdot\frac{4/\pi}{\pi/2-(1/2-1/4M)\pi}, \tag{26}\] \[=\frac{1}{2}-\frac{4}{\pi^{2}}>0.09. \tag{27}\]
Finally, for \(\theta\in[(1/2-1/4M)\pi,\pi/2]\), note that \(\sin(2\theta)\geq 0\) and \(\sin(4M\theta)\leq 0\) so that \(p_{s}(\theta)\geq 1/2>0.09\).
Part \((iii)\). This follows from the form of \(D^{n}\left|\psi_{c}\right\rangle\); the relative magnitude of the amplitudes in \(\left|g\right\rangle\) is unchanged by the Grover operator \(D\).
We discuss how to combine the Grover subroutine with the algorithm of section 4 to achieve a quantum learning complexity of equation (5) in section 5.
## 4 Learning with imperfect equivalence queries
Equivalence queries are an alternative learning model for PAC learning. It was recently shown [22] that PAC learning with equivalence queries gives an exponential advantage over learning with labelled examples. In this section, we show how to use imperfect equivalence queries to PAC learn a concept class.
**Definition 4.1** An (ideal) equivalence query consists of submitting a candidate hypothesis \(h\) for an underlying true concept \(c\). If \(h=c\) then we are told YES. Otherwise, we receive a labelled example \((x,c(x))\) where \(c(x)\neq h(x)\) at random according to the distribution \(\mathbb{P}(y)=\mathcal{D}(y)/\mathcal{D}(\{x:c(x)\neq h(x)\})\). Such a labelled example where \(h(x)\neq c(x)\) is called a counterexample.
Equivalence queries are a very strong learning model, which is perhaps unrealistic. Thus, we assume we can only implement them probabilistically:
**Definition 4.2** An imperfect equivalence query consists of submitting a candidate hypothesis \(h\) for the underlying concept \(c\). In return we receive some labelled example \((x,c(x))\) with the following promises
* The distribution of \((X,c(X))\)_conditional on being a counterexample_ is the same as an ideal equivalence query.
* If \(d(h,c)\geq\epsilon\) then with some constant probability \(p\) we receive a counterexample.
Note that we can tell whether our imperfect equivalence query failed or not - we can look at the result \((x,c(x))\) and check whether \(h(x)=c(x)\). If they are equal, the equivalence query failed. Otherwise, it succeeded. Classically, we can implement an imperfect equivalence query using \(1/\epsilon\) random labelled examples - we just sample \(1/\epsilon\) times and see whether \(c(x)\neq h(x)\) for any of our samples. On a quantum computer we can do this in \(1/\sqrt{\epsilon}\) time using Grover's algorithm, as described in section 3 in Theorem 3.2.
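A minimal Python sketch of this classical sampling implementation is given below; the concept, hypothesis and sampling objects are hypothetical stand-ins introduced purely for illustration.

```python
import numpy as np

def classical_imperfect_equivalence_query(h, c, sample, eps, rng):
    """Imperfect equivalence query built from ceil(1/eps) random labelled examples.

    `h` and `c` map an example x to a label in {0, 1}; `sample(rng)` draws x from
    the (unknown) distribution D.  If d(h, c) >= eps, at least one of the draws
    is a counterexample with constant probability (at least 1 - 1/e).
    """
    for _ in range(int(np.ceil(1 / eps))):
        x = sample(rng)
        if h(x) != c(x):
            return x, c(x)        # query succeeded: counterexample found
    return x, c(x)                # query failed: h agrees with c on the last draw

# Toy usage: concepts are threshold functions on [0, 1), D is uniform.
rng = np.random.default_rng(1)
c = lambda x: int(x >= 0.30)      # true concept (illustrative)
h = lambda x: int(x >= 0.45)      # candidate hypothesis, d(h, c) = 0.15
x, label = classical_imperfect_equivalence_query(h, c, lambda r: r.random(), 0.05, rng)
print("counterexample" if h(x) != label else "no counterexample", (x, label))
```

Because the first disagreement found is returned, the output conditional on being a counterexample is distributed exactly as in Definition 4.1, as required by Definition 4.2.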
We need one additional tool from classical learning theory to run our algorithm:
**Definition 4.3** Suppose we have a set of classifiers \(\mathcal{H}\subseteq\{0,1\}^{\mathcal{X}}\) and a distribution \(\rho\) on \(\mathcal{H}\). Then the weighted majority vote [26], \(\operatorname{WMV}_{\mathcal{H},\,\rho}\in\{0,1\}^{\mathcal{X}}\) is defined such that it maximises
\[\mathbb{P}_{h\sim\rho}\big{[}\operatorname{WMV}_{\mathcal{H},\,\rho}(x)=h(x) \big{]}\,, \tag{28}\]
for every \(x\) (ties can be broken arbitrarily).
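For a finite hypothesis set held in memory, this definition can be transcribed directly; the following Python sketch (illustrative names only) breaks ties towards the label 1.

```python
def weighted_majority_vote(hypotheses, weights):
    """Return WMV_{H,rho}: on each x, output the label carrying the larger
    total weight among the hypotheses' predictions.

    `hypotheses` is a list of callables x -> {0, 1} and `weights` holds the
    corresponding probabilities rho(h); ties are broken towards 1.
    """
    def vote(x):
        w1 = sum(w for h, w in zip(hypotheses, weights) if h(x) == 1)
        return 1 if 2 * w1 >= sum(weights) else 0
    return vote

# Example: three hypotheses on integers with weights 0.5, 0.3 and 0.2.
hs = [lambda x: x % 2, lambda x: 1, lambda x: 0]
wmv = weighted_majority_vote(hs, [0.5, 0.3, 0.2])
print([wmv(x) for x in range(4)])   # prints [0, 1, 0, 1]
```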
Suppose we have a classical algorithm \(\mathcal{A}\) that uses \(T_{E}(\epsilon,\delta,d)\) (ideal) equivalence queries to PAC learn a concept class \(\mathcal{C}\). We show how to use \(O(T_{E}+\log(1/\delta))\) imperfect equivalence queries to PAC learn the same concept class.
The full detail of the algorithm is given below in algorithm 2. It works by running \({\cal A}\), replacing every equivalence query with repeated imperfect equivalence queries until one succeeds. We terminate if the learning algorithm \({\cal A}\) terminates or if we make a total of \(R(T_{E},\delta)\) imperfect equivalence queries.
We give some rough intuition for why the algorithm works before moving to prove so. If \({\cal A}\) terminates, then with high probability, it outputs an approximately correct hypothesis. If we pick \(R\) large enough, then with high probability \(T_{E}\) ideal queries to hypotheses \(h_{i}\) with \(d(h_{i},c)\geq\epsilon\) would all succeed in \(<R/3\) imperfect equivalence queries. Thus, if the algorithm \({\cal A}\) does not terminate and we make \(R\) total imperfect equivalence queries, with high probability, we spent \(>2/3\) of our imperfect equivalence queries on hypotheses \(h_{i}\) with \(d(h_{i},c)<\epsilon\). Hence, if we take the weighted majority vote of all of the hypotheses we queried, weighted by the number of imperfect equivalence queries spent on each hypothesis, most of the vote will be decided by hypotheses that are close to the concept \(c\). Thus, the weighted majority vote will also be close to \(c\).
The full proof of why algorithm 2 works is given as two lemmas. Before these, we introduce some terminology.
**Definition 4.4** A transcript of a run of algorithm 2 is given by the list of hypotheses \({\cal H}=\{h_{i}\}\) that the algorithm queried along with a corresponding collection of natural numbers \(n_{i}>0\), where \(n_{i}\) is the number of imperfect equivalence queries spent on \(h_{i}\).
The time-spent distribution \(\rho\) is the probability distribution on \({\cal H}\) given by \(\rho(h_{i})=n_{i}/\sum_{i}n_{i}\).
Finally, \(F=\{i:d(h_{i},c)\geq\epsilon\}\) is called the "feasible" set, where our imperfect equivalence query succeeds with probability at least \(p\). Correspondingly \(I=\{i:d(h_{i},c)<\epsilon\}\) is the "infeasible" set, where there is no promise on the probability of success.
Firstly, we show that, with high probability, a bounded number of queries is spent on the feasible set.
**Lemma 4.5**: _With probability \(\geq 1-\delta\) the total number of imperfect equivalence queries to feasible hypotheses is at most_
\[2T_{E}/p+(1/2p^{2})\log(1/\delta). \tag{29}\]
_Proof:_ An imperfect equivalence query of a feasible hypothesis has (by definition) a chance \(\geq p\) of succeeding, and the individual imperfect equivalence queries are independent. Additionally, there are at most \(T_{E}\) feasible hypotheses to query (since the classical algorithm makes at most \(T_{E}\) total equivalence queries). Thus, the probability that we succeed on all the feasible hypotheses using at most \(m\) imperfect queries to feasible hypotheses is lower bounded by the probability of getting at least \(T_{E}\) successes from a binomial distribution \(B(m,p)\). Thus, the chance of failure is upper bounded by the chance of fewer than \(T_{E}\) successes from \(B(m,p)\).
Let \(X\sim B(m,p)\). Applying Hoeffding's inequality [27], for \(m\geq T_{E}/p\) we see that
\[\mathbb{P}\left[X<T_{E}\right]\leq e^{-2m(p-T_{E}/m)^{2}}. \tag{30}\]
Thus it is sufficient for
\[2m\left(p-\frac{T_{E}}{m}\right)^{2}\geq\log(1/\delta). \tag{31}\]
In turn, expanding the square and dropping the positive term \(2T_{E}^{2}/m\), it is sufficient that
\[2mp^{2}-4pT_{E}\geq\log(1/\delta), \tag{32}\]
whence solving for \(m\) gives the bound in the statement of the lemma.
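As a numerical illustration with assumed values \(p=0.09\), \(T_{E}=50\) and \(\delta=10^{-3}\) (chosen only for concreteness), the exact binomial tail confirms that the failure probability is far below \(\delta\):

```python
from scipy.stats import binom
import math

p, T_E, delta = 0.09, 50, 1e-3                       # assumed illustrative values
m = math.ceil(2 * T_E / p + math.log(1 / delta) / (2 * p ** 2))
# Probability of fewer than T_E successes in m imperfect queries to feasible
# hypotheses, each succeeding with probability at least p.
print(m, binom.cdf(T_E - 1, m, p))                   # second number is far below delta
```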
Next we prove that if we make enough imperfect equivalence queries on infeasible hypotheses, the weighted majority vote of the transcript must be close to the underlying concept \(c\)
**Lemma 4.6**: _Suppose we spend at least \(2R/3\) imperfect equivalence queries on infeasible hypotheses. Then the weighted majority vote \(M\) of the transcript with the time-spent distribution has \(d(M,c)<4\epsilon\)._
_Proof:_ Fix the transcript \(h_{1},\ldots h_{k}\). Let \(\rho\) be the time-spent distribution and let \(\rho^{\prime}\) be the time-spent distribution conditioned on the infeasible set. That is, for \(i\in I\), \(\rho^{\prime}(h_{i})=\rho(h_{i})/\rho(I)\). Similarly, let \(\tilde{\rho}\) be the time-spent distribution conditioned on the feasible set. We first show that if the infeasible set
overwhelmingly votes for a bit \(y\), then the whole transcript must also vote for that \(y\). To be precise, suppose that \(\mathbb{P}_{h\sim\rho^{\prime}}\left[h(x)=y\right]>3/4\), then
\[\mathbb{P}_{h\sim\rho}\left[h(x)=y\right] =\mathbb{P}_{h\sim\rho^{\prime}}\left[h(x)=y\right]\mathbb{P}_{h\sim\rho}\left[h\in I\right]+\mathbb{P}_{h\sim\tilde{\rho}}\left[h(x)=y\right]\mathbb{P}_{h\sim\rho}\left[h\in F\right], \tag{33}\] \[>\frac{3}{4}\cdot\frac{2}{3},\] (34) \[=\frac{1}{2}. \tag{35}\]
Letting \(M=\mathrm{WMV}_{\mathcal{H},\,\rho}\), we deduce (inspired by [26]) that
\[\mathbb{P}_{X\sim\mathcal{D}}\left[M(X)\neq c(X)\right] \leq\mathbb{P}_{X\sim\mathcal{D}}\left[\mathbb{P}_{h\sim\rho^{ \prime}}\left[h(X)\neq c(X)\right]\geq\frac{1}{4}\right], \tag{36}\] \[\text{Markov's inequality}, \leq 4\mathbb{E}_{X\sim\mathcal{D}}\mathbb{E}_{h\sim\rho^{\prime}} [\mathbbm{1}_{\{h(X)\neq c(X)\}}],\] (37) \[=4\mathbb{E}_{h\sim\rho^{\prime}}[d(h,c)],\] (38) \[\text{definition of infeasible set}, <4\epsilon \tag{39}\]
We can now prove the performance of our algorithm
**Theorem 4.7**_Let the maximum number of imperfect equivalence queries of algorithm 2 be_
\[R(T_{E}(\epsilon,\delta,d),\delta)=6T_{E}(\epsilon,\delta,d)/p+(3/2p^{2})\log(1/\delta), \tag{40}\]
_then algorithm 2 produces a hypothesis \(h\) with \(d(h,c)\leq 4\epsilon\) with probability at least \(1-2\delta\)._
_Proof:_ By Lemma 4.5, with probability \(\geq 1-\delta\) we spend at most \(R/3\) imperfect equivalence queries on feasible hypotheses - suppose this happens. If we succeed in an equivalence query for every hypothesis required by \(\mathcal{A}\) then with probability at least \(1-\delta\), \(\mathcal{A}\) outputs a hypothesis \(h\) with \(d(h,c)\leq\epsilon\). Otherwise, we spend at least \(2R/3\) imperfect equivalence queries on infeasible hypotheses (as we assumed the feasible ones took at most \(R/3\) imperfect equivalence queries) and then by Lemma 4.6 the weighted majority vote \(\mathrm{WMV}_{\mathcal{H},\,\rho}\) has \(d(\mathrm{WMV}_{\mathcal{H},\,\rho},c)<4\epsilon\). Thus algorithm 2 outputs a \(4\epsilon\)-approximately correct hypothesis with probability at least \((1-\delta)^{2}\geq 1-2\delta\).
**Algorithm 2:**
**Input:**\(\delta>0,\epsilon>0\) (the usual PAC parameters) and \(\mathcal{A}\) a classical equivalence query learning algorithm with worst case query complexity \(T_{E}>0\)
**Output:** Hypothesis \(h\in\{0,1\}^{\mathcal{X}}\)
1. Set the maximum imperfect equivalence query budget as \(R=6T_{E}/p+(3/2p^{2})\log(1/\delta)\). If \(R\) total imperfect equivalence queries have ever been made, go to step 3
2. Run \(\mathcal{A}\), whenever it requires an equivalence query to a hypothesis \(h\), repeatedly make imperfect equivalence queries until one succeeds. If \(\mathcal{A}\) terminates, output the output of \(\mathcal{A}\)
3. Let \(\mathcal{H}=\{h_{1},\ldots,h_{k}\}\) be the set of hypotheses we ran imperfect equivalence queries on (so that \(k\leq T_{E}\)). Suppose we spent \(n_{i}\) imperfect equivalence queries on \(h_{i}\) (so that \(\sum n_{i}=R\)). Let \(\rho(h_{i})=n_{i}/R\) and output \(h=\mathrm{WMV}_{\mathcal{H},\,\rho}\)
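The control flow of Algorithm 2 can also be sketched directly in code. In the Python illustration below, `learner` stands for the classical equivalence-query algorithm \(\mathcal{A}\) and `imperfect_eq` for an imperfect equivalence query; both interfaces are hypothetical and are introduced only to make the bookkeeping of the transcript explicit.

```python
import math

def run_with_imperfect_queries(learner, imperfect_eq, T_E, delta, p):
    """Sketch of Algorithm 2 (hypothetical interfaces, for illustration only).

    `learner` is assumed to expose: finished(), current_hypothesis(),
    process_counterexample(x, y) and output().  `imperfect_eq(h)` returns a
    labelled example (x, y); the query succeeded iff h(x) != y.
    """
    R = math.ceil(6 * T_E / p + 1.5 * math.log(1 / delta) / p ** 2)  # budget, eq. (40)
    transcript, counts = [], []   # hypotheses h_i and the n_i queries spent on each
    queries = 0
    while not learner.finished() and queries < R:
        h = learner.current_hypothesis()
        if not transcript or transcript[-1] is not h:
            transcript.append(h)
            counts.append(0)
        counts[-1] += 1
        queries += 1
        x, y = imperfect_eq(h)
        if h(x) != y:                      # success: hand the counterexample to A
            learner.process_counterexample(x, y)
    if learner.finished():
        return learner.output()
    # Budget exhausted: weighted majority vote of the transcript, with each h_i
    # weighted by the time n_i spent on it (rho(h_i) = n_i / R).
    def wmv(x):
        w1 = sum(n for h, n in zip(transcript, counts) if h(x) == 1)
        return 1 if 2 * w1 >= sum(counts) else 0
    return wmv
```

The final branch is exactly the weighted majority vote of Definition 4.3 taken with the time-spent distribution of Definition 4.4.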
## 5 Upper bound on quantum learning complexity
Here, we combine the results of sections 3 and 4 to give an upper bound on \(T_{O}\), the learning complexity of PAC learning with a state preparation oracle \(Q_{c}\) (and its inverse).
Suppose that it takes \(E(\epsilon)\) queries to perform an imperfect equivalence query for a hypothesis \(h\). If we have a classical equivalence learning algorithm \(\mathcal{A}\) with a query complexity of \(T_{E}(\epsilon,\delta,d)\), then we can use algorithm 2 of section 4 to get a quantum PAC learning algorithm with learning complexity
\[E(\epsilon/4)R(T_{E}(\epsilon/4,\delta/2,d),\delta/2). \tag{41}\]
The current best known \(T_{E}\)[22] has a worst-case query complexity of
\[T_{E}=O\left(\left[d+\log\!\left(\frac{1}{\delta}\right)\right]\log^{9}\left( \frac{1}{\epsilon}\right)\right). \tag{42}\]
If we use the Grover subroutine (section 3 algorithm 1) with \(G=\{(x,1-h(x)):x\in\mathcal{X}\}\) to implement the imperfect equivalence queries, we find \(E(\epsilon)=O(1/\sqrt{\epsilon})\). Substituting these \(T_{E}\) and \(E\) into the bound from equation (41), we get an upper bound of
\[T_{O}=O\left(\frac{1}{\sqrt{\epsilon}}\left[d+\log\!\left(\frac{1}{\delta} \right)\right]\log^{9}\left(\frac{1}{\epsilon}\right)\right), \tag{43}\]
which is a square-root improvement (up to polylogarithmic factors) over the classical PAC learning sample complexity of equation (8).
## 6 Lower bound on quantum learning complexity
In this section, we prove a lower bound on quantum PAC learning with a state preparation oracle (and its inverse). We show that \(\Omega(d/\sqrt{\epsilon})\) oracle calls are necessary.
Suppose we have a concept class \(\mathcal{C}\) with VC dimension \(d+1\). Then there is a set \(Z\) of size \(d+1\) in \(\mathcal{X}\) which is shattered by \(\mathcal{C}\). We pick a marked element \(x_{0}\in Z\) and let \(Y=Z\setminus\{x_{0}\}\). We define our distribution \(\mathcal{D}\) as a perturbed delta-function, the standard distribution used to prove lower bounds in learning:
\[\mathcal{D}(x)=\begin{cases}0,&\text{if }x\notin Z,\\ 1-4\epsilon,&\text{if }x=x_{0},\\ 4\epsilon/d,&\text{if }x\in Y.\end{cases} \tag{44}\]
We also restrict our concept class to \(\widetilde{\mathcal{C}}=\{c\in\mathcal{C}:c(x_{0})=0\}\). If our PAC algorithm works on \(\mathcal{C}\), it will certainly work on \(\widetilde{\mathcal{C}}\). Since our distribution is restricted to \(Z\), we need only identify the behaviour of our concept on \(Z\). Thus, we can represent our concepts by bit-strings \(u\in\{0,1\}^{d}\) whose bits are indexed by the elements of \(Y\). To be precise, we identify a concept \(c\in\widetilde{\mathcal{C}}\) with a bit-string \(u\in\{0,1\}^{d}\), where \(u_{y}=c(y)\).
For a given bit-string \(u\in\{0,1\}^{d}\), the state preparation oracle acts as
\[Q_{u}\ket{\mathrm{IN}}=\sqrt{1-4\epsilon}\ket{x_{0}\;0}+\sqrt{\frac{4\epsilon }{d}}\sum_{x\in Y}\ket{x\;u_{x}}. \tag{45}\]
Our main approach is to reduce to the following fact from Lemma 51 in [20].
**Lemma 6.1**: _Let \(u\in\{0,1\}^{d}\) be a bit string, and let \(O_{u}\) be a weak phase-kickback oracle, that is_
\[O_{u}\ket{x}=e^{2i\eta u_{x}}\ket{x}. \tag{46}\]
_Then recovering more than \(3/4\) of the bits of \(u\) with high probability requires at least \(\Omega(d/\eta)\) calls to \(O_{u}\), its inverse or controlled versions of these._
_Proof:_ See [20]
We will use calls to controlled versions of \(O_{u}\) (denoted \(c-O_{u}\)) to implement the PAC state generation oracle \(Q_{u}\). We fix \(\eta\in[0,\pi/2]\) such that \(\sin(\eta)=\sqrt{4\epsilon}\).
**Lemma 6.2**: _One can implement \(Q_{u}\) using one call to \(c-O_{u}\), one to \(c-O_{u}^{\dagger}\) and two qubit-ancillae._
_Proof:_ First, it is convenient to shift the phase to have a \(\pm\) symmetry. Define a constant phase gate as
\[P_{\alpha}\ket{x}=e^{i\alpha}\ket{x}. \tag{47}\]
Then let
\[\widetilde{O}_{u}=P_{\eta}O_{u}^{\dagger}, \tag{48}\]
so that
\[\widetilde{O}_{u}\left|x\right\rangle=e^{i\eta\hat{u}_{x}}\left|x\right\rangle, \tag{49}\]
where
\[\hat{u}_{x}=(-1)^{u_{x}}. \tag{50}\]
We start by generating a uniform superposition of indices, with each of the two ancilla qubits in the \(\left|+\right\rangle\) state:
\[\frac{1}{2\sqrt{d}}\sum_{x\in Y}\left|x\right\rangle[\left|00\right\rangle+ \left|01\right\rangle+\left|10\right\rangle+\left|11\right\rangle]. \tag{51}\]
We next apply four controlled gates, \(c-P_{\eta}\), \(c-P_{-\eta}\), \(c-\widetilde{O}_{u}\) and \(c-\widetilde{O}_{u}^{\dagger}\), such that each term in the superposition in equation (51) picks up a different phase:
\[\mapsto\frac{1}{2\sqrt{d}}\sum_{x\in Y}\left|x\right\rangle\left[e^{i\eta} \left|00\right\rangle+e^{-i\eta}\left|01\right\rangle+e^{i\eta\hat{u}_{x}} \left|10\right\rangle+e^{-i\eta\hat{u}_{x}}\left|11\right\rangle\right]. \tag{52}\]
Note that this requires two calls to singly controlled versions of the oracle - we can implement a double-controlled version by using a CCNOT (Toffoli) gate followed by a controlled oracle. Next, we apply a Hadamard gate to the second qubit register
\[\mapsto\frac{1}{\sqrt{2d}}\sum_{x\in Y}\left|x\right\rangle\left[\left|0 \right\rangle(\cos(\eta)\left|0\right\rangle+i\sin(\eta)\left|1\right\rangle) +\left|1\right\rangle(\cos(\eta\hat{u}_{x})\left|0\right\rangle+i\sin(\eta\hat {u}_{x})\left|1\right\rangle)\right]. \tag{53}\]
We then apply \(S^{\dagger}\) to the second qubit register (to remove the factors of \(i\)). We also use the even/odd ness of \(\cos/\)sin to regroup the terms:
\[\mapsto\frac{1}{\sqrt{2d}}\sum_{x\in Y}\left|x\right\rangle\left[\cos(\eta)( \left|0\right\rangle+\left|1\right\rangle)\left|0\right\rangle+\sin(\eta)( \left|0\right\rangle+\hat{u}_{x}\left|1\right\rangle)\left|1\right\rangle \right]. \tag{54}\]
We then apply a Hadamard gate to the first qubit register:
\[\mapsto\cos(\eta)\left(\frac{1}{\sqrt{d}}\sum_{x\in Y}\left|x\right\rangle \right)\left|00\right\rangle+\sin(\eta)\left(\frac{1}{\sqrt{d}}\sum_{x\in Y} \left|x\;u_{x}\right\rangle\right)\left|1\right\rangle \tag{55}\]
Conditional on the final qubit being in the state \(\left|0\right\rangle\), we apply a unitary to the first register that maps the uniform superposition over \(Y\) into the state \(\left|x_{0}\right\rangle\):
\[\mapsto\cos(\eta)\left|x_{0}\;0\;0\right\rangle+\sin(\eta)\left(\frac{1}{ \sqrt{d}}\sum_{x\in Y}\left|x\;u_{x}\right\rangle\right)\left|1\right\rangle \tag{56}\]
Finally, conditional on the first register not being in the state \(\left|x_{0}\right\rangle\), we apply an \(X\) gate to the second qubit register, followed by an \(H\) gate on the second qubit register:
\[\mapsto\left[\cos(\eta)\left|x_{0}\;0\right\rangle+\sin(\eta)\left(\frac{1}{ \sqrt{d}}\sum_{x\in Y}\left|x\;u_{x}\right\rangle\right)\right]\left|+\right\rangle \tag{57}\]
But by the definition of \(\eta\), we see that this is exactly equal to the action of the PAC oracle:
\[(Q_{u}\left|\text{IN}\right\rangle)\left|+\right\rangle \tag{58}\]
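The ancilla manipulation in equations (51)-(55) can be checked numerically for a single fixed index \(x\): starting from the two ancilla qubits in \(\left|+\right\rangle\left|+\right\rangle\), the phases of equation (52) followed by the Hadamard, \(S^{\dagger}\) and Hadamard gates should leave the ancillae in \(\cos(\eta)\left|00\right\rangle+\sin(\eta)\left|u_{x}\,1\right\rangle\). The short Python sketch below verifies this; it is a matrix-level check of the single-index step only, not an implementation of the oracle construction itself.

```python
import numpy as np

def ancilla_state_after_circuit(eta, u):
    """Check the ancilla manipulation of Lemma 6.2 for a fixed index x.

    Starting from |+>|+> and applying, per ancilla basis state, the phases
    e^{i eta}, e^{-i eta}, e^{i eta (-1)^u}, e^{-i eta (-1)^u}, then H followed
    by S^dagger on the second qubit and H on the first qubit, should give
    cos(eta)|00> + sin(eta)|u 1>  (the single-index version of equation (55)).
    """
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Sdg = np.diag([1, -1j])
    I = np.eye(2)
    u_hat = (-1) ** u
    state = np.full(4, 0.5, dtype=complex)                        # |+>|+>
    state *= np.exp(1j * eta * np.array([1, -1, u_hat, -u_hat]))  # controlled phases
    state = np.kron(I, Sdg @ H) @ state                           # H then S^dg on qubit 2
    state = np.kron(H, I) @ state                                 # H on qubit 1
    return state

eta = 0.37
for u in (0, 1):
    target = np.zeros(4, dtype=complex)
    target[0] = np.cos(eta)            # amplitude on |00>
    target[2 * u + 1] = np.sin(eta)    # amplitude on |u 1>
    assert np.allclose(ancilla_state_after_circuit(eta, u), target)
print("ancilla circuit reproduces the single-index form of equation (55)")
```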
We thus deduce our bound
**Theorem 6.3**\(T_{O}=\Omega\left(\frac{d}{\sqrt{\epsilon}}\right)\)
_Proof:_ We can replace every call to \(Q_{u}\) (or its inverse) in our PAC algorithm with the unitary process described in Lemma 6.2, which requires a constant number of calls to (a controlled) \(O_{u}\) (or its inverse). If
the PAC algorithm outputs a correct hypothesis, then by construction of our distribution, it must agree on at least \(3/4\) of the bits of \(u\). Thus, the algorithm replaced with calls to \(O_{u}\) (and its inverse) satisfies the conditions of Lemma 6.1, and thus it must use at least \(\Omega(d/\eta)\) calls to \(O_{u}\). Hence, we reach a lower bound of
\[T_{O}=\Omega\left(\frac{d}{\arcsin\sqrt{4\epsilon}}\right)=\Omega\left(\frac{d }{\sqrt{\epsilon}}\right). \tag{59}\]
Note that our lower bound matches our upper bound (equation (5)), up to polylogarithmic factors.
## 7 Application to learning \(k-\)juntas
A \(k\)-junta is a function \(f:\{0,1\}^{n}\to\{0,1\}\) that only depends on a subset of \(k\) bits. Letting \(\mathcal{X}=\{0,1\}^{n}\), we can consider the concept class \(\mathcal{C}=\{f\in\{0,1\}^{\mathcal{X}}:f\text{ is a }k\text{-junta}\}\). The exact VC dimension of \(\mathcal{C}\) is unknown, but we can bound it using the inequalities
\[2^{d}\leq|\mathcal{C}|\leq|\mathcal{X}|^{d}+1. \tag{60}\]
The first of these comes from noting that if \(\mathcal{C}\) shatters a set of size \(\ell\), it must contain at least \(2^{\ell}\) elements; the second is called Sauer's lemma [28]. We can bound
\[|\mathcal{C}|\leq\binom{n}{k}2^{(2^{k})}, \tag{61}\]
since there are \(\binom{n}{k}\) ways to choose the \(k\) bits determining the junta, and then \(2^{(2^{k})}\) choices for the underlying function. We deduce that
\[d\leq\log\left[\binom{n}{k}\right]+2^{k}\leq k\log(en/k)+2^{k}. \tag{62}\]
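A quick numerical check of this counting bound, with logarithms taken base 2 and illustrative values of \(n\) and \(k\):

```python
from math import comb, e, log2

for n, k in [(100, 3), (1000, 5), (10**6, 10)]:
    exact = log2(comb(n, k)) + 2 ** k          # log2 |C| bound from equation (61)
    relaxed = k * log2(e * n / k) + 2 ** k     # using C(n, k) <= (e n / k)^k
    assert exact <= relaxed
    print(f"n={n}, k={k}: d <= {exact:.1f} <= {relaxed:.1f}")
```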
Thus, our learning algorithm can PAC learn a \(k-\)junta with
\[O\left(\frac{1}{\sqrt{\epsilon}}\left[k\log\left(\frac{n}{k}\right)+2^{k}+\log \left(\frac{1}{\delta}\right)\right]\log^{9}(1/\epsilon)\right), \tag{63}\]
oracle calls. This has a worse scaling in \(n\) than the algorithms presented in [10, 29], but has a better scaling in \(\epsilon\) and works for _any_ underlying distribution, whereas previous work has focused on the uniform distribution.
## Acknowledgements
The authors thank J. van Apeldoorn, R. de Wolf, S. Arunachalam, J. Cudby, C. Long and J. Bayliss for helpful discussions related to this work.
Wilfred Salmon was supported by the EPSRC and Hitachi. Sergii Strelchuk acknowledges support from the Royal Society University Research Fellowship. Tom Gur is supported by the UKRI Future Leaders Fellowship MR/S031545/1 and an EPSRC New Horizons Grant EP/X018180/1. Sergii Strelchuk and Tom Gur are further supported by EPSRC Robust and Reliable Quantum Computing Grant EP/W032635/1.
|
2309.11541 | On the dynamical stability of copper-doped lead apatite | The recent claim of room temperature superconductivity in a copper-doped lead
apatite compound, called LK-99, has sparked remarkable interest and
controversy. Subsequent experiments have largely failed to reproduce the
claimed superconductivity, while theoretical works have identified multiple key
features including strong electronic correlation, structural instabilities, and
dopability constraints. A puzzling claim of several recent theoretical studies
is that both parent and copper-doped lead apatite structures are dynamically
unstable at the harmonic level, questioning decades of experimental reports of
the parent compound structures and the recently proposed copper-doped
structures. In this work, we demonstrate that both parent and copper-doped lead
apatite structures are dynamically stable at room temperature. Anharmonic
phonon-phonon interactions play a key role in stabilizing some copper-doped
phases, while most phases are largely stable even at the harmonic level. We
also show that dynamical stability depends on both volume and correlation
strength, suggesting controllable ways of exploring the copper-doped lead
apatite structural phase diagram. Our results fully reconcile the theoretical
description of the structures of both parent and copper-doped lead apatite with
experiment. | Sun-Woo Kim, Kang Wang, Siyu Chen, Lewis J. Conway, G. Lucian Pascut, Ion Errea, Chris J. Pickard, Bartomeu Monserrat | 2023-09-20T18:00:01Z | http://arxiv.org/abs/2309.11541v3 | # On the dynamical stability of copper-doped lead apatite
###### Abstract
The recent claim of room temperature superconductivity in a copper-doped lead apatite compound, called LK-99, has sparked remarkable interest and controversy. Subsequent experiments have largely failed to reproduce the claimed superconductivity, while theoretical works have identified multiple key features including strong electronic correlation, structural instabilities, and dopability constraints. A puzzling claim of several recent theoretical studies is that both parent and copper-doped lead apatite structures are dynamically unstable at the harmonic level, questioning decades of experimental reports of the parent compound structures and the recently proposed copper-doped structures. In this work, we demonstrate that both parent and copper-doped lead apatite structures are dynamically stable at room temperature. Anharmonic phonon-phonon interactions play a key role in stabilizing some copper-doped phases, while most phases are largely stable even at the harmonic level. We also show that dynamical stability depends on both volume and correlation strength, suggesting controllable ways of exploring the copper-doped lead apatite structural phase diagram. Our results fully reconcile the theoretical description of the structures of both parent and copper-doped lead apatite with experiment.
## Introduction
Copper-doped lead apatite Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O with \(0.9<x<1.1\), known as LK-99, has been recently claimed to exhibit superconductivity above room temperature and at ambient pressure [1; 2]. This remarkable claim is backed by magnetic (half-)levitation on a permanent magnet and by a sudden drop in resistivity at the claimed superconducting transition temperature. However, subsequent extensive experimental efforts by other groups have failed to confirm the superconductivity [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. The magnetic half-levitation is reproduced in some insulating samples, where it is attributed to soft ferromagnetism [6; 7; 8]. A plausible explanation for the sudden resistivity drop is provided by a first order phase transition of Cu\({}_{2}\)S impurities [9; 14], which is further supported by the highly insulating nature of a single crystalline sample without Cu\({}_{2}\)S impurities [13].
On the theoretical front, initial density functional theory calculations reported an electronic structure exhibiting relatively flat bands near the Fermi level for a simple model of copper-doped lead apatite [15; 16; 17; 18]. However, subsequent calculations showed that the inclusion of spin-orbit coupling or non-local correlations lead to an insulating electronic structure [19; 20; 21], a conclusion that is also reached with the inclusion of local correlations using dynamical mean-field theory [22; 23; 24]. Different estimates of critical superconducting temperatures have so far delivered values significantly lower than room temperature [25; 26; 27].
These state-of-the-art electronic structure calculations all assume a specific structural model as a starting point, often suggested by experiment. However, other theoretical works have questioned the suitability of these structural models both in terms of the thermodynamic feasibility of copper doping [28] or the dynamical stability of the experimentally proposed structures [29; 30; 31; 32; 33; 28; 34; 29; 30; 32; 33]. Indeed, one of the most basic quantities used to characterize a material is its dynamical stability. A dynamically stable structure corresponds to a local minimum of the potential (free) energy surface, and its phonon frequencies are real. A dynamically unstable structure corresponds to a saddle point of the potential (free) energy surface, and some of its phonon frequencies are imaginary with associated eigenvectors that encode atomic displacement patterns that lower the energy of the system. Only dynamically stable structures can represent real materials. Puzzlingly, recent computational works have claimed that the experimentally reported structures of the parent lead apatite [29; 30; 31; 28] and of the copper-doped lead apatite compounds [29; 30; 32; 33; 28] are dynamically unstable at the harmonic level, which would imply that they cannot be the true structures of the materials underpinning LK-99, and would question the validity of most electronic structure calculations to date.
In this work, we demonstrate that both parent lead apatite and copper-doped lead apatite compounds are _dynamically stable_ at room temperature. The parent compounds are largely stable at the harmonic level, with some exhibiting very slight instabilities which are suppressed by quartic anharmonic terms. For the copper-doped compounds, dynamical stability at the harmonic level depends on the doping site, but even those that are dynamically unstable at the harmonic level are overall stable at room temperature with the inclusion of anharmonic phonon-phonon interactions.
## Lead apatite
Lead apatite is a compound that was first experimentally reported over 70 years ago [34]. Figure 1 depicts an example of lead apatite, with a hexagonal lattice (space group \(P6_{3}/m\)) and general formula Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)X\({}_{2}\), where X is either a halide atom or an OH group. The variant Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, which is claimed to be the parent structure of LK-99 [1; 2], has also been known experimentally for decades [34; 35; 36; 37]. The X site corresponds to Wyckoff position \(4e\), giving a multiplicity of four in the unit cell, but these sites are only partially filled with an occupation of \(\frac{1}{2}\) for halide atoms and the OH group, and an occupation of \(\frac{1}{4}\) for O. We consider two representative cases, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), where the specific distribution of species on the X site results in space groups \(P3\) and \(P6_{3}\), respectively.
Figure 1: **Crystal structure and phonon band structures of parent lead apatite.****a.** Crystal structure of lead apatite Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O or Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\). **b.** Harmonic and anharmonic (50 K) phonon band structures of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. **c.** Harmonic phonon band structure of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\).
The phonon dispersion of the parent Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) compounds is shown in Fig. 1. At the harmonic level, Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O exhibits imaginary phonon frequencies at the zone boundary points M and K of the \(k_{z}=0\) plane, and at the zone boundary point H of the \(k_{z}=\frac{\pi}{c}\) plane. However, the absolute values of these imaginary frequencies are less than 2.3 meV and the resulting anharmonic potentials have a dominant quartic term that strongly suppresses the instability to about 0.2 meV/f.u. (see Supplementary Fig. S4). As a result, the calculation of self-consistent phonons including anharmonic interactions fully stabilizes the structure at the relatively low temperature of 50 K, and potentially lower. Earlier works reported that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O is dynamically unstable at the harmonic level [28; 29; 30], a result we confirm, but our work further demonstrates that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O is overall dynamically stable at room temperature driven by higher-order anharmonic terms.
Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) is dynamically stable at the harmonic level. Earlier works reported that Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) is dynamically unstable [28; 29], an opposite conclusion that we attribute to unconverged harmonic calculations (see Supplementary Fig. S3). We note that to fully converge the harmonic calculations of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), the required coarse \(\mathbf{q}\)-point grid includes all of \(\Gamma\), M, K, and A points, which can only be accomplished with a regular grid of minimum size \(6\times 6\times 2\) or alternatively a non-uniform Farey grid [38] of minimum size \((2\times 2\times 2)\cup(3\times 3\times 1)\). In our calculations, we use the latter as it is computationally more efficient.
Overall, we find that both lead apatite compounds Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) are dynamically stable, a conclusion that is in full agreement with multiple experimental reports of the structure of lead apatite over the past 70 years [34; 35; 36; 37; 39].
## III Copper doped lead apatite
The claim of room temperature superconductivity in LK-99 is based on copper doping of lead apatite, with copper replacing about 1 in 10 lead atoms leading to a Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O stoichiometry. There are two symmetrically distinct lead sites, labelled Pb(1) and Pb(2) in the literature (see Fig. 1**a**), and doping at these sites results in structures with the space groups \(P3\) and \(P1\), respectively. The original LK-99 work suggested that the doping site is Pb(1) [1; 2], but subsequent experimental works have suggested that both Pb(1) and Pb(2) sites can be doped [13; 29]. Computational works find that the relative energy between the two doping sites depends on the exchange-correlation functional and the magnitude of the Hubbard \(U\) parameter used on the copper atom, with most choices favouring doping at the Pb(2) site, a prediction we confirm with our
own calculations. For completeness, in this work we explore doping at both sites.
The electronic structures of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O with copper on the Pb(1) and Pb(2) sites are shown in Fig. 2. For doping at the Pb(1) site, a non-magnetic calculation leads to a metallic state in which the Fermi energy crosses four relatively flat bands (a pair of doubly-degenerate bands). Inclusion of spin-orbit coupling while maintaining the non-magnetic configuration leads to a splitting of the pair of doubly-degenerate bands, and the Fermi energy crosses a pair of singly-degenerate relatively flat bands. A calculation including spin-orbit coupling and allowing a non-zero magnetic moment leads to a ferromagnetic configuration in which the system is gapped. The latter ferromagnetic configuration is the most energetically favourable, but may not be directly relevant for room temperature experiments as single crystal measurements suggest the material is a non-magnetic insulator exhibiting a diamagnetic response with potentially a small ferromagnetic component [13]. Additionally, the ferromagnetic ordering may be an artifact of the DFT calculations, as dynamical mean-field theory calculations [22; 23; 24] suggest a gap opens due to a Mott-like band splitting without the need of ferromagnetic ordering. For doping at the Pb(2) site, we also find a metallic
Figure 2: **Electron and phonon band structures of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O.****a,b** Electronic band structures of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O for copper doping at the **a** Pb(1) and **b** Pb(2) sites. NM, FM, SOC represent nonmagnetic, ferromagnetic, and spin-orbit coupling, respectively. For doping at the Pb(2) site, the initial nonmagnetic configuration converged to a ferromagnetic configuration in the presence of spin-orbit coupling. **c** Harmonic and anharmonic (300 K) phonon band structures of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O structure with \(P3\) symmetry for doping at the Pb(1) site. **d** Harmonic phonon band structure of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O structure with \(P1\) symmetry for doping at the Pb(2) site. The data are obtained with \(U=3\) eV on the copper \(3d\) orbital.
state with a single band crossing the Fermi level in non-magnetic calculations, and a gapped state in ferromagnetic calculations. For both doping sites, we find that the phonon dispersion is only weakly affected by the level of electronic structure theory used (see Supplementary Note 3.1), so the discussion below should be largely independent of the precise electronic structure of the system.
The phonon dispersions of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O with copper on the Pb(1) and Pb(2) sites are shown in Fig. 2. Doping at the Pb(2) site leads to a dynamically stable structure at the harmonic level of theory. By contrast, doping at the Pb(1) site leads to a dynamically unstable structure at the harmonic level that exhibits two imaginary phonon branches of frequencies about 15\(i\) meV across the entire Brillouin zone. This harmonic instability is present irrespective of the level of theory used, including a Hubbard \(U\) parameter on the copper \(d\) orbitals, spin-orbit coupling, and ferromagnetic ordering (see Supplementary Fig. S5). Importantly, anharmonic phonon-phonon interactions strongly suppress the instability and the structure becomes dynamically stable at 300 K. We reach similar conclusions for copper doping of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) (see Supplementary Fig. S6). Overall, copper-doped lead apatite is dynamically stable at room temperature for doping at either site.
The original paper claiming superconductivity in LK-99 suggested that copper doping of lead apatite occurs on the Pb(1) site [1; 2]. As the associated structure exhibits a dynamical instability at the harmonic level, we further explore its properties by considering the potential energy surface along the imaginary phonon modes at high symmetry points in the Brillouin zone (Fig. 3**a**). The dominant instability is driven by a \(\Gamma\) point phonon mode, and fully relaxing the structure along this instability leads to a new structure of \(P1\) symmetry, which is dynamically stable at the harmonic level of theory (Fig. 3**b**). We ascribe the harmonic stability of the \(P1\) structure to a downward shift of the occupied part of the density of states compared to the \(P3\) structure, dominated by copper-derived orbitals (Fig. 3**d**). The four bands (a pair of doubly-degenerate bands) that cross the Fermi level in the \(P3\) structure (Fig. 2**a**) split under the distortion, such that the resultant \(P1\) structure has a metallic state with a single doubly-degenerate band crossing the Fermi level in the nonmagnetic configuration (Fig. 3**c**). The distorted \(P1\) structure becomes an insulator in the presence of ferromagnetic ordering, similar to lead apatite with copper doping at the Pb(2) site (Fig. 2**b**).
Interestingly, the relative energy of the \(P3\) structure compared to the \(\Gamma\)-distorted \(P1\) structure is strongly dependent on both the volume and the electronic correlation strength as measured by the Hubbard \(U\) parameter. Specifically, we find that harmonic instabilities favouring the \(P1\) phase occur for large values of \(U\) and large volumes, while the harmonic instability of the \(P3\) phase
completely disappears for small values of \(U\) and small volumes. This is evident from the phonon dispersion of the \(P3\) phase for different \(U\) values and volumes (Fig. 4**a**) and from the enthalpy difference between \(P3\) and \(P1\) phases indicated by the colour bar in the phase diagram in Fig. 4**b**. These observations suggest that controlling volume, for example through hydrostatic pressure or strain, and controlling the degree of electronic correlation, for example by applying a gate voltage or doping, can be used to navigate the structural phase diagram of compounds based on lead apatite. Specifically, it may be possible to observe a temperature-driven structural phase transition between a low temperature \(P1\) phase and high temperature \(P3\) phase in a regime with a large harmonic dynamical instability. Finally, we note that electronic correlation beyond the static description provided by a Hubbard \(U\) correction may play an important role on this phase diagram [22; 23; 24], so further work is required to fully characterise it.
Figure 3: \(\Gamma\)**-distorted \(P1\) structure and its electron and phonon band structures.****a.** Potential energy surface along the imaginary phonon modes at high symmetry points in the Brillouin zone of the \(P3\) structure for doping at the Pb(1) site (see its harmonic phonon dispersion in Fig. 2**c**). **b,c.** **b** Harmonic phonon and **c** electron band structures of the \(P1\) structure distorted along the \(\Gamma\) mode of the \(P3\) structure. **d.** Nonmagnetic density of states (DOS) and Cu partial DOS of both the \(P3\) and \(\Gamma\)-distorted \(P1\) structures. See also partial DOS of other atoms in Supplementary Fig. S7. The data are obtained with \(U=3\,\)eV on the copper \(3d\) orbital.
## Discussion
Since the original claim of room temperature superconductivity in LK-99, seven phonon dispersion calculations have been reported in the literature. Of these, the parent compound [28; 29; 30; 31] and copper-doped lead apatite with a \(P3\) space group [28; 29; 32; 33; 30] are claimed to be dynamically unstable at the harmonic level, while another work claims that the copper-doped lead apatite is dynamically stable at the harmonic level [27].
We attribute these puzzling and contradictory conclusions about the dynamical stability of lead apatite to the complexity of harmonic phonon calculations in this system, with a unit cell containing at least 41 atoms, and to the subtle interplay between volume, electronic correlation strength, and phonons. First, we find that fully converged phonon calculations for the parent compounds require relatively large coarse \(\mathbf{q}\)-point grids. For example, a converged calculation for Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) requires the inclusion of the \(\Gamma\), M, K, and A points in the coarse grid, but none of the previously reported calculations include all these points, and as a result they incorrectly conclude that this compound is dynamically unstable at the harmonic level. Second, we find that dynamical stability for the copper-doped compounds at the harmonic level depends on both the value of the Hubbard \(U\) parameter and the volume of the system (see Fig. 4). In this context, we rationalise the seemingly contradictory conclusions about dynamical stability of the copper-doped compounds by suggesting that different works use different volumes and different choices for the Hubbard \(U\) parameter.
Figure 4: **Volume-Hubbard \(U\) phase diagram.****a.** Representative harmonic phonon band structures of the \(P3\) structure doped at the Pb(1) site for different \(U\) values and volumes. **b.** Volume-Hubbard \(U\) phase diagram of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O for doping at the Pb(1) site. The volume of the \(P3\) structure is presented on the \(x\)-axis as a reference, corresponding to a pressure range of \(0-25\) GPa. We find slightly larger volume changes in the \(\Gamma\)-distorted \(P1\) structure. The color bar indicates the enthalpy difference between the \(P3\) and \(P1\) structures (in meV/atom).
Beyond clarifying the dynamical stability of lead apatite at the harmonic level, we have shown that anharmonic phonon-phonon interactions play a key role in stabilising multiple lead apatite compounds. Overall, our calculations indicate that both parent and copper-doped lead apatite compounds are dynamically stable at room temperature.
We believe that lead apatite is a nice example to illustrate the ability of state-of-the-art first principles methods to fully characterise a complex and experimentally relevant system. However, our work also demonstrates that reliable results and conclusions can only be reached with a careful consideration of convergence parameters, such as the size of the \(\mathbf{q}\)-point grid, and physical models, such as the inclusion of anharmonic phonon-phonon interactions.
## Conclusions
We show that the experimentally suggested structures of lead apatite and copper-doped lead apatite are dynamically stable at room temperature. Most structures are dynamically stable at the harmonic level, but some key structures, including the structure claimed to be responsible for superconductivity at ambient conditions, only become dynamically stable with the inclusion of anharmonic phonon-phonon interactions. Our results resolve a puzzling suggestion by multiple earlier computational works that claimed that the experimentally reported structures of both parent and copper-doped lead apatite compounds were dynamically unstable, and fully reconcile the current experimental and theoretical description of the structure of lead apatite.
## Methods
_Electronic structure calculations._ - We perform density functional theory (DFT) calculations using the Vienna _ab initio_ simulation package vasp[40; 41], which implements the projector-augmented wave method [42]. We employ PAW pseudopotentials with valence configurations \(5d^{10}6s^{2}6p^{2}\) for lead, \(3d^{10}4s^{1}\) for copper, \(3s^{2}3p^{3}\) for phosphorus, \(2s^{2}2p^{4}\) for oxygen, and \(1s^{1}\) for hydrogen. For the exchange-correlation energy, we use both the generalized-gradient approximation functional of Perdew-Burke-Ernzerhof (PBE) [43] and its modified version for solids (PBEsol) [44]. We find that experimental lattice parameters agree well with those predicted by PBEsol, and the data presented in the main text has been obtained using PBEsol (see comparison between PBE and PBEsol in Supplementary Fig. S5**a**). An on-site Hubbard interaction \(U\) is applied to the copper \(3d\) orbitals based on the simplified rotationally invariant DFT+\(U\) method by Dudarev and
co-workers [45]. We have checked that DFT+\(U\) gives almost identical lattice parameters to DFT. Converged results are obtained with a kinetic energy cutoff for the plane wave basis of 600 eV and a \(\mathbf{k}\)-point grid of size \(4\times 4\times 5\) and \(6\times 6\times 8\) for the primitive cell of the parent and copper-doped lead apatite, respectively (see convergence test results in Supplementary Note 1). The geometry of the structures is optimised until all forces are below 0.01 eV/A and the pressure is below 1 kbar.
_Harmonic phonon calculations._ - We perform harmonic phonon calculations using the finite displacement method in conjunction with nondiagonal supercells [46; 47]. A converged calculation requires a minimum coarse \(\mathbf{q}\)-point grid including the high symmetry points \(\Gamma\), M, K, and A, which we accomplish by means of a Farey nonuniform grid [38] of size \((2\times 2\times 2)\cup(3\times 3\times 1)\). To evaluate the force derivatives, we use a three-point central formula with a finite displacement of 0.02 bohr. The underlying electronic structure calculations are performed using the same parameters as those described above. We have also cross-checked the phonon band structures with calculations using castep[48] and Quantum Espresso[49] (see Supplementary Fig. S2).
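To make the grid requirement concrete, the short Python sketch below (purely illustrative, not code from Ref. [38]) builds the union of the \(\Gamma\)-centred \(2\times 2\times 2\) and \(3\times 3\times 1\) regular grids in fractional reciprocal-space coordinates and confirms that the resulting nonuniform grid contains the \(\Gamma\), M, K, and A points of the hexagonal Brillouin zone, taking the standard convention M \(=(\tfrac{1}{2},0,0)\), K \(=(\tfrac{1}{3},\tfrac{1}{3},0)\), A \(=(0,0,\tfrac{1}{2})\).

```python
from fractions import Fraction
from itertools import product

def regular_grid(n1, n2, n3):
    """Gamma-centred n1 x n2 x n3 grid of fractional q-points."""
    return {(Fraction(i, n1), Fraction(j, n2), Fraction(k, n3))
            for i, j, k in product(range(n1), range(n2), range(n3))}

# Union of the two regular grids, i.e. the (2x2x2) U (3x3x1) nonuniform grid.
farey_grid = regular_grid(2, 2, 2) | regular_grid(3, 3, 1)

special_points = {
    "Gamma": (Fraction(0), Fraction(0), Fraction(0)),
    "M": (Fraction(1, 2), Fraction(0), Fraction(0)),
    "K": (Fraction(1, 3), Fraction(1, 3), Fraction(0)),
    "A": (Fraction(0), Fraction(0), Fraction(1, 2)),
}
assert all(q in farey_grid for q in special_points.values())
print(len(farey_grid), "q-points in the union, including Gamma, M, K and A")
```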
_Anharmonic phonon calculations._ - We perform anharmonic phonon calculations using the stochastic self-consistent harmonic approximation (SSCHA) [50; 51; 52], which accounts for anharmonic effects at both zero and finite temperature. The self-consistent harmonic approximation [53] is a quantum variational method on the free energy, and the variational minimization is performed with respect to a trial harmonic system. In its stochastic implementation, the forces on atoms are calculated in an ensemble of configurations drawn from the trial harmonic system. We use vasp to perform electronic structure calculations using the same parameters as those described above, and consider configurations commensurate with a \(2\times 2\times 2\) supercell. The number of configurations needed to converge the free energy Hessian is of the order of 4,000 for the parent lead apatite structure and of the order of 8,000 configurations for the copper-doped structure.
###### Acknowledgements.
S.-W.K., K.W., and B.M. are supported by a UKRI Future Leaders Fellowship [MR/V023926/1]. B.M. also acknowledges support from the Gianna Angelopoulos Programme for Science, Technology, and Innovation, and from the Winton Programme for the Physics of Sustainability. S.C. acknowledges financial support from the Cambridge Trust and from the Winton Programme for the Physics of Sustainability. The computational resources were provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service and funded by EPSRC [EP/P020259/1] and by the UK National Supercomputing Service ARCHER2, for which
access was obtained via the UKCP consortium and funded by EPSRC [EP/X035891/1].
## References
* (1) Lee, S., Kim, J.-H. & Kwon, Y.-W. The first room-temperature ambient-pressure superconductor. _arXiv preprint arXiv:2307.12008_ (2023).
* (2) Lee, S. _et al._ Superconductor Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O showing levitation at room temperature and atmospheric pressure and mechanism. _arXiv preprint arXiv:2307.12037_ (2023).
* (3) Kumar, K., Karn, N. K. & Awana, V. P. S. Synthesis of possible room temperature superconductor LK-99: Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. _Superconductor Science and Technology_**36**, 10LT02 (2023). URL [https://dx.doi.org/10.1088/1361-6668/acf002](https://dx.doi.org/10.1088/1361-6668/acf002).
* (4) Liu, L. _et al._ Semiconducting transport in Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O sintered from Pb\({}_{2}\)SO\({}_{5}\) and Cu\({}_{3}\)P. _Advanced Functional Materials_**n/a**, 2308938. URL [https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202308938](https://onlinelibrary.wiley.com/doi/abs/10.1002/adfm.202308938).
* (5) Wu, H., Yang, L., Xiao, B. & Chang, H. Successful growth and room temperature ambient-pressure magnetic levitation of LK-99. _arXiv preprint arXiv:2308.01516_ (2023).
* (6) Guo, K., Li, Y. & Jia, S. Ferromagnetic half levitation of LK-99-like synthetic samples. _Science China Physics, Mechanics & Astronomy_**66**, 107411 (2023). URL [https://doi.org/10.1007/s11433-023-2201-9](https://doi.org/10.1007/s11433-023-2201-9).
* (7) Wang, P. _et al._ Ferromagnetic and insulating behavior in both half magnetic levitation and non-levitation LK-99 like samples. _Quantum Frontiers_**2**, 10 (2023). URL [https://doi.org/10.1007/s44214-023-00035-z](https://doi.org/10.1007/s44214-023-00035-z).
* (8) Zhang, Y., Liu, C., Zhu, X. & Wen, H.-H. Ferromagnetism and insulating behavior with a logarithmic temperature dependence of resistivity in Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.05786_ (2023).
* (9) Zhu, S., Wu, W., Li, Z. & Luo, J. First order transition in Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O (\(0.9<x<1.1\)) containing Cu\({}_{2}\)S. _arXiv preprint arXiv:2308.04353_ (2023).
* (10) Timokhin, I., Chen, C., Yang, Q. & Mishchenko, A. Synthesis and characterisation of LK-99. _arXiv preprint arXiv:2308.03823_ (2023).
* (11) Kumar, K., Karn, N., Kumar, Y. & Awana, V. Absence of superconductivity in LK-99 at ambient conditions. _arXiv preprint arXiv:2308.03544_ (2023).
* (12) Liu, C. _et al._ Phases and magnetism at microscale in compounds containing nominal Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _Phys. Rev. Mater._**7**, 084804 (2023). URL [https://link.aps.org/doi/10.1103/PhysRevMaterials.7.084804](https://link.aps.org/doi/10.1103/PhysRevMaterials.7.084804).
* [13] Puphal, P. _et al._ Single crystal synthesis, structure, and magnetism of Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.06256_ (2023).
* [14] Jain, P. K. Superionic phase transition of copper (I) sulfide and its implication for purported superconductivity of LK-99. _The Journal of Physical Chemistry C_ (2023). URL [https://doi.org/10.1021/acs.jpcc.3c05684](https://doi.org/10.1021/acs.jpcc.3c05684).
* [15] Griffin, S. M. Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite. _arXiv preprint arXiv:2307.16892_ (2023).
* [16] Si, L. & Held, K. Electronic structure of the putative room-temperature superconductor Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.00676_ (2023).
* [17] Lai, J., Li, J., Liu, P., Sun, Y. & Chen, X.-Q. First-principles study on the electronic structure of Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O (\(x=0,1\)). _Journal of Materials Science & Technology_**171**, 66-70 (2024). URL [https://www.sciencedirect.com/science/article/pii/S1005030223006291](https://www.sciencedirect.com/science/article/pii/S1005030223006291).
* [18] Kurleto, R. _et al._ Pb-apatite framework as a generator of novel flat-band CuO based physics, including possible room temperature superconductivity. _arXiv preprint arXiv:2308.00698_ (2023).
* [19] Bai, H., Gao, L. & Zeng, C. Ferromagnetic ground state and spin-orbit coupling induced bandgap open in LK99. _arXiv preprint arXiv:2308.05134_ (2023).
* [20] Swift, M. W. & Lyons, J. L. Comment on "Origin of correlated isolated flat bands in copper-substituted lead phosphate apatite". _arXiv preprint arXiv:2308.08458_ (2023).
* [21] Pashov, D., Acharya, S., Lany, S., Dessau, D. S. & van Schilfgaarde, M. Multiple slater determinants and strong spin-fluctuations as key ingredients of the electronic structure of electron-and hole-doped Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.09900_ (2023).
* [22] Korotin, D. M., Novoselov, D. Y., Shorikov, A. O., Anisimov, V. I. & Oganov, A. R. Electronic correlations in promising room-temperature superconductor Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O: a DFT+ DMFT study. _arXiv preprint arXiv:2308.04301_ (2023).
* [23] Si, L. _et al._ Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O: a Mott or charge transfer insulator in need of further doping for (super) conductivity. _arXiv preprint arXiv:2308.04427_ (2023).
* [24] Yue, C., Christiansson, V. & Werner, P. Correlated electronic structure of Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.04976_ (2023).
* [25] Oh, H. & Zhang, Y.-H. S-wave pairing in a two-orbital t-J model on triangular lattice: possible application to Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.02469_ (2023).
* [26] Witt, N., Si, L., Tomczak, J. M., Held, K. & Wehling, T. No superconductivity in Pb\({}_{9}\)Cu\({}_{1}\)(PO\({}_{4}\))\({}_{6}\)O found in orbital and spin fluctuation exchange calculations. _arXiv preprint arXiv:2308.07261_ (2023).
* [27] Paudyal, H., Flatte, M. E. & Paudyal, D. Implications of the electron-phonon coupling in CuPb\({}_{9}\)(PO\({}_{4}\))\({}_{6}\)O for high-temperature superconductivity: an _ab initio_ study. _arXiv preprint arXiv:2308.14294_ (2023).
* [28] Shen, J. _et al._ Phase stability of lead phosphate apatite Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)O, Pb\({}_{10-x}\)Cu\({}_{x}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), and Pb\({}_{8}\)Cu\({}_{2}\)(PO\({}_{4}\))\({}_{6}\). _arXiv preprint arXiv:2308.07941_ (2023).
* [29] Jiang, Y. _et al._ Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\): Phonon bands, localized flat band magnetism, models, and chemical analysis. _arXiv preprint arXiv:2308.05143_ (2023).
* [30] Hao, L. & Fu, E. First-principles calculation on the electronic structures, phonon dynamics, and electrical conductivities of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O and Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O compounds. _arXiv preprint arXiv:2308.05618_ (2023).
* [31] Liu, J. _et al._ Symmetry breaking induced insulating electronic state in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. _arXiv preprint arXiv:2308.11766_ (2023).
* [32] Liu, R., Guo, T., Lu, J., Ren, J. & Ma, T. Different phase leads to different transport behavior in Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O compounds. _arXiv preprint arXiv:2308.08454_ (2023).
* [33] Cabezas-Escares, J., Barrera, N., Cardenas, C. & Munoz, F. Theoretical insight on the LK-99 material. _arXiv preprint arXiv:2308.01135_ (2023).
* [34] Rooksby, H. Identification by X-ray diffraction of crystalline inclusions in glass. _Analyst_**77**, 759-765 (1952).
* [35] Merker, L. & Wondratschek, H. Der Oxypyromorphit Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O und der Ausschnitt Pb\({}_{4}\)P\({}_{2}\)O\({}_{9}\)-Pb\({}_{3}\)(PO\({}_{4}\))\({}_{2}\) des Systems PbO-P\({}_{2}\)O\({}_{5}\). _Zeitschrift fur anorganische und allgemeine Chemie_**306**, 25-29 (1960). URL [https://onlinelibrary.wiley.com/doi/abs/10.1002/zaac.19603060105](https://onlinelibrary.wiley.com/doi/abs/10.1002/zaac.19603060105).
* [36] Merker, L., Engel, G., Wondratschek, H. & Ito, J. Lead ions and empty halide sites in apatites. _American Mineralogist: Journal of Earth and Planetary Materials_**55**, 1435-1437 (1970).
* [37] Krivovichev, S. V. & Burns, P. C. Crystal chemistry of lead oxide phosphates: crystal structures of Pb\({}_{4}\)O(PO\({}_{4}\))\({}_{2}\), Pb\({}_{8}\)O\({}_{5}\)(PO\({}_{4}\))\({}_{2}\) and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O. _Zeitschrift fur Kristallographie-Crystalline Materials_**218**, 357-365 (2003).
* [38] Chen, S., Salzbrenner, P. T. & Monserrat, B. Nonuniform grids for Brillouin zone integration and interpolation. _Phys. Rev. B_**106**, 155102 (2022). URL [https://link.aps.org/doi/10.1103/PhysRevB.106.155102](https://link.aps.org/doi/10.1103/PhysRevB.106.155102).
* [39] Bruckner, S., Lusvardi, G., Menabue, L. & Saladini, M. Crystal structure of lead hydroxyapatite from powder X-ray diffraction data. _Inorganic Chimica Acta_**236**, 209-212 (1995). URL [https://www.sciencedirect.com/science/article/pii/002016939504636N](https://www.sciencedirect.com/science/article/pii/002016939504636N).
* [40] Kresse, G. & Furthmuller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. _Computational Materials Science_**6**, 15-50 (1996). URL [https://www.sciencedirect.com/science/article/pii/092702569600080](https://www.sciencedirect.com/science/article/pii/092702569600080).
* [41] Kresse, G. & Furthmuller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. _Phys. Rev. B_**54**, 11169-11186 (1996). URL [https://link.aps.org/doi/10.1103/PhysRevB.54.11169](https://link.aps.org/doi/10.1103/PhysRevB.54.11169).
* [42] Blochl, P. E. Projector augmented-wave method. _Phys. Rev. B_**50**, 17953-17979 (1994). URL [https://link.aps.org/doi/10.1103/PhysRevB.50.17953](https://link.aps.org/doi/10.1103/PhysRevB.50.17953).
* Perdew _et al._ [1996] Perdew, J. P., Burke, K. & Ernzerhof, M. Generalized gradient approximation made simple. _Phys. Rev. Lett._**77**, 3865-3868 (1996). URL [https://link.aps.org/doi/10.1103/PhysRevLett.77.3865](https://link.aps.org/doi/10.1103/PhysRevLett.77.3865).
* Perdew _et al._ [2008] Perdew, J. P. _et al._ Restoring the density-gradient expansion for exchange in solids and surfaces. _Phys. Rev. Lett._**100**, 136406 (2008). URL [https://link.aps.org/doi/10.1103/PhysRevLett.100.136406](https://link.aps.org/doi/10.1103/PhysRevLett.100.136406).
* Dudarev _et al._ [1998] Dudarev, S. L., Botton, G. A., Savrasov, S. Y., Humphreys, C. J. & Sutton, A. P. Electron-energy-loss spectra and the structural stability of nickel oxide: An LSDA+U study. _Phys. Rev. B_**57**, 1505-1509 (1998). URL [https://link.aps.org/doi/10.1103/PhysRevB.57.1505](https://link.aps.org/doi/10.1103/PhysRevB.57.1505).
* Lloyd-Williams & Monserrat [2015] Lloyd-Williams, J. H. & Monserrat, B. Lattice dynamics and electron-phonon coupling calculations using nondiagonal supercells. _Phys. Rev. B_**92**, 184301 (2015). URL [https://link.aps.org/doi/10.1103/PhysRevB.92.184301](https://link.aps.org/doi/10.1103/PhysRevB.92.184301).
* Monserrat [2018] Monserrat, B. Electron-phonon coupling from finite differences. _J. Phys. Condens. Matter_**30**, 083001 (2018). URL [https://dx.doi.org/10.1088/1361-648X/aaa737](https://dx.doi.org/10.1088/1361-648X/aaa737).
* Clark _et al._ [2005] Clark, S. J. _et al._ First principles methods using CASTEP. _Zeitschrift für Kristallographie - Crystalline Materials_**220**, 567-570 (2005). URL [https://doi.org/10.1524/zkri.220.5.567.65075](https://doi.org/10.1524/zkri.220.5.567.65075).
* Giannozzi _et al._ [2009] Giannozzi, P. _et al._ QUANTUM ESPRESSO: a modular and open-source software project for quantum simulations of materials. _Journal of Physics: Condensed Matter_**21**, 395502 (2009). URL [https://dx.doi.org/10.1088/0953-8984/21/39/395502](https://dx.doi.org/10.1088/0953-8984/21/39/395502).
* Errea _et al._ [2014] Errea, I., Calandra, M. & Mauri, F. Anharmonic free energies and phonon dispersions from the stochastic self-consistent harmonic approximation: Application to platinum and palladium hydrides. _Phys. Rev. B_**89**, 064302 (2014). URL [https://link.aps.org/doi/10.1103/PhysRevB.89.064302](https://link.aps.org/doi/10.1103/PhysRevB.89.064302).
* Bianco _et al._ [2017] Bianco, R., Errea, I., Paulatto, L., Calandra, M. & Mauri, F. Second-order structural phase transitions, free energy curvature, and temperature-dependent anharmonic phonons in the self-consistent harmonic approximation: Theory and stochastic implementation. _Phys. Rev. B_**96**, 014111 (2017). URL [https://link.aps.org/doi/10.1103/PhysRevB.96.014111](https://link.aps.org/doi/10.1103/PhysRevB.96.014111).
* Monacelli _et al._ [2021] Monacelli, L. _et al._ The stochastic self-consistent harmonic approximation: calculating vibrational properties of materials with full quantum and anharmonic effects. _Journal of Physics: Condensed Matter_**33**, 363001 (2021). URL [https://dx.doi.org/10.1088/1361-648X/ac066b](https://dx.doi.org/10.1088/1361-648X/ac066b).
* Hooton [1955] Hooton, D. LI. a new treatment of anharmonicity in lattice thermodynamics: I. _The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science_**46**, 422-432 (1955). URL [https://doi.org/10.1080/14786440408520575](https://doi.org/10.1080/14786440408520575).
**Supplementary Information for**
**"On the dynamical stability of copper-doped lead apatite"**
Sun-Woo Kim,\({}^{1,\,\ast}\) Kang Wang,\({}^{1}\) Siyu Chen,\({}^{2}\) Lewis J. Conway,\({}^{1,3}\) G. Lucian Pascut,\({}^{4}\) Ion Errea,\({}^{5,6,7}\) Chris J. Pickard,\({}^{1,3}\) and Bartomeu Monserrat\({}^{1,2,\,\,\dagger}\)
\({}^{1}\)_Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS, United Kingdom_
\({}^{2}\)_Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom_
\({}^{3}\)_Advanced Institute for Materials Research, Tohoku University, 2-1-1 Katahira, Aoba, Sendai 980-8577, Japan_
\({}^{4}\)_MANSiD Research Center and Faculty of Forestry, Stefan Cel Mare University (USV), Suceava 720229, Romania_
\({}^{5}\)_Fisika Aplikatua Saila, Gipuzkoako Ingeniaritza Eskola, University of the Basque Country (UPV/EHU), Europa Plaza 1, 20018 Donostia/San Sebastian, Spain_
\({}^{6}\)_Centro de Fisica de Materiales (CSIC-UPV/EHU), Manuel de Lardizabal Pasaelekua 5, 20018 Donostia/San Sebastian, Spain_
\({}^{7}\)_Donostia International Physics Center (DIPC), Manuel de Lardizabal Pasaelekua 4, 20018 Donostia/San Sebastian, Spain_
###### Contents
* 1 **Convergence of electronic structure calculations**
* 2 **Lead apatite**
* 2.1 Phonon dispersion dependence on the coarse **q**-point grid size
* 2.2 Potential energy surface along the imaginary phonon modes of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O
* 3 **Copper-doped lead apatite**
* 3.1 Harmonic phonon dispersions at various levels of theory
* 3.2 Partial density of states analysis for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O
* 3.3 Harmonic analysis for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)
* **Supplementary References**
## Supplementary Note 1 Convergence of electronic structure calculations
We have tested various convergence parameters for the electronic structure calculations underpinning the calculation of the phonon dispersions. We find that an energy cutoff of \(600\,\mathrm{eV}\) and a \(\mathbf{k}\)-point grid size of \(4\times 4\times 5\) are converged for the parent lead apatite Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, as illustrated in Fig. S1. For copper-doped lead apatite Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O we find that a \(\mathbf{k}\)-point grid size of \(6\times 6\times 8\) leads to converged results.
We have also performed a cross-check of the phonon dispersion using three different codes: vasp[1; 2], castep[3] and Quantum Espresso[4]. We have confirmed that these codes
yield very similar results, as depicted in Fig. S2.
## Supplementary Note 2. Lead apatite
### 2.1. Phonon dispersion dependence on the coarse **q**-point grid size
Figure S3 shows the phonon dispersions of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O (top) and Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) (bottom) for different choices of coarse **q**-point grid size. For Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O, a qualitatively correct phonon dispersion is obtained with a coarse grid of size \(2\times 2\times 2\), and the larger grid sizes allow us to confirm that the imaginary frequencies at the K, M, and H points are physical rather than an artifact of Fourier interpolation.
For Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\), a qualitatively correct phonon dispersion is only obtained when all of \(\Gamma\), M, K, and A points are included in the coarse **q**-point grid, which can only be accomplished with a regular grid of minimum size \(6\times 6\times 2\) or alternatively a non-uniform Farey grid of minimum size \((2\times 2\times 2)\cup(3\times 3\times 1)\). We use the latter as it is computationally more efficient.
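To make the grid argument concrete, the short Python sketch below (our own illustration; the high-symmetry points are given in standard hexagonal fractional coordinates, \(\Gamma=(0,0,0)\), M \(=(1/2,0,0)\), K \(=(1/3,1/3,0)\), A \(=(0,0,1/2)\)) enumerates the \(\mathbf{q}\)-points of the two regular grids and checks that only their Farey union contains all four points.

```python
# Enumerate Gamma-centered regular q-point grids in fractional coordinates and
# test which special points each grid (and the Farey union) contains.
from fractions import Fraction
from itertools import product

def regular_grid(n1, n2, n3):
    return {(Fraction(i, n1), Fraction(j, n2), Fraction(k, n3))
            for i, j, k in product(range(n1), range(n2), range(n3))}

grid_222 = regular_grid(2, 2, 2)
grid_331 = regular_grid(3, 3, 1)
farey = grid_222 | grid_331          # the non-uniform (2x2x2) u (3x3x1) grid

special = {
    "Gamma": (Fraction(0), Fraction(0), Fraction(0)),
    "M":     (Fraction(1, 2), Fraction(0), Fraction(0)),
    "K":     (Fraction(1, 3), Fraction(1, 3), Fraction(0)),
    "A":     (Fraction(0), Fraction(0), Fraction(1, 2)),
}
for name, q in special.items():
    print(f"{name:5s}  2x2x2: {q in grid_222}  3x3x1: {q in grid_331}  union: {q in farey}")
```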
We note that all earlier phonon calculations for the parent lead apatite compounds in the literature use coarse **q**-point grids of sizes \(1\times 1\times 1\)[5; 6], \(1\times 1\times 2\)[7], or \(2\times 2\times 2\)[8], and most imaginary phonon modes observed in these calculations are not physical but instead an artifact of Fourier interpolation caused by unconverged calculations. Indeed, the phonon dispersions reported in these works coincide with the corresponding unconverged calculations
depicted in Fig. S3.
### 2.2. Potential energy surface along the imaginary phonon modes of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O
The harmonic phonon dispersion of Pb\({}_{10}\)(PO\({}_{4}\))\({}_{6}\)O exhibits imaginary frequencies at the M, K, and H points of the Brillouin zone, as depicted in the left panel of Fig. S4. The absolute values of these imaginary frequencies are below 2.3 meV, suggesting that the system is only marginally unstable at the harmonic level. To confirm this, we calculate the potential energy surface by displacing the atoms along the eigenvectors associated with the three imaginary modes, resulting in the double well potentials shown in the right panel of Fig. S4. The anharmonic potentials have a dominant quartic term that suppresses the instability to a maximum of 0.2 meV per formula unit for the K point, with smaller instabilities for the M and H points. By comparison, the thermal energy associated with room temperature is about 26 meV, suggesting that these shallow double well potentials can be overcome by thermally-induced anharmonic vibrations. We confirm this in the main text by performing self-consistent harmonic calculations at 50 K that deliver a dynamically stable phonon dispersion. The structure is likely dynamically stable at even lower temperatures, possibly at 0 K where it would be stabilized by quantum fluctuations.
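To put these numbers side by side, the following minimal sketch (with illustrative coefficients of our own choosing, not the fitted surfaces of Fig. S4) evaluates a quartic double well \(E(Q)=\frac{1}{2}a_{2}Q^{2}+\frac{1}{4}a_{4}Q^{4}\) with \(a_{2}<0\) and compares its well depth with the room-temperature thermal energy.

```python
# Double well E(Q) = (1/2) a2 Q^2 + (1/4) a4 Q^4 with a2 < 0: the minima sit at
# Q = +/- sqrt(-a2/a4) and lie a2**2 / (4*a4) below E(0).  The coefficients below
# are placeholders chosen to give a ~0.2 meV well depth, as quoted in the text.
import numpy as np

a2 = -2.0   # assumed harmonic coefficient (meV), negative -> unstable mode
a4 = 5.0    # assumed quartic coefficient (meV)

q_min = np.sqrt(-a2 / a4)      # position of the two minima (dimensionless amplitude)
depth = a2**2 / (4.0 * a4)     # well depth in meV

k_B = 8.617333e-2              # Boltzmann constant in meV/K
kT_room = k_B * 300.0          # ~25.9 meV at room temperature

print(f"well depth   : {depth:.2f} meV at Q = +/- {q_min:.2f}")
print(f"k_B T (300 K): {kT_room:.1f} meV, i.e. about {kT_room / depth:.0f}x the well depth")
```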
## Supplementary Note 3. Copper-doped lead apatite
### 3.1. Harmonic phonon dispersions at various levels of theory
The results regarding the dynamical stability of copper-doped lead apatite presented in the main text are robust against the level of theory used to describe the electronic structure of the system, as summarized in Fig. S5 for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. Panel a shows a comparison between the PBEsol (used in the main text) and PBE exchange-correlation functionals, which give qualitatively similar phonon dispersions and in particular they give similar imaginary branches. Panel b shows a comparison of the phonon dispersion for different values of Hubbard \(U\) applied on the copper \(3d\) orbital. In all cases there is an imaginary phonon branch, but the absolute value of the associated imaginary frequency increases with increasing \(U\). We report results using \(U=3\,\)eV in the main text. For clarity, the evolution with \(U\) of the \(\Gamma\)-point phonon imaginary frequency is detailed in panel c. Finally, panel d shows the imaginary frequency of the \(\Gamma\)-point phonon comparing the non-magnetic, non-magnetic with spin-orbit coupling, and ferromagnetic with spin-orbit coupling results. In all cases, the \(\Gamma\)-point phonon frequency is imaginary, although the magnitude changes by about 10 meV depending on the level of theory. Overall, the harmonic dynamical instability of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O is robust against the level of electronic structure theory used.
The phonon dispersion of the variant copper-doped apatite Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) exhibits a similar behavior to that of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O, as depicted in Fig. S6. In particular, we find a
similar dependence on the Hubbard \(U\) parameter, in which the magnitude of the imaginary frequency increases with increasing \(U\), and a similarly robust imaginary frequency with the inclusion of spin-orbit coupling and ferromagnetic order.
Figure S6: **Harmonic phonon dispersion of Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) at various levels of theory.****a.** Hubbard \(U\) dependence of harmonic phonon band dispersion with \(U\) applied to the copper \(3d\) orbital. **b.** The lowest imaginary \(\Gamma\)-mode energy in the harmonic phonon dispersion as a function of Hubbard \(U\). **c.** Spin-orbit coupling and ferromagnetism effects on the lowest imaginary \(\Gamma\)-mode energy.
### 3.2. Partial density of states analysis for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O
The \(P3\) Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O structure is dynamically unstable at the harmonic level, and its imaginary frequencies drive it to a lower-energy lower-symmetry \(P1\) structure, which is dynamically stable. In the main text, we attribute the lower energy of the \(P1\) structure compared to the \(P3\) structure to a downward shift of the occupied part of the density of states (DOS) dominated by copper-derived orbitals, as illustrated in Fig. S7. We find a similar downward shift of the density of states when comparing the \(P1\) structure doped at the Pb(2) site, as also illustrated in Fig. S7. Overall, this analysis suggests that the dynamical stability of the \(P1\) phases with doping at either the Pb(1) or Pb(2) sites is driven by the relative energy of the copper states.
### 3.3. Harmonic analysis for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\)
In the main text, we have explored the potential energy surface along the imaginary phonon modes for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)O. Here, we perform a similar analysis for Pb\({}_{9}\)Cu(PO\({}_{4}\))\({}_{6}\)(OH)\({}_{2}\) and find similar results. The dominant instability is driven by a \(\Gamma\) point phonon mode (Fig. S8**a**), and fully relaxing the structure along this instability leads to a new structure of \(P1\) symmetry, which is dynamically stable at the harmonic level of theory (Fig. S8**b**). |
2307.16782 | A non-Newtonian approach in differential geometry of curves:
multiplicative rectifying curves | In this paper, we study the rectifying curves in multiplicative Euclidean
space of dimension 3, i.e., those curves for which the position vector always
lies in its rectifying plane. Since the definition of rectifying curve is
affine and not metric, we are directly able to perform multiplicative
differential-geometric concepts to investigate such curves. Having presented
several characterizations, we completely classify the multiplicative rectifying
curves by means of the multiplicative spherical curves. | Muhittin Evren Aydin, Aykut Has, Beyhan Yilmaz | 2023-07-31T15:49:22Z | http://arxiv.org/abs/2307.16782v1 | # A non-Newtonian approach in differential geometry of curves: multiplicative rectifying curves
###### Abstract.
In this paper, we study the rectifying curves in multiplicative Euclidean space of dimension \(3\), i.e., those curves for which the position vector always lies in its rectifying plane. Since the definition of rectifying curve is affine and not metric, we are directly able to perform multiplicative differential-geometric concepts to investigate such curves. Having presented several characterizations, we completely classify the multiplicative rectifying curves by means of the multiplicative spherical curves.
Key words and phrases: Rectifying curve; spherical curve; multiplicative calculus; multiplicative Euclidean space.
2020 Mathematics Subject Classification: Primary 53A04; Secondary 11U10, 08A05.
## 1. Introduction
Derivative and integral, which play a central role in the infinitesimal (Newtonian) calculus, are extensions of the arithmetic operations of addition and subtraction. Hence, it is reasonable to expect that alternative arithmetic operations will also engender alternative calculi. In this sense, Volterra, and Grossman and Katz, in their pioneering works [28, 29, 41], independently introduced many calculi different from the Newtonian one (the geometric calculus, the anageometric calculus, etc.), among which we are interested in the _multiplicative calculus_. The multiplicative calculus is so-called because it depends on the operations of multiplication and division.
In recent decades, there has been a growing interest in developing the theory and applications of multiplicative calculus. From a mathematical point of view, the contribution to a non-Newtonian calculus is of interest in its own right. Non-Newtonian approaches can be found, for example, in complex analysis [9, 11, 31, 39], in differential equations [10, 37, 42, 43, 44], in numerical analysis [1, 12, 32, 33, 35, 45], in algebra [13, 17], in variational analysis [40], and in spectral and Dirac system theories [25, 26, 30, 48]. In addition, multiplicative calculus has remarkable applications in dynamical systems [1, 2, 3, 38], in economics [18, 20, 34], and in image analysis [21, 36].
In this paper, we investigate the differential-geometric curves using the tools of multiplicative calculus. As far as the authors are aware, there has been no such attempt in the literature, except the comprehensive book by Svetlin [22] published in 2022. In his book, the author carried out the multiplicative tools for the study of differential-geometric curves, surfaces and higher-dimensional objects. Here, we pursue two goals: The first is to mathematically enrich the multiplicative calculus by adding differential-geometric interpretations. The second is to show that, in some cases, non-Newtonian derivatives and integrals need to be performed in differential geometry instead of the usual derivatives and integrals.
More clearly, consider the following subset of \(\mathbb{R}^{2}\) (see Figure 2)
\[C=\{(x,y)\in\mathbb{R}^{2}:(\log x)^{2}+(\log y)^{2}=1,x,y>0\}.\]
We also can parameterize this set as \(x(t)=e^{\cos(\log t)}\) and \(y(t)=e^{\sin(\log t)}\), \(t>0\). If we use the usual arithmetic operations, derivative and integral, then it would not be easy to understand what the set \(C\) expresses geometrically. With or without the help of computer programs, we cannot even calculate its basic invariants, e.g., the arc length function \(s(t)\) is given by a complicated integral
\[s(t)=\int^{t}\frac{1}{u}\left((\sin(\log u)e^{\cos(\log u)})^{2}+(\cos(\log u )e^{\sin(\log u)})^{2}\right)^{1/2}du.\]
However, applying the multiplicative tools, we see that \(C\) is indeed a multiplicative circle parameterized by the multiplicative arc length whose center is \((1,1)\) and radius \(e\), which is one of the simplest multiplicative curves (see Section 3). This is the reason why, in some cases, the multiplicative tools need to be applied instead of the usual ones.
In the case of the dimension \(3\), the determination of geometric objects sometimes becomes more difficult when the multiplicative tools are omitted. For example, consider the following parameterized curve (see Fig. 2)
\[x(t)=e^{\sec(\log t)/\sqrt{2}},\quad y(t)=e^{\sec(\log t)\cos(\log t^{\sqrt{2} })/\sqrt{2}},\quad z(t)=e^{\sec(\log t)\sin(\log t^{\sqrt{2}})/\sqrt{2}},t>0. \tag{1}\]
As in the case of the multiplicative circle, it would not be easy to work geometrically on this curve by means of the usual differential geometry. However, from the perspective of multiplicative differential geometry, it is a multiplicative rectifying curve, a type of parameterized space curve introduced by B.-Y. Chen [14] in 2003.
On the other hand, if one uses the multiplicative derivative together with the usual arithmetic operations, then one runs into difficulties; for example, such a derivative does not satisfy properties such as linearity, the chain rule, and the Leibniz rule (see [8]). However, these are essential tools for establishing a differential-geometric theory. To overcome these difficulties, Svetlin [22] proposed a useful idea, which is explained as follows.
Let \(\mathbb{R}^{n}\) be the real vector space of the dimension \(n\geq 2\). A _multiplicative Euclidean space_\(\mathbb{E}^{n}_{*}\) is the pair \((\mathbb{R}^{n},\langle,\rangle_{*})\) where \(\langle,\rangle_{*}\) is the so-called _multiplicative Euclidean inner product_. We note that the usual vector addition and scalar multiplication on \(\mathbb{R}^{n}\) are now replaced with the multiplicative operations (see Section 2). With these new arithmetic operations, the multiplicative derivative has the properties that a derivative has. Therefore, it is now suitable for our purpose.
We would like to emphasize the importance of rectifying curves, whose position vector always lies in their rectifying plane, because of their close relationship to spherical curves, helices, geodesics and centrodes in mechanics, see [15, 16, 19]. In addition, we point out that the underlying space of \(\mathbb{E}^{3}_{*}\) is \(\mathbb{R}^{3}\) and that the definition of rectifying curve is affine and not metric. Hence, if we want to examine the geometric features of rectifying curves, we may directly use the multiplicative differential-geometric concepts. These are the reasons why we consider such curves in \(\mathbb{E}^{3}_{*}\).
The structure of the paper is as follows. After some preliminaries on multiplicative algebra and calculus (Section 2), we will recall in Section 3 the curvatures and Frenet formulas of multiplicative space curves. In Proposition 3.3, we will characterize multiplicative curves lying on a multiplicative sphere in terms of their curvatures. In Section 4, we will introduce the multiplicative counterpart to the notion of rectifying curves. We will completely classify multiplicative rectifying curves using multiplicative spherical curves (Theorem 4.5). Before that, however, we will need to prove some results characterizing rectifying curves in terms of their curvatures and the multiplicative distance function (Propositions 4.1, 4.2, 4.3, 4.4).
## 2. Preliminaries
In the present section we recall the multiplicative arguments from algebra and calculus, cf. [22, 23, 24].
Let \(\mathbb{R}_{*}\) be the set of all the positive real numbers. For \(a,b\in\mathbb{R}_{*}\) we set
\[a+_{*}b = e^{\log a+\log b},\] \[a-_{*}b = e^{\log a-\log b},\] \[a\cdot_{*}b = e^{\log a\cdot\log b},\] \[a/_{*}b = e^{\log a/\log b},\quad b\neq 1.\]
Here, for some \(a\in\mathbb{R}_{*}\), we also have
\[a^{2_{*}}=e^{(\log a)^{2}},\quad a^{\frac{1}{2}_{*}}=e^{\sqrt{\log a}}.\]
It is direct to conclude that the triple \((\mathbb{R}_{*},+_{*},\cdot_{*})\) is a field. We denote by \(\mathbb{R}_{*}\) this field for convenience. Introduce
\[|a|_{*}=\left\{\begin{array}{ll}a,&a\in[1,\infty)\\ 1/a,&a\in(0,1).\end{array}\right.\]
Given \(x\mapsto f(x)\in\mathbb{R}_{*}\), \(x\in I\subset\mathbb{R}_{*}\), the _multiplicative derivative_ of \(f\) at \(x\) is defined by
\[f^{*}(x)=\lim_{h\to 1}(f(x+_{*}h)-_{*}f(x))/_{*}h.\]
In terms of the usual arithmetical operations,
\[f^{*}(x)=\lim_{h\to 1}\left(\frac{f(hx)}{f(x)}\right)^{1/\log h},\]
or by L' Hospital's rule,
\[f^{*}(x)=e^{x(\log f(x))^{\prime}},\]
where the prime is the usual derivative with respect to \(x\).
Denote by \(f^{*(n)}(x)\) the \(n\)th-order multiplicative derivative of \(f(x)\) which is the multiplicative derivative of \(f^{*(n-1)}(x)\), for some positive integer \(n>1\). We call the function \(f(x)\)_multiplicative differentiable_ on \(I\) if \(f^{*(n)}(x)\) exists where, if necessary, \(n\) may be extended in maximum order that will be needed.
We may easily conclude that the multiplicative derivative holds some essential properties as linearity, Leibniz rule and chain rules (see [23]).
The _multiplicative integral_ of the function \(f(x)\) in an interval \([a,b]\subset\mathbb{R}_{*}\) is defined by
\[\int_{*a}^{b}f(x)\cdot_{*}d_{*}x=e^{\int_{a}^{b}\frac{1}{x}\log f(x)dx}.\]
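To make these definitions concrete, the following minimal numerical sketch (function names are ours) implements the multiplicative operations, the multiplicative derivative via \(f^{*}(x)=e^{x(\log f(x))^{\prime}}\) with a central finite difference, and the multiplicative integral via a trapezoidal rule applied to \(\frac{1}{x}\log f(x)\).

```python
import numpy as np

def m_add(a, b): return np.exp(np.log(a) + np.log(b))   # a +_* b
def m_sub(a, b): return np.exp(np.log(a) - np.log(b))   # a -_* b
def m_mul(a, b): return np.exp(np.log(a) * np.log(b))   # a ._* b
def m_div(a, b): return np.exp(np.log(a) / np.log(b))   # a /_* b  (b != 1)

def m_derivative(f, x, h=1e-6):
    """Multiplicative derivative f*(x) = exp(x * d/dx log f(x))."""
    dlogf = (np.log(f(x + h)) - np.log(f(x - h))) / (2 * h)
    return np.exp(x * dlogf)

def m_integral(f, a, b, n=100_001):
    """Multiplicative integral over [a, b]: exp( int_a^b (1/x) log f(x) dx )."""
    x = np.linspace(a, b, n)
    g = np.log(f(x)) / x
    return np.exp(np.sum(0.5 * (g[:-1] + g[1:]) * np.diff(x)))

# 0_* = 1 and 1_* = e act as the additive and multiplicative units
assert np.isclose(m_add(5.0, 1.0), 5.0)        # a +_* 0_* = a
assert np.isclose(m_mul(5.0, np.e), 5.0)       # a ._* 1_* = a

print(m_derivative(lambda x: x, 2.5))                          # ~e: the derivative of f(x) = x is 1_*
print(m_integral(lambda x: np.full_like(x, np.e), 1.0, 4.0))   # ~4: integrating 1_* from 0_* = 1 to s gives s
```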
Let \(\mathbb{R}_{*}^{n}=\{(x_{1},...,x_{n}):x_{1},...,x_{n}\in\mathbb{R}_{*}\}\) and \(\mathbf{x},\mathbf{y}\in\mathbb{R}_{*}^{n}\). Then \(\mathbb{R}_{*}^{n}\) is a vector space on \(\mathbb{R}_{*}\) with the pair of operations
\[\mathbf{x}+_{*}\mathbf{y} = (x_{1}+_{*}y_{1},...,x_{n}+_{*}y_{n})=(x_{1}y_{1},...,x_{n}y_{n}),\] \[a\cdot_{*}\mathbf{x} = (a\cdot_{*}x_{1},...,a\cdot_{*}x_{n})=(e^{\log a\log x_{1}},...,e^{\log a\log x_{n}}),\quad a\in\mathbb{R}_{*}.\]
The elements of _multiplicative canonical basis_ of \(\mathbb{R}_{*}^{n}\) are
\[\mathbf{e}_{1}=(1_{*},0_{*},...,0_{*}),\mathbf{e}_{2}=(0_{*},1_{*},...,0_{*}),...,\mathbf{e}_{n}=(0_{*},0_{*},...,1_{*}),\]
where \(0_{*}=1\) and \(1_{*}=e\). Also, denote by \(\mathbf{0}_{*}=(0_{*},...,0_{*})\) the multiplicative zero vector.
A positive-definite scalar product on \(\mathbb{R}_{*}^{n}\) is defined by
\[\langle\mathbf{x},\mathbf{y}\rangle_{*}=x_{1}\cdot_{*}y_{1}+_{*}...+_{*}x_{n }\cdot_{*}y_{n}=e^{\log x_{1}\log y_{1}+...+\log x_{n}\log y_{n}}.\]
We call \(\langle,\rangle_{*}\)_multiplicative Euclidean inner product_ and \((\mathbb{R}_{*}^{n},\langle,\rangle_{*})\)_multiplicative Euclidean space_ denoted by \(\mathbb{E}_{*}^{n}\). Also, we call two vectors \(\mathbf{x}\) and \(\mathbf{y}\)_multiplicative orthogonal_ if \(\langle\mathbf{x},\mathbf{y}\rangle_{*}=0_{*}\). The induced _multiplicative Euclidean norm_\(\|\cdot\|_{*}\) on \(\mathbb{E}_{*}^{n}\) is
\[\|\mathbf{x}\|_{*}=(\langle\mathbf{x},\mathbf{x}\rangle_{*})^{\frac{1}{2}_{*}}=e^{\sqrt{(\log x_{1})^{2}+...+(\log x_{n})^{2}}}.\]
A vector \(\mathbf{x}\) with \(\|\mathbf{x}\|_{*}=1_{*}\) is said to be _multiplicative unitary_.
Let \(\theta\in[0_{*},e^{\pi}]\). We then introduce
\[\cos_{*}\theta=e^{\cos(\log\theta)},\quad\arccos_{*}\phi=e^{\arccos(\log\phi)},\]
for \(\phi\in[e^{-1},1_{*}]\). Then, the _multiplicative radian measure_ of _multiplicative angle_ between \(\mathbf{x}\) and \(\mathbf{y}\) is defined by
\[\theta=\arccos_{*}\left(\langle\mathbf{x},\mathbf{y}\rangle_{*}/_{*}(\| \mathbf{x}\|_{*}\cdot_{*}\|\mathbf{y}\|_{*})\right)=\arccos_{*}\left(e^{\frac{ \log\langle\mathbf{x},\mathbf{y}\rangle_{*}}{\log\|\mathbf{x}\|_{*}\log\| \mathbf{y}\|_{*}}}\right).\]
The _multiplicative cross product_ of \(\mathbf{x}\) and \(\mathbf{y}\) in \(\mathbb{E}_{*}^{3}\) is defined by
\[\mathbf{x}\times_{*}\mathbf{y}=(e^{\log x_{2}\log y_{3}-\log x_{3}\log y_{2}}, e^{\log x_{3}\log y_{1}-\log x_{1}\log y_{3}},e^{\log x_{1}\log y_{2}-\log x_{2} \log y_{1}}).\]
It is direct to prove that the multiplicative cross product holds the standard algebraic and geometric properties. For example, \(\mathbf{x}\times_{*}\mathbf{y}\) is multiplicative orthogonal to \(\mathbf{x}\) and \(\mathbf{y}\). In addition, \(\mathbf{x}\times_{*}\mathbf{y}=\mathbf{0}_{*}\) if and only if \(\mathbf{x}\) and \(\mathbf{y}\) are multiplicative collinear.
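The following small sketch (function names ours) implements the multiplicative inner product, norm and cross product numerically; in log-coordinates each of them reduces to the corresponding ordinary Euclidean operation, which is what the code exploits.

```python
import numpy as np

def m_inner(x, y): return np.exp(np.dot(np.log(x), np.log(y)))       # <x, y>_*
def m_norm(x):     return np.exp(np.linalg.norm(np.log(x)))          # ||x||_*
def m_cross(x, y): return np.exp(np.cross(np.log(x), np.log(y)))     # x x_* y

e1 = np.array([np.e, 1.0, 1.0])   # = e_1, log-coordinates (1, 0, 0)
e2 = np.array([1.0, np.e, 1.0])   # = e_2, log-coordinates (0, 1, 0)

print(m_inner(e1, e2))   # 1.0 = 0_*: e_1 and e_2 are multiplicative orthogonal
print(m_norm(e1))        # e   = 1_*: e_1 is multiplicative unitary
print(m_cross(e1, e2))   # [1. 1. 2.718...] = e_3
```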
A _multiplicative line_ passing through a point \(P=(p_{1},p_{2},p_{3})\) and multiplicative parallel to \(\mathbf{v}=(v_{1},v_{2},v_{3})\) is a subset of \(\mathbb{E}_{*}^{3}\) defined by
\[\{Q=(q_{1},q_{2},q_{3})\in\mathbb{E}_{*}^{3}:Q=P+_{*}t\cdot_{*}\mathbf{v}\},\]
where \(q_{i}=e^{\log p_{i}+\log t\log v_{i}}\), \(i=1,2,3\). We point out that the multiplicative parallelism is algebraically equivalent to multiplicative collinearity.
A _multiplicative plane_ passing through a point \(P\) and multiplicative orthogonal to \(\mathbf{v}\) is a subset of \(\mathbb{E}_{*}^{3}\) defined by
\[\{Q\in\mathbb{E}_{*}^{3}:\langle Q-_{*}P,\mathbf{v}\rangle_{*}=0_{*}\},\]
where
\[e^{(\log q_{1}-\log p_{1})\log v_{1}+(\log q_{2}-\log p_{2})\log v_{2}+(\log q _{3}-\log p_{3})\log v_{3}}=0_{*}.\]
A _multiplicative sphere_ with radius \(r>0_{*}\) and centered at \(C=(c_{1},c_{2},c_{3})\in\mathbb{E}_{*}^{3}\) is a subset of \(\mathbb{E}_{*}^{3}\) defined by
\[\{Q\in\mathbb{E}_{*}^{3}:\|Q-_{*}C\|_{*}=r\},\]
where
\[e^{(\log q_{1}-\log c_{1})^{2}+(\log q_{2}-\log c_{2})^{2}+(\log q_{3}-\log c _{3})^{2}}=e^{(\log r)^{2}}.\]
## 3. Differential geometry of curves in \(\mathbb{E}_{*}^{3}\)
Consider a function \(\mathbf{x}:I\subset\mathbb{R}_{*}\to\mathbb{E}_{*}^{3}\) where \(s\mapsto\mathbf{x}(s)=(x_{1}(s),x_{2}(s),x_{3}(s))\). Suppose that \(x_{i}(s)\) (\(i=1,2,3\)) is multiplicative differentiable on \(I\), setting \(\mathbf{x}^{*}(s)=(x_{1}^{*}(s),x_{2}^{*}(s),x_{3}^{*}(s))\).
We call the subset \(\mathcal{C}\subset\mathbb{E}_{*}^{3}\) which is the range of \(\mathbf{x}(s)\) a _multiplicative curve_. Here \(\mathbf{x}(s)\) is said to be a _multiplicative parametrization_ of \(\mathcal{C}\). We also call \(\mathbf{x}(s)\) a
multiplicative regular parameterized curve_ if nowhere \({\bf x}^{*}(s)\) is \({\bf 0}_{*}\). If, also, \(\|{\bf x}^{*}(s)\|_{*}=1_{*}\), or equivalently,
\[e^{(\log x_{1}^{*})^{2}+(\log x_{2}^{*})^{2}+(\log x_{3}^{*})^{2}}=1_{*},\quad \mbox{ for every }s\in I,\]
then we call that \({\mathcal{C}}\) is parametrized by _multiplicative arc length_.
We may easily observe that a multiplicative arc length parameter is independent from the multiplicative translations. In addition, one always could find a multiplicative arc length parameter of the curve \({\mathcal{C}}\) (see [22]). In the remaining part, unless otherwise specified, we will assume that \({\bf x}(s)\) in \({\mathbb{E}}_{*}^{3}\) is a multiplicative arc length parameterized curve.
We call that \({\bf u}(t)=(u_{1}(t),...,u_{n}(t))\) is a _multiplicative differentiable vector field_ along the curve \({\mathcal{C}}\) if each \(u_{i}(t)\) is multiplicative differentiable on \(I\), \(i=1,...,n\). If \({\bf u}(t)\) and \({\bf v}(t)\) are two multiplicative differentiable vector fields on \({\mathcal{C}}\), then it is direct to conclude
\[(\langle{\bf u}(t),{\bf v}(t)\rangle_{*})^{*}=\langle{\bf u}^{*}(t),{\bf v}(t) \rangle_{*}+_{*}\langle{\bf u}(t),{\bf v}^{*}(t)\rangle_{*}. \tag{2}\]
Assume that \({\bf x}(s)\) is _multiplicative biregular_, that is, nowhere \({\bf x}^{*}(s)\) and \({\bf x}^{**}(s)\) are multiplicative collinear. We consider a trihedron \(\{{\bf t}(s),{\bf n}(s),{\bf b}(s)\}\) along \({\bf x}(s)\), so-called _multiplicative Frenet frame_, where
\[\begin{array}{l}{\bf t}(s)={\bf x}^{*}(s),\\ {\bf n}(s)={\bf x}^{**}(s)/_{*}\|{\bf x}^{**}(s)\|_{*},\\ {\bf b}(s)={\bf t}(s)\times_{*}{\bf n}(s).\end{array}\]
Hence, by setting \({\bf t}(s)=(t_{1}(s),t_{2}(s),t_{3}(s))\), \({\bf n}(s)=(n_{1}(s),n_{2}(s),n_{3}(s))\) and \({\bf b}(s)=(b_{1}(s),b_{2}(s),b_{3}(s))\), we have
\[\begin{array}{l}t_{i}=x_{i}^{*},\\ n_{i}=e^{\frac{\log x_{i}^{**}}{\sqrt{(\log x_{1}^{**})^{2}+(\log x_{2}^{**})^{2}+(\log x_{3}^{**})^{2}}}},\\ b_{i}=e^{(-1)^{j+k-1}(\log t_{j}\log n_{k}-\log t_{k}\log n_{j})},\quad i,j,k\in\{1,2,3\},\end{array}\]
where \(j\neq i\neq k\) and \(j<k\).
The vector field \({\bf t}(s)\) (resp. \({\bf n}(s)\) and \({\bf b}(s)\)) along \({\bf x}(s)\) is said to be _multiplicative tangent_ (resp. _principal normal_ and _binormal_). It is direct to prove that \(\{{\bf t}(s),{\bf n}(s),{\bf b}(s)\}\) is mutually multiplicative orthogonal and \({\bf n}(s)\times_{*}{\bf b}(s)={\bf t}(s)\) and \({\bf b}(s)\times_{*}{\bf t}(s)={\bf n}(s)\). We also point out that the arc length parameter and multiplicative Frenet frame are independent from the choice of multiplicative parametrization [22].
We define the _multiplicative curvature_\(\kappa(s)\) and _torsion_\(\tau(s)\) as
\[\kappa(s)=\|{\bf x}^{**}(s)\|_{*}=e^{\sqrt{(\log x_{1}^{**}(s))^{2}+(\log x_{2}^{**}(s))^{2}+(\log x_{3}^{**}(s))^{2}}}\]
\[\tau(s)=\langle\mathbf{n}^{*}(s),\mathbf{b}(s)\rangle_{*}=e^{\log n_{1}^{*}(s)\log b _{1}(s)+\log n_{2}^{*}(s)\log b_{2}(s)+\log n_{3}^{*}(s)\log b_{3}(s)}.\]
The _multiplicative Frenet formulas_ are now
\[\begin{array}{l}\mathbf{t}^{*}=\kappa\cdot_{*}\mathbf{n},\\ \mathbf{n}^{*}=-_{*}\kappa\cdot_{*}\mathbf{t}+_{*}\tau\cdot_{*}\mathbf{b}\\ \mathbf{b}^{*}=-_{*}\tau\cdot_{*}\mathbf{n},\end{array}\]
or equivalently,
\[\begin{array}{l}t_{i}^{*}=e^{\log\kappa\log n_{i}},\\ n_{i}^{*}=e^{-\log\kappa\log t_{i}+\log\tau\log b_{i}},\\ b_{i}^{*}=e^{-\log\tau\log n_{i}},\quad i=1,2,3.\end{array}\]
We call \(\mathbf{x}(s)\)_multiplicative twisted_ if nowhere \(\kappa(s)\) and \(\tau(s)\) is \(0_{*}\). The multiplicative analogous of the fundamental theorem for space curves is the following (see [22, p. 132-135]).
**Theorem 3.1** (Existence).: _[_22_]_ _Given multiplicative differentiable functions \(f(s)>0_{*}\), \(g(s)\) on \(I\). Then, there is a unique multiplicative parametrized curve whose \(\kappa(s)=f(s)\) and \(\tau(s)=g(s)\)._
**Theorem 3.2** (Uniqueness).: _[_22_]_ _Given two multiplicative parametrized curves \(\mathbf{x}(s)\) and \(\mathbf{y}(s)\), \(s\in I\), whose \(\kappa_{\mathbf{x}}(s)=\kappa_{\mathbf{y}}(s)\) and \(\tau_{\mathbf{x}}(s)=\tau_{\mathbf{y}}(s)\). Then, there is a multiplicative rigid motion \(F\) such that \(\mathbf{y}(s)=F(\mathbf{x}(s))\)._
The _multiplicative osculating_ (resp. _normal_ and _rectifying_) _plane_ of \(\mathbf{x}(s)\) at some \(s\in I\) is a multiplicative plane passing through \(\mathbf{x}(s)\) and multiplicative orthogonal to \(\mathbf{b}(s)\) (resp. \(\mathbf{t}(s)\) and \(\mathbf{n}(s)\)).
We notice that \(\mathbf{x}(s)\) is a subset of a multiplicative line if and only if \(\kappa(s)\) is \(0_{*}\) on \(I\). Analogously, \(\mathbf{x}(s)\) is a subset of multiplicative osculating plane itself if and only if \(\tau(s)\) is \(0_{*}\) on \(I\). We call \(\mathbf{x}(s)\) is a _multiplicative spherical curve_ if it is a subset of a multiplicative sphere.
In terms of the multiplicative Frenet frame, we can write the decomposition for \(\mathbf{x}(s)\) as
\[\mathbf{x}(s)=\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\cdot_{*}\mathbf{t }(s)+_{*}\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}\cdot_{*}\mathbf{n}(s)+ _{*}\langle\mathbf{x}(s),\mathbf{b}(s)\rangle_{*}\cdot_{*}\mathbf{b}(s), \tag{3}\]
or equivalently,
\[x_{i}(s)=e^{\log\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\log t_{i}+\log \langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}\log n_{i}+\log\langle\mathbf{x} (s),\mathbf{b}(s)\rangle_{*}\log b_{i}},\quad i=1,2,3. \tag{4}\]
Here we call the functions \(\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\), \(\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}\) and \(\langle\mathbf{x}(s),\mathbf{b}(s)\rangle_{*}\) the _multiplicative tangential, principal normal_ and _binormal components_ of \(\mathbf{x}(s)\), respectively. In addition, denote by \(\mathbf{x}^{\uparrow_{*}}\) the multiplicative tangential component and by \(\mathbf{x}^{\perp_{*}}\) multiplicative _normal_ component. Then,
\[\|\mathbf{x}^{\uparrow_{*}}(s)\|_{*}=\langle\mathbf{x}(s),\mathbf{t}(s)\rangle _{*}=e^{\log x_{1}\log t_{1}+\log x_{2}\log t_{2}+\log x_{3}\log t_{3}},\]
and
\[\|{\bf x}^{\perp_{*}}(s)\|_{*}=e^{\sqrt{(\log\langle{\bf x}(s),{\bf n }(s)\rangle_{*})^{2}+(\log\langle{\bf x}(s),{\bf b}(s)\rangle_{*})^{2}}}\] \[=e^{\sqrt{(\log x_{1}\log n_{1}+\log x_{2}\log n_{2}+\log x_{3}\log n _{3})^{2}+(\log x_{1}\log b_{1}+\log x_{2}\log b_{2}+\log x_{3}\log b_{3})^{2}}}.\]
Next, we give a characterization of the multiplicative spherical curves with \(\kappa,\tau\neq 0_{*}\) in terms of their multiplicative position vectors. Without loss of generality, we will assume that \(\mathbb{S}_{*}^{2}\) is a multiplicative sphere with radius \(r>0_{*}\) and centered at \({\bf 0}_{*}\).
**Proposition 3.3**.: _Let \({\bf x}(s)\subset\mathbb{E}_{*}^{3}\) be a multiplicative twisted curve. If \({\bf x}(s)\subset\mathbb{S}_{*}^{2}\), then_
\[{\bf x}(s)=(e^{-1}/_{*}\kappa(s))\cdot_{*}{\bf n}(s)+_{*}((e^{-1}/_{*}\kappa( s))^{*}/_{*}\tau(s))\cdot_{*}{\bf b}(s)\]
_or equivalently_
\[x_{i}(s)=e^{-\log n_{i}(s)/\log\kappa(s)+(-1/\log\kappa(s))^{*}\log b_{i}(s)/ \log\tau(s)}.\]
Proof.: By the assumption, because \({\bf x}(s)\subset\mathbb{S}_{*}^{2}\), we have \(\|{\bf x}(s)\|_{*}=r\), for every \(s\). It is equivalent to
\[\langle{\bf x}(s),{\bf x}(s)\rangle_{*}=e^{(\log r)^{2}}. \tag{5}\]
If we take multiplicative differentiation of both-hand sides in Eq. (5) with respect to \(s\) then we have
\[(\langle{\bf x}(s),{\bf x}(s)\rangle_{*})^{*}=\left(e^{(\log r)^{2}}\right)^{ *},\]
or, by Eq. (2),
\[e^{2}\cdot_{*}\langle{\bf x}(s),{\bf x}^{*}(s)\rangle_{*}=0_{*}. \tag{6}\]
We understand from Eq. (6) that the multiplicative tangential component of \({\bf x}(s)\) is
\[\langle{\bf x}(s),{\bf t}(s)\rangle_{*}=0_{*}. \tag{7}\]
Again we take multiplicative differentiation of both-hand sides in Eq. (7) with respect to \(s\), and we have
\[(\langle{\bf x}(s),{\bf t}(s)\rangle_{*})^{*}=(0_{*})^{*},\]
or
\[\langle{\bf t}(s),{\bf t}(s)\rangle_{*}+_{*}\langle{\bf x}(s),{\bf t}^{*}(s) \rangle_{*}=0_{*}.\]
Using the multiplicative Frenet formulas,
\[1_{*}+_{*}\kappa(s)\cdot_{*}\langle{\bf x}(s),{\bf n}(s)\rangle_{*}=0_{*}\]
or
\[\langle{\bf x}(s),{\bf n}(s)\rangle_{*}=e^{-1}/_{*}\kappa(s). \tag{8}\]
If we apply the same arguments in Eq. (8) and consider Eq. (7), then we obtain
\[\langle{\bf x}(s),{\bf b}(s)\rangle_{*}=(e^{-1}/_{*}\kappa(s))^{*}/_{*}\tau(s). \tag{9}\]
The result of the theorem follows by replacing Eqs. (7), (8), (9) in Eqs. (3) and (4), respectively.
As a direct consequence of Proposition 3.3, if \(\mathbf{x}(s)\) is multiplicative twisted and if \(\mathbf{x}(s)\subset\mathbb{S}_{*}^{2}\), then
\[r=\left((e^{-1}/_{*}\kappa(s))^{2_{*}}+_{*}\left((e^{-1}/_{*}\kappa(s))^{*}/_{*}\tau(s)\right)^{2_{*}}\right)^{\frac{1}{2}_{*}}\]
or equivalently,
\[(\log r)^{2}=\left(\frac{1}{\log\kappa(s)}\right)^{2}+\left(\frac{s\left(-1/\log\kappa(s)\right)^{\prime}}{\log\tau(s)}\right)^{2},\]
where the prime \({}^{\prime}\) denotes the usual derivative with respect to \(s\).
We point out that Proposition 3.3 is valid for a multiplicative twisted curve. However, even in the case \(\tau(s)=0_{*}\), we may use the arguments given in its proof. More explicitly, assume that \(\mathbf{x}(s)\subset\mathbb{S}_{*}^{2}\) with \(\tau(s)=0_{*}\). Then, if we take multiplicative differentiation in Eq. (8), we derive that \(\langle\mathbf{x}(s),\mathbf{b}(s)\rangle_{*}=0_{*}\), implying that \(\mathbf{x}(s)\) and \(\mathbf{n}(s)\) are multiplicative collinear due to Eq. (7). Hence, we conclude that \(\mathbf{x}(s)=(e^{-1}/_{*}\kappa(s))\cdot_{*}\mathbf{n}(s)\) and \(r=|e^{-1}/_{*}\kappa(s)|_{*}\), or equivalently,
\[x_{i}(s)=e^{(-1/\log\kappa(s))\log n_{i}(s)}\quad\text{and}\quad r=|e^{-1/\log \kappa(s)}|_{*},\quad i=1,2,3.\]
As an example, we may take such a multiplicative spherical curve of \(\mathbb{S}_{*}^{2}\) with radius \(1_{*}\), called _multiplicative great circle_, as follows. Let \(\Gamma\) pass through \(\mathbf{0}_{*}\) and multiplicative orthogonal to \(\mathbf{e}_{3}\). Consider the multiplicative equator (see Fig. 1)
\[\mathcal{C}=\Gamma\cap\mathbb{S}_{*}^{2}=\{(x,y,z)\in\mathbb{E}_{*}^{3}:(\log x )^{2}+(\log y)^{2}=1,z=1\}.\]
It can be parametrized by its multiplicative arclength as \(\mathbf{x}(s)=\left(e^{\cos\log s},e^{\sin\log s},1\right)\). Here, by a direct calculation, we may observe that
\[\kappa(s)=1_{*}\quad\text{and}\quad\mathbf{n}(s)=\left(e^{-\cos\log s},e^{- \sin\log s},1\right),\]
implying \(\mathbf{x}(s)=e^{-1}\cdot_{*}\mathbf{n}(s)\).
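These claims can be checked numerically. The sketch below (ours) approximates the multiplicative derivatives of \(\mathbf{x}(s)\) by central differences on the componentwise logarithms and confirms that \(\|\mathbf{x}^{*}(s)\|_{*}=1_{*}\), that \(\kappa(s)=\|\mathbf{x}^{**}(s)\|_{*}=1_{*}=e\), and that \(e^{-1}\cdot_{*}\mathbf{n}(s)\) reproduces \(\mathbf{x}(s)\) at a sample parameter value.

```python
import numpy as np

def m_derivative_vec(f, s, h=1e-5):
    """Componentwise multiplicative derivative of a curve f : R_* -> E_*^3."""
    dlog = (np.log(f(s + h)) - np.log(f(s - h))) / (2 * h)
    return np.exp(s * dlog)

def m_norm(v):
    return np.exp(np.linalg.norm(np.log(v)))

def x(s):  # the multiplicative equator / great circle
    return np.array([np.exp(np.cos(np.log(s))), np.exp(np.sin(np.log(s))), 1.0])

s0 = 2.0
x_star  = m_derivative_vec(x, s0)                                   # t(s0)
x_star2 = m_derivative_vec(lambda s: m_derivative_vec(x, s), s0)    # x**(s0)

print(m_norm(x_star))            # ~e = 1_*: multiplicative arc length parametrization
print(m_norm(x_star2))           # ~e = 1_*: kappa(s0) = 1_*, so n(s0) = x**(s0)
print(np.exp(-np.log(x_star2)))  # e^{-1} ._* n(s0), approximately equal to x(s0)
print(x(s0))
```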
## 4. Multiplicative rectifying curves
In this section we will introduce the multiplicative analogous of the rectifying curves and then present some characterization and classification results.
Given a multiplicative biregular curve \(\mathbf{x}(s)\) in \(\mathbb{E}_{*}^{3}\), \(s\in I\subset\mathbb{R}_{*}\), where \(\{\mathbf{t}(s),\mathbf{n}(s),\mathbf{b}(s)\}\) is the multiplicative Frenet frame. We call \(\mathbf{x}(s)\) a _multiplicative rectifying curve_ if the multiplicative principal normal component is \(0_{*}\). In other words, \(\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}=0_{*}\), for every \(s\in I\) and so Eq. (3) is now
\[\mathbf{x}(s)=\lambda(s)\cdot_{*}\mathbf{t}(s)+_{*}\mu(s)\cdot_{*}\mathbf{b}( s), \tag{10}\]
where \(\lambda(s)=\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\) and \(\mu(s)=\langle\mathbf{x}(s),\mathbf{b}(s)\rangle_{*}\). In terms of the usual operations, Eq. (10) writes as
\[x_{i}(s)=e^{\log\lambda(s)\log t_{i}(s)+\log\mu(s)\log b_{i}(s)},\quad i=1,2,3.\]
The first result of this section characterizes the multiplicative rectifying curves in terms of their multiplicative tangent and binormal components.
**Proposition 4.1**.: _If \(\mathbf{x}(s)\subset\mathbb{E}_{*}^{3}\) with \(\kappa\neq 0_{*}\) is a multiplicative rectifying curve, then nowhere the multiplicative torsion is \(0_{*}\) and_
\[\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}=s+_{*}a=as,\quad\langle\mathbf{ x}(s),\mathbf{b}(s)\rangle_{*}=b,\quad a,b\in\mathbb{R}_{*},b\neq 0_{*}.\]
_The converse statement is true as well._
Proof.: Assume that \(\mathbf{x}(s)\) is a multiplicative rectifying curve. Taking a multiplicative differentiation in Eq. (10),
\[\mathbf{x}^{*}(s)=\lambda^{*}(s)\cdot_{*}\mathbf{t}(s)+_{*}\lambda(s)\cdot_{* }\mathbf{t}^{*}(s)+_{*}\mu^{*}(s)\cdot_{*}\mathbf{b}(s)+_{*}\mu(s)\cdot_{*} \mathbf{b}^{*}(s).\]
The multiplicative Frenet formulas follow
\[(\lambda^{*}(s)-_{*}1_{*})\cdot_{*}\mathbf{t}(s)+_{*}(\lambda(s)\cdot_{*}\kappa(s)-_{*}\mu(s)\cdot_{*}\tau(s))\cdot_{*}\mathbf{n}(s)+_{*}\mu^{*}(s)\cdot_{*}\mathbf{b}(s)=\mathbf{0}_{*}.\]
Due to the multiplicative linearly independence,
\[\lambda^{*}(s)=1_{*},\quad\lambda(s)\cdot_{*}\kappa(s)=\mu(s)\cdot_{*}\tau(s),\quad\mu^{*}(s)=0_{*}. \tag{11}\]
We here conclude that
\[\lambda(s)=\int_{*}^{s}1_{*}\cdot_{*}d_{*}u=e^{\int^{s}\frac{1}{u}\log edu}=e^ {\log s+\log a}=s+_{*}a,\quad a>0,\]
Figure 1. A multiplicative equator of \(\mathbb{S}_{*}^{2}\)
and
\[\mu(s)=\int_{*}^{s}0_{*}\cdot_{*}d_{*}u=e^{\int^{s}\frac{1}{u}\log 1du}=e^{\log b}=b, \quad b>0.\]
Notice here that \(b\neq 0_{*}\) because otherwise one derives from the middle equation in Eq. (11) that \(\kappa(s)\) is \(0_{*}\). This contradicts with biregularity of \(\mathbf{x}(s)\). Analogously, \(\tau(s)\) is nowhere \(0_{*}\).
Conversely, suppose that the multiplicative tangential and binormal components are
\[\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}=s+_{*}a,\quad\langle\mathbf{x}(s ),\mathbf{b}(s)\rangle_{*}=b,\]
where \(a,b\in\mathbb{R}_{*},b\neq 0_{*}\). Using Eq. (2),
\[0_{*} = (\langle\mathbf{x}(s),\mathbf{b}(s)\rangle_{*})^{*}\] \[= \langle\mathbf{x}^{*}(s),\mathbf{b}(s)\rangle_{*}+_{*}\langle\mathbf{x}(s),\mathbf{b}^{*}(s)\rangle_{*}\] \[= \langle\mathbf{t}(s),\mathbf{b}(s)\rangle_{*}-_{*}\tau(s)\cdot_{*}\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}\] \[= -_{*}\tau(s)\cdot_{*}\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}.\]
Since the multiplicative curvatures are nowhere \(0_{*}\), we may easily obtain \(\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}=0_{*}\). This completes the proof.
Besides Proposition 4.1, several characterizations of multiplicative rectifying curves may be presented as follows.
**Proposition 4.2**.: _If \(\mathbf{x}(s)\subset\mathbb{E}_{*}^{3}\) is a multiplicative rectifying curve, then the multiplicative ratio of the multiplicative curvatures is_
\[\tau(s)/_{*}\kappa(s)=c\cdot_{*}s+_{*}d=e^{\log c\log s+\log d},\quad c,d\in\mathbb{R}_{*},c\neq 0_{*}.\]
_The converse statement is true as well._
Proof.: By the middle equation in Eq. (11), one writes
\[\tau(s)/_{*}\kappa(s)=\lambda(s)/_{*}\mu(s)=(s+_{*}a)/_{*}b=e^{\frac{\log s+ \log a}{\log b}},\quad a,b\in\mathbb{R}_{*},b\neq 0_{*}.\]
Setting \(\log c=1/\log b\) and \(\log d=\log a/\log b\) gives the first part of the proof.
Conversely, suppose that \(\mathbf{x}(s)\) with \(\kappa\neq 0_{*}\) holds \(\tau(s)/_{*}\kappa(s)=c\cdot_{*}s+_{*}d\), for \(c,d\in\mathbb{R}_{*},c\neq 0_{*}.\) Also, we may write
\[\tau(s)/_{*}\kappa(s)=(s+_{*}a)/_{*}b,\quad a,b\in\mathbb{R}_{*},b\neq 0_{*}\]
or equivalently,
\[b\cdot_{*}\tau(s)-_{*}(s+_{*}a)\cdot_{*}\kappa(s)=0_{*}.\]
We now set
\[f(s)=\mathbf{x}(s)-_{*}(s+_{*}a)\cdot_{*}\mathbf{t}(s)-_{*}b\cdot_{*}\mathbf{ b}(s),\]
where \(f(s)\) is multiplicative differentiable on its domain. If we take multiplicative differentiation of the last equation and if we use the multiplicative Frenet formulas, then we obtain
\[f^{*}(s)=[-_{*}(s+_{*}a)\cdot_{*}\kappa(s)+_{*}b\cdot_{*}\tau(s)]\cdot_{*}{\bf n}( s).\]
It follows that \(f^{*}(s)=0_{*}\), or equivalently, that \(f(s)\) is constant. Hence, \({\bf x}(s)\) is multiplicative congruent to a multiplicative rectifying curve.
We define the _multiplicative distance function_ of \({\bf x}(s)\) as \(\rho(s)=\|{\bf x}(s)\|_{*}\). Hence, we have the following.
**Proposition 4.3**.: _If \({\bf x}(s)\subset\mathbb{E}_{*}^{3}\) is a multiplicative rectifying curve with \(\kappa>0_{*}\), then_
\[\rho(s)^{2_{*}}=s^{2_{*}}+_{*}e^{c}\cdot_{*}s+_{*}e^{d},\quad c,d\in\mathbb{R}, d>0, \tag{12}\]
_or equivalently,_
\[e^{(\log\rho(s))^{2}}=e^{(\log s)^{2}+c\log s+d}.\]
_The converse statement is true as well._
Proof.: Assume that \({\bf x}(s)\) is a multiplicative rectifying curve. It then follows from Eq. (10) that
\[\rho(s)^{2_{*}}=\langle{\bf x}(s),{\bf x}(s)\rangle_{*}=\lambda(s)^{2_{*}}+_{ *}\mu(s)^{2_{*}}.\]
Here, by Proposition 4.1 we know that \(\lambda(s)=s+_{*}a\) and \(\mu(s)=b\), for \(a,b\in\mathbb{R}_{*}\), \(b\neq 0_{*}\). Hence
\[\rho(s)^{2_{*}}=s^{2_{*}}+_{*}e^{2}\cdot_{*}a\cdot_{*}s+_{*}a^{2_{*}}+_{*}b^{2 _{*}}.\]
Setting
\[e^{c}=e^{2}\cdot_{*}a=e^{2\log a},\quad e^{d}=a^{2_{*}}+_{*}b^{2_{*}}=e^{(\log a )^{2}+(\log b)^{2}},\]
we arrive at Eq. (12). Conversely, let Eq. (12) hold. We will show that \({\bf x}(s)\) is a multiplicative rectifying curve. Then,
\[\langle{\bf x}(s),{\bf x}(s)\rangle_{*}=s^{2_{*}}+_{*}e^{c}\cdot_{*}s+_{*}e^{d}.\]
Taking multiplicative differentiation, we have
\[e^{2}\cdot_{*}\langle{\bf t}(s),{\bf x}(s)\rangle_{*}=e^{2}\cdot_{*}s+_{*}e^{c},\]
or, because \(e^{c}/_{*}e^{2}=a\),
\[\langle{\bf t}(s),{\bf x}(s)\rangle_{*}=s+_{*}a.\]
We again take multiplicative differentiation, obtaining
\[1_{*}+_{*}\kappa(s)\cdot_{*}\langle{\bf n}(s),{\bf x}(s)\rangle_{*}=1_{*},\]
where we used the multiplicative Frenet formulas. Hence, \(\langle{\bf n}(s),{\bf x}(s)\rangle_{*}=0_{*}\), completing the proof.
**Proposition 4.4**.: _If \({\bf x}(s)\subset\mathbb{E}_{*}^{3}\) with \(\kappa\neq 0_{*}\) is a multiplicative rectifying curve, then \(\rho(s)\) is nonconstant and \(\|{\bf x}^{\perp_{*}}(s)\|_{*}\) is constant. The converse statement is true as well._
Proof.: In view of Propositions 4.1 and 4.3, only the converse statement needs to be proved. Assume that \(\rho(s)=\|\mathbf{x}(s)\|_{*}\) is nonconstant and \(\|\mathbf{x}^{\perp_{*}}(s)\|_{*}=m>0_{*}\), \(m\in\mathbb{R}_{*}\). In terms of the multiplicative Frenet frame, the latter assumption yields
\[m^{2_{*}}=\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}^{2_{*}}+_{*}\langle \mathbf{x}(s),\mathbf{b}(s)\rangle_{*}^{2_{*}},\]
and so
\[\langle\mathbf{x}(s),\mathbf{x}(s)\rangle_{*}=\langle\mathbf{x}(s),\mathbf{t }(s)\rangle_{*}^{2_{*}}+_{*}m^{2_{*}}.\]
Taking multiplicative derivative and then considering the multiplicative Frenet formulas,
\[e^{2}\cdot_{*}\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}=e^{2}\cdot_{*} \langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\cdot_{*}(1_{*}+_{*}\kappa(s) \cdot_{*}\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}).\]
By the assumption that \(\rho(s)\) is not constant, \(\langle\mathbf{x}(s),\mathbf{t}(s)\rangle_{*}\) cannot vanish identically, implying \(\langle\mathbf{x}(s),\mathbf{n}(s)\rangle_{*}=0_{*}\). This completes the proof.
We now introduce
\[\tan_{*}s=e^{\tan(\log s)},\quad s\in(e^{-\pi/2},e^{\pi/2}),\]
and
\[\sec_{*}s=e^{\sec(\log s)},\quad s\in[0_{*},e^{\pi}],s\neq e^{\pi/2}.\]
It is direct to conclude that \((\sec_{*}s)^{2_{*}}=1_{*}+_{*}(\tan_{*}s)^{2_{*}}\).
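A quick numerical sanity check of this identity (with function names of our own choosing):

```python
import numpy as np

def tan_star(s): return np.exp(np.tan(np.log(s)))        # tan_* s
def sec_star(s): return np.exp(1.0 / np.cos(np.log(s)))  # sec_* s
def m_square(a): return np.exp(np.log(a) ** 2)           # a^{2_*}
def m_add(a, b): return np.exp(np.log(a) + np.log(b))    # a +_* b

s = 1.9                                   # any s in (e^{-pi/2}, e^{pi/2})
lhs = m_square(sec_star(s))               # (sec_* s)^{2_*}
rhs = m_add(np.e, m_square(tan_star(s)))  # 1_* +_* (tan_* s)^{2_*}, with 1_* = e
print(lhs, rhs)                           # the two values coincide
```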
In what follows, we determine all the multiplicative rectifying curves by means of the multiplicative spherical curves.
**Theorem 4.5**.: _Let \(\mathbf{x}(s)\subset\mathbb{E}_{*}^{3}\) with \(\kappa\neq 0_{*}\) be a multiplicative rectifying curve and \(\mathbb{S}_{*}^{2}\) the multiplicative sphere of radius \(1_{*}\). Then, there is a multiplicative reparametrization of \(\mathbf{x}(s)\) such that_
\[\mathbf{x}(\tilde{s})=(a\cdot_{*}\sec_{*}\tilde{s})\cdot_{*}\mathbf{y}(\tilde {s}),\quad a\in\mathbb{R}_{*},a>0_{*},\]
_where \(\mathbf{y}(\tilde{s})\) is a parameterized curve lying in \(\mathbb{S}_{*}^{2}\) by multiplicative arc length. The converse statement is true as well._
Proof.: Suppose that \(0_{*}\) is included in the domain of \(\mathbf{x}\). By Proposition 4.3, we have
\[\rho(s)^{2_{*}}=s^{2_{*}}+_{*}e^{c}\cdot_{*}s+_{*}e^{d},\quad d>0.\]
Up to a multiplicative translation in \(s\), we may take \(\rho(s)^{2_{*}}=s^{2_{*}}+_{*}e^{d}\). Since \(d>0\), there is a constant \(a\) such that \(e^{d}=a^{2_{*}}\). Here \(a\) must be greater than \(0_{*}\) because \(0_{*}\in I\). Introduce a curve \(\mathbf{y}(s)\) as follows
\[\mathbf{x}(s)=\rho(s)\cdot_{*}\mathbf{y}(s), \tag{13}\]
where \(\rho(s)=(s^{2_{*}}+_{*}a^{2_{*}})^{\frac{1}{2_{*}}}\). Since \(\langle\mathbf{y}(s),\mathbf{y}(s)\rangle_{*}=1_{*}\), the curve \(\mathbf{y}(s)\) is a subset of \(\mathbb{S}_{*}^{2}\). Notice here that
\[e^{2}\cdot_{*}\langle\mathbf{y}^{*}(s),\mathbf{y}(s)\rangle_{*}=0_{*}.\]
Taking multiplicative differentiation in Eq. (13),
\[\mathbf{x}^{*}(s)=\rho^{*}(s)\cdot_{*}\mathbf{y}(s)+_{*}\rho(s)\cdot_{*} \mathbf{y}^{*}(s),\]
where, because \(\rho^{*}(s)=s/_{*}\rho(s)\),
\[\mathbf{x}^{*}(s)=(s/_{*}\rho(s))\cdot_{*}\mathbf{y}(s)+_{*}\rho(s)\cdot_{*} \mathbf{y}^{*}(s).\]
Noting \(\langle\mathbf{x}^{*}(s),\mathbf{x}^{*}(s)\rangle_{*}=1_{*}\), we conclude
\[\langle\mathbf{y}^{*}(s),\mathbf{y}^{*}(s)\rangle_{*}=\left(1_{*}-_{*}(s/_{*} \rho(s))^{2_{*}}\right)/_{*}\rho(s)^{2_{*}}.\]
In terms of the usual operations,
\[\langle\mathbf{y}^{*}(s),\mathbf{y}^{*}(s)\rangle_{*}=e^{\left(\frac{(\log a) ^{2}}{((\log s)^{2}+(\log a)^{2})^{2}}\right)}.\]
In order to parametrize \(\mathbf{y}(s)\) by multiplicative arc length, we set
\[\tilde{s}=\int_{*0_{*}}^{s}\langle\mathbf{y}^{*}(u),\mathbf{y}^{*}(u)\rangle_{*}^{\frac{1}{2}_{*}}\cdot_{*}d_{*}u,\]
or
\[\tilde{s}=\int_{*0_{*}}^{s}e^{\left(\frac{(\log a)^{2}}{((\log u)^{2}+(\log a)^{2})^{2}}\right)^{\frac{1}{2}}}\cdot_{*}d_{*}u.\]
By the definiton of the multiplicative integral,
\[\log\tilde{s}=\left(\int_{1}^{s}\frac{1}{u}\frac{\log a}{(\log u)^{2}+(\log a )^{2}}du\right).\]
Hence,
\[\log\tilde{s}=\arctan\left(\frac{\log s}{\log a}\right),\]
or \(\log s=(\log a)\tan(\log\tilde{s})\). This immediately yields
\[s=a\cdot_{*}\tan_{*}\tilde{s}.\]
Then,
\[\rho(\tilde{s})=\left((a\cdot_{*}\tan_{*}\tilde{s})^{2_{*}}+_{*}a^{2_{*}}\right)^{\frac{1}{2}_{*}}=e^{\left((\log a)^{2}(1+\tan^{2}(\log\tilde{s}))\right)^{\frac{1}{2}}}=e^{(\log a)\sec(\log\tilde{s})}\]
or
\[\rho(\tilde{s})=a\cdot_{*}\sec_{*}\tilde{s}.\]
Considering this into Eq. (13) completes the first part of the proof.
Conversely, suppose that \(\tilde{s}\mapsto\mathbf{x}(\tilde{s})\) is defined by
\[\mathbf{x}(\tilde{s})=(a\cdot_{*}\sec_{*}\tilde{s})\cdot_{*}\mathbf{y}(\tilde {s}),\]
where \(a>0_{*}\) and \(\mathbf{y}(\tilde{s})\subset\mathbb{S}_{*}^{2}\) with \(\|\mathbf{y}^{*}(\tilde{s})\|_{*}=1_{*}\). We will show that \(\mathbf{x}(\tilde{s})\) is a multiplicative rectifying curve. We first observe \(\rho(\tilde{s})=(a\cdot_{*}\sec_{*}\tilde{s})\), which is nonconstant. Now we take multiplicative derivative of \(\mathbf{x}(\tilde{s})\) with respect to \(\tilde{s}\),
\[\mathbf{x}^{*}(\tilde{s})=(a\cdot_{*}\sec_{*}\tilde{s})^{*}\cdot_{*}\mathbf{y} (\tilde{s})+_{*}(a\cdot_{*}\sec_{*}\tilde{s})\cdot_{*}\mathbf{y}^{*}(\tilde{s }).\]
Because \((a\cdot_{*}\sec_{*}\tilde{s})^{*}=a\cdot_{*}\sec_{*}\tilde{s}\cdot_{*}\tan_{*}\tilde{s}\),
\[\mathbf{x}^{*}(\tilde{s})=(a\cdot_{*}\sec_{*}\tilde{s})\cdot_{*}((\tan_{*} \tilde{s})\cdot_{*}\mathbf{y}(\tilde{s})+_{*}\mathbf{y}^{*}(\tilde{s})). \tag{14}\]
Note that \(\langle\mathbf{x}(\tilde{s}),\mathbf{y}^{*}(\tilde{s})\rangle_{*}=0_{*}\) because \(\langle\mathbf{y}(\tilde{s}),\mathbf{y}^{*}(\tilde{s})\rangle_{*}=0_{*}\). Taking the multiplicative inner product of Eq. (14) with \(\mathbf{x}(\tilde{s})\), we obtain
\[\langle\mathbf{x}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}=a\cdot_{*} \sec_{*}\tilde{s}\cdot_{*}\tan_{*}\tilde{s}\cdot_{*}\langle\mathbf{x}(\tilde{ s}),\mathbf{y}(\tilde{s})\rangle_{*}\]
or
\[\langle\mathbf{x}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}=(a\cdot_{*} \sec_{*}\tilde{s})^{2_{*}}\cdot_{*}\tan_{*}\tilde{s}.\]
On the other hand, we may write
\[\mathbf{x}(\tilde{s})=\mathbf{x}^{\intercal_{*}}(\tilde{s})+_{*}\mathbf{x}^{\perp_{*}}(\tilde{s})=\langle\mathbf{x}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})/_{*}\|\mathbf{x}^{*}(\tilde{s})\|_{*}\rangle_{*}\cdot_{*}(\mathbf{x}^{*}(\tilde{s})/_{*}\|\mathbf{x}^{*}(\tilde{s})\|_{*})+_{*}\mathbf{x}^{\perp_{*}}(\tilde{s}),\]
since \(\tilde{s}\) is not a multiplicative arc length parameter of \(\mathbf{x}\) and the multiplicative tangent direction is \(\mathbf{x}^{*}(\tilde{s})/_{*}\|\mathbf{x}^{*}(\tilde{s})\|_{*}\). Equivalently,
\[\mathbf{x}(\tilde{s})=\left(\langle\mathbf{x}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}/_{*}\langle\mathbf{x}^{*}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}\right)\cdot_{*}\mathbf{x}^{*}(\tilde{s})+_{*}\mathbf{x}^{\perp_{*}}(\tilde{s}).\]
Hence,
\[\langle\mathbf{x}^{\perp_{*}}(\tilde{s}),\mathbf{x}^{\perp_{*}}(\tilde{s})\rangle_{*}=\langle\mathbf{x}(\tilde{s}),\mathbf{x}(\tilde{s})\rangle_{*}-_{*}\langle\mathbf{x}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}^{2_{*}}/_{*}\langle\mathbf{x}^{*}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*},\]
where, from Eq. (14) together with \(\langle\mathbf{y}(\tilde{s}),\mathbf{y}^{*}(\tilde{s})\rangle_{*}=0_{*}\) and \(\|\mathbf{y}^{*}(\tilde{s})\|_{*}=1_{*}\), one computes
\[\langle\mathbf{x}^{*}(\tilde{s}),\mathbf{x}^{*}(\tilde{s})\rangle_{*}=(a\cdot_{*}\sec_{*}\tilde{s})^{2_{*}}\cdot_{*}(\sec_{*}\tilde{s})^{2_{*}}.\]
By a simple calculation, we find \(\langle\mathbf{x}^{\perp*}(\tilde{s}),\mathbf{x}^{\perp*}(\tilde{s})\rangle_ {*}=a^{2_{*}}\). The remaining part of the proof is by Proposition 4.4.
As an example, consider the following multiplicative spherical curve
\[\mathbf{y}(s)=\left(e^{\frac{1}{\sqrt{2}}},e^{\frac{1}{\sqrt{2}}\cos(\log s ^{\sqrt{2}})},e^{\frac{1}{\sqrt{2}}\sin(\log s^{\sqrt{2}})}\right)\]
parameterized by multiplicative arc length. Now, by Theorem 4.5 if we set \(a=1_{*}\) and \(\mathbf{x}(s)=\sec_{*}s\cdot_{*}\mathbf{y}(s)\), then we obtain the rectifying curve given by Eq. (1).
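This identity is easy to confirm numerically; the short sketch below (ours) evaluates \(\sec_{*}s\cdot_{*}\mathbf{y}(s)\) componentwise and compares it with the parametrization of Eq. (1) at a sample parameter value.

```python
import numpy as np

R2 = np.sqrt(2.0)

def y(s):         # the multiplicative spherical curve above
    return np.exp(np.array([1.0, np.cos(R2 * np.log(s)), np.sin(R2 * np.log(s))]) / R2)

def sec_star(s):  # sec_* s
    return np.exp(1.0 / np.cos(np.log(s)))

def m_scalar(a, v):   # a ._* v, componentwise multiplicative scalar multiplication
    return np.exp(np.log(a) * np.log(v))

def eq1(t):       # the parametrization of Eq. (1)
    sec = 1.0 / np.cos(np.log(t))
    return np.exp(sec * np.array([1.0, np.cos(R2 * np.log(t)), np.sin(R2 * np.log(t))]) / R2)

t = 1.7
print(m_scalar(sec_star(t), y(t)))   # agrees with eq1(t) up to rounding
print(eq1(t))
```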
Figure 2. Left: a multiplicative circle at centered \((1,1)\) and radius \(e\). Right: a multiplicative rectifying curve parametrized by Eq. (1), \(0.5\leq t\leq 3\).
## Conclusions
Over the last decade, an increasing number of differential-geometric studies (see [4, 6, 7, 27, 46, 47]) have appeared in which a calculus different from the Newtonian one (e.g., fractional calculus) is carried out. In the cited papers, local and non-local fractional derivatives are used. In the case of non-local fractional derivatives, the usual Leibniz and chain rules are known not to be satisfied, which is a major obstacle to establishing a differential-geometric theory. In addition, the local fractional derivatives have no remarkable effect on the differential-geometric objects, see [5]. There is thus a gap in the literature for a non-Newtonian calculus to be applied to these objects.
We highlight two important aspects of our study when a non-Newtonian calculus is performed on a differential-geometric theory: The first is to fill the gap mentioned above. The second is to allow the use of an alternative calculus to the usual Newtonian calculus in differential geometry, the advantages of which have already been addressed in Section 1.
## Acknowledgments
This work is supported by the Scientific and Technological Research Council of Turkey (TUBITAK) under grant number 123F055.
## Conflict of interest
The authors declare that there is no conflict of interest.
|
2309.08010 | Malicious Cyber Activity Detection Using Zigzag Persistence | In this study we synthesize zigzag persistence from topological data analysis
with autoencoder-based approaches to detect malicious cyber activity and derive
analytic insights. Cybersecurity aims to safeguard computers, networks, and
servers from various forms of malicious attacks, including network damage, data
theft, and activity monitoring. Here we focus on the detection of malicious
activity using log data. To do this we consider the dynamics of the data by
exploring the changing topology of a hypergraph representation gaining insights
into the underlying activity. Hypergraphs provide a natural representation of
cyber log data by capturing complex interactions between processes. To study
the changing topology we use zigzag persistence which captures how topological
features persist at multiple dimensions over time. We observe that the
resulting barcodes represent malicious activity differently than benign
activity. To automate this detection we implement an autoencoder trained on a
vectorization of the resulting zigzag persistence barcodes. Our experimental
results demonstrate the effectiveness of the autoencoder in detecting malicious
activity in comparison to standard summary statistics. Overall, this study
highlights the potential of zigzag persistence and its combination with
temporal hypergraphs for analyzing cybersecurity log data and detecting
malicious behavior. | Audun Myers, Alyson Bittner, Sinan Aksoy, Daniel M. Best, Gregory Henselman-Petrusek, Helen Jenne, Cliff Joslyn, Bill Kay, Garret Seppala, Stephen J. Young, Emilie Purvine | 2023-09-14T19:40:19Z | http://arxiv.org/abs/2309.08010v1 | # Malicious Cyber Activity Detection Using Zigzag Persistence
###### Abstract
In this study we synthesize zigzag persistence from topological data analysis with autoencoder-based approaches to detect malicious cyber activity and derive analytic insights. Cybersecurity aims to safeguard computers, networks, and servers from various forms of malicious attacks, including network damage, data theft, and activity monitoring. Here we focus on the detection of malicious activity using log data. To do this we consider the dynamics of the data by exploring the changing topology of a hypergraph representation gaining insights into the underlying activity. Hypergraphs provide a natural representation of cyber log data by capturing complex interactions between processes. To study the changing topology we use zigzag persistence which captures how topological features persist at multiple dimensions over time. We observe that the resulting barcodes represent malicious activity differently than benign activity. To automate this detection we implement an autoencoder trained on a vectorization of the resulting zigzag persistence barcodes. Our experimental results demonstrate the effectiveness of the autoencoder in detecting malicious activity in comparison to standard summary statistics. Overall, this study highlights the potential of zigzag persistence and its combination with temporal hypergraphs for analyzing cybersecurity log data and detecting malicious behavior.
## I Introduction
In this study, we leverage zigzag persistence [10], a method from topological data analysis (TDA) [11, 26], coupled with autoencoder anomaly detection to delve into the temporal activity of cyber data and effectively detect malicious behavior.
Cybersecurity aims to safeguard computers, networks, and users from various forms of malicious attacks that undermine confidentiality, integrity, or availability [5, 24]. These attacks are typically carried out by gaining unauthorized access to systems or services, often leaving behind evidence of the attacks in the underlying log data which captures information such as timestamps, Internet Protocol (IP) addresses, ports, executable paths, and command line entries. However, detecting that malicious activity in the log data is challenging due to the data's size and complexity.
One common approach to finding malicious activity in cyber logs involves constructing and analyzing graph representations of the data, such as process trees [17] or flow networks [4], that model dyadic relations between entities. However, standard graphs cannot capture multi-way interactions that are common in cyber data. Instead, using higher dimensional graphs, known as hypergraphs [8], for modeling cyber log data more effectively captures the complex interactions present between users, processes, ports, and other resources. Hypergraphs have proven valuable in diverse branches of data science, including machine learning, biology, and social networks [14, 16, 25].
While hypergraphs capture the complex multi-way relationships, traditional _static_ hypergraphs may fail to additionally represent the dynamic nature of cyber systems. By incorporating temporal information on vertices, hyperedges, or incidences, _temporal hypergraphs_[6, 12, 21] offer a solution. Temporal hypergraphs allow vertices and hyperedges to appear and disappear over time and connect different sets of vertices at different points in time. As such, they provide a suitable framework for studying dynamical systems of complex relations. Cyber log data falls squarely into this category as temporal information is present on all log records, and each record or collection of records captures complex relationships among groups of network entities, e.g., hosts, IPs, ports, users, and executable files.
One approach to representing temporal hypergraphs that we implement in this work is as a sequence, one hypergraph per sliding time window, representing the state of the system during that time. This sequential representation of the temporal hypergraph allows one to treat the sequence as a dynamical process \(G_{t}\mapsto G_{t+1}\) gaining a dynamical systems perspective. Each hypergraph in the sequence is a set of vertices and a multiset of hyperedges. Each vertex and hyperedge represents a distinct named entity (e.g., an IP, port, user, program executable). All vertices are the same type (e.g., all IPs), as are all edges, but edges and vertices represent different types. The hyperedge corresponding to a specific entity can include different sets of vertices at different times. This will be made more concrete in Section II.
Our primary objective is to analyze temporal hypergraph representations of cyber log data to effectively detect malicious activity. Our claim is that malicious cyber activity will often exhibit unique attack patterns in the log data, resulting in topological changes in the representations over time. Specifically,
we investigate a hypergraph representation constructed with executables as vertices and destination ports as hyperedges. The dynamics of this hypergraph's topology should differ between malicious and benign periods, since malicious activity typically involves more numerous and more complex executable interactions that change on faster time scales than benign activity. This intuition will be illustrated in Section III-C. We considered many combinations of hyperedges and vertices for constructing hypergraphs and found that this construction yields the clearest detection of malicious activity.
The evolution of hypergraph structure and topology over time naturally fits into a use case of zigzag persistence, a tool from TDA. With temporal hypergraphs providing a valuable framework for capturing complex dynamical systems we need to build an understanding of the complex patterns and structural changes in these temporal hypergraphs, and this is where zigzag persistence comes into play. Zigzag persistence captures how, when, and for how long topological features at multiple dimensions persist. For example, is a distinct component seen over a long time, and if so, is it always present, or does it intermittently appear?
Zigzag persistence has been previously used for studying temporal graph models [20] of transportation networks and for intermittency detection. This method has also been recently extended to study temporal hypergraphs for both cyber and social network data [19]. By leveraging the power of zigzag persistence, one is able to delve deep into the intricate temporal dynamics of (hyper)graphs, unveiling hidden trends, detecting critical events, and revealing the underlying structural transformations that shape the system's behavior.
To determine the viability of this approach we implement an autoencoder as a form of anomaly detection on a vectorization of the resulting zigzag persistence barcodes. We train the model to detect suspicious activity and investigate vectors that have high reconstruction loss. We chose to use an autoencoder based on the assumption that a large proportion of traffic on the network is typical benign activity, whereas malicious activity is fairly uncommon.
We begin in Section II by introducing notation and definitions for temporal hypergraphs, zigzag persistence, and how we use zigzag persistence to study temporal hypergraphs. We also introduce the concept of an autoencoder. In Section III we describe the cyber data, our experimental design, and some intuition behind using dynamic topology to identify anomalous behavior. We then demonstrate the ability of the pipeline to detect malicious activity in Section IV. We provide future goals and conclusions on this work in Section V.
## II Computational Tools
The process of computing zigzag persistence for a temporal hypergraph begins with a sequence of representative hypergraphs. We then transform each hypergraph into an abstract simplicial complex and examine the appearance and disappearance of topological features across multiple dimensions in this sequence using zigzag persistence. In the final step of our pipeline we vectorize the zigzag persistence barcode and use an autoencoder to identify anomalous barcodes. In order to describe our experimental design in the context of cyber log data in Section III-B, we begin by first introducing the necessary definitions and background in a general setting.
### _Hypergraphs and Abstract Simplicial Complexes_
A hypergraph, \(G=(V,E)\), analogous to a graph, is represented by a set of vertices, \(V\) and a family of (hyper)edges \(E\). The main difference between a hypergraph and a classical graph is that an edge \(e\in E\) can be an arbitrary subset of vertices \(e\subseteq V\) as opposed to a pair. If \(|e|=k\) then we say that \(e\) is a \(k\)-edge. A temporal hypergraph is a sequence of \(n\) hypergraphs, denoted as \(\mathcal{G}=G_{0},G_{1},G_{2},\ldots,G_{n-1}\), where \(G_{i}=\langle V,E_{i}\rangle\). The sequence can be viewed as a discrete dynamical process, where \(G_{t}\) transitions to \(G_{t+1}\), enabling us to gain insights into the dynamics of the underlying system.
An abstract simplicial complex (ASC), denoted as \(K\), is a non-empty collection of non-empty sets that is closed under taking subsets. Formally, \(K=\{\sigma\}\) is an ASC if whenever \(\tau\subset\sigma\in K\) then \(\tau\in K\). Each set, \(\sigma\), is called a simplex, and if \(|\sigma|=k\) then \(\sigma\) has dimension \(k-1\) and is called a \((k-1)\)-simplex. Geometrically, 0-simplices represent points or vertices, 1-simplices represent lines or edges, 2-simplices represent filled-in triangles, 3-simplices represent filled-in tetrahedra, and so on for arbitrary hyper-tetrahedra (see Figure 1). For \(\tau,\sigma\in K\) we say that \(\tau\neq\emptyset\) is a face of \(\sigma\) if \(\tau\subseteq\sigma\). The definition of an ASC implies that every simplex is closed under the face relation, meaning it includes all of its faces (except for the empty set) as defined by the power set of the simplex.
Note that an ASC can be thought of as a hypergraph with an extra requirement on the edges, but the reverse is not true: a general hypergraph need not be an ASC. Although various methods exist for constructing an ASC from a hypergraph [13], in this paper we consider the associated ASC of a hypergraph [23]. The associated ASC consists of a simplex for each hyperedge. In other words, the associated ASC of a hypergraph \(G\) contains all subsets of all hyperedges:
\[K(G)=\{\sigma\subseteq V:\exists e\in E,\sigma\subseteq e\}.\]
As many real-world hypergraphs have some large hyperedges, constructing \(K(G)\) can be costly, and unnecessary if computing only low-dimensional homology. In practice, to reduce computational complexity, we keep only those simplices up to a small maximum dimension, \(p=2\) or \(3\).
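As a concrete illustration, a minimal sketch of this truncated associated-ASC construction is given below; representing a hypergraph simply as an iterable of hyperedges (vertex sets) and the function name are our own choices, not part of the original pipeline.

```python
from itertools import combinations

def associated_asc(hyperedges, max_dim=2):
    """Associated ASC of a hypergraph: every non-empty subset of every
    hyperedge, truncated at simplices of dimension <= max_dim."""
    simplices = set()
    for edge in hyperedges:
        vertices = sorted(set(edge))
        # a (k-1)-simplex has k vertices, so keep subsets of size <= max_dim + 1
        for k in range(1, min(len(vertices), max_dim + 1) + 1):
            simplices.update(combinations(vertices, k))
    return simplices

# Example: one 4-vertex hyperedge truncated at dimension 2
print(len(associated_asc([{"a", "b", "c", "d"}], max_dim=2)))  # 4 + 6 + 4 = 14
```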
### _Simplicial Homology_
Simplicial homology is an algebraic approach to analyze the structure of an ASC by quantifying the number of \(p\)-dimensional features. 0-dimensional features are connected components, 1-dimensional features are graph cycles, 2-dimensional features are hollow tetrahedra, and so on. The \(p\)-dimensional simplicial homology of an ASC, \(K\), denoted \(H_{p}(K)\), is a vector space whose basis represents the \(p\)-dimensional features of \(K\). The rank of \(H_{p}(K)\) then counts the number of \(p\)-dimensional features. This rank is denoted \(\beta_{p}\) and called the \(p^{th}\) Betti number of \(K\). The algebraic details
of simplicial homology computations and Betti numbers can be found in [15].
While Betti numbers provide valuable insights into the changing topology of hypergraph snapshots, they do not capture the relationships between the topology of consecutive snapshots. In other words, Betti numbers alone do not reveal if a feature persists throughout the entire sequence. To address this limitation and track the changes in homology and their interconnections across a sequence of ASCs, we employ the technique of zigzag persistent homology.
### _Persistent and Zigzag Homology_
This section provides an introduction to persistent homology (PH) [26] and how it generalizes to zigzag persistent homology. For a detailed introduction to PH we suggest [18, 22], for zigzag see [10].
PH is used to obtain a sense of the shape and size of a data set at multiple scale resolutions. To gain some intuition on what this means we describe a common setting in which PH is applied, that of a point cloud \(X\subseteq\mathbb{R}^{n}\). At a given scale (i.e., distance value) we connect points in \(X\) within the given distance to form an ASC. As that scale increases so does the ASC and topological features are born (appear) and die (are filled in). PH tracks the birth and death of these features as the distance scale varies to form a topological fingerprint. Short-lived features may indicate noise while long-lived ones often indicate meaningful features. The birth and death thresholds provide an idea of the general size or geometry of each feature, which can in turn provide intuition and interpretation back into the data itself. For example, the presence of a 1-dimensional loop might mean that the data is cyclical or repetitive whereas the presence of multiple 0-dimensional components could indicate strong clustering of the data.
A point cloud is not the only setting for PH. In general, only a sequence of nested ASCs1, often referred to as a _filtration_, is necessary:
Footnote 1: In fact, persistent homology can be applied in even more general settings but for the purposes of this paper we won’t consider arbitrary topological spaces or chain complexes.
\[\mathcal{K}=K_{0}\subseteq K_{1}\subseteq K_{2}\subseteq\ldots\subseteq K_{n}. \tag{1}\]
For a given dimension \(p\) we can calculate \(H_{p}(K_{i})\) for each \(K_{i}\). In order to capture how the homology changes from \(K_{i}\) to \(K_{i+1}\) we rely on the fact that \(K_{i}\) is a sub-complex of \(K_{i+1}\) and so the components of the topological features found in \(K_{i}\) (e.g., the vertices, edges, and higher dimensional simplices) must also be found in \(K_{i+1}\). If these components also form a topological feature in \(K_{i+1}\) then the feature persists. If they do not form a feature in \(K_{i+1}\) then the feature _dies_. In Figure 1 we see a filtration with a 1-dimensional feature in \(K_{1}\) consisting of the edges \((a,b),(a,c),(b,c)\). These edges are present in \(K_{2}\) but they no longer form a 1-dimensional feature because of the presence of the triangle \((a,b,c)\). The appearance and disappearance of \(p\)-dimensional features in the filtration is tracked in a summary known as a persistence barcode, a collection of intervals, one for each topological feature identified. Each feature has an associated interval \([b,d]\) that indicates the index of the appearance of the feature, its _birth_ threshold \(b\), and its disappearance, its _death_ threshold \(d\). If a feature is present in the final ASC in the sequence we say its death is \(\infty\) because it does not die within the filtration. We denote the barcode for dimension \(p\) of a sequence \(\mathcal{K}\) as \(D_{p}(\mathcal{K})=\{[b_{i},d_{i}]\}\), or simply \(D_{p}\) if the sequence is clear from context. In the example in Figure 1 the 1-dimensional feature is born at \(b=1\) and dies at \(d=2\). The algebraic mechanics of tracking features across spaces via their inclusions are best left to the references cited above. For the purposes of this paper only the intuition is necessary.
Given a temporal hypergraph sequence we can construct \(K_{i}:=K(G_{i})\). If we are lucky enough to have a sequence in which \(K_{i}\subseteq K_{i+1}\) for all \(i\) then we can apply PH directly. However, this is rarely the case. There are plenty of examples in which hypergraph vertices and edges are both added _and_ removed over time. This is where zigzag homology, which extends the concept of PH to handle ASC sequences with addition and removal of simplices, can be applied. Given an arbitrary sequence of ASCs, \(K_{0},K_{1},\ldots,K_{n}\), we can form an augmented sequence with interwoven unions2:
Footnote 2: Zigzag persistence is also defined for intersections, with the subset containments flipped. Here we explore only the union case.
\[K_{0}\subseteq K_{0}\cup K_{1}\supseteq K_{1}\subseteq K_{1}\cup K_{2} \cdots K_{n-1}\cup K_{n}\supseteq K_{n}.\]
The idea of zigzag homology is similar to PH. Even though the inclusions are not in the same direction throughout the augmented sequence their presence still allows us to track whether a feature in one ASC is the same as a feature in the next. In Figure 2 we show an example sequence of three ASCs with interwoven unions. There is a 1-dimensional feature in all three ASCs but through the use of zigzag we can see that they are all different loops. The barcode consists of three intervals: [0,1] for loop \((a,b),(a,c),(b,c)\), [0.5, 2] for loop \((a,c),(a,d),(c,d)\), and [1.5, \(\infty\)] for loop \((a,b),(a,e),(b,e)\). If a loop is born (resp. dies) at a union step between \(i\) and \(i+1\) we say that it is born (resp. dies) at the midpoint, \(i+\frac{1}{2}\).
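Setting up the interleaving itself is straightforward; the sketch below builds the augmented union sequence from a list of ASCs represented as sets of simplices. The actual zigzag barcode computation would be delegated to an external library (Dionysus is one possible choice); this helper only prepares the input sequence.

```python
def zigzag_union_sequence(asc_list):
    """Interleave consecutive ASCs with their unions:
    K0, K0 U K1, K1, K1 U K2, ..., K(n-1) U Kn, Kn."""
    sequence = []
    for i, K in enumerate(asc_list):
        sequence.append(set(K))
        if i < len(asc_list) - 1:
            sequence.append(set(K) | set(asc_list[i + 1]))
    return sequence
```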
For a more detailed introduction to zigzag persistence in the context of studying temporal hypergraphs, we refer the reader to [19], which includes an example illustrating the procedure.
Fig. 1: Example of a nested sequence (filtration) of ASCs. (Left) Two graph edges (1-simplices) forming a graph chain. (Center) Three graph edges (1-simplices) forming a graph cycle. (Right) A 2-simplex (filled triangle). The \(D_{0}\) and \(D_{1}\) PH barcodes are shown below.
### _Vectorization of Persistence Barcodes_
To implement an autoencoder for studying zigzag persistence barcodes we need to create a faithful vector representation of the barcode. While there are many methods for vectorizing a barcode for machine learning applications, such as persistence images [1] and persistence landscapes [9], these are often high dimensional making the autoencoder training more burdensome. In this work we use Adcock-Carlsson Coordinates (ACCs) [2] as they are computationally and storage efficient and have been shown to provide comparable performance to the more advanced vectorization methods for classification tasks [7]. The ACCs are calculated as
\[ACC(D_{p})=\Big[\sum_{i}b_{i}(d_{i}-b_{i}),\ \sum_{i}(d_{\max}-d_{i})(d_{i}-b_{i}),\ \sum_{i}b_{i}^{2}(d_{i}-b_{i})^{4},\ \sum_{i}(d_{\max}-d_{i})^{2}(d_{i}-b_{i})^{4}\Big]. \tag{2}\]
We then stacked the ACCs for each dimension \(p\in\{0,1\}\) into a single eight-dimensional vector.
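A minimal sketch of Eq. (2) and the stacking step is given below; treating infinite deaths by clipping them to \(d_{\max}\) is an assumption on our part.

```python
import numpy as np

def acc(barcode, d_max):
    """Adcock-Carlsson coordinates (Eq. 2) of a single barcode given as
    (birth, death) pairs; infinite deaths are clipped to d_max."""
    if len(barcode) == 0:
        return np.zeros(4)
    b = np.array([bd[0] for bd in barcode], dtype=float)
    d = np.minimum([bd[1] for bd in barcode], d_max).astype(float)
    life = d - b
    return np.array([np.sum(b * life),
                     np.sum((d_max - d) * life),
                     np.sum(b**2 * life**4),
                     np.sum((d_max - d)**2 * life**4)])

def acc_vector(barcode_dim0, barcode_dim1, d_max):
    """Stack the dimension-0 and dimension-1 ACCs into one 8-dimensional vector."""
    return np.concatenate([acc(barcode_dim0, d_max), acc(barcode_dim1, d_max)])
```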
### _Autoencoder_
One of the ways to leverage the power of neural networks to perform anomaly detection on a dataset is through the use of autoencoders. An autoencoder is a particular kind of feed-forward neural network that takes in data, compresses it via encoding layers, and then attempts to reconstruct the original representation from the compressed form through decoding layers as shown in Fig. 3. The metric used to quantify the difference between the reconstructed version and the original data is called the reconstruction loss.
If an autoencoder is trained on "typical" data, then the reconstruction loss for unseen typical data should be low whereas the reconstruction loss for "atypical" data will be much greater. This is the motivation for utilizing autoencoders to detect anomalies in data. More precisely, if the reconstruction loss of unseen data is above a chosen threshold then the unseen data is considered anomalous.
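A minimal sketch of this thresholding logic is shown below; using a high percentile of the benign reconstruction losses as the threshold is our own illustrative choice, not a rule stated here.

```python
import numpy as np

def fit_threshold(benign_losses, percentile=99.0):
    """Choose an anomaly threshold from reconstruction losses on benign data."""
    return np.percentile(benign_losses, percentile)

def is_anomalous(losses, threshold):
    """Flag samples whose reconstruction loss exceeds the threshold."""
    return np.asarray(losses) > threshold
```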
## III Methodology
### _Data and Data Preparation_
The Operationally Transparent Cyber (OpTC) dataset [3] used in our experiments was created by the Defense Advanced Research Projects Agency (DARPA) as part of a mission to test scaling of cyber attack detection. The data consists of log records of both benign and malicious activity, with an associated ground truth document describing the attack events. The attack events include downloading and executing malicious PowerShell Empire payloads, privilege escalation, credential theft, network scanning, and lateral movement. The data contains both flow and host logs. The elements of each record vary depending on the type of log but the format is standardized allowing for easy analysis across log-types. In this paper we consider only the flow subset of records and only 4 of the 58 data fields available. In future work we plan to complete a more comprehensive analysis. The subset of keys in the flow records we use are time, destination port, source IP, and image path (i.e., executable). A sample of records, restricted to those fields, is shown in Table I.
We focus our analysis of the data on the first day of malicious activity, September 23, on a sampling of both benign and malicious hosts, see Table II. Here we classify a host as malicious if there is any malicious activity that occurred on that host, according to the ground truth document. We chose
\begin{table}
\begin{tabular}{l l l l}
**Time** & **Dest. Port** & **Source IP** & **Image Path** \\ \hline
9/23/19 11:25 & 80 & 142.20.56.202 & powershell.exe \\
9/23/19 11:25 & 5355 & 10.20.1209 & svchost.exe \\
9/23/19 11:25 & 5355 & 10.20.1209 & svchost.exe \\
9/23/19 11:25 & 5355 & 142.20.56.149 & svchost.exe \\
9/23/19 11:25 & 5355 & 142.20.56.139 & svchost.exe \\
9/23/19 11:25 & 8000 & 142.20.56.202 & firefox.exe \\
9/23/19 11:25 & 5355 & 10.20.2.67 & svchost.exe \\
\(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\ \hline
\end{tabular}
\end{table} TABLE I: A sample of flow log records from the OpTC dataset, restricted to the four fields used in this work.
hosts 201, 402, and 660 as our malicious hosts. The data from these hosts forms our test set. For the benign set, we identified hosts that did not appear in the ground truth document and then chose a subset of those hosts with varying levels of activity relative to the malicious hosts. In particular, hosts 0005, 0006, 0010, 0012 had significantly less (approximately half as much) activity, hosts 0162, 0304, 0461, 0906 had comparable amounts of activity, and hosts 0071, 0213, 0222, 0274 had more activity compared to the malicious hosts. Data from the benign hosts forms our training set.
We performed selective filtering of the data as an initial preprocessing step. In particular, we filtered out actions where the image path or source IP address were missing and where the source IP address corresponded to local host activity. Since the network traffic data in the dataset is unidirectional, we also filtered out actions where the destination port was ephemeral thereby focusing on flow records where the source IP is the likely originator of the communication. Ephemeral ports, also called dynamic ports, are port numbers above 49152 that are not formally assigned to a service designation and are often used by the originator of the communication.
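A sketch of this filtering with pandas is shown below; the column names and the exact localhost check are assumptions about the parsed log format rather than the dataset's actual keys.

```python
import pandas as pd

def preprocess_flows(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the filtering described above: drop records with a missing image
    path or source IP, drop localhost traffic, and drop ephemeral (>= 49152)
    destination ports."""
    df = df.dropna(subset=["image_path", "src_ip"])
    df = df[~df["src_ip"].isin(["127.0.0.1", "::1"])]
    df = df[df["dest_port"] < 49152]
    return df.reset_index(drop=True)
```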
### _Experimental Design_
We designed an experiment with the aim to identify source IPs that are responsible for malicious activity captured on a host and the particular time window in which the malicious activity occurs, by using the topology of the interactions of the IPs with image paths. We create hypergraphs for a given source IP and sequence of timeframes, and then vectorize the hypergraph sequences in two ways: 1) using zigzag persistence and 2) a more naive hypergraph property embedding. In order to understand the viability of zigzag persistence diagrams to encode differences in the topological dynamics of benign and malicious activity we trained two autoencoders, one on the vectors derived from zigzag persistence and a second on the hypergraph property vectors. We then perform autoencoder-based anomaly detection separately on the two vectorizations and examine how the anomalies align with the ground truth document. If our zigzag autoencoder successfully identifies malicious activity on the network, this provides evidence that the topological information encoded by the zigzag persistence barcodes can aid in cybersecurity efforts. We use the autoencoder trained on hypergraph property vectors as a comparison.
The details and pipeline of these experiments are illustrated in Fig. 4. Our experimental design begins with the log data, see the box labeled _Log Data_ in Fig. 4. We show a small set of the OpTC log data including the specific columns needed: timestamp, source IP, destination port, and image path (executable). Using the timestamps, we break this log data into 10-minute windows that overlap by 5 minutes. We then filter down the 10 minutes of data from each window by a source IP to construct a hypergraph for each window where vertices are the executable files and hyperedges are the destination ports. Specifically, for the hypergraph pertaining to source IP \(X\) the vertex for executable \(t\) is contained in the destination port edge \(r\) if there is a record with the (source IP, destination port, executable) tuple \((X,r,t)\).
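A minimal sketch of this windowing and per-IP hypergraph construction is given below; the record field names (`time`, `src_ip`, `dest_port`, `image_path`) are assumptions about the parsed log format, and hyperedges are stored simply as a dictionary mapping each destination port to its set of executables.

```python
from collections import defaultdict
from datetime import timedelta

def sliding_windows(t_start, t_end, width=timedelta(minutes=10),
                    step=timedelta(minutes=5)):
    """Yield (start, end) pairs of 10-minute windows overlapping by 5 minutes."""
    t = t_start
    while t + width <= t_end:
        yield t, t + width
        t += step

def window_hypergraph(records, source_ip, t0, t1):
    """Hyperedges keyed by destination port; each edge is the set of
    executables seen with that port for the given source IP in [t0, t1)."""
    edges = defaultdict(set)
    for r in records:
        if r["src_ip"] == source_ip and t0 <= r["time"] < t1:
            edges[r["dest_port"]].add(r["image_path"])
    return dict(edges)
```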
For each source IP we apply zigzag persistence to the temporal sequence of hypergraph snapshots, as shown in Fig. 4 in the box labeled _Zigzag Persistence_, resulting in a barcode for each dimension (0 and 1). This full time barcode is further broken into sub-barcodes over 1 hour sub-windows. Each of these sub-barcodes are vectorized using the ACCs described in Section II-D. We trained the zigzag autoencoder on these ACC vectors from IPs in the benign host list from Table II and tested on those from the evaluation hosts. We initialized the autoencoder with random weights using random seed 0. For each source IP we calculated the time series of mean squared error reconstruction loss as an indicator of abnormal or malicious activity. This is shown in Fig. 4, in the box labeled _Autoencoder_.
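One simple way of carving the full-time barcode into 1-hour sub-barcodes is sketched below; clipping intervals to the sub-window boundaries is our assumption, since the treatment of bars that cross a boundary is not spelled out above.

```python
def sub_barcode(barcode, t0, t1):
    """Restrict a barcode (list of (birth, death) pairs) to the sub-window
    [t0, t1], clipping intervals that cross its boundaries."""
    clipped = []
    for birth, death in barcode:
        if death >= t0 and birth <= t1:
            clipped.append((max(birth, t0), min(death, t1)))
    return clipped
```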
The zigzag autoencoder contains one fully-connected neural layer as the encoder and decoder. The input zigzag vectors are 8-dimensional, the autoencoder compresses the data into 2-dimensional vectors, and decompresses them back into 8-dimensions, as illustrated in Figure 3. We chose this shallow single layer encoder/decoder schema due to the low dimensionality of the ACC vectors. The encoder and decoder of the model learn by minimizing the mean squared error between the original vector and the reconstructed vector.
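A minimal PyTorch sketch of this architecture is given below; the choice of activation, the Adam optimizer, and the training-loop details are our assumptions, since only the layer sizes, the random seed, and the mean-squared-error objective are fixed above.

```python
import torch
import torch.nn as nn

class ZigzagAutoencoder(nn.Module):
    """8-dim ACC vector -> 2-dim latent -> 8-dim reconstruction,
    with a single fully-connected layer on each side."""
    def __init__(self, in_dim=8, latent_dim=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))  # activation is an assumption

def train(model, benign_vectors, epochs=200, lr=1e-3, seed=0):
    """Minimize MSE reconstruction loss on benign ACC vectors (seed 0 as reported)."""
    torch.manual_seed(seed)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    x = torch.as_tensor(benign_vectors, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)
        loss.backward()
        opt.step()
    return model

def reconstruction_loss(model, vectors):
    """Per-sample mean squared error, used as the anomaly score."""
    x = torch.as_tensor(vectors, dtype=torch.float32)
    with torch.no_grad():
        return torch.mean((model(x) - x) ** 2, dim=-1).numpy()
```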
We trained a second autoencoder on some standard summary statistics of the hypergraphs as a feature vector on the collection of hypergraphs that occurred during the 1 hour sub-windows. For each of the hypergraphs during each sub-window we calculated the number of edges, number of vertices, number of components, and diameter of the largest component and then concatenated them together. This results in a 48-dimensional feature vector for each 1 hour window that also should capture the dynamics. The autoencoder again had a latent space of 2-dimensions to make a fair comparison to the first autoencoder.
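A sketch of this 48-dimensional summary-statistic vector is given below, with the twelve snapshots per 1-hour sub-window implied by the vector length; computing components and diameter on the clique expansion of each hypergraph (via networkx) is our own choice, since the exact evaluation of these quantities on the hypergraph is not specified.

```python
import networkx as nx
from itertools import combinations

def hypergraph_summary(edges):
    """[num hyperedges, num vertices, num components, diameter of largest
    component] for one snapshot given as {edge_name: vertex_set}."""
    vertices = set().union(*edges.values()) if edges else set()
    g = nx.Graph()
    g.add_nodes_from(vertices)
    for members in edges.values():
        g.add_edges_from(combinations(members, 2))  # clique expansion
    components = list(nx.connected_components(g))
    largest = max(components, key=len) if components else set()
    diameter = nx.diameter(g.subgraph(largest)) if len(largest) > 1 else 0
    return [len(edges), len(vertices), len(components), diameter]

def summary_feature_vector(window_hypergraphs):
    """Concatenate the four statistics over the snapshots in a 1-hour
    sub-window to obtain the 48-dimensional feature vector."""
    vec = []
    for edges in window_hypergraphs:
        vec.extend(hypergraph_summary(edges))
    return vec
```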
By analyzing the reconstruction loss of the two autoencoders, we can compare the ability of the zigzag persistence barcodes and standard summary statistics to detect malicious activity.
### _Intuition_
Before we transition to the results of our experiment we provide some intuition, through an example, for why the _dynamics_ of the hypergraph topology, and not just the static topology of each snapshot, are important for detecting malicious activity. Fig. 5 shows hypergraphs from one benign and one malicious time period for source IP 142.20.56.202 on Host 201. It is apparent that the structural configuration of the hypergraph during benign activity differs from that during malicious activity, but from a topological perspective,
\begin{table}
\begin{tabular}{c|c} Benign (Training) Hosts: & 0005, 0006, 0010, 0012, 0071, 0162 \\
 & 0213, 0222, 0274, 0304, 0461, 0906 \\ \hline Malicious (Testing) Hosts: & 0201, 0402, 0660 \\ \end{tabular}
\end{table} TABLE II: Subset of hosts used for training and testing
the two snapshots are equivalent. Both hypergraphs exhibit two components and no higher-dimensional homology, indicating a similarity in their topological properties.
However, the two isolated snapshots do not tell the entire story of the topology. While the snapshots are topologically equivalent they do not account for the underlying dynamics of the topology (e.g., do these two components persist for long periods of time or do they quickly evolve?). By looking beyond the isolated snapshots we can quickly see that, in fact, the topology of the malicious activity changes at a much higher rate than that of the benign activity. Figure 6 shows the sequence of image path executables for source IP 142.20.56.202 on host 201 during the same 20 minute benign and malicious activity windows associated with the hypergraphs in Fig. 5. As shown, during the benign activity the only executables used were System and svchost.exe, executed every few minutes, while during the malicious activity many executables are used and are executed much more frequently. During the benign activity the processes are associated with operating system control and inter-process communications. During the malicious activity, powershell.exe and python.exe are present, which are indicative of PowerShell Empire (the C2 capability listed in the ground truth). Furthermore, the lsass.exe process is present, which can be associated with benign activity such as logging into a computer; however, the process is frequently co-opted by malicious actors to harvest credentials. During malicious activity the overall volume of activity increases because the malicious activity is added on top of, rather than replacing, the normal activity. We do note that the OpTC dataset is a test environment (i.e., the normal activity is simulated) and as such the level of benign user activity may be lower than in standard traffic. However, it was simulated to be representative of typical system usage.
We show in the results that these dynamics of the topology are captured by the zigzag barcode vectorizations allowing us to detect a difference between benign and malicious activity patterns.
## IV Results
Here we demonstrate the ability of both the zigzag persistence and summary statistics to detect malicious activity for an example source IP. Namely, we demonstrate these results for source IP 142.20.56.202 for malicious activity and source IP 142.20.56.175 for benign activity on host 201 on September 23, 2019. We chose this malicious source IP and host to
Fig. 4: Experimental design pipeline for studying OpTC log data with an autoencoder trained on the ACC vectors (e.g., \(ACC_{0},\ldots,ACC_{14}\)) of the subwindowed zigzag persistence barcodes. An additional autoencoder trained on summary statistics of the hypergraph snapshots was also used for comparison, although that pipeline is not shown.
Fig. 5: Hypergraphs formed during malicious and benign activity for source IP 142.20.56.202 on host 201 using destination ports as hyperedges and image path executables as vertices.
Fig. 6: Sequence of image path executables for source IP 142.20.56.202 during 20 minute benign and malicious activity windows on host 201 (same windows as used to generate the hypergraphs in Fig. 5).
demonstrate the effectiveness of this autoencoder due to the variety of attacks during this time, as shown in the ground truth data provided in the GitHub repository3. While there are only a few malicious source IPs during this time window, there is a very large number of benign source IPs. We chose source IP 142.20.56.175 as an exemplary benign source IP, but we found very similar dynamics and reconstruction loss values for other benign source IPs.
Footnote 3: See [https://github.com/FiveDirections/OpTC-data](https://github.com/FiveDirections/OpTC-data) for red team ground truth data
Figure 7 shows the zigzag persistence barcode (7a) and the reconstruction losses over time for the ACC vectors and the hypergraph summary statistics (7b). In both plots we have highlighted each of the malicious events from the OpTC ground truth diary as red vertical bars. From these events it is clear that there are two main sequences of attacks: the first from approximately 11:30 to 12:00 and the second from 13:00 to 13:30. The first group of attacks consists of a password collection attempt through Mimikatz to elevate the agent and then attempts at injecting into the LSASS process using psinject. The second group of attacks is based on scanning procedures including a ping sweep and ARP scan.
The main takeaway from Figure 7 is that while both ACCs and hypergraph summary statistics seem to show an anomaly during the malicious activity through a peak in the reconstruction loss, the autoencoder trained on the ACCs more precisely detects the first group of malicious activity. The summary statistics show a broad range in time when the reconstruction loss is high (approximately 9:30 to 14:30) which is larger than the range occupied by the malicious activity. On the other hand, the autoencoder trained on the ACCs is able to accurately detect the first sequence of attacks with a sharp spike in reconstruction loss from approximately 11:00 to 11:40, which closely correlates to when the first attack sequence occurred. However, there is no peak during the second sequence of attacks which was dominated by ARP scans and ping sweeps. We believe these were not clearly detected due to the specific hypergraph construction we chose: hyperedges as destination ports and image paths as vertices. Our hypergraph construction is not sensitive to this attack as many of the lines in the log data corresponding to ARP scans and ping sweeps are not labeled with a source IP. And when the log data is associated with a source IP they are repetitive (e.g., the ping responses repeatedly have image path System and destination port 0) and do not show up as significant changes in the hypergraph's topology. In future work we plan to use our same pipeline with different hypergraph constructions to better identify different attack types.
As a point of comparison we show the same zigzag and reconstruction loss plots for an exemplary benign source IP in Fig. 8. From the zigzag barcode (8a) we see that there are typically no 1-dimensional features, as evidenced by the empty \(D_{1}\) barcode, for benign activity. Moreover, the 0-dimensional features have a predictable, periodic behavior. This is further substantiated by the reconstruction loss for both the ACCs and summary statistics being very low (compare the \(y\)-axis scales in Fig. 8b to those in Fig. 7b).
To quantify these results across more benign data and demonstrate the consistency we lastly compare the 25th, 50th (median) and 75th percentiles of the distributions of reconstruction losses for both the ACC and summary statistic trained autoencoders during benign and then malicious activity on host 201 on the 23rd as shown in Table III. Additionally, we calculated these same statistics for these autoencoders tested on the training hosts (benign) on the 24th as a point of comparison to the benign activity on the 23rd. By comparing these percentiles we are able to quantitatively confirm the performance of the autoencoders.
From Table III it is clear that the interquartile intervals (from the \(25^{\rm th}\) to the \(75^{\rm th}\) percentiles) for benign and malicious activity on host 201 do not overlap for either the ACCs or the summary statistics, so both autoencoders are able to accurately distinguish between the two states. In particular, the \(75^{\rm th}\) percentile of the benign source IP reconstruction loss is smaller than the \(25^{\rm th}\) percentile of the malicious source IPs, by roughly a factor of six.
We also compare the benign activity on host 201 to the training hosts on both the 23rd and 24th to demonstrate that the benign reconstruction loss is similar across hosts and that the autoencoder was not overtrained. This is shown by the loss distributions of the autoencoders trained on ACCs and on summary statistics being similar, with all of their interquartile intervals overlapping significantly.
Lastly, based on the medians for both the summary statistics and the ACCs, it seems the ACCs more clearly detect malicious activity: for the ACCs the median loss is approximately 35.6 times greater during malicious than benign activity on host 201, but only 7.5 times greater for the summary statistics.
## V Conclusion
The work we present in this paper shows that the dynamics of topology of hypergraphs representing cyber log data can be effective for distinguishing malicious activity from benign. However, we have noted some limitations that we plan to explore in future work. In particular, the ACC vectorization strategy for persistence barcodes is rather coarse. We plan to evaluate more complex representations like persistence images and landscapes for this vectorization step. We additionally plan to study the sensitivity of the autoencoder to both the
\begin{table}
\begin{tabular}{l|c c c|c c c} Host(s) & \multicolumn{3}{c}{ACC (\(\times 10^{-3}\))} & \multicolumn{3}{c}{Summary Statistics} \\
 & **25\%** & **50\%** & **75\%** & **25\%** & **50\%** & **75\%** \\ \hline
201 (Benign IPs) & 0.04 & 0.11 & 0.19 & 0.68 & 0.92 & 1.19 \\
201 (Malicious IPs) & 1.21 & 3.93 & 7.96 & 5.31 & 6.96 & 8.81 \\ Training Hosts (24th) & 0.07 & 0.14 & 0.26 & 0.76 & 1.03 & 1.34 \\ Training Hosts (23rd) & 0.06 & 0.12 & 0.26 & 0.72 & 1.01 & 1.39 \\ \hline \end{tabular}
\end{table} TABLE III: The \(25^{\rm th}\), \(50^{\rm th}\) (median), and \(75^{\rm th}\) percentiles of the autoencoder (both ACC and hypergraph summary statistics trained autoencoders) reconstruction loss distributions on host 201 on the 23rd for malicious and benign IP addresses and for all the training hosts on both the 23rd and the 24th.
initialized random weights and the training data (e.g., mixing in some malicious data into training data). We are also aware that our hypergraph construction linking executables to destination ports does not capture all types of malicious behavior. We will experiment with additional hypergraph constructions to understand how other malicious behavior can be encoded. Along those lines we additionally plan to test our methods on data sets beyond OpTC to ensure generalizability of the approach, and compare to other approaches of studying hypergraph data including hypergraph neural networks. Finally, in order for cyber analysts to trust the results of our pipeline we must be able to provide some interpretation of the specific topological features in \(H_{0}\) and \(H_{1}\) in the context of the log data and ground truth malicious activity. This is ongoing work and provides an exciting opportunity for collaboration between cybersecurity researchers and mathematicians.
|
2301.13574 | The mixed-state entanglement in holographic p-wave superconductor model | In this paper, we investigate the mixed-state entanglement in a model of
p-wave superconductivity phase transition using holographic methods. We
calculate several entanglement measures, including holographic entanglement
entropy (HEE), mutual information (MI), and entanglement wedge cross-section
(EWCS). Our results show that these measures display critical behavior at the
phase transition points, with the EWCS exhibiting opposite temperature behavior
compared to the HEE. Additionally, we find that the critical exponents of all
entanglement measures are twice those of the condensate. Moreover, we find that
the EWCS is a more sensitive indicator of the critical behavior of phase
transitions than the HEE. Furthermore, we uncover a universal inequality in the
growth rates of EWCS and MI near critical points in thermal phase transitions,
such as p-wave and s-wave superconductivity, suggesting that MI captures more
information than EWCS when a phase transition first occurs. | Zhe Yang, Fang-Jing Cheng, Chao Niu, Cheng-Yong Zhang, Peng Liu | 2023-01-31T11:59:38Z | http://arxiv.org/abs/2301.13574v2 | # The mixed-state entanglement in holographic p-wave superconductor model
###### Abstract
In this paper, we investigate the mixed-state entanglement in a model of p-wave superconductivity phase transition using holographic methods. We calculate several entanglement measures, including holographic entanglement entropy (HEE), mutual information (MI), and entanglement wedge cross-section (EWCS). Our results show that these measures display critical behavior at the phase transition points, with the EWCS exhibiting opposite temperature behavior compared to the HEE. Additionally, we find that the critical exponents of all entanglement measures are twice those of the condensate. Moreover, we find that the EWCS is a more sensitive indicator of the critical behavior of phase transitions than the HEE. Furthermore, we uncover a universal inequality in the growth rates of EWCS and MI near critical points in thermal phase transitions, such as p-wave and s-wave superconductivity, suggesting that MI captures more information than EWCS when a phase transition first occurs.
###### Contents
* I Introduction
* II Holographic setup for p-wave superconductor and Holographic information-related quantities
* II.1 The holographic p-wave superconductor model
* II.2 The phase diagram of holographic p-wave superconductor model
* II.3 The holographic quantum information
* III The computation of the holographic quantum information
* III.1 The holographic entanglement entropy and mutual information
* III.2 The minimum entanglement wedge cross section
* IV The Scaling behavior of the quantum information
* V The growth rate of the holographic quantum information
* VI Discussion
## I Introduction
Quantum entanglement is the most crucial characteristic of quantum systems and forms the foundation of quantum information theory. Recently, quantum information has attracted considerable attention from numerous fields, such as holographic duality, quantum many-body systems, and condensed matter theory. According to recent research, quantum information can detect quantum phase transitions and plays a key role in the emergence of spacetime [1; 2; 3; 4; 5].
In recent years, a variety of measures of quantum information have been proposed, such as entanglement entropy (EE), mutual information (MI), and Renyi entropy. EE is a widely used quantity that describes the entanglement of pure states very well. However, EE is not
suitable for describing the entanglement of the more prevalent mixed states. To address this issue, new measures such as entanglement of purification (EOP), reflected entropy, quantum discord, and others have been suggested for mixed-state systems [6; 7]. However, calculating these measures of quantum information can be challenging, particularly in strongly correlated systems. The complexity of these calculations increases exponentially with the size of the quantum system.
The gauge/gravity duality has proved to be a powerful tool for studying strongly correlated quantum systems by dualizing such systems to classical gravitational systems [8; 9; 10; 11; 12]. It has been shown that the background geometry of the dual gravitational system encodes the quantum information of the dual field theory. For instance, the entanglement entropy (EE) is related to the minimum surface in the bulk, also known as the holographic entanglement entropy (HEE) [13]. The ability of HEE to detect quantum phase transitions and thermal phase transitions has been investigated in [14; 15; 16; 17]. Recently, the entanglement wedge cross-section (EWCS) has been proposed as a novel measure of mixed-state entanglement in holographic systems [18; 19]. Additionally, various types of mixed-state entanglement, such as reflected entropy, logarithmic negativity, balanced partial entanglement, and odd entropy have been linked to the EWCS in holographic systems [20; 21; 22; 23; 24; 25]. In conclusion, EWCS is a powerful tool for investigating mixed-state entanglement in strongly coupled field theories [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
Holographic superconductivity is a key topic in the gauge/gravity theory, providing a novel approach to studying high-temperature superconductors [37; 38; 39; 40; 41]. The symmetry of the Cooper pair wave function allows for the classification of superconductors as s-wave, p-wave, d-wave, etc. The main characteristics of the phase transition in superconductors are spontaneous symmetry breaking and the emergence of order parameters. For instance, an s-wave holographic superconductor is thought to be the spontaneous scalarization of the black hole, a p-wave holographic superconductor requires a charged vector field in the bulk as the vector order parameter, and a d-wave model was built by introducing a charged massive spin two field propagating in the bulk [42; 43; 44]. Recent studies have shown that holographic quantum information can be used to detect the phase transition of s-wave superconductor [15; 28; 45; 46]. However, research on the effects of mixed-state entanglement in p-wave superconductors is currently lacking. Therefore, it would be interesting to investigate the connection between the holographic p-wave superconducting phase transition and mixed
state entanglement.
In this paper, we aim to systematically study the role of mixed-state entanglement during the p-wave superconductivity phase transition. The paper is organized as follows: In Sec. II, we introduce the holographic p-wave superconductor model, and the concepts of holographic quantum information, including holographic HEE, MI and EWCS. We explore the characteristics of mixed-state entanglement in Sec. III. In Sec. IV, we provide analytical and numerical analysis of the scaling behavior of mixed-state entanglement measures. Additionally, we uncover an inequality between EWCS and MI in Sec. V. Finally, in Sec. VI, we summarize our findings and conclusions.
## II Holographic setup for p-wave superconductor and holographic information-related quantities
We begin by presenting the model of a holographic p-wave superconductor and its phase diagram. Following that, we introduce HEE, as well as the mixed-state entanglement measures MI and EWCS.
### The holographic p-wave superconductor model
In the p-wave superconductor model, as the temperature drops to a specific critical value, spontaneous symmetry breaking occurs, resulting in a vector order parameter. The system then transits from the normal phase (absence of vector hair) to the superconducting phase (presence of vector hair). The holographic p-wave model is constructed by introducing a complex vector field into Einstein-Maxwell theory with a negative cosmological constant [47; 48],
\[S= \frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-g}\left(\mathcal{R}+\frac{ 6}{L^{2}}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\rho_{\mu\nu}^{\dagger} \rho^{\mu\nu}-m^{2}\rho_{\mu}^{\dagger}\rho^{\mu}+iq\gamma\rho_{\mu}\rho_{\nu }^{\dagger}F^{\mu\nu}\right), \tag{1}\]
where \(\kappa^{2}=8\pi G\) is related to the gravitational constant, and \(L\) is the AdS radius, which we set to 1. \(A\) is the gauge field and the field strength \(F_{\mu\nu}=\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}\). \(\rho_{\mu}\) is a complex vector field with mass \(m\) and charge \(q\). The tensor \(\rho_{\mu\nu}=D_{\mu}\rho_{\nu}-D_{\nu}\rho_{\mu}\) with the covariant derivative defined as \(D_{\mu}=\nabla_{\mu}-iqA_{\mu}\). The last term in the action is the non-minimal coupling term between the Maxwell field and the complex vector field. In this paper, we only consider the
case without an external magnetic field. The equation of motion (EOM) can be read as,
\[\begin{split}\nabla^{\nu}F_{\nu\mu}=& iq(\rho^{\nu}\rho^{ \dagger}_{\nu\mu}-\rho^{\nu\dagger}\rho_{\nu\mu})+iq\gamma\nabla^{\nu}(\rho_{\nu }\rho^{\dagger}_{\mu}-\rho^{\dagger}_{\nu}\rho_{\mu}),\\ D^{\nu}\rho_{\nu\mu}-& m^{2}\rho_{\mu}+iq\gamma \rho^{\nu}F_{\nu\mu}=0,\\ \mathcal{R}_{\mu\nu}-\frac{1}{2}\mathcal{R}g_{\mu\nu}-\frac{3}{L ^{2}}g_{\mu\nu}=&\frac{1}{2}F_{\mu\lambda}F^{\lambda}_{\nu}+ \frac{1}{2}\left(-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\rho^{\dagger}_{ \mu\nu}\rho^{\mu\nu}-m^{2}\rho^{\dagger}_{\mu}\rho^{\mu}+iq\gamma\rho_{\mu} \rho^{\dagger}_{\nu}F^{\mu\nu}\right)g_{\mu\nu}+\\ &\frac{1}{2}\{[\rho^{\dagger}_{\mu\lambda}\rho^{\lambda}_{\nu}+m ^{2}\rho^{\dagger}_{\mu}\rho_{\nu}-iq\gamma(\rho_{\mu}\rho^{\dagger}_{\lambda }-\rho^{\dagger}_{\mu}\rho_{\lambda})F^{\lambda}_{\nu}]+\mu\leftrightarrow\nu\}. \end{split} \tag{2}\]
We solve the EOM with this ansatz,
\[\begin{split} ds^{2}=\frac{1}{z^{2}}\left(-p(z)(1-z)U(z)dt^{2}+ \frac{1}{p(z)(1-z)U(z)}dz^{2}+V_{1}(z)dx^{2}+V_{2}(z)dy^{2}\right),\\ A_{\nu}dx^{\nu}=\mu(1-z)a(z)dt,\qquad\rho_{\nu}dx^{\nu}=\rho_{x} (z)dx,\end{split} \tag{3}\]
where \(p(z)\equiv 1+z+z^{2}-\frac{\mu^{2}z^{3}}{4}\). \(\mu\) is the chemical potential of the dual field theory. The radius axis is denoted by \(z\), which ranges from \(0\) to \(1\), with \(z=0\) and \(z=1\) representing the AdS boundary and horizon, respectively. In our ansatz, there are five unknown functions, \(U(z)\), \(V_{1}(z)\), \(V_{2}(z)\), \(a(z)\), and \(\rho_{x}(z)\), which can be obtained by solving the EOM. The ansatz (3) reduces to the AdS-RN black brane solution when \(U=V_{1}=V_{2}=a=1\) and \(\rho_{x}=0\). The expansion of the \(\rho_{x}\) near the AdS boundary is
\[\rho_{x}=\rho_{x_{-}}z^{\Delta_{-}}+\rho_{x_{+}}z^{\Delta_{+}}+\cdots, \tag{4}\]
where the scaling dimensions are \(\Delta_{\pm}=\frac{1\pm\sqrt{1+4m^{2}}}{2}\), and we set the source \(\rho_{x_{-}}=0\) so that the condensate arises spontaneously. After solving the EOM, we can obtain the condensate \(\langle J_{x}\rangle\) by extracting the coefficient \(\rho_{x_{+}}\). The condensate \(\langle J_{x}\rangle\) emerges below a specific critical temperature that depends on \(m^{2}\) and \(q\). Consequently, in the dual quantum field theory, the vector operator acquires a non-zero vacuum expectation value, spontaneously breaking the U(1) symmetry and the rotational symmetry. Therefore, \(\langle J_{x}\rangle\) can be used as the order parameter of the p-wave superconducting phase transition.
The Hawking temperature of this model is \(\tilde{T}=\frac{12-\mu^{2}}{16\pi}\). The system is invariant under the following rescaling,
\[\begin{split}(t,x,y)\rightarrow\alpha^{-1}(t,x,y),\quad(U,V_{1},V_{2})\rightarrow\alpha^{2}(U,V_{1},V_{2}),\\ \mu\rightarrow\alpha\mu,\quad\tilde{T}\rightarrow\alpha\tilde{T},\quad\rho_{x_{+}}\rightarrow\alpha^{\Delta_{+}+1}\rho_{x_{+}}.\end{split} \tag{5}\]
In this paper, we adopt the chemical potential \(\mu\) as the scaling unit, which is equivalent to treating the dual system as a field theory described by the grand canonical ensemble. The dimensionless Hawking temperature is \(T=\tilde{T}/\mu\).
### The phase diagram of holographic p-wave superconductor model
This holographic p-wave superconductor model can exhibit zeroth-order, first-order, and second-order phase transitions depending on the values of \(m\) and \(q\). For example, a second-order phase transition can occur at \(q=1.5\), \(m^{2}=3/4\) with a critical temperature of \(T_{c}\approx 0.01791\). In Fig. 1, we demonstrate the relationship between the condensate \(\langle J_{x}\rangle^{2/5}\) and temperature by plotting the scaling relationship,
\[\delta(\langle J_{x}\rangle)\sim\left(1-\frac{T}{T_{c}}\right)^{\alpha_{c}}. \tag{6}\]
Theoretical calculations predict that the critical exponent is \(\alpha_{c}=1/2\) [47]. Our numerical results indeed give \(\alpha_{c}\approx 0.500106\).
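As an illustration of how such an exponent can be extracted numerically, the sketch below performs a log-log least-squares fit of Eq. (6) to condensate data just below \(T_{c}\); the choice of fitting window is arbitrary and only an assumption.

```python
import numpy as np

def fit_critical_exponent(T, Jx, Tc, window=0.1):
    """Fit log<Jx> = alpha * log(1 - T/Tc) + const using points with
    (1 - window) * Tc < T < Tc."""
    T, Jx = np.asarray(T, dtype=float), np.asarray(Jx, dtype=float)
    mask = (T < Tc) & (T > (1 - window) * Tc) & (Jx > 0)
    x = np.log(1 - T[mask] / Tc)
    y = np.log(Jx[mask])
    alpha, _ = np.polyfit(x, y, 1)
    return alpha
```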
A first-order phase transition can occur at \(q=1.2\) and \(m^{2}=3/4\) with a critical temperature of \(T_{c}\approx 0.003382\). To better visualize the phase structure, we plot the effective
Figure 1: Left plot: The second-order phase transition occurs as the temperature falls below the critical value. The inset plot illustrates the scaling behavior of the condensate \(\langle J_{x}\rangle\). Right plot: The first-order phase transition occurs when the temperature falls below the critical temperature, which represents by the black dashed line. The inset plot illustrates the effective free energy density \(\Omega\) versus the temperature \(T/T_{c}\), revealing that the superconducting phase is thermodynamically favored.
free energy density (as shown in Fig.1). The effective free energy density is defined as \(\tilde{\Omega}=M-Ts\), where \(T\) is the Hawking temperature, \(s\) is the entropy density, and \(M\) is the mass density of the black brane [49]. The mass density of the black brane can be obtained by using the AdS asymptotic behavior of \(g_{tt}\) in our ansatz,
\[\frac{(1-z)U(z)\left(-\frac{1}{4}\mu^{2}z^{3}+z^{2}+z+1\right)}{z^{2}}\sim \frac{1}{z^{2}}+Mz+Qz^{2}+\cdots. \tag{7}\]
The free energy of the superconducting phase is lower than that of the normal phase when the temperature drops below the critical temperature \(T_{c}\). As a result, the system abruptly transitions from the normal phase to the superconducting phase.
To more thoroughly understand the behavior of p-wave superconductivity, we present the phase diagram in Fig. 2. The phase diagram is constructed by identifying the critical points, which can be found by examining the emergence of condensation as a perturbation near these points. The linearized equations of motion can be transformed into an eigenvalue problem that we solve using numerical methods
\[\begin{split}\frac{1}{32\mu^{2}(z-1)z^{2}}(\mu^{2}z^{3}-4z^{2}-4 z-4)(2z^{2}(z^{2}(\mu^{2}(4z-3)-12)\rho^{\prime}_{x}(z)+\\ \left(\mu^{2}z^{4}-\left(\mu^{2}+4\right)z^{3}+4\right)\rho^{ \prime\prime}_{x}(z))-6\rho_{x}(z))=-q^{2}\rho_{x}(z).\end{split} \tag{8}\]
By analyzing the eigenvalues, we can determine the upper or lower bounds of the critical points, which correspond to the boundaries of the different phases in the diagram.
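To give a sense of how Eq. (8) can be turned into a numerical eigenvalue problem, the sketch below discretizes it with second-order finite differences on an interior grid in \(z\) and reads a rough critical charge off the smallest positive value of \(-\lambda=q^{2}\). The Dirichlet-type boundary treatment and the modest grid are simplifications of our own; a spectral method with the proper near-boundary expansion would be needed for quantitatively accurate critical points.

```python
import numpy as np

def critical_charge(mu, n=400):
    """Rough estimate of the critical q at chemical potential mu from the
    linearized equation (8), written as L rho = -q^2 rho and discretized
    with centered finite differences on z in (0, 1)."""
    z = np.linspace(0.0, 1.0, n + 2)[1:-1]   # interior points only
    h = z[1] - z[0]
    pref = (mu**2 * z**3 - 4*z**2 - 4*z - 4) / (32 * mu**2 * (z - 1) * z**2)
    c2 = pref * 2 * z**2 * (mu**2 * z**4 - (mu**2 + 4) * z**3 + 4)   # rho'' coefficient
    c1 = pref * 2 * z**2 * z**2 * (mu**2 * (4*z - 3) - 12)           # rho'  coefficient
    c0 = pref * (-6.0)                                               # rho   coefficient
    L = (np.diag(c2 * (-2.0 / h**2) + c0)
         + np.diag((c2 / h**2 + c1 / (2*h))[:-1], 1)
         + np.diag((c2 / h**2 - c1 / (2*h))[1:], -1))
    q2 = -np.linalg.eigvals(L).real
    q2 = q2[q2 > 0]
    return np.sqrt(q2.min()) if q2.size else None
```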
Figure 2: The phase diagram of holographic p-wave superconductor model with positive \(m^{2}\). The solid lines are the critical points.
### The holographic quantum information
Quantum entanglement is a fundamental characteristic of quantum systems. EE is a well-known measure of entanglement, which quantifies the correlation between a subsystem and its complement for pure states. It is defined in terms of the reduced density matrix \(\rho_{A}\)[50],
\[S_{A}(|\psi\rangle)=-\text{Tr}[\rho_{A}\text{log}(\rho_{A})],\qquad\rho_{A}= \text{Tr}_{B}(|\psi\rangle\langle\psi|). \tag{9}\]
The HEE was proposed to be dual to the area of the minimum surface in the gravitational system [51]. In this paper, we consider the HEE of the configuration with an infinitely long strip along the \(y\)-axis (see Fig. 3). HEE typically diverges due to the asymptotically AdS boundary and is regularized by subtracting the divergent term. It should be noted that HEE is not suitable for describing mixed-state entanglement. For example, the EE of a subsystem of a mixed state that is a direct product on \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) is not equal to zero, even though the entanglement between the subsystems vanishes. This is because EE contains both quantum and classical correlations. Therefore, as the dual of EE, HEE is also affected by thermodynamic entropy in mixed-state systems [52; 53].
To better solve the problem of mixed-state entanglement measurement, numerous novel entanglement measures have been proposed. One popular measure is mutual information (MI), which quantifies the correlation between two subsystems \(A\) and \(C\) that are separated by a subsystem \(B\). According to the definition of MI, it is calculated as [54; 55],
\[I(a:c)=S(a)+S(c)-\min(S(a\cup c)), \tag{10}\]
where \(S(x)\) denotes the entanglement entropy of subsystem \(x\). Unlike entanglement entropy,
Figure 3: Left plot: the minimum surface for a subsystem (red region). Right plot: the minimum cross-section (green surface) of the entanglement wedge.
MI for direct product states \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) is always zero, making it a more appropriate measure for mixed-state entanglement. In the holographic context, the dual of MI is the difference in area between red (disconnected configuration) and blue surfaces (connected configuration), as shown in Fig. 4. As the subsystem \(A,\,C\) becomes smaller or when the separation \(B\) becomes larger, MI decreases and eventually reaches zero, indicating a disentangling phase transition. However, MI has some limitations as a mixed-state entanglement measure as it is directly related to entanglement entropy and can be dominated by it in some cases [27]. Therefore, it is important to explore other mixed-state entanglement measures.
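In practice, once the strip entanglement entropies are known, Eq. (10) reduces to a comparison of the connected and disconnected candidate surfaces; a tiny helper expressing this for adjacent strips is sketched below, where taking the connected candidate as \(S(b)+S(a\cup b\cup c)\) follows the standard holographic prescription.

```python
def mutual_information(S_a, S_c, S_b, S_abc):
    """Holographic MI of a and c separated by b for strip configurations:
    the connected candidate for S(a U c) is S(b) + S(a U b U c), the
    disconnected one is S(a) + S(c); MI vanishes when the disconnected
    configuration is minimal."""
    connected = S_b + S_abc
    disconnected = S_a + S_c
    return max(disconnected - min(connected, disconnected), 0.0)
```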
Recently, the minimum cross-section of the entanglement wedge (EWCS) has been proposed as a novel holographic mixed-state entanglement measure [18]. EWCS is considered to be the holographic dual of reflected entropy, logarithmic negativity, and odd entropy. The definition of EWCS is as follows,
\[E_{w}(\rho_{AB})=\min_{\Sigma_{AB}}\left(\frac{\text{Area}(\Sigma_{AB})}{4G_{N} }\right). \tag{11}\]
Fig. 3 is an illustration of EWCS in a bipartite system \(a\cup c\) divided by \(b\). The area bounded by the minimum surface of the disconnected configuration is known as the entanglement wedge. It is important to note that entanglement between subsystems only exists when the total correlation is not zero, which means the MI does not vanish.
Although EWCS plays a significant role in measuring the entanglement of mixed-state systems, it is still challenging to compute [26]. First, the EOM of the minimum surface is highly nonlinear and hard to solve. Second, the minimum cross-section is obtained by scanning a two-dimensional parameter space, which is computationally demanding. Last but not least, the coordinate singularity near the asymptotically AdS boundary can hinder numerical precision. We have proposed an efficient algorithm for computing the EWCS based on the requirement
Figure 4: The illustration of the holographic mutual information.
that the minimum cross-section is locally orthogonal to the boundaries of the entanglement wedge [28]. Fig. 5 shows the illustration of the key concept for our numerical algorithm. We consider EWCS of the infinite strip along the \(y\)-direction in a homogeneous background
\[ds^{2}=g_{tt}dt^{2}+g_{zz}dz^{2}+g_{xx}dx^{2}+g_{yy}dy^{2}. \tag{12}\]
The minimum surfaces of the connected configuration can be represented as \(C_{1}(\theta_{1})\) and \(C_{2}(\theta_{2})\). The minimum surfaces intersect with the cross-section at points \(p_{1}\) and \(p_{2}\), and the area of this local minimum surface (the red curve in Fig. 5) is,
\[A=\int_{C_{p_{1},p_{2}}}\sqrt{g_{xx}g_{yy}x^{\prime}(z)^{2}+g_{zz}g_{yy}}dz. \tag{13}\]
Varying (13), we obtain the EOM that determines the local minimum surface,
\[x^{\prime}(z)^{3}\left(\frac{g_{xx}g_{yy}^{\prime}}{2g_{yy}g_{zz}}+\frac{g_{xx }^{\prime}}{2g_{zz}}\right)+x^{\prime}(z)\left(\frac{g_{xx}^{\prime}}{g_{xx}} +\frac{g_{yy}^{\prime}}{2g_{yy}}-\frac{g_{zz}^{\prime}}{2g_{zz}}\right)+x^{ \prime\prime}(z)=0. \tag{14}\]
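The following Python sketch illustrates how (13) and (14) can be handled numerically. The metric functions used here are those of pure AdS\(_4\) with the boundary at \(z=0\), chosen purely for illustration (the actual background (12) is the hairy black brane of this model), and the integration range and initial data are hypothetical, chosen to stay away from the turning point where \(x^{\prime}(z)\) diverges.

```python
import numpy as np
from scipy.integrate import solve_ivp, simpson

# Illustrative metric functions: pure AdS_4, boundary at z = 0 (an assumption).
g = lambda z: 1.0 / z**2            # g_xx = g_yy = g_zz for this toy background
dg = lambda z: -2.0 / z**3          # their z-derivative

def eom(z, y):
    """y = (x, x'); right-hand side of the minimal-surface EOM (14)."""
    x, xp = y
    gxx = gyy = gzz = g(z)
    dgxx = dgyy = dgzz = dg(z)
    xpp = -(xp**3 * (gxx * dgyy / (2 * gyy * gzz) + dgxx / (2 * gzz))
            + xp * (dgxx / gxx + dgyy / (2 * gyy) - dgzz / (2 * gzz)))
    return [xp, xpp]

# Hypothetical data on a surface anchored near the boundary cut-off z = 0.1.
z0, z1 = 0.1, 0.4
sol = solve_ivp(eom, (z0, z1), [0.0, 0.05], dense_output=True,
                rtol=1e-9, atol=1e-12)

# Area of the local surface, Eq. (13), by quadrature along the solution.
zs = np.linspace(z0, z1, 400)
x, xp = sol.sol(zs)
area = simpson(np.sqrt(g(zs) * g(zs) * xp**2 + g(zs) * g(zs)), x=zs)
print(area)   # regulated value; the UV divergence sits at the AdS boundary z -> 0
```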
Recall that the global minimum cross-section is locally orthogonal to the boundaries of the entanglement wedge, which implies that
\[\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial\theta_{1}}\right\rangle_{p_{1}}=0,\quad\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial\theta_{2}}\right\rangle_{p_{2}}=0, \tag{15}\]
where \(\left\langle\cdot,\cdot\right\rangle\) denotes the inner product with respect to the metric \(g_{\mu\nu}\). Normalizing these orthogonality relations, we define
\[Q_{1}(\theta_{1},\theta_{2})\equiv\left.\frac{\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial\theta_{1}}\right\rangle}{\sqrt{\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial z}\right\rangle\left\langle\frac{\partial}{\partial\theta_{1}},\frac{\partial}{\partial\theta_{1}}\right\rangle}}\right|_{p_{1}}=0,\quad Q_{2}(\theta_{1},\theta_{2})\equiv\left.\frac{\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial\theta_{2}}\right\rangle}{\sqrt{\left\langle\frac{\partial}{\partial z},\frac{\partial}{\partial z}\right\rangle\left\langle\frac{\partial}{\partial\theta_{2}},\frac{\partial}{\partial\theta_{2}}\right\rangle}}\right|_{p_{2}}=0. \tag{16}\]
Figure 5: The illustration of the numerical algorithm for EWCS.
The minimum cross-section is obtained by locating the endpoints \((\theta_{1},\theta_{2})\) on the minimal surfaces at which (16) is satisfied. To this end, we adopt the Newton-Raphson method to find the endpoints satisfying the local orthogonality conditions. Based on the above techniques, we can study the relationship between the holographic p-wave superconductor and the EWCS [28].
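The Newton-Raphson step can be sketched as follows. The true \(Q_{1},Q_{2}\) of (16) require the minimal surfaces \(C_{1}(\theta_{1})\), \(C_{2}(\theta_{2})\) of the given background, so a stand-in function with a known root is used here only to exercise the solver; the finite-difference Jacobian and tolerances are implementation choices, not prescriptions from this work.

```python
import numpy as np

def newton_2d(Q, theta0, tol=1e-10, max_iter=50, h=1e-6):
    """Solve Q(theta1, theta2) = 0, e.g. the orthogonality conditions (16),
    by Newton-Raphson with a forward-difference Jacobian."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        q = np.asarray(Q(theta))
        if np.linalg.norm(q) < tol:
            break
        J = np.empty((2, 2))
        for j in range(2):
            dt = np.zeros(2)
            dt[j] = h
            J[:, j] = (np.asarray(Q(theta + dt)) - q) / h
        theta = theta - np.linalg.solve(J, q)
    return theta

# Stand-in for (Q1, Q2): a smooth map whose root (0.3, 0.7) plays the role of
# the endpoint pair that makes the cross-section locally orthogonal.
Q_toy = lambda t: np.array([np.tanh(t[0] - 0.3) + 0.1 * (t[1] - 0.7),
                            (t[1] - 0.7) ** 3 + (t[1] - 0.7)])
print(newton_2d(Q_toy, [0.5, 0.5]))   # -> approximately [0.3, 0.7]
```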
## III The computation of the holographic quantum information
### The holographic entanglement entropy and mutual information
In Fig. 6, we show the relationship between the HEE and the temperature \(T/T_{c}\) during second-order and first-order phase transitions. For \(q=1.5\) and \(m^{2}=3/4\), where the second-order phase transition occurs, the HEE increases with increasing temperature. For \(q=1.2\) and \(m^{2}=3/4\), where the first-order phase transition occurs, the HEE jumps abruptly when crossing the critical point. To understand this behavior, we examine the relationship between HEE and thermodynamic entropy: when the configuration is large or the temperature is high enough, the minimum surface approaches the horizon of the black brane, and the HEE is primarily determined by the thermodynamic entropy. Therefore, we next analyze the thermodynamic entropy of the black brane to better understand
Figure 6: The holographic entanglement entropy \(S_{E}\) vs temperature \(T/T_{c}\) with various strip width \(l\). The critical point is indicated by the black dashed line. The stable and metastable states are depicted by solid and transparent lines, respectively. Left plot: The second-order phase transition at \(T_{c}\approx 0.01791\). Right plot: The first-order phase transition at \(T_{c}\approx 0.003382\).
the behavior of HEE [52; 53].
The entropy density is given by,
\[\tilde{s}=\frac{2\pi A}{\kappa^{2}}=\frac{2\pi\sqrt{V_{1}(z)V_{2}(z)}}{\kappa^{2} }\hat{V}, \tag{17}\]
where \(A\) is the area of the horizon and \(\hat{V}=\int dxdy\) is the corresponding area of the region in the dual field theory [56]. Dividing the entropy by the area \(\hat{V}\) and \(\mu^{2}\), we obtain the dimensionless entropy density \(s=\frac{\kappa^{2}\tilde{s}}{2\pi\hat{V}\mu^{2}}\). The entropy density near the critical point is plotted in Fig. 7. These results show that both the HEE and the entropy density can detect the critical behavior of the holographic p-wave superconducting phase transitions. Similar behavior of the HEE in superconducting phase transitions can be found in [15; 16; 28; 57].
MI is one of the mixed-state entanglement measures that can extract the total correlation of the systems. Since MI is directly defined by HEE (see (10)), it also can diagnose the phase transition. Moreover, a disentangling phase transition occurs when MI is zero and entanglement exists only when MI is greater than zero. Fig. 8 illustrates the behavior of the disentangling phase transition for various configurations. However, in certain cases, MI is determined by the thermodynamic entropy [27; 28; 49]. Therefore, it is necessary to investigate other mixed-state entanglement measures.
Figure 7: The entropy density \(s\) vs temperature \(T/T_{c}\). The black dashed line represents the entropy density of the normal phase. The entropy density of the superconducting phase is represented by the purple line. Left plot: The second-order phase transition of holographic p-wave superconductor model. The inset plot depicts the logarithm between \(s\) and \(1-\frac{T}{T_{c}}\). Right plot: The first-order phase transition of holographic p-wave superconductor model. The critical temperature is indicated by the red dashed line.
### The minimum entanglement wedge cross section
We begin by examining the EWCS during a second-order phase transition. Fig. 9 shows that EWCS can diagnose the critical behavior of holographic p-wave superconducting phase transitions. At the critical point of a second-order phase transition, EWCS is continuous, but its first derivative is discontinuous. In the superconducting phase, EWCS always decreases with increasing temperature. However, we find that the EWCS in the normal phase is configuration-dependent. In large configurations, it behaves similarly to the HEE, showing a monotonically increasing trend with temperature. In contrast, for small configurations, the EWCS of the normal phase exhibits a monotonically decreasing trend with temperature, opposite to the behavior of the HEE.
Next, we investigate the behavior of the EWCS during a first-order phase transition. Fig. 10 illustrates the EWCS behavior during this phase transition, with the inset plot showing the derivative of the EWCS with respect to temperature (\(\partial_{T}E_{w}\)) versus the temperature \(T\). The inset plot shows that in the normal phase the EWCS decreases with increasing temperature. Unlike the HEE, the EWCS of the superconducting phase always decreases with temperature. When the temperature falls below the critical point, the EWCS jumps abruptly from its normal-phase value to its superconducting-phase value; this sudden change shows that the EWCS can capture the first-order phase transition, similar to the HEE and MI.
In addition to diagnosing the critical points, it is also important to investigate the scaling
Figure 8: The critical configuration for disentangling phase transition. The critical temperature is indicated by the black dashed line. The solid and translucent lines represent stable and metastable states. Left plot: When \(b\) is above the \(b_{c}\), the disentangling phase transition occurs. Right plot: When \(c\) is below \(c_{c}\), the disentangling phase transition occurs.
behavior of the holographic quantum information. Next, we analyze the critical behavior of the quantum information-related quantities during the p-wave superconductivity phase transitions.
Figure 10: The EWCS \(E_{w}\) versus the temperature \(T/T_{c}\) in the first-order phase transition. The dashed black line represents the critical temperature when \(T_{c}\approx 0.003382\). The translucent line represents the metastable state, whereas the solid line represents the stable state. Left plot: We set \(a=2\) and \(b=0.5\) with varying \(c\) values. Right plot: We set \(a=2\) and \(c=1.8\) while varying \(b\) values.
Figure 9: The EWCS \(E_{w}\) vs the temperature \(T/T_{c}\). The inset graph depicts \(\partial_{T}E_{w}\). The black dashed line depicts the critical temperature. Left plot: The \(\partial_{T}E_{w}\) of the normal phase is greater than zero when \(a=2\) and \(b=0.5\). Right plot: The \(\partial_{T}E_{w}\) of the normal phase is less than zero when \(a=1\) and \(b=0.2\).
## IV The scaling behavior of the quantum information
As the critical point marks the bifurcation point between the normal and superconducting phases, to study the critical behavior, we compare the quantum information quantities of the normal phase to those of the superconducting phase by subtracting the former from the latter,
\[\delta S_{E}=S_{E}^{\rm cond}-S_{E}^{\rm normal},\quad\delta E_{w}=E_{w}^{\rm cond }-E_{w}^{\rm normal}. \tag{18}\]
We propose the following critical behaviors for the HEE and EWCS,
\[\delta S_{E}\sim\left(1-\frac{T}{T_{c}}\right)^{\alpha_{\rm HEE}},\quad\delta E_{w}\sim\left(1-\frac{T}{T_{c}}\right)^{\alpha_{\rm EWCS}}, \tag{19}\]
where \(\alpha_{\rm HEE}\) and \(\alpha_{\rm EWCS}\) are the critical exponents of the HEE and the EWCS, respectively. We plot the critical scaling behavior in Fig. 11, from which we find that both the EWCS and the HEE exhibit excellent scaling behavior near the critical point. More importantly, they both have the same critical exponent,
\[\alpha_{\rm HEE}\approx\alpha_{\rm EWCS}\approx 1. \tag{20}\]
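In practice, the exponents in (19)-(20) are read off from a linear fit of \(\ln\delta S\) against \(\ln(1-T/T_{c})\). The sketch below shows the fitting step on synthetic placeholder data generated with exponent one; in the actual analysis the arrays would come from the HEE and EWCS computations described above.

```python
import numpy as np

# Synthetic stand-in data mimicking Eq. (19) with alpha = 1 (not model output).
t = np.linspace(1e-4, 5e-2, 40)            # t = 1 - T/Tc
delta_S = 0.37 * t * (1.0 + 0.05 * t)      # small correction away from Tc

alpha, ln_amp = np.polyfit(np.log(t), np.log(delta_S), 1)
print(f"fitted exponent ~ {alpha:.3f}, amplitude ~ {np.exp(ln_amp):.3f}")
```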
It is important to note that the vector field \(\rho_{\mu}\) vanishes at temperatures above the critical temperature. At temperatures slightly below the critical point, the vacuum expectation value of the condensate \(\langle J_{x}\rangle\) is small and can be analyzed using perturbation theory. We can expand the vector field \(\rho_{\mu}\) and the metric functions near the critical point as [58; 59; 60],
Figure 11: The scaling behavior of HEE and EWCS. The inset plot shows the slope of the holographic quantum information. Left plot: \(\ln(\delta S_{E})\) versus \(\ln(1-\frac{T}{T_{c}})\) with different widths \(l\). Right plot: we set \(a=2\) and \(b=0.5\) and plot \(\ln(\delta E_{w})\) versus \(\ln(1-\frac{T}{T_{c}})\) with different values of \(c\).
\[\rho_{x} =\epsilon\rho^{(1)}+\epsilon^{3}\rho^{(3)}+\epsilon^{5}\rho^{(5)}+\cdots,\] \[U =1+\epsilon^{2}U^{(2)}+\epsilon^{4}U^{(4)}+\cdots, \tag{21}\] \[V_{1} =1+\epsilon^{2}V_{1}^{(2)}+\epsilon^{4}V_{1}^{(4)}+\cdots.\]
From (21), we can deduce that the critical exponents of the metric functions \(U\) and \(V_{1}\) are twice that of the condensate \(\langle J_{x}\rangle\). This can be understood by noting that holographic quantum information is represented by geometric objects that depend only on the metric. Their critical exponent can therefore be written as,
\[\delta(S_{E})\sim\delta(E_{w})\sim\delta(\langle J_{x}\rangle)^{2}\sim\left(1- \frac{T}{T_{c}}\right)^{2\alpha_{c}}. \tag{22}\]
Therefore, the theoretical critical exponent of holographic quantum information should be twice that of the condensate \(\langle J_{x}\rangle\),
\[\alpha_{\rm HEE}=\alpha_{\rm EWCS}=2\alpha_{c}. \tag{23}\]
Although the EWCS and the HEE have the same critical exponent, they do not approach the scaling law at the same rate in the critical region. To investigate this behavior near the critical point more closely, we define the quasi-critical exponent (QCE) as
\[\alpha\equiv\frac{d\ln(\delta S)}{d\ln\left(1-\frac{T}{T_{c}}\right)}. \tag{24}\]
The QCE is a function of \(\ln\left(1-\frac{T}{T_{c}}\right)\). The behavior of the QCE \(\alpha\) along \(\ln\left(1-\frac{T}{T_{c}}\right)\) therefore measures over how wide a range \(\delta S\) converges to the scaling law.
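Numerically, the QCE of (24) is just the local slope of \(\ln\delta S\) with respect to \(\ln(1-T/T_{c})\), which can be obtained by finite differences; the data below are synthetic placeholders with exponent one plus a correction term.

```python
import numpy as np

def quasi_critical_exponent(t, delta_S):
    """Eq. (24): local slope of ln(delta_S) versus ln(1 - T/Tc)."""
    return np.gradient(np.log(delta_S), np.log(t))

t = np.logspace(-4, -1, 60)                      # 1 - T/Tc
delta_S = 0.37 * t * (1.0 + 2.0 * np.sqrt(t))    # synthetic: exponent 1 + correction
alpha = quasi_critical_exponent(t, delta_S)
print(alpha[0], alpha[-1])   # close to 1 near Tc, drifting away from it
```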
We show the QCE of HEE and EWCS in Fig. 12. From the left plot of Fig. 12 we find that the width \(l\) has an impact on the scaling behavior of HEE. As the width \(l\) increases, the scaling behavior of HEE is closer to the theoretical scaling behavior. As the temperature moves away from the critical point or the width \(l\) decreases, however, the scaling behavior of the HEE begins to deviate from the theoretical result.
The QCE of EWCS is depicted in the right plot of Fig. 12. Comparing the left plot and the right plot of Fig. 12, EWCS converges to the theoretical scaling law over a broader range. As the separation \(b\) decreases, the scaling behavior of EWCS becomes close to the theoretical results. This behavior suggests that EWCS, as a measure for mixed-state entanglement, can more accurately describe the scaling behavior during superconductivity phase transitions than HEE.
## V The growth rate of the holographic quantum information
Several important inequalities involving the EWCS have been proposed in the literature [19; 61; 62], such as the inequality \(E_{w}(\rho_{AC})\geq\frac{1}{2}I(A:C)\), which states that the EWCS cannot be smaller than half of the MI. These inequalities are crucial in the study of mixed-state entanglement measures, particularly in testing the validity of holographic duals of certain quantum information. In this paper, we find a new inequality relating the EWCS and the MI in the superconducting phase transition: near the phase transition point, the relative growth rate of the MI along the temperature axis is always greater than that of the EWCS.
When the temperature drops below the critical temperature, the EWCS and the MI of the superconducting phases are always larger than those of the normal phases. To take a closer look at the relationship between the EWCS and the MI, we define the relative values of the MI and the EWCS,
\[\tilde{E}_{w}=\frac{E_{w,\mathrm{cond}}}{E_{w,\mathrm{norm}}},\quad\tilde{I}= \frac{I_{\mathrm{cond}}}{I_{\mathrm{norm}}}. \tag{25}\]
With this definition, \(\tilde{E}_{w}\) and \(\tilde{I}\) both equal 1 at the critical point. In Fig. 13, we depict the relationship between \(\tilde{I}\) and \(\tilde{E}_{w}\). In contrast to the absolute inequality [19; 61; 62], the relative MI is always larger than the relative EWCS in the critical region. To describe this relationship quantitatively, we note that,
\[\delta(Q)\simeq A(Q)\left(1-\frac{T}{T_{c}}\right)^{\alpha}, \tag{26}\]
Figure 12: The QCE of the HEE and EWCS. The red dashed line represents twice the QCE of the condensate \(\langle J_{x}\rangle\). Left plot: The QCE of the HEE near the critical point. Right plot: The QCE of the EWCS near the critical point, where we fix \(a=2\) and \(c=1.5\).
where \(Q\) stands for any physical quantity possessing critical behaviors. From (26) we find that,
\[\tilde{E}_{w}=1+A(\tilde{E}_{w})\left(1-\frac{T}{T_{c}}\right)^{\alpha},\quad \tilde{I}=1+A(\tilde{I})\left(1-\frac{T}{T_{c}}\right)^{\alpha}. \tag{27}\]
Accordingly, \(A\) measures the growth of the holographic quantum information shown in Fig. 13, and hence we call \(A\) the growth rate. We work out \(A(\tilde{E}_{w})\) and \(A(\tilde{I})\) for several different configurations and list them in Table 1. From these numerical results we conclude a new inequality between the EWCS and MI growth rates near the critical point,
\[A(\tilde{I})>A(\tilde{E}_{w}). \tag{28}\]
The growth rate of MI is always greater than that of EWCS near the critical point. Furthermore, the difference between the growth rates of EWCS and MI increases as the subsystem separation \(b\) increases. Near the critical point, the entanglement of the system changes rapidly, and MI is more sensitive to these changes than EWCS. This tendency could be attributed to MI's ability to capture the total correlation of the system, which exceeds the information captured by EWCS. Additionally, we have examined this inequality in other models of thermal phase transitions, including the holographic s-wave superconductor model, and propose that this inequality may be universal in thermal phase transitions.
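The growth rates in (27) can be extracted by fitting \(\tilde{Q}-1\) against \((1-T/T_{c})^{\alpha}\) with \(\alpha=1\) near the critical point; the sketch below uses synthetic placeholder curves rather than the Table 1 data.

```python
import numpy as np

def growth_rate(t, Q_rel, alpha=1.0):
    """Fit Q_rel - 1 ~ A * t**alpha near the critical point, Eq. (27)."""
    return np.polyfit(t**alpha, Q_rel - 1.0, 1)[0]

t = np.linspace(1e-4, 2e-2, 30)     # 1 - T/Tc
E_rel = 1.0 + 0.12 * t              # synthetic relative EWCS
I_rel = 1.0 + 0.29 * t              # synthetic relative MI
print(growth_rate(t, E_rel), growth_rate(t, I_rel))   # A(I) > A(E_w), cf. Eq. (28)
```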
Figure 13: The relative value of MI and EWCS near the critical point. The dashed lines represent \(\tilde{I}\) and the solid lines represent \(\tilde{E}_{w}\). The inset plot is the slope of \(\tilde{I}\) and \(\tilde{E}_{w}\). Left plot: We fix the subsystems \(a=c=2\) and change the separation \(b\). Right plot: We fix the separation \(b=0.1\) and change the subsystem \(a\) and \(c\).
## VI Discussion
In this study, we investigate mixed-state entanglement measures, including HEE, MI and EWCS, in a holographic p-wave superconductor model. The model exhibits both second- and first-order phase transitions when varying the system parameters. We find that HEE and EWCS can accurately diagnose the critical behavior of these phase transitions. Additionally, we observe that the behavior of the HEE approaches that of the thermodynamic entropy as the subsystem configuration grows. As a mixed-state entanglement measure, however, the EWCS exhibits the opposite behavior from the HEE in the superconducting phase: the HEE always increases with temperature, whereas the EWCS in the superconducting state decreases with temperature. In the case of first-order phase transitions, the holographic quantum information experiences sudden changes. Moreover, the EWCS behavior in the normal phase depends on the subsystem configuration. These observations demonstrate that the EWCS can not only detect phase transitions but also capture more information than the HEE.
In addition to diagnosing phase transitions, we also examine the scaling behaviors of the condensate and the holographic quantum information. Through analyzing the scaling behavior
\begin{table}
\begin{tabular}{|c|c|c|} \hline Configuration & A(\(\tilde{E}_{w}\)) & A(\(\tilde{I}\)) \\ \hline \hline \(a=c=2\), \(b=0.20\) & 0.0426 & 0.0741 \\ \hline \(a=c=2\), \(b=0.47\) & 0.1162 & 0.2947 \\ \hline \(a=c=2\), \(b=0.73\) & 0.2094 & 0.9855 \\ \hline \(a=c=2\), \(b=1.00\) & 0.3212 & 10.8712 \\ \hline \(a=0.8\), \(b=0.2\), \(c=0.4\) & 0.00218 & 0.00525 \\ \hline \(a=0.8\), \(b=0.2\), \(c=0.8\) & 0.00520 & 0.00803 \\ \hline \(a=0.8\), \(b=0.2\), \(c=1.2\) & 0.00862 & 0.01336 \\ \hline \(a=0.8\), \(b=0.2\), \(c=1.6\) & 0.01209 & 0.01975 \\ \hline \(a=0.5\), \(b=0.2\), \(c=0.4\) & 0.00116 & 0.00364 \\ \hline \(a=1.0\), \(b=0.2\), \(c=0.9\) & 0.00794 & 0.01182 \\ \hline \(a=1.5\), \(b=0.2\), \(c=1.4\) & 0.02016 & 0.03120 \\ \hline \(a=3.0\), \(b=0.2\), \(c=2.9\) & 0.06042 & 0.11959 \\ \hline \end{tabular}
\end{table}
Table 1: The growth rate \(A(\tilde{E}_{w})\) and \(A(\tilde{I})\) at different configurations.
of various holographic quantum information measures, we find that HEE and EWCS not only detect the critical point but also exhibit scaling behaviors. We show both numerically and analytically that the critical exponent of holographic quantum information is twice that of the condensate. Furthermore, we observe that compared to HEE, EWCS provides a more sensitive characterization of the scaling behavior, making it more suitable as a measure for mixed-state entanglement in superconductivity phase transitions. Additionally, we propose a novel inequality for EWCS and MI in phase transitions and provide numerical evidence for this result. The relative growth rate of MI is always larger than that of EWCS near the critical point.
Next, we point out several directions worth further investigation. The investigation of topological and quantum phase transitions is an important area of research in condensed matter theory [63; 64; 65; 66; 67]. In addition, the relationship between HEE and quantum phase transitions has been studied under holographic framework in previous works [52; 53]. Further research into the mixed-state entanglement in quantum phase transitions and topological quantum phase transitions is therefore desirable. Additionally, it would be interesting to test the inequality (28) in other thermal phase transition models, such as the \(d\)-wave superconductivity model and the massive gravity model. We are working on these directions.
###### Acknowledgements.
Peng Liu would like to thank Yun-Ha Zha for her kind encouragement during this work. Zhe Yang appreciates Feng-Ying Deng's support and warm words of encouragement during this work. We are also very grateful to Chong-Ye Chen, Mu-Jing Li, and Wei Xiong for their helpful discussion and suggestions. This work is supported by the Natural Science Foundation of China under Grant No. 11905083, 12005077 and 11805083, as well as the Science and Technology Planning Project of Guangzhou (202201010655) and Guangdong Basic and Applied Basic Research Foundation (2021A1515012374).
|
2309.17130 | GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data | Despite the success of deep learning for text and image data, tree-based
ensemble models are still state-of-the-art for machine learning with
heterogeneous tabular data. However, there is a significant need for
tabular-specific gradient-based methods due to their high flexibility. In this
paper, we propose $\text{GRANDE}$, $\text{GRA}$die$\text{N}$t-Based
$\text{D}$ecision Tree $\text{E}$nsembles, a novel approach for learning hard,
axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE
is based on a dense representation of tree ensembles, which affords to use
backpropagation with a straight-through operator to jointly optimize all model
parameters. Our method combines axis-aligned splits, which is a useful
inductive bias for tabular data, with the flexibility of gradient-based
optimization. Furthermore, we introduce an advanced instance-wise weighting
that facilitates learning representations for both, simple and complex
relations, within a single model. We conducted an extensive evaluation on a
predefined benchmark with 19 classification datasets and demonstrate that our
method outperforms existing gradient-boosting and deep learning frameworks on
most datasets. The method is available under:
https://github.com/s-marton/GRANDE | Sascha Marton, Stefan Lüdtke, Christian Bartelt, Heiner Stuckenschmidt | 2023-09-29T10:49:14Z | http://arxiv.org/abs/2309.17130v3 | # GRANDE: Gradient-Based Decision Tree Ensembles
###### Abstract
Despite the success of deep learning for text and image data, tree-based ensemble models are still state-of-the-art for machine learning with heterogeneous tabular data. However, there is a significant need for tabular-specific gradient-based methods due to their high flexibility. In this paper, we propose GRANDE, **GR**A**die**Nt-Based **D**ecision Tree **E**nsembles, a novel approach for learning hard, axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE is based on a dense representation of tree ensembles, which affords to use backpropagation with a straight-through operator to jointly optimize all model parameters. Our method combines axis-aligned splits, which is a useful inductive bias for tabular data, with the flexibility of gradient-based optimization. Furthermore, we introduce an advanced instance-wise weighting that facilitates learning representations for both, simple and complex relations, within a single model. We conducted an extensive evaluation on a predefined benchmark with 19 classification datasets and demonstrate that our method outperforms existing gradient-boosting and deep learning frameworks on most datasets. The method is available under: [https://github.com/s-marton/GRANDE](https://github.com/s-marton/GRANDE)
## 1 Introduction
Heterogeneous tabular data is the most frequently used form of data (Chui et al., 2018; Shwartz-Ziv and Armon, 2022) and is indispensable in a wide range of applications such as medical diagnosis (Ulmer et al., 2020; Somani et al., 2021), estimation of creditworthiness (Clements et al., 2020) and fraud detection (Cartella et al., 2021). Therefore, enhancing the predictive performance and robustness of models can bring significant advantages to users and companies (Borisov et al., 2022). However, tabular data comes with considerable challenges like noise, missing values, class imbalance, and a combination of different feature types, especially categorical and numerical data. Despite the success of deep learning in various domains, recent studies indicate that tabular data still poses a major challenge and tree-based models like XGBoost and CatBoost outperform them in most cases (Borisov et al., 2022; Grinsztajn et al., 2022; Shwartz-Ziv and Armon, 2022). At the same time, employing end-to-end gradient-based training provides several advantages over traditional machine learning methods (Borisov et al., 2022). They offer a high level of flexibility by allowing an easy integration of arbitrary, differentiable loss functions tailored towards specific problems and support iterative training (Sahoo et al., 2017). Moreover, gradient-based methods can be incorporated easily into multimodal learning, with tabular data being one of several input types (Lichtenwalter et al., 2021; Polsterl et al., 2021). Therefore, creating tabular-specific, gradient-based methods is a very active field of research and the need for well-performing methods is intense (Grinsztajn et al., 2022).
Recently, Marton et al. (2023) introduced GradTree, a novel approach that uses gradient descent to learn hard, axis-aligned decision trees (DTs). This is achieved by reformulating DTs to a dense representation and jointly optimizing all tree parameters using backpropagation with a straight-through (ST) operator. Learning hard, axis-aligned DTs with gradient descent allows combining the advantageous inductive bias of tree-based methods with the flexibility of a gradient-based optimization. In this paper, we propose GRANDE, **GRA**die**N**t-Based **D**ecision Tree **E**nsembles, a novel approach for learning decision tree ensembles using end-to-end gradient descent. Similar to Marton et al. (2023), we use a dense representation for split nodes and the ST operator to deal with the non-differentiable nature of DTs. We build upon their approach, transitioning from individual trees to a weighted tree ensemble, while maintaining an efficient computation. As a result, GRANDE holds a significant advantage over existing gradient-based methods. Typically, deep learning methods are biased towards smooth solutions (Rahaman et al., 2019). As the target function in tabular datasets is usually not smooth, deep learning methods struggle to find these irregular functions. In contrast, models that are based on hard, axis-aligned DTs learn piece-wise constant functions and therefore do not show such a bias (Grinsztajn et al., 2022). This advantage is inherent to GRANDE, as it utilizes hard, axis-aligned DTs. This is a major difference to existing deep learning methods for hierarchical representations like NODE, where soft and oblique splits are used (Popov et al., 2019). Furthermore, we introduce instance-wise weighting in GRANDE. This allows learning appropriate representations for simple and complex rules within a single model, which increases the performance of the ensemble. Furthermore, we show that our instance-wise weighting has a positive impact on the local interpretability relative to other state-of-the-art methods.
More specifically, our contributions are as follows:
* We extend GradTree (Marton et al., 2023) from individual trees to an end-to-end gradient-based tree ensemble, maintaining efficient computation (Section 3.1).
* We introduce softsign as a differentiable split function and show the advantage over commonly used alternatives (Section 3.2).
* We propose a novel weighting technique that emphasizes instance-wise estimator importance (Section 3.3).
We conduct an extensive evaluation on 19 binary classification tasks (Section 4) based on the predefined tabular benchmark proposed by Bischl et al. (2021). GRANDE outperforms existing methods for both, default and optimized hyperparameters. The performance difference to other methods is substantial on several datasets, making GRANDE an important extension to the existing repertoire of tabular data methods.
## 2 Background: Gradient-Based Decision Trees
GRANDE builds on gradient-based decision trees (GradTree) at the level of individual trees in the ensemble. Hence, we summarize the relevant aspects and notation of GradTree in this section and refer to Marton et al. (2023) for a complete overview.
Traditionally, DTs involve nested concatenation of rules. In GradTree, DTs are formulated as arithmetic functions based on addition and multiplication to facilitate gradient-based learning. Thereby both, GradTree and GRANDE focus on learning fully-grown (i.e., complete, full) DTs which can be pruned post-hoc. A DT of depth \(d\) is formulated with respect to its parameters as:
\[t(\mathbf{x}|\mathbf{\lambda},\mathbf{\tau},\mathbf{\iota})=\sum_{l=0}^{2^{d}-1}\lambda_{l} \operatorname{\mathbb{L}}(\mathbf{x}|l,\mathbf{\tau},\mathbf{\iota}) \tag{1}\]
where \(\operatorname{\mathbb{L}}\) is a function that indicates whether a sample \(\mathbf{x}\in\mathbb{R}^{n}\) belongs to a leaf \(l\), \(\mathbf{\lambda}\in\mathcal{C}^{2^{d}}\) denotes class membership for each leaf node, \(\mathbf{\tau}\in\mathbb{R}^{2^{d}-1}\) represents split thresholds and \(\mathbf{\iota}\in\mathbb{N}^{2^{d}-1}\) the feature index for each internal node.
To support a gradient-based optimization and ensure an efficient computation via matrix operations, a novel dense DT representation is introduced in GradTree. Traditionally, the feature index vector \(\mathbf{\iota}\) is one-dimensional, but GradTree expands it into a matrix form. Specifically, this representation one-hot encodes the feature index, converting \(\mathbf{\iota}\in\mathbb{R}^{2^{d}-1}\) into a matrix \(\mathbf{I}\in\mathbb{R}^{2^{d}-1}\times\mathbb{R}^{n}\). Similarly, for split thresholds, instead of a single value for all features, individual values for each feature are
stored, leading to a matrix representation \(\mathbf{T}\in\mathbb{R}^{2^{d}-1}\times\mathbb{R}^{n}\). By enumerating the internal nodes in breadth-first order, we can redefine the indicator function \(\mathbb{L}\) for a leaf \(l\), resulting in
\[g(\mathbf{x}|\mathbf{\lambda},T,I)=\sum_{l=0}^{2^{d}-1}\lambda_{l}\,\mathbb{L}(\mathbf{x}|l, \mathbf{T},\mathbf{I}) \tag{2}\]
\[\text{where}\quad\mathbb{L}(\mathbf{x}|l,\mathbf{T},\mathbf{I})=\prod_{j=1}^{d}\left[\left(1-\mathfrak{p}(l,j)\right)\,\mathbb{S}(\mathbf{x}|\mathbf{I}_{\mathfrak{i}(l,j)},\mathbf{T}_{\mathfrak{i}(l,j)})+\mathfrak{p}(l,j)\,\left(1-\mathbb{S}(\mathbf{x}|\mathbf{I}_{\mathfrak{i}(l,j)},\mathbf{T}_{\mathfrak{i}(l,j)})\right)\right] \tag{3}\]
Here, \(\mathfrak{i}\) is the index of the internal node preceding a leaf node \(l\) at a certain depth \(j\) and \(\mathfrak{p}\) indicates whether the left (\(\mathfrak{p}=0\)) or the right branch (\(\mathfrak{p}=1\)) was taken.
Typically, DTs use the Heaviside step function for splitting, which is non-differentiable. GradTree reformulates the split function to account for reasonable gradients:
\[\mathbb{S}(\mathbf{x}|\mathbf{\iota},\mathbf{\tau})=\lfloor S\left(\mathbf{\iota}\cdot\mathbf{x}- \mathbf{\iota}\cdot\mathbf{\tau}\right)\rceil \tag{4}\]
where \(S(z)=\frac{1}{1+e^{-z}}\) represents the logistic function, \(\lfloor z\rceil\) stands for rounding a real number \(z\) to the nearest integer and \(\mathbf{a}\cdot\mathbf{b}\) denotes the dot product between two vectors \(\mathbf{a}\) and \(\mathbf{b}\). We further need to ensure that \(\mathbf{\iota}\) is a one-hot encoded vector to account for axis-aligned splits. This is achieved by applying a hardmax transformation before calculating \(\mathbb{S}\). Both rounding and hardmax operations are non-differentiable. To overcome this, GradTree employs the straight-through (ST) operator during backpropagation. This allows the model to use non-differentiable operations in the forward pass while ensuring gradient propagation in the backward pass.
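To make the dense representation of (2)-(4) concrete, the following NumPy sketch evaluates a single fully-grown tree with hard, axis-aligned splits. It only shows the forward pass; in GradTree/GRANDE the rounding and the hardmax are wrapped in a straight-through estimator inside an autodiff framework so that gradients still flow. The breadth-first child indexing and all parameter values are illustrative assumptions, not the reference implementation.

```python
import numpy as np

def split(x, I_node, tau_node):
    """Hard axis-aligned split S(x | iota, tau), Eq. (4): sigmoid then rounding.
    (In training, rounding and the hardmax over I_node use a straight-through
    estimator; here we only evaluate the hard forward pass.)"""
    z = I_node @ x - I_node @ tau_node
    return float(np.round(1.0 / (1.0 + np.exp(-z))))

def tree_predict(x, leaf_values, T, I):
    """Eqs. (2)-(3): sum over leaves of leaf value times path indicator."""
    d = int(np.log2(len(leaf_values)))
    out = 0.0
    for leaf in range(len(leaf_values)):
        bits = [(leaf >> (d - 1 - j)) & 1 for j in range(d)]  # path bits p(l, j)
        node, indicator = 0, 1.0
        for p in bits:
            s = split(x, I[node], T[node])
            indicator *= (1 - p) * s + p * (1 - s)            # Eq. (3)
            node = 2 * node + 1 + p                           # breadth-first child
        out += leaf_values[leaf] * indicator
    return out

# Toy tree of depth 2 over 3 features (all values hypothetical).
rng = np.random.default_rng(0)
d, n = 2, 3
I = np.eye(n)[rng.integers(0, n, size=2**d - 1)]   # one-hot feature choice per node
T = rng.normal(size=(2**d - 1, n))                  # per-feature thresholds per node
leaves = rng.normal(size=2**d)                      # leaf values lambda
print(tree_predict(rng.normal(size=n), leaves, T, I))
```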
## 3 GRANDE: Gradient-Based Decision Tree Ensembles
One core contribution of this paper is the extension of GradTree to tree ensembles (Section 3.1). In Section 3.2 we propose softsign as a differentiable split function to propagate more reasonable gradients. Furthermore, we introduce an instance-wise weighting in Section 3.3 and regularization techniques in Section 3.4. As a result, GRANDE can be learned end-to-end with gradient descent, leveraging the potential and flexibility of a gradient-based optimization.
### From Decision Trees to Weighted Tree Ensembles
One advantage of GRANDE over existing gradient-based methods is the inductive bias of axis-aligned splits for tabular data. Combining this property with an end-to-end gradient-based optimization is at the core of GRANDE. This is also a major difference to existing deep learning methods for hierarchical representations like NODE, where soft, oblique splits are used (Popov et al., 2019). Therefore, we can define GRANDE as
\[G(\mathbf{x}|\mathbf{\omega},\mathbf{L},\mathbf{\mathsf{T}},\mathbf{\mathsf{I}})=\sum_{e=1}^{E} \omega_{e}\,g(\mathbf{x}|\mathbf{L}_{e},\mathbf{\mathsf{T}}_{e},\mathbf{\mathsf{I}}_{e}) \tag{5}\]
where \(E\) is the number of estimators in the ensemble and \(\mathbf{\omega}\) is a weight vector. By extending \(\mathbf{L}\) to a matrix and \(\mathbf{\mathsf{T}},\mathbf{\mathsf{I}}\) to tensors for the complete ensemble instead of defining them individually for each tree, we can leverage parallel computation for an efficient training.
As GRANDE can be learned end-to-end with gradient descent, we keep an important advantage over existing, non-gradient-based tree methods like XGBoost and CatBoost. Both, the sequential induction of the individual trees and the sequential combination of individual trees via boosting are greedy. This results in constraints on the search space and can favor overfitting, as highlighted by Marton et al. (2023). In contrast, GRANDE learns all parameters of the ensemble jointly and overcomes these limitations.
### Differentiable Split Functions
The Heaviside step function, which is commonly used as split function in DTs, is non-differentiable. To address this challenge, various studies have proposed the employment of differentiable split functions. A predominant approach is the adoption of the sigmoid function, which facilitates soft decisions (Jordan and Jacobs, 1994; Irsoy et al., 2012; Frosst and Hinton, 2017). A more recent development
in this field originated with the introduction of the entmax transformation (Peters et al., 2019). Researchers utilized a two-class entmax (entmoid) function to make the decisions more sparse (Popov et al., 2019). Further, Chang et al. (2021) proposed a temperature annealing procedure to gradually turn the decisions hard. Marton et al. (2023) introduced an alternative method that generates hard splits by applying a straight-through (ST) operator after a sigmoid split function. While this allows using hard splits for calculating the function values, it also introduces a mismatch between the forward and backward pass. However, we can utilize this mismatch to incorporate additional information: by using a sigmoid function, the distance between a feature value and the threshold enters the gradient computation. Accordingly, the gradient behavior plays a pivotal role in ensuring effective differentiation, especially in scenarios where input values are close to the decision threshold. The traditional sigmoid function can be suboptimal due to its smooth gradient decline. Entmoid, although addressing certain limitations of sigmoid, still displays an undesirable gradient behavior. Specifically, its gradient drops to zero when the feature value is too far from the threshold. This can hinder the model's ability to accommodate samples that deviate substantially from the threshold. Therefore, we propose using a softsign function, scaled to \((0,1)\), as a differentiable split function:
\[S_{\text{ss}}(z)=\frac{1}{2}\left(\frac{z}{1+|z|}+1\right) \tag{6}\]
The gradient of the softsign is pronounced when a sample is close to the threshold and decays sharply with distance, yet it remains responsive even when the feature value is far from the threshold. These characteristics make it superior for differentiable splitting. This concept is visualized in Figure 1. Besides this intuitive advantage of using a softsign split function, we also show empirically that it is the superior choice (Table 4).
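The gradient argument can be checked with a few lines of Python. The sketch below compares the derivative of the scaled softsign (6) with the sigmoid derivative at several distances from the threshold; the entmoid is omitted because its exact functional form is not reproduced here.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
softsign_split = lambda z: 0.5 * (z / (1.0 + np.abs(z)) + 1.0)   # Eq. (6)

d_sigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))
d_softsign = lambda z: 0.5 / (1.0 + np.abs(z)) ** 2

for z in [0.0, 2.0, 10.0]:   # distance of the feature value from the threshold
    print(z, softsign_split(z), d_sigmoid(z), d_softsign(z))
# At z = 10 the sigmoid gradient is ~4.5e-5 while the scaled softsign still
# yields ~4.1e-3, i.e. far-from-threshold samples keep a responsive gradient.
```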
### Instance-Wise Estimator Weights
One challenge of ensemble methods is learning a good weighting scheme of the individual estimators. The flexibility of an end-to-end gradient-based optimization allows including learnable weight parameters to the optimization. A simple solution would be learning one weight for each estimator and using for instance a softmax over all weights, resulting in a weighted average. However, this forces a very homogeneous ensemble, in which each tree aims to make equally good predictions for all samples. In contrast, it would be beneficial if individual trees can account for different areas of the target function, and are not required to make confident predictions for each sample.
To address this, we propose an advanced weighting scheme that allows calculating instance-wise weights that can be learned within the gradient-based optimization. Instead of using one weight per _estimator_, we use one weight for each _leaf_ of the estimator as visualized in Figure 2 and thus define the weights as \(\mathbf{W}\in\mathbb{R}^{E}\times\mathbb{R}^{2^{d}}\) instead of \(\mathbf{\omega}\in\mathbb{R}^{E}\). We define \(p(\mathbf{x}|\mathbf{L},\mathbf{\mathsf{T}},\mathbf{\mathsf{I}}):\mathbb{R}^{n}\to\mathbb{R}^ {E}\) as a function to calculate a vector comprising the individual prediction of each tree. Further, we define a function \(w(\mathbf{x}|\mathbf{W},\mathbf{L},\mathbf{\mathsf{T}},\mathbf{\mathsf{I}}):\mathbb{R}^{n}\to \mathbb{R}^{E}\) to calculate a weight vector with one weight for each tree based on the leaf which the current sample is assigned to. Subsequently, a softmax is applied on these chosen weights for each sample. The process of multiplying the post-softmax weights by the
Figure 1: **Differentiable Split Functions. The sigmoid function’s gradient declines smoothly, while entmoid’s gradient decays more rapidly but becomes zero for large values. The scaled softsign activation has high gradients for small values but maintains a responsive gradient for large values, offering greater sensitivity.**
predicted values from each tree equates to computing a weighted average. This results in
\[G(\mathbf{x}|\mathbf{W},\mathbf{L},\mathbf{\mathsf{T}},\mathbf{\mathsf{I}})=\sigma\left(w(\mathbf{x}|\mathbf{W },\mathbf{L},\mathbf{\mathsf{T}},\mathbf{\mathsf{I}})\right)\cdot p(\mathbf{x}|\mathbf{L},\mathbf{ \mathsf{T}},\mathbf{\mathsf{I}}) \tag{7}\]
and \(\sigma(\mathbf{x})\) is the softmax function. It is important to note that when calculating \(\mathbb{L}\) (see Equation 3), only the value for the leaf to which the sample is assigned in a given tree is non-zero.
We want to note that our weighting scheme permits calculating instance-wise weights even for unseen samples. Furthermore, our weighting allows GRANDE to learn representations for simple and complex rules within one model. In our evaluation, we demonstrate that instance-wise weights significantly enhance the performance of GRANDE and emphasize local interpretability.
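A minimal NumPy sketch of the instance-wise weighting in (7): for a single sample, the weight of the leaf it lands in is gathered from each tree, a softmax is applied across trees, and the per-tree predictions are averaged with these weights. The leaf indices are assumed to be given (e.g., by the dense tree pass), and all numbers are hypothetical.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def grande_predict(per_tree_pred, leaf_idx, W):
    """Eq. (7) for one sample.
    per_tree_pred : (E,)        prediction p_e(x) of every tree
    leaf_idx      : (E,)        leaf the sample falls into, per tree
    W             : (E, 2**d)   learnable leaf weights"""
    w = W[np.arange(len(leaf_idx)), leaf_idx]   # one weight per tree, chosen by leaf
    return softmax(w) @ per_tree_pred           # instance-wise weighted average

# Toy ensemble with E = 4 trees of depth 2 (all numbers hypothetical).
rng = np.random.default_rng(1)
E, d = 4, 2
print(grande_predict(rng.normal(size=E),
                     rng.integers(0, 2**d, size=E),
                     rng.normal(size=(E, 2**d))))
```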
### Regularization: Feature Subset, Data Subset and Dropout
The combination of tree-based methods with a gradient-based optimization opens the door for numerous regularization techniques. For each tree in the ensemble, we select a feature subset. This regularizes the model and simultaneously addresses the poor scalability of GradTree with an increasing number of features. Similarly, we select a subset of the samples for each estimator. Furthermore, we implement dropout by randomly deactivating a predefined fraction of the estimators in the ensemble and rescaling the weights accordingly.
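A sketch of the estimator dropout described above, under the assumption that the mask is applied to the post-softmax weights and the surviving weights are renormalized; where exactly the official implementation applies the mask may differ.

```python
import numpy as np

def estimator_dropout(weights, rate, rng):
    """Randomly deactivate a fraction of estimators and rescale the rest."""
    keep = rng.random(weights.shape) >= rate
    w = weights * keep
    total = w.sum()
    return w / total if total > 0 else weights   # fall back if all were dropped

rng = np.random.default_rng(2)
w = np.array([0.4, 0.3, 0.2, 0.1])               # post-softmax weights of 4 trees
print(estimator_dropout(w, rate=0.25, rng=rng))  # renormalized to sum to 1
```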
## 4 Experimental Evaluation
As pointed out by Grinsztajn et al. (2022), most papers presenting a new method for tabular data have a highly varying evaluation methodology, with a small number of datasets that might be biased towards the authors' model. As a result, recent surveys showed that tree boosting methods like XGBoost and CatBoost are still state-of-the-art and outperform new architectures for tabular data on most datasets (Grinsztajn et al., 2022; Shwartz-Ziv & Armon, 2022; Borisov et al., 2022). This highlights the necessity for an extensive and unbiased evaluation, as we will carry out in the following, to accurately assess the performance of a new method and draw valid conclusions. We want to emphasize that recent surveys and evaluation on predefined benchmarks indicate that there is no "one-size-fits-all" solution for all tabular datasets. Consequently, we should view new methods as an extension to the existing repertoire and set our expectations in line with this perspective.
Figure 2: **GRANDE Architecture**. This figure visualizes the structure and weighting of GRANDE for an exemplary ensemble with two trees of depth two. For each tree in the ensemble, and for every sample, we determine the weight of the leaf which the sample is assigned to. Subsequently, a softmax is applied on these chosen weights. Multiplying the post-softmax weights by the predictions equates a weighted average of the individual estimators.
### Experimental Setup
**Datasets and Preprocessing.** For our evaluation, we used a predefined collection of datasets that was selected based on objective criteria from OpenML Benchmark Suites and comprises a total of 19 binary classification datasets (see Table 5 for details). The selection process was adopted from Bischl et al. (2021) and therefore is not biased towards our method. A more detailed discussion on the selection of the benchmark can be found in Appendix A. We one-hot encoded low-cardinality categorical features and used leave-one-out encoding for high-cardinality categorical features (more than 10 categories). To make them suitable for a gradient-based optimization, we gaussianized features using a quantile transformation, as is common practice (Grinsztajn et al., 2022). In line with Borisov et al. (2022), we report the mean and standard deviation of the test performance over a 5-fold cross-validation to ensure reliable results.
**Methods.** We compare our approach to XGBoost and CatBoost, which achieved superior results according to recent studies, and NODE, which is most related to our approach. With this setup, we have one state-of-the-art tree-based and one gradient-based approach for each tree type (see Table 1). For a more extensive comparison of tree-based and gradient-based approaches, we refer to Borisov et al. (2022), Grinsztajn et al. (2022) and Shwartz-Ziv & Armon (2022). Our method is available under [https://github.com/s-marton/GRANDE](https://github.com/s-marton/GRANDE).
**Hyperparameters.** We optimized the hyperparameters using Optuna (Akiba et al., 2019) with 250 trials and selected the search space as well as the default parameters for related work in accordance with Borisov et al. (2022). The best parameters were selected based on a 5x2 cross-validation as suggested by Raschka (2018), where the test data of each fold was held out of the HPO to get unbiased results. To deal with class imbalance, we further included class weights. Additional information along with the hyperparameters for each approach is provided in Appendix C.
### Results
**GRANDE outperforms existing methods on most datasets.** We evaluated the performance with optimized hyperparameters based on the macro F1-Score in Table 2 to account for class imbalance.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Standard DTs** & **Oblivious DTs** \\ \hline
**Tree-based** & XGBoost & CatBoost \\ \hline
**Gradient-based** & GRANDE & NODE \\ \hline \end{tabular}
\end{table}
Table 1: **Categorization of Approaches**
\begin{table}
\begin{tabular}{l c c c c} \hline & GRANDE & XGB & CatBoost & NODE \\ \hline dresses-sales & **0.612 \(\pm\) 0.049 (1)** & 0.581 \(\pm\) 0.059 (3) & 0.588 \(\pm\) 0.036 (2) & 0.564 \(\pm\) 0.051 (4) \\ climate-simulation-crashes & **0.853 \(\pm\) 0.070 (1)** & 0.763 \(\pm\) 0.064 (4) & 0.778 \(\pm\) 0.050 (3) & 0.802 \(\pm\) 0.035 (2) \\ cylinder-bands & **0.819 \(\pm\) 0.032 (1)** & 0.773 \(\pm\) 0.042 (3) & 0.801 \(\pm\) 0.043 (2) & 0.754 \(\pm\) 0.040 (4) \\ wdbc & **0.975 \(\pm\) 0.010 (1)** & 0.953 \(\pm\) 0.030 (4) & 0.963 \(\pm\) 0.023 (3) & 0.966 \(\pm\) 0.016 (2) \\ ipd & **0.657 \(\pm\) 0.042 (1)** & 0.632 \(\pm\) 0.043 (3) & 0.643 \(\pm\) 0.053 (2) & 0.526 \(\pm\) 0.069 (4) \\ tokylo1 & 0.921 \(\pm\) 0.004 (3) & 0.915 \(\pm\) 0.011 (4) & **0.927 \(\pm\) 0.013 (1)** & 0.921 \(\pm\) 0.010 (2) \\ qsar-biodeg & **0.854 \(\pm\) 0.022 (1)** & 0.853 \(\pm\) 0.020 (2) & 0.844 \(\pm\) 0.023 (3) & 0.836 \(\pm\) 0.028 (4) \\ ozone-level-8hr & **0.726 \(\pm\) 0.020 (1)** & 0.688 \(\pm\) 0.021 (4) & 0.721 \(\pm\) 0.027 (2) & 0.703 \(\pm\) 0.029 (3) \\ madelon & 0.803 \(\pm\) 0.010 (3) & 0.833 \(\pm\) 0.018 (2) & **0.861 \(\pm\) 0.021 (1)** & 0.571 \(\pm\) 0.022 (4) \\ Bioresponse & 0.794 \(\pm\) 0.008 (3) & 0.799 \(\pm\) 0.011 (2) & **0.801 \(\pm\) 0.014 (1)** & 0.780 \(\pm\) 0.011 (4) \\ wilt & 0.936 \(\pm\) 0.015 (2) & 0.911 \(\pm\) 0.010 (4) & 0.919 \(\pm\) 0.007 (3) & **0.937 \(\pm\) 0.017 (1)** \\ churn & 0.914 \(\pm\) 0.017 (2) & 0.900 \(\pm\) 0.017 (3) & 0.869 \(\pm\) 0.021 (4) & **0.930 \(\pm\) 0.011 (1)** \\ phoneme & 0.846 \(\pm\) 0.008 (4) & 0.872 \(\pm\) 0.007 (2) & **0.876 \(\pm\) 0.005 (1)** & 0.862 \(\pm\) 0.013 (3) \\ SpeedDating & **0.723 \(\pm\) 0.013 (1)** & 0.704 \(\pm\) 0.015 (4) & 0.718 \(\pm\) 0.014 (2) & 0.707 \(\pm\) 0.015 (3) \\ PhishingWebsites & **0.969 \(\pm\) 0.006 (1)** & 0.968 \(\pm\) 0.006 (2) & 0.965 \(\pm\) 0.003 (4) & 0.968 \(\pm\) 0.006 (3) \\ Amazon\_employee_access & 0.665 \(\pm\) 0.009 (2) & 0.621 \(\pm\) 0.008 (4) & **0.671 \(\pm\) 0.011 (1)** & 0.649 \(\pm\) 0.009 (3) \\ nomaoao & 0.958 \(\pm\) 0.002 (3) & **0.965 \(\pm\) 0.003 (1)** & 0.964 \(\pm\) 0.002 (2) & 0.956 \(\pm\) 0.001 (4) \\ adult & 0.709 \(\pm\) 0.006 (4) & **0.798 \(\pm\) 0.004 (1)** & 0.796 \(\pm\) 0.004 (2) & 0.794 \(\pm\) 0.004 (3) \\ numerai28.6 & **0.519 \(\pm\) 0.003 (1)** & 0.518 \(\pm\) 0.001 (3) & 0.519 \(\pm\) 0.002 (2) & 0.503 \(\pm\) 0.010 (4) \\ \hline \end{tabular}
\end{table}
Table 2: **Performance Comparison. We report the test macro F1-score (mean \(\pm\) stdev for a 5-fold CV) with optimized parameters and the ranking of each approach in parentheses. The datasets are sorted based on the data size.**
Additionally, we report the accuracy and ROC-AUC score in the Appendix B, which are consistent with the results presented in the following. GRANDE outperformed existing methods and achieved the highest mean reciprocal rank (MRR) of 0.702 and the highest average performance of 0.807. CatBoost yielded the second-best results (MRR of 0.570 and mean of 0.801) followed by XGBoost (MRR of 0.417 and mean of 0.792) and NODE (MRR of 0.395 and mean of 0.775). Yet, our findings are in line with existing work, indicating that there is no universal method for tabular data. However, on several datasets such as _climate-simulation-crashes_ and _cylinder-bands_ the performance difference to other methods was substantial, which highlights the importance of GRANDE as an extension to the existing repertoire. Furthermore, as the datasets are sorted by their size, we can observe that the results of GRANDE are especially good for small datasets, which is an interesting research direction for future work.
**GRANDE is computationally efficient for large and high-dimensional datasets.** GRANDE averaged a runtime of 45 seconds across all datasets, with a maximum runtime of 107 seconds. The runtime of GRANDE is robust to high-dimensional datasets (37 seconds for _Bioresponse_ with 1,776 features) and larger datasets (39 seconds for _numerai28.6_ with \(\approx\)100,000 samples). GRANDE achieved a significantly lower runtime compared to our gradient-based benchmark NODE, which has an approximately three times higher average runtime of 130 seconds. However, it is important to note that GBDT frameworks, especially XGBoost, are highly efficient when executed on the GPU and achieve significantly lower runtimes compared to gradient-based methods. The complete runtimes are listed in the appendix (Table 9).
**GRANDE outperforms existing methods with default hyperparameters.** Many machine learning methods, especially deep learning methods, are heavily reliant on a proper hyperparameter optimization. Yet, it is a desirable property that a method achieves good results even with its default settings. GRANDE achieves superior results with default hyperparameters, and significantly outperforms existing methods on most datasets. More specifically, GRANDE has the highest average performance (0.7931) and the highest MRR (0.6404) as summarized in Table 3.
**Softsign improves performance.** As discussed in Section 3.2, we argue that employing softsign as split index activation propagates informative gradients beneficial for the optimization. In Table 4, we support these claims by showing a superior performance of GRANDE with a softsign activation compared to sigmoid as the default choice as well as an entmoid function which is commonly used in related work (Popov et al., 2019; Chang et al., 2021).
**Instance-wise weighting increases model performance.** GRANDE uses instance-wise weighting to assign varying weights to estimators for each sample based on selected leaves. This promotes ensemble diversity and encourages estimators to capture unique local interactions. We argue that the ability to learn and represent simple, local rules with individual estimators in our ensemble can have a positive impact on the overall performance as it simplifies the task that has to be solved by the remaining estimators. As a result, GRANDE can efficiently learn compact representations for simple rules, where complex models usually tend to learn overly complex representations. In the
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \multicolumn{3}{c}{Differentiable Split Function} & \multicolumn{2}{c}{Weighting Technique} \\ \cline{2-6} & Softsign & Entmoid & Sigmoid & Leaf Weights & Estimator Weights \\ \hline Mean \(\uparrow\) & **0.8071 (1)** & 0.7990 (2) & 0.7959 (3) & **0.8071 (1)** & 0.7857 (2) \\ Mean Reciprocal Rank (MRR) \(\uparrow\) & **0.8246 (1)** & 0.5526 (2) & 0.4561 (3) & **0.9211 (1)** & 0.5789 (2) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation Study Summary. Left: Comparison of different options for differentiable split functions (complete results in Table 10). Right: Comparison of our instance-wise weighting based on leaf weights with a single weight for each estimator (complete results in Table 11).**
\begin{table}
\begin{tabular}{l c c} \hline \hline & Mean \(\uparrow\) &
\begin{tabular}{c} Mean Reciprocal \\ Rank (MRR) \(\uparrow\) \\ \end{tabular} \\ \hline GRANDE & **0.7931 (1)** & **0.6404 (1)** \\ XGB & 0.7877 (3) & 0.5175 (3) \\ CatBoost & 0.7925 (2) & 0.5219 (2) \\ NODE & 0.7663 (4) & 0.4035 (4) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Default Hyperparameter Performance Summary. The results are based on the test macro f1-score with the default setting. Complete results are listed in Table 8.**
following case study, we demonstrate the ability of GRANDE to learn compact representations for simple rules within a complex ensemble:
The _PhishingWebsites_ dataset is concerned with identifying malicious websites based on metadata and additional observable characteristics. Although the task is challenging (i.e., it is not possible to solve it sufficiently well with a simple model, as shown in Table 12), there exist several clear indicators for phishing websites. Thus, some instances can be categorized using simple rules, while assigning other instances is more difficult. Ideally, if an instance can be easily categorized, the model should follow simple rules to make a prediction. One example of such a rule, which holds universally in the given dataset, is that an instance can be classified as _phishing_ if a prefix or suffix was added to the domain name. By assessing the weights for an exemplary instance fulfilling this rule, we can observe that the DT visualized in Figure 3 accounts for 94% of the prediction. Accordingly, GRANDE has learned a very simple representation and the classification is derived by applying an easily comprehensible rule. Notably, for the other methods, it is not possible to assess the importance of individual estimators out-of-the-box in a similar way, as the prediction is either derived by either sequentially summing up the predictions of all trees (e.g. XGBoost and CatBoost) or equally weighting all estimators (e.g. for random forests). Furthermore, the results in Table 4 show that this has a significant positive impact on the average performance of GRANDE compared to using one single weight for each estimator.
**Instance-wise weighting can be beneficial for local interpretability.** In addition to the performance increase, our instance-wise weighting has a notable impact on the local interpretability of GRANDE. For each instance, we can assess the weights of individual estimators and inspect the estimators with the highest importance to understand which rules have the greatest impact on the prediction. For the given example, we only need to observe a single tree of depth two (Figure 3) to understand why the given instance was classified as _phishing_, even though the complete model is very complex. In contrast, existing ensemble methods require a global interpretation of the model and do not provide simple, local explanations out-of-the-box.
However, similar explanations can be extracted using Anchors (Ribeiro et al., 2018). Anchors, as an extension to LIME (Ribeiro et al., 2016), provides model-agnostic explanations by identifying conditions (called "anchors") which, when satisfied, guarantee a certain prediction with a high probability (noted as precision). These anchors are interpretable, rules-based conditions derived from input features that consistently lead to the same model prediction. Figure 4 shows the extracted rules for each approach. We can clearly see that the anchor extracted for GRANDE matches the rule we have identified based on the instance-wise weights in Figure 3. Furthermore, it is evident that the prediction derived by GRANDE is much simpler compared to any other approach, as it only
Figure 4: **Anchors Explanations. This figure shows the local explanations generated by Anchors for the given instance. The explanation for GRANDE only comprises a single rule. In contrast, the corresponding explanations for the other methods have significantly higher complexity, which indicates that these methods are not able to learn simple representations within a complex model.**
Figure 3: **Highest-Weighted Estimator. This figure visualizes the DT from GRANDE (1024 total estimators) which has the highest weight for an exemplary instance.**
comprises a single rule. Notably, this comes without suffering a loss in the precision, which is 1.00 for all methods. Furthermore, the rule learned by GRANDE has a significantly higher coverage, which means that the rule applied by GRANDE is more broadly representative. The corresponding experiment with additional details can be found in the supplementary material.
## 5 Related Work
Tabular data is the most frequently used type of data, and learning methods for tabular data are a field of very active research. Existing work can be divided into tree-based, deep learning and hybrid methods. In the following, we categorize the most prominent methods based on these three categories and differentiate our approach from existing work. For a more comprehensive review, we refer to Borisov et al. (2022), Shwartz-Ziv and Armon (2022) and Grinsztajn et al. (2022).
**Tree-Based Methods.** Tree-based methods have been widely used for tabular data due to their interpretability and ability to capture non-linear relationships. While individual trees usually offer a higher interpretability, (gradient-boosted) tree ensembles are commonly used to achieve superior performance (Friedman, 2001). The most prominent tree-based methods for tabular data improve the gradient boosting algorithm by, for instance, introducing advanced regularization (XGBoost (Chen and Guestrin, 2016)), a special handling for categorical variables (CatBoost (Prokhorenkova et al., 2018)) or a leaf-wise growth strategy (LightGBM (Ke et al., 2017)). Regarding the structure, GRANDE is similar to existing tree-based models. The main difference is the end-to-end gradient-based training procedure, which offers additional flexibility, and the instance-wise weighting.
**Deep Learning Methods.** With the success of deep learning in various domains, researchers have started to adjust deep learning architectures, mostly transformers, to tabular data (Gorishniy et al., 2021; Arik and Pfister, 2021; Huang et al., 2020; Cai et al., 2021; Kossen et al., 2021). According to recent studies, the Self-Attention and Intersample Attention Transformer (SAINT) is the superior deep learning method for tabular data, using attention over both, rows and columns (Somepalli et al., 2021). Although GRANDE, similar to deep learning methods, uses gradient descent for training, it has a shallow, hierarchical structure comprising hard, axis-aligned splits.
Hybrid MethodsHybrid methods aim to combine the strengths of a gradient-based optimization with other algorithms, most commonly tree-based methods (Abutbul et al., 2020; Hehn et al., 2020; Chen, 2020; Ke et al., 2019, 2018; Katzir et al., 2020). One prominent way to achieve this is using soft DTs to apply gradient descent by replacing hard decisions with soft ones, and axis-aligned with oblique splits (Frosst and Hinton, 2017; Kontschieder et al., 2015; Luo et al., 2021). Neural Oblivious Decision Ensembles (NODE) is one prominent hybrid method which learns ensembles of oblivious DTs with gradient descent and is therefore closely related to our work (Popov et al., 2019). Oblivious DTs use the same splitting feature and threshold for each internal node at the same depth, which allows an efficient, parallel computation and makes them suitable as weak learners. In contrast, GRANDE uses standard DTs as weak learners. GRANDE can also be categorized as a hybrid method. The main difference from existing methods is the use of hard, axis-aligned splits, which prevents the overly smooth solutions typically inherent in soft, oblique trees.
Recent studies indicate that, despite substantial effort in developing high-performing deep learning methods, tree-based models still outperform deep learning for tabular data (Grinsztajn et al., 2022; Borisov et al., 2022; Shwartz-Ziv and Armon, 2022). However, they also highlight the need for gradient-based methods due to their flexibility. One main reason for the superior performance of tree-based methods lies in the use of axis-aligned splits that are not biased towards overly smooth solutions (Grinsztajn et al., 2022). Therefore, GRANDE aligns with this argument and utilizes hard, axis-aligned splits, while incorporating the benefits and flexibility of a gradient-based optimization.
## 6 Conclusion and Future Work
In this paper, we introduced GRANDE, a new method for learning hard, axis-aligned tree ensembles with gradient descent. GRANDE combines the advantageous inductive bias of axis-aligned splits with the flexibility offered by gradient-descent optimization. In an extensive evaluation on a predefined benchmark, we demonstrated that GRANDE achieves superior results: both with optimized and default parameters, it outperformed existing state-of-the-art methods on most datasets. Furthermore, we showed that the instance-wise weighting of GRANDE emphasizes learning representations for simple and complex relations within a single model, which increases local interpretability compared to existing methods.
Currently, the proposed architecture is a shallow ensemble and already achieves state-of-the-art performance. However, the flexibility of gradient-based optimization holds further potential, e.g., through categorical embeddings, stacking of tree layers, and incorporating tree layers into deep learning frameworks, which is subject to future work.
|
2309.10893 | Hamilton-Jacobi Reachability Analysis for Hybrid Systems with Controlled
and Forced Transitions | Hybrid dynamical systems with nonlinear dynamics are one of the most general
modeling tools for representing robotic systems, especially contact-rich
systems. However, providing guarantees regarding the safety or performance of
nonlinear hybrid systems remains a challenging problem because it requires
simultaneous reasoning about continuous state evolution and discrete mode
switching. In this work, we address this problem by extending classical
Hamilton-Jacobi (HJ) reachability analysis, a formal verification method for
continuous-time nonlinear dynamical systems, to hybrid dynamical systems. We
characterize the reachable sets for hybrid systems through a generalized value
function defined over discrete and continuous states of the hybrid system. We
also provide a numerical algorithm to compute this value function and obtain
the reachable set. Our framework can compute reachable sets for hybrid systems
consisting of multiple discrete modes, each with its own set of nonlinear
continuous dynamics, discrete transitions that can be directly commanded or
forced by a discrete control input, while still accounting for control bounds
and adversarial disturbances in the state evolution. Along with the reachable
set, the proposed framework also provides an optimal continuous and discrete
controller to ensure system safety. We demonstrate our framework in several
simulation case studies, as well as on a real-world testbed to solve the
optimal mode planning problem for a quadruped with multiple gaits. | Javier Borquez, Shuang Peng, Yiyu Chen, Quan Nguyen, Somil Bansal | 2023-09-19T19:31:43Z | http://arxiv.org/abs/2309.10893v2 | # Hamilton-Jacobi Reachability Analysis for Hybrid Systems with Controlled and Forced Transitions
###### Abstract
Hybrid dynamical systems with non-linear dynamics are one of the most general modeling tools for representing robotic systems, especially contact-rich systems. However, providing guarantees regarding the safety or performance of such hybrid systems can still prove to be a challenging problem because it requires simultaneous reasoning about continuous state evolution and discrete mode switching. In this work, we address this problem by extending classical Hamilton-Jacobi (HJ) reachability analysis, a formal verification method for continuous non-linear dynamics in the presence of bounded inputs and disturbances, to hybrid dynamical systems. Our framework can compute reachable sets for hybrid systems consisting of multiple discrete modes, each with its own set of non-linear continuous dynamics, discrete transitions that can be directly commanded or forced by a discrete control input, while still accounting for control bounds and adversarial disturbances in the state evolution. Along with the reachable set, the proposed framework also provides an optimal continuous and discrete controller to ensure system safety. We demonstrate our framework in simulation on an aircraft collision avoidance problem, as well as on a real-world testbed to solve the optimal mode planning problem for a quadruped with multiple gaits.
## I Introduction
Hybrid dynamical systems are a popular and versatile tool to model robotic systems that exhibit both continuous and discrete dynamics [1, 2]. For example, for a legged robot, the swinging of a leg has continuous dynamics, whereas contacts with the ground are well-modeled as discrete events. However, as with many advanced modeling tools, their very complexity and richness present challenges, particularly when it comes to ensuring safety and performance for such systems. Designing safe controllers for hybrid dynamical systems demands simultaneous reasoning about the continuous evolution of states and the discrete mode transitions, a task that can quickly become computationally intensive and conceptually challenging.
Reachability analysis is a powerful and effective approach for analyzing and controlling hybrid dynamical systems. It provides a comprehensive understanding of the system's behavior by characterizing the Backward Reachable Tube (BRT) of the system - the set of all possible initial states that can eventually reach some set of target states under optimal control actions. If the target set represents the undesirable states for the system, the BRT represents the set of states that are potentially unsafe for the system and should be avoided. The complement of the BRT thus provides a safe operation region for the system. Reachability analysis for hybrid dynamical systems has been studied using a variety of different frameworks, such as zonotopes [3, 4, 5, 6], hybrid denotable sets [7, 8], Taylor models [9, 10, 11], satisfiability modulo theory [12, 13], support functions [14, 15, 16], timed automata [17] and linear hybrid automata [17, 18, 19, 20], among others. However, these methods typically impose restrictive assumptions on the system, such as limiting the analysis to linear continuous dynamics, not accounting for discrete control switches, bounding the number of mode changes, not allowing continuous controls, or not considering disturbances and dynamics uncertainty. Moreover, most of these methods only provide an over-approximation of the BRT.
Another approach for computing the BRT for hybrid dynamical systems is via Hamilton-Jacobi (HJ) Reachability analysis. Its advantages include compatibility with general non-linear system dynamics, formal treatment of bounded disturbances, and the ability to deal with state and input constraints [21]. Several works have addressed the control and safety analysis of hybrid dynamical systems through HJ Reachability [22, 23, 24, 25, 26]. However, these methods often rely on hand-designed, hard-coded heuristics to account for discrete mode switching. Moreover, the BRT computation relies on an iterative refinement that may not necessarily converge. One particular work for the computation of regions of attraction and stabilizing controllers for walking robots has at its core a generalization of the HJ reachability framework to account for discontinuous state changes originating from state resets [27]. However, generalizing the HJ reachability framework for multiple discrete modes and accounting for discrete controls still remains challenging.
In this work, we propose a general HJ reachability framework for hybrid dynamical systems. Our framework formulates the BRT computation for hybrid systems as a robust optimal control problem. Using the principle of dynamic programming, we simultaneously reason about optimal continuous and discrete control inputs for this optimal control problem, resulting in a generalized version of the Hamilton-Jacobi-Bellman equation. We demonstrate that the existing numerical algorithms to compute BRTs can easily be extended to compute this generalized value function, thereby providing both the BRT as well as the safe discrete and continuous control inputs for the hybrid system. Our framework can handle hybrid systems consisting of multiple discrete modes with nonlinear dynamics, discrete control switches that allow or force transitions between different discrete modes, (discontinuous) state resets upon discrete switches, control bounds, as well as potentially adversarial disturbances. We demonstrate the efficacy of our framework in simulation and on a real-world quadruped testbed.
## II Problem Formulation
Consider a hybrid dynamical system with a finite set of discrete modes \(Q=\{q_{1},q_{2},\ldots,q_{N}\}\), connected through
controlled or forced switches. We also refer to \(q_{i}\) as the discrete state of the system.
Let \(x\in X\subset\mathbb{R}^{n_{x}}\) be the continuous state of the system, \(u\in U\subset\mathbb{R}^{n_{u}}\) be the continuous control input, and \(d\in D\subset\mathbb{R}^{n_{d}}\) be the continuous disturbance in the system. In each discrete mode \(q_{i}\in Q\), the continuous state evolves according to the dynamics: \(\dot{x}=f_{i}(x,u,d)\), as long as \(x\in S_{i}\subset X\). Here, \(S_{i}\) is the valid operation domain for mode \(q_{i}\). \(q_{i}\) also has discrete control switches \(\sigma_{ij}\) that allow a _controlled transition_ into another discrete mode \(q_{j}\), where \(j\in\{j_{1},j_{2},\ldots,j_{n}\}\subset\{1,2,\ldots,N\}\). Note that the number of discrete modes the system can transition into (that is, \(n\)) may vary across different \(q_{i}\). As \(x\) approaches \({S_{i}}^{C}\) (denoted \(x\to{S_{i}}^{C}\)), the system must take one of the forced control switches, \(\varsigma_{ik}\). This leads to a _forced transition_ into another discrete mode \(q_{k}\), where \(k\in\{k_{1},k_{2},...,k_{m}\}\subset\{1,2,\ldots,N\}\). Each discrete transition from \(q_{i}\) to \(q_{j}\), whether controlled or forced, might lead to a state reset \(x^{+}=R_{ij}(x)\), where \(x\) is the state before the transition and \(x^{+}\) is the state after the transition.
Our key objective in this work is to compute the **Backward Reachable Tube (BRT)** of this hybrid dynamical system, defined as the set of initial discrete modes and continuous states \((x,q)\) of the system for which the agent acting optimally and under worst-case disturbances will eventually reach a target set \(\mathcal{L}\) within the time horizon \([t,T]\). Mathematically, the BRT is defined as:
\[\mathcal{V}(t)=\{(x_{0},q_{0}):\forall d(\cdot),\exists u(\cdot),\sigma(\cdot ),\varsigma(\cdot),\exists s\in[t,T],x(s)\in\mathcal{L}\},\]
where \(x(s)\) denotes the (continuous) state of the system at time \(s\), starting from state \(x_{0}\) and discrete mode \(q_{0}\) under continuous control profile \(u(\cdot)\), discrete control switch profile \(\sigma(\cdot)\)1, forced control switch profile \(\varsigma(\cdot)\), and disturbance \(d(\cdot)\). Along with the BRT, we are interested in finding the optimal discrete and continuous control inputs that steer the system to the target set.
Footnote 1: If there is no discrete control being leveraged at time \(s\), \(\sigma(s)\) or \(\varsigma(s)\) can be thought of a “dummy” discrete control that keeps the system in the current discrete mode.
Conversely, if \(\mathcal{L}\) represents the set of undesirable states for the system (such as obstacles for a navigation robot), we are interested in computing the BRT with the role of control and disturbance switched, i.e., finding the set of initial states from which getting into \(\mathcal{L}\) is unavoidable, despite best control effort.
**Running example _(Dog1D)_:** To illustrate our approach, we will use the following running example throughout this paper. We consider very simplified, high-level dynamics for a quadruped robot moving in one dimension in a room with an obstacle (table).
The robot's goal is to reach the marked area on the other side of the table as shown in Fig. 2. The hybrid system representation of this system is shown in Fig. 3:
Here, the continuous state \(x\in\mathbb{R}\) is the position of the robot and \(q_{i}\) with \(i\in\{1,2,3\}\) are the discrete modes of the system. \(u\in[0,1]\) is the continuous control of the system. The target set is given as the set of positions \(9\leq x\leq 10\).
In this example, \(q_{1}\) represents a walking gait, \(q_{3}\) represents frozen dynamics when the walking robot comes in contact with the table obstacle, and \(q_{2}\) is a crawling gait that allows the robot to move under the table but at a slower rate compared to walking. The BRT for this example corresponds to all the combinations of continuous states and discrete modes \((x,q)\) from which the quadruped can reach the target set within a \(5\) second time horizon.
## III Background: Hamilton-Jacobi Reachability
One way to compute the BRT for continuous dynamical systems is through Hamilton-Jacobi (HJ) reachability analysis. To illustrate HJ reachability, we assume that the system has only one discrete mode, i.e., there are no discrete transitions.
In HJ reachability, the BRT computation is formulated as a zero-sum game between control and disturbance. This results in a robust optimal control problem that can be solved using the dynamic programming principle. First, a target function \(l(x)\) is defined whose sub-zero level set is the target set \(\mathcal{L}\), i.e. \(\mathcal{L}=\{x:l(x)\leq 0\}\). The BRT seeks to find all states that could enter \(\mathcal{L}\) at any point within the time horizon. This is computed by finding the minimum distance to \(\mathcal{L}\) over time:
\[J(x,t,u(\cdot),d(\cdot))=\min_{\tau\in[t,T]}l\left(x(\tau)\right) \tag{1}\]
Our goal is to capture this minimum distance for optimal system trajectories. Thus, we compute the optimal control that minimizes this distance (drives the system towards the target) and the worst-case disturbance signal that maximizes the distance. The value function corresponding to this robust optimal control problem is:
\[V(x,t)=\sup_{d(\cdot)}\inf_{u(\cdot)}\{J(x,t,u(\cdot),d(\cdot))\}. \tag{2}\]
Fig. 1: Hybrid dynamical system with controlled and forced transitions.
Fig. 3: Dog1D hybrid system formulation.
Fig. 2: A simple 1D longitudinal navigation scenario.
The value function in (2) can be computed using dynamic programming, which results in the following final value Hamilton-Jacobi-Isaacs Variational Inequality (HJI-VI):
\[\begin{split}\min\left\{D_{t}V(x,t)+\mathcal{H}(x,t),l(x)-V(x,t) \right\}=0\\ V(x,T)=l(x)\end{split} \tag{3}\]
\(D_{t}\) and \(\nabla\) represent the time and spatial gradients of the value function. \(\mathcal{H}\) is the Hamiltonian, which optimizes over the inner product between the spatial gradients of the value function and the dynamics to compute the optimal control and disturbance:
\[\mathcal{H}(x,t)=\max_{d\in D}\min_{u\in U}\nabla V(x,t)\cdot f(x,u,d). \tag{4}\]
The term \(l(x)-V(x,t)\) in (3) restricts system trajectories that enter and leave the target set, enforcing that any trajectory with a negative distance at any time will continue to have a negative distance for the rest of the time horizon. Once the value function is obtained, the BRT is given as the sub-zero level set of the value function:
\[\mathcal{V}(t)=\left\{x:V(x,t)\leq 0\right\} \tag{5}\]
The corresponding optimal control can be derived as:
\[u^{*}(x,t)=\operatorname*{arg\,min}_{u\in U}\max_{d\in D}\nabla V(x,t)\cdot f (x,u,d) \tag{6}\]
The system can guarantee reaching the target set as long as it starts inside the BRT and applies the optimal control in (6) at the BRT boundary. The optimal adversarial disturbance can be similarly obtained as (6).
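To make the use of (6) concrete, the sketch below extracts the optimal control from a precomputed value-function gradient by brute-force search over discretized candidates; the dynamics function and candidate sets are placeholders, and for control-affine dynamics the inner optimization can often be solved in closed form instead.

```python
import numpy as np

def optimal_control(x, grad_V, f, u_candidates, d_candidates):
    """Brute-force version of eq. (6): argmin over u of the worst-case (max over d)
    inner product between the value-function gradient and the dynamics f(x, u, d)."""
    def worst_case(u):
        return max(np.dot(grad_V, f(x, u, d)) for d in d_candidates)
    return min(u_candidates, key=worst_case)
```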
This formulation can be used to provide safety guarantees by switching the roles of the control and disturbance in (4). In that case, the BRT represents the states that will be driven into the target by optimal disturbance even if we apply optimal control, so safety can be maintained as long as the system stays outside the BRT and applies optimal control at the boundary.
However, one of the key limitations of HJ reachability analysis is that it assumes continuous dynamics and does not account for discrete mode switching or state resets. Overcoming this limitation is the core focus of this work.
**Running example (Dog1D):** Computing the BRT for the running example using the classical Hamilton-Jacobi formulation with no optimal discrete mode switches gives the results shown in Fig. 4.
For the crawling gait, it can be observed that the BRT (the blue shaded area) propagates through the obstacle and the BRT includes \(x=4m\) because the system can move at \(1m/s\), reaching the boundary of the target in exactly \(5s\). For the walking gait, even though the system is allowed to move faster, the BRT stops at the boundary of the obstacle as the walking gait gets stuck, so only states that start from the right side of the obstacle can reach the target set.
If we imagine the system being able to transition optimally between the discrete modes, it would only crawl for states in the obstacle and walk elsewhere, allowing it to reach states farther to the left. As we demonstrate later, the proposed framework can reason about such optimal transitions.
## IV HJ Reachability for General Hybrid Systems
The core contribution of this work is Theorem 1, an extension of the classical HJ reachability framework to hybrid dynamical systems with controlled and forced transitions, as well as state resets.
**Theorem 1**: _For a hybrid dynamical system, the value function \(V(x,q_{i},t)\) that captures the BRT for an implicit target function \(l(x)\) is given by solving the following constrained VI. If \(x\in S_{i}\):_
\[\begin{split}\min\{l(x)-V(x,q_{i},t),\min_{\sigma_{ij}}V(R_{ij}(x ),q_{j},t)-V(x,q_{i},t),\\ D_{t}V(x,q_{i},t)+\max_{d\in D}\min_{u\in U}\nabla V(x,q_{i},t) \cdot f_{i}(x,u,d)\}=0,\end{split}\]
_and if \(x\in S_{i}^{C}\), \(V(x,q_{i},t)=\min_{\varsigma_{ik}}V(R_{ik}(x),q_{k},t)\), with terminal time condition:_
\[V(x,q_{i},T)=l(x) \tag{7}\]
Intuitively, Theorem 1 updates the value function for each discrete mode \(q_{i}\) with the optimal value across the discrete modes it can transition to. For states inside the operation domain \(S_{i}\), this corresponds to taking one of the control switches \(\sigma_{ij}\) with their corresponding reset maps \(R_{ij}(x)\) or staying in the operation mode \(q_{i}\) itself, which will lead to a continuous flow of the value function, similar to (3). In contrast, for states outside the operation domain (i.e., \(x\in S_{i}^{C}\)), the system has to take a forced switch \(\varsigma_{ik}\) with its corresponding reset map \(R_{ik}(x)\). Thus, the optimal value function is given by the one that takes the system to the minimum value upon the discrete transition. The proof of Theorem 1 formalizes this intuition using the Bellman principle of optimality; the details of the proof can be found in the Appendix. Once we compute the value function, the BRT for the hybrid system is given as:
\[\mathcal{V}(t)=\left\{(x,q):V(x,q,t)\leq 0\right\} \tag{8}\]
Finally, the optimal discrete and continuous controls at state \((x,q_{i})\) at time \(t\) are given as:
\[\begin{split} u^{*}(x,q_{i},t)&=\operatorname*{ arg\,min}_{u\in U}\max_{d\in D}\nabla V(x,q_{i},t)\cdot f_{i}(x,u,d)\\ \sigma_{ij^{*}}(x,q_{i},t)&=\operatorname*{arg\,min }_{\sigma_{ij}}V(R_{ij}(x),q_{j},t)\text{ if }x\in S_{i}\\ \varsigma_{ik^{*}}(x,q_{i},t)&=\operatorname*{arg\, min}_{\varsigma_{ik}}V(R_{ik}(x),q_{k},t)\text{ if }x\in S_{i}^{C},\end{split} \tag{9}\]
Figure 4: (Top) Value function for crawling gait mode. (Bottom) Value function for walking gait mode. Obstacle is shown in shaded red. Blue shade shows BRT as the subzero level of the value function. Green shade shows target set as the subzero level of \(l(x)\).
where, for completeness, we add a "dummy" discrete control \(\sigma_{ii}\) that keeps the system in the same discrete mode.
**Remark 1**: _Note that the proposed framework can easily be extended to scenarios where the forced transitions cannot be controlled and represent adversarial or uncertain transitions instead. In this case, we can use \(\max\) instead of \(\min\) over \(\varsigma_{ik}\) in Theorem 1 to account for the worst-case behavior._
**Numerical implementation:** We now present a numerical algorithm that can be used to calculate the value function in Theorem 1. It builds upon the value function calculation for the classical HJI-VI in (2), which is solved using currently available level set methods [28]. Specifically, the value function is computed over a discretized state-space grid and is propagated in time using a small timestep \(\delta\). After each propagation step, the value function is updated for all \((x,q)\) using Theorem 1, until the time horizon \(T\) is reached.
```
Input: \(l(x),\ T,\ \delta\)
Output: \(V(x,q_{i},t)\ \forall i\)
Initialization: \(V(x,q_{i},T)=l(x)\ \forall i\); \(t=T\)
while \((t>0)\) do
    foreach \((q_{i}\in Q)\) do
        \(V(x,q_{i},t-\delta)=V(x,q_{i},t)+\max\limits_{d}\min\limits_{u}\nabla V(x,q_{i},t)\cdot f_{i}(x,u,d)\,\delta\)
    foreach \((q_{i}\in Q)\) do
        if \((x\in S_{i})\) then
            \(V(x,q_{i},t-\delta)=\min\{V(x,q_{i},t-\delta),\ \min\limits_{\sigma_{ij}}V(R_{ij}(x),q_{j},t-\delta)\}\)
        else if \((x\in S_{i}^{C})\) then
            \(V(x,q_{i},t-\delta)=\min\limits_{\varsigma_{ik}}V(R_{ik}(x),q_{k},t-\delta)\)
    \(t=t-\delta\)
```
**Algorithm 1** Value function computation for hybrid dynamical systems
It is important to stress the remarkable similarity of this new hybrid reachability algorithm to its continuous counterpart in (2). Indeed, there are only two key differences: (1) we now propagate \(N\) value functions simultaneously (corresponding to \(N\) discrete modes), as opposed to just one value function; (2) we "adjust" the value function for each discrete mode to account for discrete switches (the second for-loop in the algorithm). As a result, the proposed hybrid reachability algorithm increases the worst-case computational complexity of its continuous-system counterpart by a factor that is at most quadratic in \(N\).
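To make the update concrete, the following is a minimal sketch of Algorithm 1 applied to the Dog1D running example. The grid resolution, rightward-only dynamics \(\dot{x}=v_{i}u\) with \(u\in[0,1]\), the mode speeds (walking 3 m/s, crawling 1 m/s), and a 1 m-wide table assumed at \(x\in[5,6]\) m are illustrative assumptions; the results reported below were produced with helperOC/ToolboxLS, not with this simplified first-order scheme.

```python
import numpy as np

# Hedged sketch of Algorithm 1 for the Dog1D example; parameters are illustrative.
x = np.linspace(0.0, 12.0, 301)                    # position grid [m]
dx, dt, T = x[1] - x[0], 0.005, 5.0                # spacing, time step, horizon
l = np.maximum(9.0 - x, x - 10.0)                  # implicit target: l(x) <= 0 on [9, 10] m
in_table = (x >= 5.0) & (x <= 6.0)                 # assumed 1 m wide table

speeds = {"walk": 3.0, "crawl": 1.0, "frozen": 0.0}
V = {q: l.copy() for q in speeds}                  # terminal condition V(x, q, T) = l(x)

def propagate(v, speed):
    """One backward step of V <- V + min_{u in [0,1]} dV/dx * speed * u * dt (upwind)."""
    d_plus = np.append(np.diff(v) / dx, 0.0)       # forward difference, upwind for this H
    return v + np.minimum(d_plus * speed, 0.0) * dt

t = T
while t > 0.0:
    walk = propagate(V["walk"], speeds["walk"])
    crawl = propagate(V["crawl"], speeds["crawl"])
    frozen = propagate(V["frozen"], speeds["frozen"])
    walk = np.where(in_table, frozen, walk)              # forced switch: contact freezes walking
    V["walk"] = np.minimum(np.minimum(walk, np.where(in_table, walk, crawl)), l)
    V["crawl"] = np.minimum(np.minimum(crawl, walk), l)  # controlled switch to walking
    V["frozen"] = np.minimum(frozen, l)
    t -= dt

brt = x[V["walk"] <= 0.0]                          # BRT slice for the walking gait
print(f"walking-gait BRT spans roughly [{brt.min():.1f}, {brt.max():.1f}] m")
```

Under these assumptions, the printed walking-gait BRT reaches past the table, mirroring the qualitative behavior discussed in the case study below.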
## V Case Studies
### _Running example (Dog1D):_
We now apply the proposed method to compute the BRT for the running example. To implement Algorithm 1, we use a modified version of the helperOC library and the level set toolbox, ToolboxLS [28], both of which are used to solve the classical HJI-VI. We use a grid of 301 points over the \(x\) dimension for each of the 3 discrete operation modes. The overall calculation takes 12.125s running on an Intel Core i5-6200U CPU @ 2.30GHz. Slices of the BRT starting in mode \(q_{1}\) (the walking gait) for different time horizons are shown in Fig. 5:
The intuitive solution for optimal decision-making in this scenario is to use the walking gait everywhere except in the obstacle area. The fast walking gait allows the system to reach the target more quickly, while crawling expands the set of reachable states beyond the obstacle. The solution computed by our algorithm indeed aligns with this intuition. For example, if we consider the top value function for the 3-second time horizon in Fig. 5, we can observe that the limit of the BRT is \(7m\) away from the target set, which coincides with the intuitive solution of walking for \(1s\) covering \(3m\), then crawling under the table for \(1s\) covering its \(1m\) width, and finally going back to walking for \(1s\) covering another \(3m\).
The bottom value function in Fig. 5 shows the BRT corresponding to a \(5s\) time horizon. Compared to Fig. 4, where the BRT stopped growing beyond the obstacle boundary, we can see how the proposed algorithm can optimally leverage discrete transitions along with continuous control to reach the target set from a wider set of initial states.
### _Two-Aircraft Conflict Resolution:_
We next consider the two-aircraft conflict resolution example presented in [26]. Here, two aircraft flying at a fixed altitude are considered. Each aircraft controls its speed; we assume that the first aircraft is controllable, while the other aircraft's speed is uncertain (within a known interval) and hence modeled as an adversarial disturbance to ensure safety. Using relative coordinates, the continuous dynamics of the system are described by:
\[\dot{x}_{r}=-u+d\cos\psi_{r}+\omega_{u}y_{r},\ \dot{y}_{r}=d\sin\psi_{r}- \omega_{u}x_{r},\ \dot{\psi}_{r}=\omega_{d}-\omega_{u},\]
where state \([x_{r},y_{r},\psi_{r}]^{T}\) is the relative \(xy\) position and heading between the aircraft. \(u\) and \(d\) are the velocities of the two aircraft, and \(\omega_{u}\) and \(\omega_{d}\) are their turning rates.
The conflict resolution protocol consists of three different operation modes, shown in Fig. 6. The aircraft begin in mode \(q_{1}\) with a straight flight (\(\omega_{u}=\omega_{d}=0\)), keeping a constant relative heading. In this mode, the aircraft are on a collision course. At some time, the aircraft can begin
Fig. 5: Value functions for states starting in a walking gait for a \(3\) and \(5\) seconds time horizon. Area within the obstacle is shown in shaded red. Blue shade shows BRT as the subzero level of the value function. Green shade shows the target set as the subzero level of the implicit target function \(l(x)\).
the collision avoidance maneuver by taking the switch \(\sigma_{12}\), transitioning the system into mode \(q_{2}\) and an instantaneous heading change of \(\pi/2\). In mode \(q_{2}\), the aircraft undergo a circular flight path, where both aircraft use the same constant turning rate (\(\omega_{u}=\omega_{d}=1\)). Once the aircraft undergo a heading change of \(\pi\) radians, the aircraft are forced to switch to mode \(q_{3}\), making another instantaneous \(\pi/2\) turn and resuming their straight flight (\(\omega_{u}=\omega_{d}=0\)). The two aircraft are now on a collision-free path. To keep track of the time spent in mode \(q_{2}\), we add an additional timer state \(z\).
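The mode-dependent dynamics of this protocol are easy to encode directly from the equations above; the sketch below is an illustrative transcription (the mode labels and function signature are ours, and the instantaneous \(\pi/2\) heading resets at the switches are not shown).

```python
import numpy as np

TURN_RATES = {"q1": (0.0, 0.0), "q2": (1.0, 1.0), "q3": (0.0, 0.0)}  # (w_u, w_d) per mode

def relative_dynamics(state, u, d, mode):
    """Time derivative of the relative state [x_r, y_r, psi_r] in a given discrete mode."""
    x_r, y_r, psi_r = state
    w_u, w_d = TURN_RATES[mode]
    return np.array([
        -u + d * np.cos(psi_r) + w_u * y_r,   # relative x dynamics
        d * np.sin(psi_r) - w_u * x_r,        # relative y dynamics
        w_d - w_u,                            # relative heading dynamics
    ])
```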
For this system, we are interested in finding the set of all initial states in mode \(q_{1}\) from which a collision between the two aircraft cannot be averted under the above protocol, despite the best control effort. Thus, the complement of the BRT will represent the safe states for the system. For the BRT calculations, we consider a circular collision target set of 5 units around the controlled aircraft. This corresponds to the two aircraft being in close proximity to each other, also referred to as "loss of separation". We use an initial relative heading of \(\psi_{r}=2\pi/3\) (shown in Fig. 6) and a time horizon of \(t=5s\). For the aircraft velocities we consider \(u\in[1.5,3]\) and \(d\in[2,4]\). The calculations are carried out on a \([x_{r},y_{r},z]\) grid of \([201,201,101]\) points; \(\psi_{r}\) has null dynamics in all operation modes and is not considered in the grid.
Results for the BRT starting from operation mode \(q_{1}\) are shown in Fig. 7. They can be interpreted as follows: if the relative coordinates of the aircraft belong to a state outside the BRT, the collision avoidance maneuver can be initiated while ensuring a collision can always be avoided as long as the optimal discrete and continuous control is applied.
The left BRT exactly replicates the results of the case presented in [26], where both aircraft keep their speed constant at their maximum value \(u=3\) and \(d=4\); in this case, the inside of the BRT corresponds to states where the adversarial airplane can place itself behind the controlled aircraft and use its higher velocity to force a collision. However, unlike [26], we do not use any hard-coded heuristics to account for the discrete transitions during the BRT computation. Furthermore, the right BRT considers the scenario where, in addition to the control on the transition, both aircraft control their velocities optimally to avoid/force a collision with the other aircraft. The changes in the BRT are reflected in the shaded areas, where the growth in the BRT close to the target corresponds to cases where the adversarial aircraft slows down to align itself with the controlled aircraft and then uses its higher velocity to force a collision. The decrease in size near the tail corresponds to states where the blue airplane slows down to avoid a collision.
## VI Hardware Experiments
We next apply our method for task-based, high-level mode planning on a real-world quadruped robot, to reach a goal position in a terrain consisting of various obstacles. The quadruped has different walking modes, such as normal walking, walking on a slope, etc., each of which is modeled as a discrete mode in the hybrid system (see Fig. 9).
Within each discrete mode, we use simplified continuous dynamics to describe center-of-mass evolution:
\[\dot{x}=k_{x}V_{N}\sin\varphi+d_{x};\quad\dot{y}=k_{y}V_{N}\cos\varphi+d_{y}; \quad\dot{\varphi}=k_{\omega}\omega_{N},\]
where the state \([x,y,\varphi]^{T}\) represents the \(xy\) position and heading angle of the robot. \(V_{N}=0.25m/s\) is the fixed (normalized) linear velocity and \(\omega_{N}\in[-0.5,0.5]rad/s\) is the angular velocity control. \(k_{x}\), \(k_{y}\), and \(k_{\omega}\) are velocity gains for different operation modes. \(d_{x}\) and \(d_{y}\) are bounded disturbances on \(xy\) velocity. Also, we use a discrete state \([z,\theta,\xi]^{T}\) to set the body's height, pitch, and roll angle in different operation modes for overcoming various obstacles.
Each operation mode matches a specific obstacle or terrain shown in Fig. 8(b). The quadruped begins in mode \(q_{1}\) for fast walking. If the robot reaches the boundary of the slope area, the system will be forced to switch to mode \(q_{2}\) with a \(20^{\circ}\) body pitch angle for slope climbing. At some time, the control may switch to mode \(q_{3}\) to walk slowly with a lower body height, which allows the robot to make a narrow turn or crawl under obstacles. If the robot encounters a tilted obstacle, the system will go through a forced switch into mode \(q_{4}\) to walk with a tilted body. Whenever the robot touches a ground obstacle, the system will stay in mode \(q_{5}\) permanently with frozen dynamics. The quadruped needs to reason about optimally switching between these different
Fig. 6: Hybrid formulation for the two-aircraft conflict resolution protocol.
Fig. 7: Backward reachable tube for two-aircraft conflict resolution starting from mode \(q_{1}\). (Left) BRT for fixed speed of aircraft. (Right) BRT for aircraft under optimal control and disturbance.
Fig. 9: Hybrid control modes for the quadruped system.
walking modes in order to reach its goal area (the \(xy\) area marked as the pink cylinder in Fig. 8(a, b)) as quickly as possible without colliding with any obstacles.
The BRT for this experiment is calculated on an \([x,y,\varphi]\) grid of \([40,40,72]\) points, over a time horizon of \(t=45s\). Given the BRT, we can obtain the optimal velocity control, operation mode, and desired discrete body states at each time. The quadruped is equipped with an Intel T265 tracking camera, allowing it to estimate its global state via visual-inertial odometry. The high-level optimal velocity control and discrete body states provided by our framework are then tracked by a low-level MPC controller that uses the centroidal dynamics of the quadruped [29].
Our experimental results are shown in Fig. 8 and can also be seen in the accompanying video. In our environment setup, the optimal (fastest) route to reach the target is to leverage the slope on the right. The corresponding trajectory is shown in orange in Fig. 8(a) and (b). Specifically, the robot remains in the normal walking mode (blue boxes in Fig. 8(a)) and switches to the slope walking mode once near the slope (magenta boxes). When the robot approaches the end of the slope, it needs to make a tight left turn to avoid a collision with the wall ahead. Our framework is able to recognize that and makes a transition to the slow walking mode to allow for a tighter turn (green boxes). Once the turn is complete, the robot goes back to the faster walking mode. Some first-person RGB images along the robot's path are shown in Fig. 8(c).
In our second experiment, we put some papers on the slope, making it more slippery. This is encoded by adding a higher disturbance in the slope mode \(q_{2}\). The new BRT makes the slope path infeasible and hence the robot needs to reach the target via the (slower) ground route. The new robot trajectory is shown in light green in Fig. 8(a) and (b). Once again, apart from selecting a new safe route, the proposed framework is able to switch between different walking modes along the route to handle tight turns and slanted obstacles, as shown by the activation of the tilt walking mode (cyan boxes in Fig. 8(a)).
Finally, to test the closed-loop robustness of our method, we add human disturbances in the ground route experiment. The robot is kicked and dragged by a human during the locomotion process, as shown by red arrows in Fig. 10. Specifically, the robot was dragged to face backward, forcing it near the tilted obstacle. However, the reachability controller is able to ensure safety by reactivating the tilt walking mode (Fig. 10(b)), demonstrating that the proposed framework is able to reason about both closed-loop continuous _and_ discrete control inputs. Nevertheless, it should be emphasized that the presented algorithm does not solve the entirety of legged locomotion planning, but rather only provides the top level of a hierarchical planning architecture that typically includes a robust footstep planner, a whole-body controller, and reliable state estimation [30]. We will further explore integrating our framework within such architectures in future work.
## VII Discussion And Future Work
We present an extension of the classical HJ reachability framework to hybrid dynamical systems with controlled and forced transitions, and state resets. Along with the BRT, the proposed framework provides optimal continuous and discrete control inputs for the hybrid system. Simulation studies and hardware experiments demonstrate the proposed method, both in reaching goal areas and in maintaining safety. Our work opens up several exciting future research directions. First, we rely on grid-based numerical methods to compute the BRT, whose computational complexity scales exponentially with the number of continuous states, limiting a direct use of our framework to relatively low-dimensional systems. We will explore recent advances in learning-based methods to solve the HJI-VI [31, 32] to overcome this challenge. Another interesting direction would be to extend our framework to cases involving uncertainty in the transition surface \(S_{i}\) or where controlled switches have different transition surfaces. Finally, we will apply our framework to a broader range of robotics applications and systems.
Fig. 8: (a) Trajectories of the quadruped safely navigating through the slope or narrow ground obstacles to reach the goal area (pink cylinder). The goal area is independent of the \(z\) position, so the robot can reach it via slope or ground route. (b) Overlaid trajectories in the real-world obstacle setup. (c, d) First-person view along the slope route (c) and the ground route (d).
Fig. 10: (a) Robot trajectory with human disturbances. (b) Partial trajectory shows the robot autonomously switches from \(q_{3}\) to \(q_{4}\) after being pushed into the tilt obstacle area to ensure safety. |
2309.11555 | Limitations in odour recognition and generalisation in a neuromorphic
olfactory circuit | Neuromorphic computing is one of the few current approaches that have the
potential to significantly reduce power consumption in Machine Learning and
Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm
that runs on a neuromorphic architecture and is inspired by circuits described
in the mammalian olfactory bulb. They assess the algorithm's performance in
"rapid online learning and identification" of gaseous odorants and odorless
gases (short "gases") using a set of gas sensor recordings of different odour
presentations and corrupting them by impulse noise. We replicated parts of the
study and discovered limitations that affect some of the conclusions drawn.
First, the dataset used suffers from sensor drift and a non-randomised
measurement protocol, rendering it of limited use for odour identification
benchmarks. Second, we found that the model is restricted in its ability to
generalise over repeated presentations of the same gas. We demonstrate that the
task the study refers to can be solved with a simple hash table approach,
matching or exceeding the reported results in accuracy and runtime. Therefore,
a validation of the model that goes beyond restoring a learned data sample
remains to be shown, in particular its suitability to odour identification
tasks. | Nik Dennler, André van Schaik, Michael Schmuker | 2023-09-20T18:00:05Z | http://arxiv.org/abs/2309.11555v1 | # Limitations in odour recognition and generalisation in a neuromorphic olfactory circuit
###### Abstract
Neuromorphic computing is one of the few current approaches that have the potential to significantly reduce power consumption in Machine Learning and Artificial Intelligence. Imam & Cleland presented an odour-learning algorithm that runs on a neuromorphic architecture and is inspired by circuits described in the mammalian olfactory bulb. They assess the algorithm's performance in "rapid online learning and identification" of gaseous odorants and odorless gases (short "gases") using a set of gas sensor recordings of different odour presentations and corrupting them by impulse noise. We replicated parts of the study and discovered limitations that affect some of the conclusions drawn. First, the dataset used suffers from sensor drift and a non-randomised measurement protocol, rendering it of limited use for odour identification benchmarks. Second, we found that the model is restricted in its ability to generalise over repeated presentations of the same gas. We demonstrate that the task the study refers to can be solved with a simple hash table approach, matching or exceeding the reported results in accuracy and runtime. Therefore, a validation of the model that goes beyond restoring a learned data sample remains to be shown, in particular its suitability to odour identification tasks.
Imam & Cleland's [1] algorithm takes inspiration from the neural pathways of the external plexiform layer of the mammalian olfactory bulb. Gas representations are built by an iterative approach of applying spike-time dependent plasticity rules to sequential gamma-frequency spike packages, on the basis of a dataset consisting of recordings from 72 Metal Oxide (MOx) gas sensors mounted in a wind tunnel [2] (Fig. S1a). They validate the model's capability to learn and robustly identify gases by computing and thresholding the Jaccard similarity coefficient between clean gases and representations arising from artificially occluded sensor recordings. Further aspects such as neuromodulation, contextual priming and neurogenesis are explored. The implementation and operation of the algorithm on Intel's Loihi neuromorphic platform [3] presents a major milestone in neuromorphic computing due to the high complexity and biological realism of the underlying network model. The authors claim to describe a gas identification framework that is superior to other models in terms of common classification metrics, that generalises "broadly" beyond experience and that can be deployed into environments containing unknown contaminants and other sources of interference [1]. In addition, the study has been referred to as a demonstration on how a neuromorphic network can learn and discriminate odours [4; 5; 6; 7; 8; 9]. Below we demonstrate limitations of the study that call these statements into question.
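For reference, the Jaccard similarity used for this validation is straightforward to compute once the network's odour representations are treated as binary activity vectors; the helper below is a generic sketch rather than the authors' code.

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity |A intersect B| / |A union B| between two binary patterns."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```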
The first limitation of the study relates to restrictions in the dataset used to validate the olfactory bulb network. MOx sensors are highly prone to sensor drift, causing short- and long-term fluctuations in the sensors' baseline conductance and their responsiveness [10]. The most effective way of tackling the effect of drift on sensor response is to randomise the gas presentations over time during the recording. The dataset used here does not have this property: Recordings were acquired in gas-specific batches over the course of nine months (Fig. S1b). The non-randomness, together with the dominating presence of sensor drift contaminations, allows for successful gas classification _before_ the gas is presented, which renders this dataset largely unsuitable for classification tasks [11]. Drift contaminations could be partly mitigated by subtracting the baseline, i.e., the sensor response right before gas exposure [12]. No baseline subtraction was performed in the discussed study, suggesting that the reported findings about odour learning and recognition may be skewed, and potentially invalid, due to the limitations of the dataset.
We repeated the simulations described by the authors for a range of conditions. As in the original work, the model was trained on 10 gases, and tested on 10 occlusions for each gas, i.e., 100 samples total. If not otherwise stated, 60% of the data was occluded by impulse noise when testing. We successfully replicated the authors' Jaccard similarity coefficient plot ([1], Fig. 4b), using the same raw data points for composing training and test sets, and sampling the recordings at \(t=90s\) (Fig. S1a, 1a). The result appears to demonstrate robust recognition of the Toluene gas instance. Paradoxically, the same level
of "recognition" of Toluene can be obtained in the absence of gas, using samples obtained at \(t=15s\), _before_ the release of odour into the wind tunnel at \(t=20s\) (Fig. S1a, 1b). Therefore, the high Jaccard score for Toluene should be considered an artefact of sensor drift, and is unsuitable to substantiate a capability of the model to recognise odours [11].
In addition to the dataset's limitations, we found restrictions in the model's capability to generalise over different recordings of the same stimulus. Generalisation is an important property of any pattern recognition system [13]. The authors convincingly show that the model can restore input patterns corrupted by impulse noise. However, in most instances the authors tested recognition on the same sample that was used for training, occluding 60% of the sample with noise. 40% of each training sample was present unchanged in the corresponding testing sample. A real odorant recognition and signal restoration system would rarely encounter the exact same stimulus twice, once in a clean and once in a corrupted version. Therefore, assessing the model's capability to recognise and restore patterns from separate recordings is essential to judge its relevance.
For most gas and parameter combinations, the dataset contains 20 repetitions. We repeated the experiment using separate repetitions for training and testing and found that
gas identity could not be recognised in occluded samples (Fig. 1c). Recognition scores were further reduced when subtracting the baseline from training and testing data (Fig. 1d). In this configuration, aimed at mitigating sensor drift, recognition across repetitions failed even for samples without any noise occlusion (Fig. 1e).
Figure 1: a) Jaccard similarity coefficient of the networks response to occluded Toluene and the learned odour representations, after five successive gamma cycles. Replicated from [1]. b) Spurious recognition in absence of gas in the wind tunnel. c,d,e) Recognition failure on different repetitions, c) without and d) with baseline subtraction, and e) without sample occlusion. Median and interquartile range across 10 Toluene representations are displayed, and only five out of 10 gases are depicted for clarity.
Finally, we demonstrate that the precise task undertaken in the study is trivial enough to be effectively addressed by using a simple hash table, such as a _Python_ dictionary, without the need for complex machine learning techniques. By storing the training samples in a hash table (i.e. one for each class, as in the original paper), and then computing and ranking the amount of overlap between a test sample and the stored training samples, one can estimate the most likely class (see Alg. 1&2). Our implementation employs a concise one-shot approach, consisting of just 8 lines of _Python_ code, yet it matches or surpasses the suggested EPL network in recognition accuracy, and outperforms it in runtime (Fig. 2&S2). In the light of these findings, the authors' claim of superior performance of the EPL network over advanced machine learning methods, derived from comparisons with multilayer denoising autoencoders and other methods, cannot be upheld in its generality.
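The sketch below illustrates the idea (it is not the authors' exact 8-line implementation, which is available in the released code): store one reference recording per gas and rank test samples by element-wise overlap with the stored references.

```python
import numpy as np

def train(samples_by_gas):
    """Store one clean reference recording per gas -- the 'hash table'."""
    return {gas: np.asarray(sample) for gas, sample in samples_by_gas.items()}

def denoise(table, test_sample, tol=0.0):
    """Return the gas label and stored sample with the largest sensor-wise overlap."""
    test = np.asarray(test_sample)
    overlap = {gas: np.sum(np.abs(ref - test) <= tol) for gas, ref in table.items()}
    best = max(overlap, key=overlap.get)
    return best, table[best]
```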
We conclude that the capability of the proposed model to identify learned odorants appears to be limited to corrupted versions of the training data. It failed to generalise to data outside the training set: Repetitions of learned gases were not recognised if these repetitions were not part of the training data. Imam & Cleland's model is an elegant example of an implementation of a biologically plausible model on neuromorphic hardware that can restore learned signals corrupted by noise. However, due to the restricted generalisation capability of the model, and to the limitations of the data used, it cannot be claimed that it solves the problem of odour learning and identification under a realistic scenario. We hope that raising awareness about these limitations paves the way towards improved neuromorphic models for robust gas recognition that can solve real-world odour recognition tasks.
Figure 2: Jaccard similarity coefficient of the hash table denoiser’s response to occluded Toluene and the learned odour representations. Median and interquartile range across 10 Toluene representations are displayed, and only five out of 10 gases are depicted for clarity.
## Code and Data Availability
Our experiments were based on the code released by the authors with the original study. Our adaptations and instructions for replication, together with the used data, are available at [https://github.com/BioMachineLearning/EPLNetwork_ImamCleland2020](https://github.com/BioMachineLearning/EPLNetwork_ImamCleland2020).
## Author Contribution Statement
**Nik Dennler:** Conceptualisation, Investigation, Formal Analysis, Software, Visualisation, Writing - Original Draft, Writing - Review & Editing. **Andre van Schaik:** Conceptualisation, Writing - Review & Editing, Supervision. **Michael Schmuker:** Conceptualisation, Writing - Review & Editing, Funding acquisition, Supervision.
## Declaration of Competing Interest
The authors declare no competing interests.
## Acknowledgements
We thank N. Imam and T.A. Cleland for their valuable feedback and suggestions. Further, we thank D. Drix, M. Psarrou, S. Rastogi and S. Sutton for fruitful discussions. M.S. was funded from EU H2020 Grant Human Brain Project SGA3 (#945539). This project is supported by the NSF/CIHR/DFG/FRQ/UKRI-MRC Next Generation Networks for Neuroscience Program (NSF #2014217 / MRC #MR/T046759/1 "Odor2Action").
|
2309.06029 | Artificially Intelligent Opinion Polling | We seek to democratise public-opinion research by providing practitioners
with a general methodology to make representative inference from cheap,
high-frequency, highly unrepresentative samples. We focus specifically on
samples which are readily available in moderate sizes. To this end, we provide
two major contributions: 1) we introduce a general sample-selection process
which we name online selection, and show it is a special-case of selection on
the dependent variable. We improve MrP for severely biased samples by
introducing a bias-correction term in the style of King and Zeng to the
logistic-regression framework. We show this bias-corrected model outperforms
traditional MrP under online selection, and achieves performance similar to
random-sampling in a vast array of scenarios; 2) we present a protocol to use
Large Language Models (LLMs) to extract structured, survey-like data from
social-media. We provide a prompt-style that can be easily adapted to a variety
of survey designs. We show that LLMs agree with human raters with respect to
the demographic, socio-economic and political characteristics of these online
users. The end-to-end implementation takes unrepresentative, unstructured
social media data as inputs, and produces timely high-quality area-level
estimates as outputs. This is Artificially Intelligent Opinion Polling. We show
that our AI polling estimates of the 2020 election are highly accurate, on-par
with estimates produced by state-level polling aggregators such as
FiveThirtyEight, or from MrP models fit to extremely expensive high-quality
samples. | Roberto Cerina, Raymond Duch | 2023-09-12T08:03:02Z | http://arxiv.org/abs/2309.06029v1 | # Artificially Intelligent Opinion Polling
###### Abstract
We seek to democratise public-opinion research by providing practitioners with a general methodology to make representative inference from cheap, high-frequency, highly unrepresentative samples. To this end, we provide two major contributions: 1) we introduce a general sample-selection process which we name _online selection_, and show it is a special-case of selection on the dependent variable. We improve MrP for severely biased samples by introducing a bias-correction term in the style of King & Zeng to the logistic-regression framework. We show this bias-corrected model outperforms traditional MrP under online selection, and achieves performance similar to random-sampling in a vast array of scenarios; 2) we present a protocol to use Large Language Models (LLMs) to extract structured, survey-like data from social-media. We provide a prompt-style that can be easily adapted to a variety of survey designs. We show that LLMs agree with human raters with respect to the demographic, socio-economic and political characteristics of these online users. The end-to-end implementation takes unrepresentative, unstructured social media data as inputs, and produces timely high-quality area-level estimates as outputs. This is _Artificially Intelligent Opinion Polling_. We show that our AI polling estimates of the 2020 election are highly accurate, on-par with estimates produced by state-level polling aggregators such as FiveThirtyEight, or from MrP models fit to extremely expensive high-quality samples.
## 1 Introduction
The use of increasingly unrepresentative samples contributes to systematic bias in the sub-national predictions of public opinion polling [41, 73]. Multilevel Regression and Post-Stratification (MrP) [24, 60] is a statistical technique that enables estimation of sub-national opinion from unrepresentative samples. It does so by adjusting estimates for a variety of non-response biases [22]. There are two distinct examples of MrP that have been used successfully to perform model-based pre-election opinion polling with unrepresentative samples. A foundational implementation exploits an extremely large unrepresentative sample of Xbox gamers [81] to generate state-level forecasts of the 2012 U.S. presidential election. MrP performed well at the sub-national level with an extremely large but unrepresentative low quality opt-in sample. A second example of successful MrP implementation employs smaller - although still large by conventional standards - higher quality samples consisting of self-selected online panelists [48, 46, 30]. These efforts have also successfully predicted sub-national election results. In both of these implementations characteristics of the data likely play an important role in the MrP success - either data quality and/or sample-size. But should the effectiveness of MrP polling models lie with Big Data, or the curation of a large, diverse and high quality online panel [76]? We believe the real promise of MrP modeling is a world in which the data collection protocol can be increasingly ignored and the reduction in bias can be assured by improvements in statistical modeling. We propose a number of methodological steps in this direction.
The goal of this paper is to democratise public-opinion research by providing practitioners with a general methodology to make representative inference from cheap, high-frequency, highly unrepresentative samples. To this end, we provide two major contributions: First, we introduce a general sample-selection process that we name _online selection_, and show it is a special-case of selection on the dependent variable. This method improves MrP for severely biased samples by introducing a bias-correction term in the style of King & Zeng [43] to the logistic-regression framework. This bias-corrected model outperforms traditional MrP under online selection and achieves performance similar to random-sampling in a vast array of scenarios. Second, we present a protocol to use Large Language Models (LLMs) [78, 16] to extract structured, survey-like data from social-media. The prompt-style we implement can be easily adapted to a variety of survey designs. We show that LLMs agree with human raters with respect to the demographic, socio-economic and political characteristics of these online users.
We illustrate the potential of our approach with data from the 2020 US election. In the run-up of the election we collect a large corpus of Tweets. We use Amazon Mechanical Turk (AMT) workers and LLMs to extract user-level information, such as demographics and voting intention. We then apply the bias-corrected MrP to the extracted survey-like object. We note this procedure is in principle fully automated: a series of R scripts were used to download the data from the Twitter streaming API via the rtweet[40] package;
the openai[65] package is used to access the OpenAI API and extract the survey-object from the corpus; bias-corrected MrP is implemented in Stan [12] via rstan[68]. The end-to-end implementation takes unrepresentative, unstructured social media data as inputs, and produces timely high-quality state-level estimates of the vote as outputs. This is _Artificially Intelligent Opinion Polling_. We show that our AI polling estimates of the 2020 election are highly accurate; on-par with estimates produced by state-level polling aggregators such as FiveThirtyEight, or from MrP models fit to extremely expensive high-quality samples such as the American National Election Study (ANES).
The paper proceeds as follows: Section 2 outlines the statistical context under which we operate, including a description of the population of interest, the sampling mechanism, and the Hierarchical Bayesian structured MrP model we use to make representative inference; Section 3 presents the results of a simulation study to ascertain the properties of the chosen inferential framework; Section 4 presents our social media feature extraction approach, including our LLM prompt style and a description of various auxiliary datasets to enable a full comparison of the AI polling performance; Section 5 summarises the results from the application of AI polling to the 2020 US Presidential Election, and tentatively explores the connection between the quality of the underlying state-level estimates and the quality of the social media data annotations. We present a high-level discussion of the limitations of this study, the significance of the findings, and areas of future research in Section 6.
## 2 Statistical Context
A population of interest consists of \(N\) individuals, indexed by \(i\in\{1,\ldots,N\}\). The population is stratified according to \(M\) mutually exclusive _cells_, \(\mathcal{K}=\{k_{1},\ldots,k_{M}\}\). Each individual belongs to one of the cells:
\(\forall i,\exists!\ m\in\{1,...,M\}:i\in k_{m}\).
The number of individuals belonging to each cell is:
\[w_{m}=\sum_{i}^{N}\mathds{1}(i\in k_{m}). \tag{1}\]
Cells are defined over a set of attributes \(X\), such that \(X_{m}=\{x_{m1},\ldots,x_{mP}\}\). To keep this exposition general, we leave the definition of \(X\) and its components vague for now. This information is stored in a _stratification frame_, which is known to the researcher prior to the study. An extract from the stratification frame used in this paper, derived primarily from the American Community Survey (ACS) 5-year aggregation (2014-2019), is presented in Table 1.
Each member of the population is further subject to a discrete choice. Individual \(i\) considers set \(\mathcal{C}=\{c_{1},...,c_{J}\}\). Their choice is recorded in a random variable \(y\), such that \(y_{i}=c_{j}\) indicates the event: _individual \(i\) has opted for the \(j^{th}\) choice_.
In our application to the 2020 US election, we operationalise the choice-set facing voters as \(\mathcal{C}=\{\text{Republican, Democrat, Libertarian, Green, Stay Home}\}\). This separates us from Lauderdale et al. [48], who prefer a two-tiered choice-set, such that voters first choose whether to turn-out or stay-home, and further express a preference conditional on their choice of turnout. This allows for training turnout and vote-choice models on separate samples. Lauderdale et al. suggest this is an advantage as turnout can be better measured and verified a posteriori of an election, whilst self-reported turnout from surveys can be biased [37]. In this paper, we are interested in testing the viability of artificially intelligent opinion polling as a holistic methodology for surveying opinion, and hence focus on models entirely trained on AI polls.
We can aggregate the choices made by each member of the population to reveal the cell-level choice-probability:
\[\pi_{mj}=\frac{1}{w_{m}}\sum_{i\in k_{m}}\mathds{1}(y_{i}=c_{j}). \tag{2}\]
Cell-level choice-probabilities can be further aggregated to reveal the marginal distribution of choice over any combination of the \(P\) dimensions which define the cells. This is also known as a _stratified_ measure of preferences.
To illustrate, let \(l\in\{1,\ldots,L\}\) represent a set of categories to which a cell can belong to. Let \(\mathcal{O}=\{o_{1},...,o_{L}\}\) indicate the set of cells belonging to each category. Finally let \(x_{1}\) be a categorical variable, such that:
\(m\in o_{l}\iff x_{m1}=l\).
For a less abstract conceptualisation, take the variable 'state' in our stratification frame. This is a categorical variable taking any one of \(L=51\) values across the \(M=117,844\)
\begin{table}
\begin{tabular}{r|r r r r r r r r|r} \hline \hline \(m\) & _state_ & _gender_ & _ethnicity_ & _age_ & _college.degree_ & _household.income_ & _vote.2016_ & _state.R.vote.2016_ & \(\cdots\) & \(w\) \\ \hline
1 & Alabama & M & Black & \(55-64\) & \(0\) & \(50,000-75,000\) & R & 1.16 & \(\cdots\) & 198.43 \\
2 & Alabama & M & Black & \(55-64\) & \(0\) & \(50,000-75,000\) & D & 1.16 & \(\cdots\) & 109.81 \\
3 & Alabama & M & Black & \(55-64\) & \(0\) & \(50,000-75,000\) & other & 1.16 & \(\cdots\) & 11.37 \\
4 & Alabama & M & Black & \(55-64\) & \(0\) & \(50,000-75,000\) & stay home & 1.16 & \(\cdots\) & 65.75 \\
5 & Alabama & F & Black & \(55-64\) & \(0\) & \(50,000-75,000\) & R & 1.16 & \(\cdots\) & 211.25 \\ \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) & \(\vdots\) \\
117840 & Wyoming & F & Asian & 65+ & \(1\) & \(50,000-75,000\) & stay home & 1.60 & \(\cdots\) & 0.06 \\
117841 & Wyoming & M & Other & 65+ & \(1\) & \(25,000-50,000\) & R & 1.60 & \(\cdots\) & 0.87 \\
117842 & Wyoming & M & Other & 65+ & \(1\) & \(25,000-50,000\) & D & 1.60 & \(\cdots\) & 0.28 \\
117843 & Wyoming & M & Other & 65+ & \(1\) & \(25,000-50,000\) & other & 1.60 & \(\cdots\) & 0.14 \\
117844 & Wyoming & M & Other & 65+ & \(1\) & \(25,000-50,000\) & stay home & 1.60 & \(\cdots\) & 0.13 \\ \hline \hline \end{tabular} Note: An extract from the stratification frame. The variable _state.R.vote.2016_ is a standardised measure of the % of votes the Republican party obtained in a given state. More state-level covariates are available in the frame, but omitted here for clarity. \(w\) is not an integer since the frame has been extended according to the MrsP procedure [51] to include 2016 vote choice as an individual-level predictor.
\end{table}
Table 1: Stratification Frame Extract
cells in the frame: _state_\(\in\) {Alabama, Alaska,..., Wisconsin, Wyoming}. In our frame, the first \(2,392\) cells belong to Alabama, the next \(2,152\) to Alaska, and so on: \(\mathcal{O}=\) { \(o_{\textsc{al}}=[1;\ldots;2,392]\), \(o_{\textsc{ak}}=[2,393;\ldots;4,544]\),..., \(o_{\textsc{wi}}=[113,613;\ldots;116,008]\), \(o_{\textsc{wy}}=[116,009;\ldots;117,844]\) }.
The marginal probability of choosing option \(j\) over \(x_{1}\) is then:
\[\theta_{lj}=\frac{\sum_{m\in o_{l}}\pi_{mj}\times w_{m}}{\sum_{m\in o_{l}}w_{m}}. \tag{3}\]
In our application, \(\theta_{lj}\) represents the vote share of party \(j\) in state \(l\). \(\theta\) is the parameter we wish to estimate.
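To make Equation 3 concrete, the following is a minimal R sketch of the post-stratification step, assuming a stratification frame `frame` with columns `state` and `w` (as in Table 1) and a vector `pi_j` of cell-level probabilities for choice \(c_{j}\); all object names are illustrative.

```r
# Minimal sketch of Equation 3: post-stratify cell-level choice probabilities
# up to state-level shares, weighting each cell by its population count w.
theta_j <- tapply(pi_j * frame$w, frame$state, sum) /
           tapply(frame$w, frame$state, sum)
head(theta_j)  # named vector of state-level shares for choice j
```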
### Sampling
Unfortunately, \(\mathbf{\pi}_{j}\) is not known to the researcher prior to the study, and must be estimated to obtain \(\theta\). To generate plausible estimates of \(\pi\) we sample from the above population and observe a set of choices.
No specific sampling scheme is assumed at this stage. A sampling event is indicated for each individual by random variable \(\varrho_{i}\in\{0,1\}\). The set of sampled individuals is \(\mathcal{S}=\{s_{1},\ldots,s_{n}\}\). A complementary set \(\mathcal{S}^{\prime}=\{s^{\prime}_{1},\ldots,s^{\prime}_{N-n}\}\) represents non-sampled individuals, such that \(i\in\mathcal{S}\iff\varrho_{i}=1\), and \(i\in\mathcal{S}^{\prime}\iff\varrho_{i}=0\). Let \(\iota\in\{1,...,n\}\) index the sample observations, then:
\(\forall\iota,\exists!\ i:s_{\iota}=i\).
Our goal is to estimate the posterior distribution \(p(\mathbf{\pi}_{j}\mid\mathbf{y},\mathbf{\varrho}=1)\).
### Bayesian Inference
We seek to estimate \(p(\mathbf{\pi}_{j}\mid\mathbf{y},\mathbf{\varrho}=1)\) via Hierarchical Bayesian modeling [25]. We specify a data-generating process (DGP) to be learned by our model. The model should return plausible estimates of the posterior that are independent of sample selection. The validity of our estimate of the population distribution of \(\mathbf{\pi}_{j}\) depends crucially on the _ignorability_ assumption [77]. To see this, let us reformulate the problem in the context of missing data.
Recall that the cell-level population probability depends on the individual-level choices (Equation 2). We observe a subset of those choices \(y_{i},\forall i\in\mathcal{S}\), which we call \(\mathbf{y}^{obs}\); we do not observe \(y_{i},\forall i\in\mathcal{S}^{\prime}\), denoted by \(\mathbf{y}^{mis}\). In order to produce a valid posterior distribution for \(\mathbf{\pi}_{j}\), we need to have a complete set of \(\mathbf{y}=(\mathbf{y}^{obs},\mathbf{y}^{mis})\). Hence we must generate plausible values of \(\mathbf{y}^{mis}\) from the posterior distribution \(p(\mathbf{y}^{mis}\mid\mathbf{y}^{obs},\mathbf{\varrho})\).
For the estimated posterior to be valid, we must be able to ignore the sampling selection model:
\[p(\mathbf{y}^{mis}\mid\mathbf{y}^{obs},\mathbf{\varrho})=p(\mathbf{y}^{mis}\mid\mathbf{y}^{ obs}),\] \[\therefore\ p(\mathbf{y}\mid\mathbf{y}^{obs},\mathbf{\varrho}=0)=p(\mathbf{y}\mid\mathbf{y}^ {obs},\mathbf{\varrho}=1).\]
If ignorability is violated, values generated from the estimated posterior predictive distribution of \(\mathbf{y}\) may be unrepresentative. This could lead to bias in our estimate of \(\mathbf{\pi}_{j}\) as we aggregate up to the cell-level. The phenomenon generally considered responsible for violations of the ignorability assumption is _non-response bias_[27]. We will look at one example of this type of violation in Section 2.3. Note that because we ultimately aggregate \(\mathbf{y}\) up to the cell-level, we benefit from the law of large numbers [86]. Hence prediction errors at the individual level do not translate linearly to the cell-level.
We propose a Bayesian Hierarchical model to estimate the posterior distribution of choice probability:
\[p(\mathbf{\pi}_{j}\mid\mathbf{y})\propto p(\mathbf{y}\mid\mathbf{\pi}_{j})p(\mathbf{\pi}_{j}),\]
where \(p(\mathbf{y}\mid\mathbf{\pi}_{j})\) is the likelihood of the sample observations given \(\mathbf{\pi}_{j}\), and \(p(\mathbf{\pi}_{j})\) is the prior distribution.
#### 2.2.1 Likelihood Declaration
Let \(y_{\iota}\) be a nominal random variable indicating a discrete choice. Here we instead model the events \(y_{\iota}=c_{1}\), \(y_{\iota}=c_{2}\), \(\dots\) as a sequence of independent binary variables. This allows us to estimate \(\mathbf{\pi}_{j}\) separately for each \(j\), assuming conditionally independent Bernoulli likelihoods. Note: typically, the likelihood for this type of data would be modeled using a multinomial distribution. We find the multinomial approach can be computationally wasteful and that it does not seem to produce any gain over the proposed model at the desired level of stratification (see Section 3).
The probability of choosing any alternative within the choice set is conditionally independent of any other alternative:
\[\Pr(y_{\iota}=c_{1},\dots,y_{\iota}=c_{J}\mid X_{\iota})=\Pr(y_{\iota}=c_{1} \mid X_{\iota})\Pr(y_{\iota}=c_{2}\mid X_{\iota})\dots\Pr(y_{\iota}=c_{J} \mid X_{\iota}).\]
The event \(y_{\iota}=c_{j}\) can be re-coded as a binary variable \(q_{\iota j}=\mathds{1}(y_{\iota}=c_{j})\). We can then assume that \(q_{\iota j}\) follows a Bernoulli distribution, governed by a probability parameter \(\pi_{\iota j}=\Pr(q_{\iota j}=1)\). Throughout the paper we use \(\pi\) to indicate the probability of choice at different levels, where these levels are indicated by the indexing - so \(\pi_{\iota j}\) is the choice probability for an individual in our sample, \(\pi_{ij}\) for an individual in the population, \(\pi_{mj}\) for a cell in the population, \(\pi_{j}\) is the population prevalence of choice \(c_{j}\), and so on. To satisfy our conditional independence assumption, we model a latent parameter \(\mathbf{\mu}_{j}\) as a linear function \(f\) of \(\Omega_{j}\) and \(X\). \(\Omega_{j}\) is a vector of choice-specific parameters which map the attribute-set \(X\) onto the latent parameter \(\mathbf{\mu}_{j}\).
\[q_{ij} \sim\text{Bernoulli}(\pi_{ij});\] \[\pi_{ij} =\frac{\exp(\mu_{ij})}{1+\exp(\mu_{ij})};\] \[\mu_{ij} =f(\Omega_{j},X_{\iota}).\]
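As an illustration of this recoding, the sketch below converts a nominal response into the binary indicators \(q_{\iota j}\); the data frame `poll` and the choice labels are assumptions for exposition.

```r
# Minimal sketch: recode a nominal choice variable into one binary indicator
# per alternative, so that each alternative can be fit with its own
# independent Bernoulli (logistic) model.
choices <- c("R", "D", "Libertarian", "Green", "stay home")   # assumed labels
q <- sapply(choices, function(cj) as.integer(poll$y == cj))
colnames(q) <- choices  # q[, j] is the outcome vector for the j-th choice model
```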
#### 2.2.2 Prior Declaration
We embrace a parsimonious approach to prior specification that leverages prior knowledge to produce a structured posterior [21]. Our prior specification is further informed by the Penalised Complexity [67] paradigm.
Our priors facilitate compliance with the ignorability assumption. Bayesian hierarchical modeling can be a powerful tool to adjust for non-response bias [22]. It allows for weighted pooling of information from likelihood and priors. The effect of pooling information on the model parameters is referred to as _shrinkage_. Structuring priors produces shrinkage of parameters towards a desirable functional form of latent variable \(\mathbf{\mu}_{j}\). Structured priors allow us to leverage prior knowledge to regularise our linear predictor [21, 31]. While shrinkage is generally desirable, excessive pooling can lead to under-coverage and low-correlation in MrP estimates. To relax the partial pooling of coefficients up to an optimal standard, a robust set of unstructured fixed-effect predictors at the desired levels of analysis (e.g. areal and/or temporal units) is necessary [11, 49]. This level-specific predictor adds further structure to the latent-variable. MrP models can be extremely sensitive to proper specification of this predictor and efforts to automate its selection [10] merit further scrutiny.
Table 2 defines \(X\) and \(\Omega_{j}\) for our application. Given this set of covariates and parameters, we can express the latent propensity \(\mathbf{\mu}_{j}\) as a linear function:
\[\mu_{ij}= \alpha_{j}+\gamma^{\Lambda}_{l[\iota]j}+\gamma^{\Delta}_{d[ \iota]j}+\gamma^{A}_{a[\iota]j}+\gamma^{G}_{g[\iota]j}+\gamma^{R}_{r[\iota]j}+ \gamma^{H}_{h[\iota]j}+\gamma^{E}_{e[\iota]j}+\gamma^{V}_{v[\iota]j}+\sum_{b} ^{12}\beta_{bj}x^{\star}_{ib}; \tag{4}\]
where \(X^{\star}=\{x^{\star}_{1}=x_{9},\ldots,x^{\star}_{12}=x_{20}\}\) is the subset of continuous level-specific predictors, detailed in Table 2.
**Global intercept.** Our global intercept parameter \(\alpha_{j}\) is assigned a weakly informative prior. No correlation structure amongst choices is used to inform the baseline rate of choice:
\[\alpha_{j}\sim N(0,10). \tag{5}\]
**Unstructured effects.** A number of classic random intercepts are used to describe the effects of nominal variables with no specific structure. Allowing slight abuse of notation, let \(U\) denote a given categorical predictor and \(u\) represent the levels within that predictor. The random intercept prior is then:
\[\gamma^{U}_{uj}\sim N(0,\sigma^{U}_{j}), \forall\ U\in\{G,R,E,V\}; \tag{6}\]
While sex and educational attainment are binary variables, they are modeled using the unstructured random intercept prior. For the sake of interpretation, we marginally prefer the soft sum-to-zero constraint obtained via sharing \(\sigma\) to the traditional corner-constraint. We further prefer this approach to generalise data-cleaning functions. In practice we do not expect these estimates to be significantly different from the fixed-effect estimates, as shrinkage is minimal under weakly-informative priors when the number of levels is \(<3\)[60].
Priors for the standard deviation parameters are assigned according to the recommendations of the Stan team [23], and are weakly-informative on the log-odds scale:
\[\sigma^{U}_{j}\sim N^{+}(0,1), \forall\ U\in\{G,R,E,V\}. \tag{7}\]
**Spatial structure.** Previous studies [21, 31] have suggested that explicit modeling of geographic proximity can improve estimates of the distribution of preferences across
\begin{table}
\begin{tabular}{c c c c c|c c} \hline \hline _predictor_ & _level_ & \(X\) & _index_ & _domain_ & \(\Omega_{j}\) & _prior correlation structure_ \\ \hline \hline
**1** & global & / & / & / & \(\alpha_{j}\) & iid \\ \hline \(x_{1}\) & state & state\_id & \(l\) & \(\{1,\ldots,51\}\) & \(\gamma^{\Lambda}_{lj}\) & spatial (BYM2) \\ \hline \(x_{2}\) & day & day\_id & \(d\) & \(\{30,\ldots,0\}\) & \(\gamma^{\Delta}_{dj}\) & random-walk \\ \hline \(x_{3}\) & & age\_id & \(a\) & \(\{1,\ldots,6\}\) & \(\gamma^{A}_{aj}\) & random-walk \\ \(x_{4}\) & & income\_id & \(h\) & \(\{1,\ldots,5\}\) & \(\gamma^{H}_{hj}\) & random-walk \\ \(x_{5}\) & & sex\_id & \(g\) & \(\{1,2\}\) & \(\gamma^{G}_{gj}\) & unstructured + shared variance \\ \(x_{6}\) & & race\_id & \(r\) & \(\{1,\ldots,5\}\) & \(\gamma^{R}_{rj}\) & unstructured + shared variance \\ \(x_{7}\) & & edu\_id & \(e\) & \(\{1,2\}\) & \(\gamma^{E}_{ej}\) & unstructured + shared variance \\ \(x_{8}\) & & vote16\_id & \(v\) & \(\{1,\ldots,4\}\) & \(\gamma^{V}_{vj}\) & unstructured + shared variance \\ \hline \(x_{9}\) & & 2016 \(j\) share & & & \(\beta_{1j}\) & \\ \(x_{10}\) & & 2012 \(j\) share & & & \(\beta_{2j}\) & \\ \(x_{11}\) & & \% white & & & \(\beta_{3j}\) & \\ \(x_{12}\) & state & \% evangelical & / & \(\mathbb{R}\) & \(\beta_{4j}\) & iid \\ \(x_{13}\) & & \% college degree & & & \(\beta_{5j}\) & \\ \(x_{14}\) & & region = ‘midwest’ & & & \(\beta_{6j}\) & \\ \(x_{15}\) & & region = ‘northeast’ & & & \(\beta_{7j}\) & \\ \(x_{16}\) & & region = ‘south’ & & & \(\beta_{8j}\) & \\ \(x_{17}\) & & region = ‘west’ & & & \(\beta_{9j}\) & \\ \hline \(x_{18}\) & day & economic index & / & \(\mathbb{R}\) & \(\beta_{10j}\) & iid \\ \(x_{19}\) & & incumbent approval & & & \(\beta_{11j}\) & \\ \hline \(x_{20}\) & state - day & cumulative COVID-19 deaths & / & \(\mathbb{R}\) & \(\beta_{12j}\) & iid \\ \hline \hline \end{tabular} Note: Predictors and parameters specific to our application. ‘iid’ refers to fully independent parameters, or ‘fixed’ effects [25]. ‘unstructured + shared variance’ priors refer to classic random intercepts. Random-walk and spatial correlation structures are explained in detail below.
\end{table}
Table 2: Model Predictors and Parameters
states. An account of the distribution of spatial preferences is presented by the Besag-York-Mollie (BYM) [6] family of models. We focus on the BYM2 formulation [64]:
\[\gamma^{\Lambda}_{lj}= \ \sigma^{\Lambda}_{j}\left(\phi_{lj}\sqrt{(1-\xi_{j})}+\psi_{lj} \sqrt{(\xi_{j}/\epsilon)}\right); \tag{8}\] \[\phi_{lj}\sim N(0,1);\] (9) \[\psi_{lj}\mid\psi_{l^{\prime}j}\sim N\left(\frac{\sum_{l^{\prime}\neq l}\psi_{l^{\prime}j}}{\nu_{l}}, \frac{1}{\sqrt{\nu_{l}}}\right);\] (10) \[\xi_{j}\sim \text{Beta}\left(\frac{1}{2},\frac{1}{2}\right);\] (11) \[\sigma^{\Lambda}_{j}\sim N^{+}(0,1); \tag{12}\]
where the total spatial effect \(\gamma^{\Lambda}_{lj}\) is the convolution of unstructured random intercepts \(\phi_{lj}\) and intrinsic-conditionally-autoregressive (ICAR) effects \(\psi_{lj}\). The autoregressive element allows estimation of \(\psi_{lj}\) to be conditional on the average neighbourhood effect, where \(\psi_{l^{\prime}j}\) represents the effect of a neighbour. The neighbourhood structure is dictated by an adjacency matrix, typically derived from a map. The ICAR prior standard deviation decreases as the number of neighbours \(\nu_{l}\) increases. \(\xi_{j}\in(0,1)\) is a mixing parameter which imposes an identifiability constraint. This is necessary to optimise posterior exploration and sensibly assign variance amongst competing explanations. Spatial and unstructured effects share a standard deviation parameter \(\sigma^{\Lambda}_{j}\). For this assumption to be sensible, \(\psi_{lj}\) and \(\phi_{lj}\) must be on the same scale. This is typically not the case, as the scale of \(\psi_{lj}\) is defined by the local neighbourhood, whilst \(\phi_{lj}\) is scaled across all areas. To ensure the shared-variance assumption holds, we calculate a scaling factor \(\epsilon\) from the adjacency matrix, and use it to re-scale our spatial effects appropriately. Notation for islands is omitted in the above, but note these are a special case of the model, for which \(\xi_{j}=0\)[19, 18]. The islands still contribute to partial-pooling for the unstructured effects, but are ignored for the spatial component.
**Random-walk structure.** Gao et al. [21] have shown improved accuracy of MrP estimates by including structures to account for correlations amongst effects of neighbouring levels in ordinal variables. We focus on the _random walk_ structure:
\[\gamma^{U}_{uj}\mid\gamma^{U}_{u-1\ j}\ldots\gamma^{U}_{1j}\sim N(\gamma^{U}_{u-1\ j},\sigma^{U}_{j}), \forall\ u>1,\ U\in\{\Delta,A,H\}; \tag{13}\] \[\sigma^{U}_{j}\sim N^{+}(0,1). \tag{14}\]
Again a sum-to-zero constraint \(\sum_{u}\gamma^{U}_{uj}=0\) is used to ensure identifiability.
**Independent linear predictors.** The final set of priors to specify is for the fixed-effect regression coefficients \(\beta\). These are independent weakly informative priors on the log-odds scale:
\[\beta_{bj}\sim N(0,1), \forall\ b\ \in\{1,\ldots,12\}. \tag{15}\]
Traditional MrP approaches make use of fixed-effects at the area-level. We have an
interest in testing the ability of AI polls to capture not merely the cross-state distribution of vote-choice, but also temporal trends. As such we introduce a day-level, and a state-by-day level, set of fixed-effects. The number of days-to-election levels (\(d\in\{1,\ldots,30\}\)) is large enough, and there is enough variance across these levels, that we can expect a time-varying fixed-effect predictor to enrich our estimates of temporal dynamics.
#### 2.2.3 Implementation in Stan
We fit our models using the probabilistic programming language Stan [12, 69]. Stan performs Bayesian inference via the 'No-U-Turn' sampler [34], a version of the Markov Chain Monte Carlo (MCMC) algorithm known as Hamiltonian Monte Carlo (HMC) [25]. At convergence, sampling from the full conditional distribution of each parameter will be equivalent to sampling from the joint posterior \(p(\Omega_{j}\mid\mathbf{q}_{j})\).
Prior to fitting the model, we perform a number of operations designed to encourage efficient posterior exploration. We standardise our continuous correlates \(X^{\star}\). A non-centered parametrisation is implemented for all our random-effects [59]. The ICAR prior is efficiently specified in Stan via the following improper (non-generative) prior [54]:
\[p(\mathbf{\psi}_{j})\propto \exp\left\{-\frac{1}{2}\sum_{l\sim l^{\prime}}(\psi_{lj}-\psi_{l^{\prime}j})^{2}\right\},\]
where the sum runs over pairs of adjacent areas \(l\sim l^{\prime}\);
a sum-to-zero constraint \(\sum_{l}\psi_{lj}=0\) is implemented to ensure identifiability. The Stan code for this model is provided in Listings 1 to 4 in the Appendix1.
Footnote 1: The model presented in the Listings includes an offset parameter, which will be explained in Section 2.3. Given the models are otherwise identical, the no-offset model is omitted. Note further that the model makes use of the spatial functions developed by Connor Donegan [18]. Functions developed by Mitzi Morris [54] were used for calculating the adjacency matrix.
We fit the same model separately for each choice \(j\). For each choice-model, we generate 8 chains of 500 iterations each. From each chain we discard a warmup of 250 iterations, and apply a thinning factor of 4 to minimise auto-correlation. We keep a total of 504 high-quality posterior samples for each parameter.2
Footnote 2: We are not particularly concerned about strict convergence. The goal of this implementation is for the posterior samples of the cell-level choice-probabilities, aggregated up to the desired level of analysis (see Equation 3), to be stable across runs. This aggregate, which converges faster than any given model parameter (law of large numbers), is the primary output of our estimation procedure. We do not seek to make inference for any given parameter. Lauderdale et al. [48] describe reaching stability across runs by producing shorter chains (36 chains of 25 iterations, 25-iteration warmup). We find our relatively small posterior samples are enough to ensure stability of the \(5^{th}\), \(50^{th}\) and \(95^{th}\) percentiles of our state- and day- level estimates.
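For concreteness, a minimal sketch of the per-choice fitting call is given below, assuming a compiled Stan model `mrp_model` (cf. Listings 1 to 4) and a data list `stan_data_j` for choice \(j\); the sampler settings mirror those described above, while the object and parameter names are assumptions.

```r
# Hedged sketch of the per-choice sampler call: 8 chains of 500 iterations,
# 250 warmup, thinning factor of 4 (~504 retained draws in total).
fit_j <- rstan::sampling(
  object = mrp_model,    # compiled stanmodel (assumed)
  data   = stan_data_j,  # data list for choice j (assumed)
  chains = 8, iter = 500, warmup = 250, thin = 4,
  cores  = 8
)
pi_draws_j <- rstan::extract(fit_j, pars = "pi")$pi  # assumed parameter name
```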
#### 2.2.4 Posterior Prediction
Let \(\chi\) index the posterior samples obtained via the MCMC procedure. Allowing for the slightly abusive notation introduced earlier to indicate all random effects, posterior samples for the latent propensity \(\boldsymbol{\mu}_{j}\) are derived:
\[\{\mu\}_{mj}^{\chi}= \{\alpha\}_{j}^{\chi}+\sum_{U}\{\gamma^{U}\}_{u[m]j}^{\chi}+\sum_ {b}\{\beta\}_{bj}^{\chi}x_{mb}^{\star},\hskip 28.452756pt\forall\ \chi\in\{1,\ldots,504\}. \tag{16}\]
We obtain posterior samples of the choice-probability \(\boldsymbol{\pi}_{j}\) via the inverse-logit link3:
Footnote 3: In the specific application to the 2020 US Presidential election, we perform a normalisation step to calculate the distribution of preferences across individuals who turn out. Let \(j=5\) denote the ‘stay home’ option, and \(j\in\{1,\ldots,4\}\) represent choosing the Republican, Democrat, Libertarian or Green-party candidates. We normalise the choice probabilities to account for turnout as follows: \(\pi_{mj}^{\star}=\frac{\pi_{mj}}{\sum_{j^{\prime}=1}^{4}\pi_{mj^{\prime}}},\ \forall j\in\{1,\ldots,4\}.\)
\[\{\pi\}_{mj}^{\chi}=\frac{\exp\big{(}\ \{\mu\}_{mj}^{\chi}\ \big{)}}{1+\exp \big{(}\ \{\mu\}_{mj}^{\chi}\ \big{)}}.\]
Finally, calling back to Equation 3, we can obtain posterior simulations for the desired marginal probability of choosing option \(j\), stratified over any categorical predictor \(U\) of interest:
\[\{\theta\}_{uj}^{\chi}=\frac{\sum_{m\in o_{u}}\{\pi\}_{mj}^{\chi} \times w_{m}}{\sum_{m\in o_{u}}w_{m}}. \tag{17}\]
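A minimal R sketch of this final aggregation, assuming a draws-by-cells matrix `pi_draws` (e.g. 504 rows, one column per cell of the frame) and the stratification frame `frame` with columns `state` and `w`; the object names are illustrative.

```r
# Sketch of Equation 17: post-stratify every posterior draw of the cell-level
# probabilities up to the state level, yielding a draws-by-states matrix.
theta_draws <- sapply(split(seq_len(nrow(frame)), frame$state), function(cells) {
  as.vector(pi_draws[, cells, drop = FALSE] %*% frame$w[cells]) / sum(frame$w[cells])
})
# 5th, 50th and 95th percentiles of the state-level shares
apply(theta_draws, 2, quantile, probs = c(0.05, 0.50, 0.95))
```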
### Online Selection
By assuming a sampling design that conforms to the ignorability assumption, it is valid to use the posterior distribution to impute unobserved choices. Unfortunately, the choices of observable individuals may be unrepresentative due to _selection effects_. Factors which are not included in the analysis can determine an individual's probability of selection into the sample. If these factors are correlated with preferences, our estimates will be biased. In this section we present a methodology to address a simple form of selection bias in the context of MrP.
Selection bias resulting from selecting on the dependent variable is addressed in a seminal paper by King & Zeng [43]. They develop a bias-correction method in the context of rare-events, though their results generalise to various instances of exogenous selection on the dependent variable leading to unbalanced choice counts in the sample [15].
In the context of pre-election opinion polling, we are primarily concerned with unbalanced samples of respondents from social media and online panels. We assume a simple selection mechanism that we label _online selection_. Online selection affects the number of individuals eligible for selection per cell, \(w_{m}\): only a proportion of the eligible individuals will 'survive' the selection. Notice that these cell-counts can be
broken down into the sum of the number of individuals who make a specific choice: \(w_{m}=\sum_{j}\pi_{mj}w_{m}=\sum_{j}\sum_{i\in k_{m}}\mathds{1}(y_{i}=c_{j})\). Let the survival proportion per cell-choice combination be \((1-\Upsilon_{mj})\), where \(\Upsilon_{mj}\) is an online selection penalty. The individual selection probability under this mechanism can then be expressed as:
\[\Pr(\varrho_{i}=1\mid i\in g_{m},y_{i}=c_{j})=\frac{w_{mj}^{\star}}{\sum_{j} \sum_{m}w_{mj}^{\star}} \tag{18}\]
\[w_{mj}^{\star}=(1-\Upsilon_{mj})\sum_{i\in g_{m}}\mathds{1}(y_{i}=c_{j}); \tag{19}\]
\[\Upsilon_{mj}\sim\text{Beta}\left(\mu_{j}^{\Upsilon},\sigma_{j}^{\Upsilon} \right);\hskip 28.452756pt\mu_{j}^{\Upsilon}\in(0,1);\hskip 28.452756pt\sigma_{j}^{ \Upsilon}\in\left(0,\mu^{\Upsilon}(1-\mu^{\Upsilon})\right); \tag{20}\]
where \(\mu_{j}^{\Upsilon}\) is a choice-specific inclination to opt-out of a given medium; \(\sigma_{j}^{\Upsilon}\) is the cross-cells deviation from the central tendency, which is bounded at \(\mu^{\Upsilon}(1-\mu^{\Upsilon})\) to ensure the Beta distribution is proper; and \(\Upsilon_{mj}\) is the resulting cell-heterogeneous choice-specific selection effect.
It is easy to see this selection at work in self-selection on social media. The social media site Gab 4 has notoriously attracted right-leaning voters, and within this group it has attracted a specific subset of cells [38]. Twitter and Facebook have historically been liberal-leaning platforms [53], though they attract different segments of this population. These considerations would translate directly to a platform-specific survival probability \(\Upsilon_{mj}\), dominated by a choice-specific tendency \(\mu_{j}^{\Upsilon}\). Sampling at random from the population of these social-media communities is then akin to noisy sampling on the dependent variable, in the manner described above. Note that this violates the ignorability assumption: posterior predictive samples generated by models trained under this sampling protocol will be different from those obtained by training a model on the unobserved data. In other words:
Footnote 4: [https://gab.com](https://gab.com)
\[p(\mathbf{y}\mid\mathbf{y}^{obs},\mathbf{\varrho}=0)\neq p(\mathbf{y}\mid\mathbf{y}^{obs},\mathbf{ \varrho}=1). \tag{21}\]
How can we then estimate a valid posterior distribution \(p(\mathbf{\pi}_{j}\mid\mathbf{y})\) from samples selected as above? In Section 2.2 we proposed a structured logistic regression as a plausible model for the DGP of our data. Even assuming we can fully control for factors which influence both selection and choice probabilities, the proposed model will still be biased. The consequences of this type of selection are principally manifest in a biased intercept \(\alpha_{j}\). We propose to apply King & Zeng's prior correction [43] to account for choice-specific online selection.
Let sample size \(n=n_{j}^{0}+n_{j}^{1}\), where \(n_{j}^{1}=\sum_{\iota}^{n}\mathds{1}(y_{\iota}=c_{j})\), the total number of individuals who choose option \(j\) in our sample (_cases_), and \(n_{j}^{0}=\sum_{\iota}^{n}\mathds{1}(y_{\iota}\neq c_{j})\), the number of those who do not (_controls_). These quantities have known population counterparts: \(N=N_{j}^{0}+N_{j}^{1}\), where \(N_{j}^{1}=\sum_{i}^{N}\mathds{1}(y_{i}=c_{j})\) and \(N_{j}^{0}=\sum_{i}^{N}\mathds{1}(y_{i}\neq c_{j})\). We can define the conditional probabilities of sampling cases and controls under our selection mechanism
as follows:
\[\Pr(\varrho_{\iota}=1\mid y_{\iota}=c_{j})= \frac{n_{j}^{1}}{N_{j}^{1}}; \tag{22}\] \[\Pr(\varrho_{\iota}=1\mid y_{\iota}\neq c_{j})= \frac{n_{j}^{0}}{N_{j}^{0}}. \tag{23}\]
King & Zeng show that the log-odds of selection can be used as an offset in a logistic regression model to correct bias associated with the intercept:
\[\mu_{ij}= \log\left(\frac{n_{j}^{1}/N_{j}^{1}}{n_{j}^{0}/N_{j}^{0}}\right)+ \tilde{\alpha}_{j}+\sum_{U}\gamma_{u[\iota]j}^{U}+\sum_{b}\beta_{bj}x_{\iota b} ^{\star}; \tag{24}\] \[\tilde{\alpha}_{j}= \alpha_{j}-\log\left(\frac{n_{j}^{1}/N_{j}^{1}}{n_{j}^{0}/N_{j}^ {0}}\right). \tag{25}\]
Posterior samples can be generated for a representative sampling protocol by omitting the offset from the prediction equation:
\[\tilde{\mu}_{mj}= \tilde{\alpha}_{j}+\sum_{U}\gamma_{u[m]j}^{U}+\sum_{b}\beta_{bj}x_ {mb}^{\star}; \tag{26}\] \[\tilde{\pi}_{mj}= \frac{\exp(\tilde{\mu}_{mj})}{1+\exp(\tilde{\mu}_{mj})}. \tag{27}\]
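The sketch below illustrates, in R, how the prior-correction offset could be computed from the sample counts and an exogenous prevalence (Section 2.3.1), and then omitted at prediction time; `q`, `prev_j`, `N` and `mu_tilde` are assumptions for exposition.

```r
# Sketch of the King & Zeng prior correction for choice j.
n1 <- sum(q[, j])                      # sampled cases
n0 <- sum(1 - q[, j])                  # sampled controls
N1 <- prev_j * N                       # population cases, from exogenous prevalence
N0 <- (1 - prev_j) * N                 # population controls
offset_j <- log((n1 / N1) / (n0 / N0)) # supplied to the Stan model as an offset

# At prediction time the offset is dropped (Equations 26-27):
pi_tilde <- plogis(mu_tilde)           # inverse-logit of the offset-free linear predictor
```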
#### 2.3.1 Exogenous Prevalence
The prior correction relies on knowledge of \(N_{j}^{0}\) and \(N_{j}^{1}\) - or more succinctly, knowledge of the prevalence \(\pi_{j}=\frac{N_{j}^{1}}{N_{j}^{1}+N_{j}^{0}}\). This is unknown and unobserved in our setup, as described in Section 2.1. We must therefore find a way to estimate this quantity.
MrP is often used to obtain small-area estimates in the context of pre-election opinion polling. This methodology is powerful because it allows us to address a variety of non-response biases by controlling for relevant covariates in the regression equations [22]. These adjustments are vital in the context of small-area estimation [41], where obtaining representative samples appears more challenging than at the national level. Despite occasional misses, there is strong evidence that national polling has remained accurate over time and across countries [39]. It follows that national polling aggregators are a relatively objective source of population-prevalence during an election campaign.
We propose to leverage prevalence estimates obtained via polling averages to inform our bias-correction term. The methodology of aggregating polls at the national and sub-national level during an election campaign is well established [35, 52, 32]. It would be feasible to leverage national polling data and an appropriate aggregation model to produce a fully-Bayesian estimation of prevalence, simultaneously with the other model parameters. This would however impose greater computational costs on the estimation
procedure. Moreover, given the bias induced by the online sampling protocol, it is preferable to take prevalence as a known constant.
In our application to the 2020 US presidential election we use the average of the FiveThirtyEight national-level predictions [66]. Note that we are producing an election-day estimate, so we assume the entire time-series of forecasts up to election day is known. The prevalence per party is then the simple average of these series across the days of the campaign.
## 3 Simulation Study
What are the implications of introducing the King & Zeng bias-correction for estimates of the stratified (Equation 3) and cell-level (Equation 2) choice probabilities? To explore the properties of the bias-correction mechanism, we perform a simulation study. Table 3 presents the various models and sampling protocols we compare in the simulation study. Our primary interest is to compare our proposed modeling strategy (S.8) against its uncorrected version (S.4), and the 'best-case-scenario' of random-sampling (S.0). We further use the simulation study to evaluate the gains afforded by the use of structured priors under the proposed DGP, as well as any disadvantage arising from using a sequential Bernoulli likelihood as opposed to a Multinomial likelihood. We evaluate each of the scenarios according to the following metrics:
\[\text{Bias:}\qquad\mathcal{B}=\frac{1}{n}\sum_{i}(f_{i}-\hat{f}_{i}); \tag{28}\]
\[\text{Root Mean Squared Error:}\qquad\mathcal{RMSE}=\sqrt{\frac{1}{n}\sum_{i}(f_{i}-\hat{f}_{i})^{2}}; \tag{29}\]
\[\text{Pearson Correlation:}\qquad\rho=\frac{\sum_{i=1}^{n}(\hat{f}_{i}-\bar{\hat{f}})(f_{i}-\bar{f})}{\sqrt{\sum_{i=1}^{n}(\hat{f}_{i}-\bar{\hat{f}})^{2}}\sqrt{\sum_{i=1}^{n}(f_{i}-\bar{f})^{2}}}; \tag{30}\]
\[\text{Coverage }(90\%):\qquad\Gamma=\frac{1}{n}\sum_{i}\mathds{1}(\hat{f}_{i}^{5\%}<f_{i}<\hat{f}_{i}^{95\%}). \tag{31}\]
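For clarity, a small R sketch of these scoring functions, where `truth` and `est` are vectors of true and estimated shares and `lo`/`hi` hold the 5th and 95th posterior percentiles:

```r
# Scoring functions used in the simulation study (Equations 28-31).
bias     <- function(truth, est)    mean(truth - est)
rmse     <- function(truth, est)    sqrt(mean((truth - est)^2))
pearson  <- function(truth, est)    cor(est, truth)
coverage <- function(truth, lo, hi) mean(truth > lo & truth < hi)
```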
We take inspiration from Leemann & Wasserfallen [51] to calibrate the simulation. We simulate a population of size \(N=1,000,000\), and we focus on \(J=3\) options. We explore sample sizes ranging from \(n=100\) to \(n=10,000\). The simulated data broadly follows the DGP described in Section 2.2. Like [51] we use arbitrary cutoffs to discretise a set of correlated individual-level covariates \(X\). Individuals are assigned to areas according to a Dirichlet-Multinomial process to explore both even and uneven population distributions across areas. Spatially correlated effects \(\mathbf{\psi}_{j}\) are sampled from a Spatial Autoregressive (SAR) DGP. We exaggerate the degree of spatial auto-correlation by simulating \(1,000\) SAR parameters and selecting the combination that gives the highest Moran's I in each simulation round. This is then mixed with \(\mathbf{\phi}_{j}\) according to a mixing parameter \(\xi_{j}\), as
per the BYM2 model. The area-level effect \(\beta_{j}\) is assigned a uniform prior to explore a range of contextual variables' effect sizes. We simulate a total of 150 populations, generating performance scores for 450 choices in total (\(J=3\) for every simulation). The simulation study's DGP follows:
\[\pi_{ij}=\text{Softmax}(\alpha_{j}+\gamma_{u^{1}[i]j}^{1}+\gamma_{u^{2}[i]j}^{2}+\gamma_{u^{3}[i]j}^{3}+\gamma_{l[i]j}^{\Lambda}+\beta_{j}z_{l[i]});\] \[\alpha_{j}\sim N(0,1);\qquad\qquad\gamma_{u^{k}j}^{k}\sim N(0,1);\qquad\qquad\beta_{j}\sim\text{Unif}\left(-1,1\right);\qquad\qquad z_{l}\sim N(0,1);\] \[\boldsymbol{\gamma}_{j}^{\Lambda}\stackrel{{\text{BYM2}}}{{\sim}}\left(\boldsymbol{\nu},\boldsymbol{\phi}_{j},\boldsymbol{\psi}_{j},\xi_{j}\right);\qquad\qquad\boldsymbol{\phi}_{j}\sim N(0,1);\qquad\qquad\boldsymbol{\psi}_{j}\sim\text{SAR};\qquad\qquad\xi_{j}\sim\text{Unif}(0,1);\] \[u_{i}^{k}=\begin{cases}1&x_{ik}<-1\\ 2&-1\leq x_{ik}<0\\ 3&0\leq x_{ik}<1\\ 4&x_{ik}\geq 1\end{cases};\qquad\qquad\begin{pmatrix}x_{i1}\\ x_{i2}\\ x_{i3}\end{pmatrix}\sim N\left[\begin{pmatrix}0\\ 0\\ 0\end{pmatrix},\begin{pmatrix}1&\rho^{x}&\rho^{x}\\ \rho^{x}&1&\rho^{x}\\ \rho^{x}&\rho^{x}&1\end{pmatrix}\right];\quad\rho^{x}\sim\text{Unif}(0,1);\] \[l_{i}\sim\text{Dirichlet-Multinomial}\left(n=1,\alpha_{1}=1,\ldots,\alpha_{51}=1\right).\]
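As an illustration of one piece of this simulated DGP, the sketch below draws the correlated individual-level covariates, discretises them at the stated cutoffs, and assigns individuals to areas; the use of MASS::mvrnorm and gtools::rdirichlet, and the single shared Dirichlet draw of area weights, are assumptions about one way to implement the process.

```r
# Sketch: simulate correlated covariates, discretise them, and assign areas.
library(MASS)    # for mvrnorm
library(gtools)  # for rdirichlet

N     <- 1e6
rho_x <- runif(1)
Sigma <- matrix(rho_x, 3, 3); diag(Sigma) <- 1

X <- mvrnorm(n = N, mu = rep(0, 3), Sigma = Sigma)       # correlated covariates
U <- apply(X, 2, cut, breaks = c(-Inf, -1, 0, 1, Inf),   # discretise at the cutoffs
           labels = FALSE)

area_w <- as.vector(rdirichlet(1, rep(1, 51)))           # uneven area weights
l      <- sample(1:51, N, replace = TRUE, prob = area_w) # area assignment
```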
Figure B.1 presents a performance comparison of every model and sampling strategy combination (on the x-axis) against (S.0) (on the y-axis). Figure B.2 presents the same comparison for the estimation of cell-level probabilities \(\boldsymbol{\pi}_{j}\). Our simulation study suggests the following: _i. **likelihood**_: for the estimation of the stratified preferences \(\boldsymbol{\theta}_{j}\) there are no substantial differences between models using Bernoulli and Multinomial likelihoods. On the other hand, at the cell-level, the best Multinomial estimates of \(\boldsymbol{\pi}_{j}\) under random sampling have lower average RMSE (\(-0.017\)), somewhat greater correlation (\(+0.021\)) and significantly greater coverage (\(+0.143\)); _ii. **structure**_: across metrics, at the stratified and cell levels, structured models systematically outperform unstructured ones. However, the gains from structured priors for the DGP under consideration appear extraordinarily minor. The metric most affected by structure appears
\begin{table}
\begin{tabular}{c|c c c c|c c c|c|c} \hline \hline _Scenario ID_ & _Sampling_ & _Likelihood_ & _Structured Priors_ & _Bias-correction_ & _Bias_ & _RMSE_ & _Pearson Correlation_ & _Coverage (Distance from 90\%)_ & _Colour_ \\ \hline \hline (S.0) & random & Bernoulli & TRUE & FALSE & / & / & / & / & — \\ (S.1) & random & Bernoulli & FALSE & FALSE & 0 & 0.004 & -0.014 & -0.005 & — \\ (S.2) & random & Multinomial & TRUE & FALSE & 0 & 0 & -0.002 & -0.012 & — \\ (S.3) & random & Multinomial & FALSE & FALSE & 0 & 0.005 & -0.016 & -0.014 & — \\ (S.4) & selected & Bernoulli & TRUE & FALSE & 0.093 & 0.084 & -0.088 & -0.445 & — \\ (S.5) & selected & Bernoulli & FALSE & FALSE & 0.094 & 0.087 & -0.104 & -0.435 & — \\ (S.6) & selected & Multinomial & TRUE & FALSE & 0.092 & 0.082 & -0.087 & -0.447 & — \\ (S.7) & selected & Multinomial & FALSE & FALSE & 0.093 & 0.085 & -0.104 & -0.439 & — \\ (S.8) & selected & Bernoulli & TRUE & TRUE & 0.015 & 0.028 & -0.079 & -0.150 & — \\ (S.9) & selected & Bernoulli & FALSE & TRUE & 0.016 & 0.033 & -0.097 & -0.149 & — \\ \hline \hline \end{tabular} Note: Summary of the modeling and sampling scenarios evaluated in the simulation study. The scoring metrics presented here are average differences relative to (S.0), the best-case scenario.
\end{table}
Table 3: Summary of the modeling and sampling scenarios.
to be correlation, where structured models provide an increase in correlation of around \(+0.01\); _iii. **bias-correction**_: bias-corrected models afford unequivocal advantages under online selection. Looking at estimates of the stratified preferences \(\mathbf{\theta}_{j}\), the best bias-corrected model under online selection (S.8) alleviates the absolute bias of the best performing uncorrected alternative (S.6) by \(-0.077\); the RMSE falls by \(-0.054\); the correlation increases by \(+0.008\) and the coverage shoots up by \(+0.297\). In estimates of \(\mathbf{\pi}_{j}\) at the cell level, we see similar improvements in bias (\(-0.074\)), RMSE (\(-0.044\)), correlation (\(+0.009\)) and coverage (\(+0.16\)).
We further explore how the performance of each estimation strategy responds to: i. changes in sample size \(n\); ii. changes in population-prevalence \(\pi_{j}\); iii. changes in the central tendency of the online selection penalty \(\mu_{j}^{\Upsilon}\); iv. changes in the severity of sample-prevalence bias (the difference between population-prevalence \(\pi_{j}\) and sample-prevalence \(\bar{\pi}_{j}\)). Figures 1 to 4 present the distribution of each of the scoring metrics for each of the four stimuli, as it pertains to the estimation of \(\mathbf{\theta}_{j}\). Similar plots relevant to the estimation of the cell-level probabilities \(\mathbf{\pi}_{j}\) are available in Figures B.3 to B.6. A detailed description of these figures is available in the Appendix.
We can summarise the findings of our analysis as follows: i.(Figures 1 and B.3): Returns to additional samples reach a quasi-plateau around \(n=8,000\) for random selection, whilst no clear plateau is reached for non-random samples. There is evidence for decreasing returns at the limit of the studied sample sizes (\(n\approx 10,000\)). Still, more \(n\) appears to be better for reducing RMSE and increasing correlation in estimates from non-random samples. Large sample sizes tend to degrade the coverage of models trained on non-random samples. Bias-corrected models appear more robust at any sample size, and reach similar levels of RMSE as random samples at large sample sizes; ii.(Figures 2 and B.4): RMSE is minimised at lower prevalence levels, whilst correlation plateaus at \(\pi_{j}\approx 0.2\), and degrades beyond \(\pi_{j}\approx 0.7\). Bias-corrected models appear robust to all prevalence levels, and behave similarly to random samples throughout; iii.(Figures 3 and B.5): increases in the central selection penalty \(\mu_{j}^{\Upsilon}\) substantially degrade performance for non-random samples. If a party is relatively under-selected (\(\mu_{j}^{\Upsilon}<\bar{\mu}^{\Upsilon}\)) we induce positive bias in structured MrP estimates, whilst over-selection brings about a more severe negative bias. These translate into degradation of RMSE, correlation and coverage. Bias-correction makes these models extremely robust to central-selection pressure, meaningfully deviating from random-sampling performance for bias, RMSE and correlation only beyond \(\mu_{j}^{\Upsilon}\approx 0.7\). This is a massive level of selection which suggests less than 30% of individuals for a given party are eligible for selection into the subject pool, on average. Under-coverage remains an issue, though it is massively alleviated by bias-correction relative to uncorrected models; iv.(Figures 4 and B.6): sample prevalence bias is a major driver of poor performance. Uncorrected models suffer dramatically from sample prevalence bias in all metrics. Bias-correction generates robust estimates which appear to perform at similar levels as random samples for moderate levels of bias, and are relatively robust at any level of bias.
We conducted a simulation study to assess the performance of a bias-corrected model under online selection against a series of scenarios outlined in Table 3. We examined performance across four metrics: bias, RMSE, Pearson correlation and coverage of the 90% prediction interval. Our simulation study ultimately shows that, under severely unrepresentative samples, uncorrected MrP is liable to fail on every metric. Our findings further suggest that performance of our bias-corrected model under online selection is comparable to that of an uncorrected model under random sampling for a broad range of plausible scenarios. Based on the simulation study, we produce the following recommendations: 1.) When interested in cell-level estimates of preferences, use a Multinomial likelihood. Otherwise, a series of Bernoulli models is preferable given computational considerations. 2.) Structured priors are preferable to unstructured ones, though the gains from these are very minimal compared to other modeling choices. 3.) When the true population prevalence \(\pi_{j}\) is available, there is no reason to perform uncorrected MrP to estimate cell- or stratified-level quantities - we always recommend
Figure 1: Effect of sample size \(n\) on estimation performance for \(\mathbf{\theta}_{j}\).
Figure 2: Effect of population prevalence \(\pi\) on estimation performance for \(\mathbf{\theta}_{j}\).
Figure 3: Effect of online selection penalty \(\mu_{j}^{\Upsilon}\) on estimation performance for \(\mathbf{\theta}_{j}\).
to implement the King & Zeng prior correction. 4.) With respect to sample size, we confirm that more is better, though we caution that after \(n>10,000\) the gains in terms of RMSE and correlation are minimal, and there is an associated loss of coverage for selected samples.
## 4 Structuring Digital Traces
Having built a solid theoretical foundation for making close-to-representative inference under online selection, we now turn to data collection. _Digital trace_ data are a cheap and plentiful alternative to random-digit-dial surveys. We define digital trace data as unobtrusively measured information belonging to a real-life person that can be found online. A rich corpus of digital traces can be obtained from social media companies such as Twitter or reddit via publicly available APIs [4, 5]. We focus in this paper on data obtained via the Twitter streaming API, though the methodology proposed here is general to any social medium.
Digital trace data from social media contains signals about public opinion and voting preferences [14], partisanship [3, 33], demographics [82], geographics [72] and other individual-level information. Social media data is unrepresentative [53, 75], and selection onto different platforms is often directly dependent on political preferences [38]. We can think of social media data as an imperfect online panel [17] constructed of largely unstructured data. Our challenge is to structure this data into a survey object, amenable to analysis via the inferential framework outlined above.
In this section we present a procedure to extract survey-like information from social-media data via artificial or human intelligence. We also present the alternative samples we use in this paper to validate and compare our feature-extraction and estimation strategies. Table 4 presents the 6 samples we use in our analysis.
### Twitter Corpus
Data collection from Twitter was performed with the rtweet package [40]. We stream tweets from July \(24^{th}\) 2020 to election-day November \(3^{rd}\) 2020. The earliest tweet in our pool is dated 2020-07-16 at 01:42:46 UTC, whilst the latest was created on 2020-11-03 at 00:12:55 UTC. Our query returns a sample of tweets containing the words '_Biden_' or '_Trump_'. We make use of a congressional districts map to specify a point-radius search around the centroid of each district, to ensure good geographic coverage. We screen out non-English tweets to limit the number of non-US residents in our pool. This curated corpus contains \(492,539\) unique users, responsible for \(3,019,184\) tweets.
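As an illustration only, here is a minimal sketch of what the per-district collection step could look like with rtweet, using the search endpoint rather than the streaming endpoint; the `centroids` data frame, the 50-mile radius and the `n` cap are assumptions, and the English-language filter is expressed via the search operator.

```r
# Illustrative sketch of per-district collection via rtweet's search endpoint.
library(rtweet)
district_tweets <- lapply(seq_len(nrow(centroids)), function(i) {
  search_tweets(
    q = "(Biden OR Trump) lang:en",
    geocode = paste0(centroids$lat[i], ",", centroids$lng[i], ",50mi"),
    n = 1000, include_rts = FALSE
  )
})
corpus <- do.call(rbind, district_tweets)  # stack the per-district results
```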
### Feature Extraction via LLMs
We use LLMs to generate surveys from social media data. The LLM of choice is gpt-3.5-turbo from OpenAI. In constructing our online digital trace sample, we rely on gpt-3.5-turbo for two feature extraction tasks that are very much a strength of LLMs. First, with limited contextual information, gpt-3.5-turbo generates socio-demographic profiles of individuals. This is a simple categorization exercise that requires none of the demanding causal reasoning or directed exploration on which GPT has performed less well [8]. Second, there is considerable evidence that gpt-3.5-turbo performs as well, if not better, than humans in characterizing the partisanship of Twitter content [74, 58, 26].
An important distinction must be made with the work of Argyle et al. [2]. In Argyle, the generative quality of the LLM is of interest - the model is asked to generate new survey-like samples after conditioning. The authors argue that with proper conditioning the machine can generate close-to-random samples of populations of interest. We propose a very different approach: we advocate using LLMs to annotate existing high-frequency, unrepresentative, cheap and unstructured samples. Relying on the machine to generate a reasonably representative synthetic sample requires an understanding of the DGP of LLMs which, under the current paradigm, is unknowable: there is a total lack of explainability [28], despite recent advances [7]. It's not clear that relying on conditioning of the kind outlined in Argyle is sufficient to ensure representative inference. The issue of time-relevance is paramount in pre-election opinion polling: the samples need to contain valuable information about the daily changes in preferences across population categories. While it is possible to correct for selection biases to a degree by conditioning the LLM with the aid of exogenous representative samples (e.g. the ANES), it is not possible to have access to equally relevant and representative samples in the run-up to an election. The point of opinion polling is to collect such samples. Our approach is able to capture changes over time by providing regular, timely, high-frequency, unobtrusively observed expressions of preferences.
Figures 5 and 6 present the prompts provided to gpt-3.5-turbo, along with a sample of answers. The LLM was asked to classify users according to the survey-like categories presented in Figure C.1. We set temperature = 0 to ensure relatively stable classification. For each Twitter user we provide the LLM with two prompts: the first to extract the location of the user, and the second to extract the socio-demographic and political characteristics. The reason these prompts are separate has to do with the nature of Twitter self-reported location data and the importance of a clean location signal for MrP. The self-reported location of the user tends to be reasonably low in noise, so it makes sense to extract this in a separate prompt. Including it in the socio-demographic prompt could increase the level of noise in the response. Moreover, the formatting of the location prompt relies on the knowledge of the LLM regarding what a 'State' in the US is, as well as what a 'Territory' is. By using this inherent knowledge without having to specify 50 states + DC as separate options in the prompt, as we would have to in the \(2^{nd}\) prompt's format, we save on valuable tokens.
For the second prompt, notice the line of code randomising a set of categories (demos_string) across which we want the LLM to classify users. We randomise to account for the auto-regressive nature of LLMs [50]. The category - and the LLM's classification of the user within that category - that is placed earlier in the prompt has a downstream effect on subsequent classifications. We presume, for example, that if the LLM generates output classifying a given user as 'highly educated', it may be more likely to
Figure 5: Example of the location-field prompt, followed by a sample of 5 answers. The object user$location[i] is passed to the prompt for the \(i^{th}\) user in the sample.
classify the same user as a Democrat, regardless of the content of the Tweets. This is because the word 'Democrat' is more likely than the word 'Republican' to follow the words 'highly educated', again independently of the content of the tweets [84]. By randomising the order in which the categories appear in the prompt, we avoid systematic bias due to auto-regressive behaviour in the full annotated sample.
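As an illustration, below is a minimal sketch of how a single annotation prompt could be dispatched to gpt-3.5-turbo via the openai package; the `prompt_text` object (built as in Figures 5 and 6) is an assumption, and the exact field holding the completion text may differ across package versions.

```r
# Sketch: dispatch one annotation prompt to gpt-3.5-turbo via the openai package.
library(openai)
res <- create_chat_completion(
  model       = "gpt-3.5-turbo",
  messages    = list(list(role = "user", content = prompt_text)),
  temperature = 0  # deterministic-leaning output for stable classification
)
answer <- res$choices  # completion text; exact field layout may vary by version
```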
We ask the LLMs to annotate two sets of samples. _Sample (1)_ was generated dynamically during the election campaign. Users from our evolving Twitter corpus were selected daily starting on October \(1^{st}\) 2020 and ending November \(1^{st}\). On a given day during the campaign a subset of users was sampled at random from the corpus, conditional on having tweeted at least 10 times up to that point. A total of \(4,590\) users from this sample were fed to the LLM. _Sample (2)_ consists of every user from our corpus who: a) tweeted a total of 5 times or more during the campaign; b) produced at least 1 tweet in the last month of the campaign. This second sample, which spans \(30,154\) unique users, serves to examine the effect of decreasing the amount of context available to the LLM in favour of increasing sample size.
### Feature Extraction via Amazon Mechanical Turks
To validate our LLM classification we use Amazon Mechanical Turk workers to annotate _Sample (1)_. We use a simple survey instrument coded in HTML on the AMT platform to present the workers with user information. Much like the LLM, workers have access to the last 10 tweets generated by the user, the user's name, location and description. Beyond this information, workers also see the dates on which the tweets were posted, which device the tweets were generated on, the users' profile and background pictures, and whether the account was verified. We implement attention checks to control data quality, as well as to screen out bots and VPN workers falsifying their location [42]. A total of \(2,492\) unique workers participated in the human intelligence task.
### Benchmarking
To put the performance of bias-corrected structured MrP in context, we fit the model to a series of reference datasets. We use the 2020 American National Election Study (ANES) time-series sample as the 'optimal' scenario - namely a close-to-random, high-quality sample. Poor performance of the model on this dataset would indicate serious issues with the modeling framework. To measure the gains obtained via modeling, we further compare results against the raw, unweighted and unmodeled ANES data. In a separate scenario, we generate a new training set by augmenting the ANES with samples from swing-state random-digit-dial (RDD) polls produced by ABC and The Washington Post. This makes the sample less representative, though it is common to over-sample swing-states in classic MrP applications [46, 47], and hence it is of interest to measure performance under this combination. To compare performance against unrepresentative samples recruited via
**Prompt:**
paste(
'A person has in their Twitter bio the following information:
\(\ll\)',users$bio[i],'\(\gg\) ;
Further, they have written the following tweets:
\(\ll\)',users$text[i],'\(\gg\).
I will now show you a number of categories to which this user may belong.
The categories are preceded by a header (e.g. "AGE:" or "SEX:" etc.) and an identifier (e.g. "A1", "A2" or "E2" etc.). Please select, for each header, the most likely category to which this user belongs to.
In your answer present, for each header, the selected identifier.', paste0(sample(demos_string,replace = FALSE),collapse = '\n')
).
**Answer:**
HIGHEST EDUCATIONAL QUALIFICATION: Q2) completed high-school but did not go to college
THIS INDIVIDUAL IS REGISTERED AS: R2) a Democrat
ETHNICITY: E1) White
2016 US PRESIDENTIAL ELECTION VOTE: L3) voted for Hillary Clinton, the Democrat candidate
SEX: S1) Male
AGE: A5) 45 to 54 years old
MARITAL STATUS: M1) Married
2020 US PRESIDENTIAL ELECTION VOTE: V3) voted for Joe Biden, the Democrat candidate
2018 MIDTERM ELECTION VOTE: T3) voted for the Democratic Party
HOUSEHOLD INCOME BRACKET: H5) more than 100000 USD per year \(\cdots\)
Figure 6: Example of the socio-demographic and political characteristics prompt, followed by a sample answer. The object users$bio[i] is passed to the prompt for the \(i^{th}\) user in the sample, and contains the location information of the user, their self-reported description and their screen-name. The object users$text[user] is also looped-over, and provides the LLM with a list of recent tweets produced by the user. Finally, the R code line paste0(sample(demos_string,replace = FALSE),collapse = '\n’) samples attributes in random order from the object demos_string, which contains all the combinations of headers and identifiers. Every user is classified into exclusively one option for every category.
online surveys, as opposed to social-media data, we use a survey of Amazon Mechanical Turks. Survey responses were collected from Turks who contributed to the human-intelligence annotation task. Finally, we compare performance against the uniform swing model [36]. We use the true national swing from 2016 to 2020 to fit this model.
## 5 Results
Table 5 summarizes the accuracy of state-level election-day vote-share predictions for six different sampling strategies and three alternative modeling benchmarks arranged over the columns. The rows of Table 5 are the different evaluation metrics. Figure D.7 presents the election-day state-level predictions for each choice under consideration in the 2020 US election (rows) by each model/data combination (columns). Figure D.9 summarises the ability of forecasts to capture the change since the last election. Figures D.8 and D.10 present the same comparisons, though here Libertarians and Greens are aggregated into an 'other' category to enable comparison with the state-of-the-art area-level polling aggregator FiveThirtyEight. Turnout is also omitted from this comparison as FiveThirtyEight's calculation could not be sensibly reconciled with our approach. Figure 7 displays the national predictions over the course of the campaign. Below we highlight key results regarding state-level predictions and predicted campaign trends.
### Election-day State-level Predictions
_(1). Bias-corrected structured MrP achieves state-of-the-arts performance on high-quality quasi-random samples_. Firstly, a sanity check against disaggregation: estimates of the Republican-Democrat (R-D) margin from the raw ANES data are significantly
\begin{table}
\begin{tabular}{c|c|c c c c|c c|c c c} \hline \hline & & \multicolumn{4}{c|}{_Non-probability Samples_} & \multicolumn{2}{c|}{_Quasi-probability Samples_} & \multicolumn{3}{c}{_Other Benchmarks_} \\ \hline & & Twitter & Twitter & Twitter & Amazon & ANES & ANES + & ANES & Uniform & FiveThirtyEight \\ & & gpt-3.5-turbo (large context) & gpt-3.5-turbo (small context) & Human & Mechanical Turk & & ABC \& WaPo & ’Raw’ & Swing & \\ \hline \hline & D & 0.02 & 0.04 & 0.04 & **0** & -0.01 & 0.01 & 0 & -0.01 & 0.03 \\ Bias & R & -0.03 & -0.05 & -0.05 & **0** & 0 & -0.01 & -0.09 & 0 & -0.02 \\ & R-D & -0.05 & -0.09 & -0.09 & **0** & 0 & -0.02 & -0.08 & 0.01 & -0.04 \\ \hline & D & **0.03** & 0.05 & 0.05 & 0.04 & 0.02 & 0.01 & 0.07 & 0.02 & 0.03 \\ RMSE & R & **0.03** & 0.06 & 0.06 & 0.05 & 0.02 & 0.02 & 0.12 & 0.02 & 0.02 \\ & R-D & **0.06** & 0.1 & 0.11 & 0.09 & 0.03 & 0.03 & 0.17 & 0.03 & 0.05 \\ \hline & D & **0.99** & **0.99** & 0.97 & 0.93 & 0.99 & & 0.87 & 0.99 & 0.99 \\ Correlation & R & **0.99** & **0.99** & 0.97 & 0.93 & 0.99 & 0.99 & 0.83 & 0.99 & 0.99 \\ & R-D & **0.99** & **0.99** & 0.98 & 0.93 & 0.99 & 0.99 & 0.83 & 0.99 & 0.99 \\ \hline & D & 0.63 & 0.22 & 0.76 & **0.84** & 0.96 & 0.98 & / & / & 1 \\ Coverage & R & 0.71 & 0.08 & 0.38 & **0.86** & 0.98 & 0.94 & / & / & 1 \\ & R-D & 0.65 & 0.14 & 0.47 & **0.84** & 1 & 0.98 & 0.83 & 0.99 & 0.88 \\ \hline \(R^{2}\) Non-Unif. & D & **0.22** & 0.18 & 0.13 & 0.01 & 0.61 & 0.72 & / & 0 & 0.62 \\ Swing & R & **0.45** & 0.22 & 0.08 & 0.30 & 0.56 & 0.61 & / & 0 & 0.61 \\ \hline \hline \end{tabular} Note: Summary of the main results for the two major parties. For each metric we show in underlined bold the best performing non-representative sample.
\end{table}
Table 5: Summary of results for Democrats and Republicans.
worse, with an RMSE close to 0.14 points higher. This is caused by an anti-Republican bias of 0.09 points. Training our model on the same ANES data reduces this bias to 0. Secondly, a check on overall performance: again looking at the R-D margin, we witness the following performance across metrics for the bias-corrected model trained on ANES data: {bias = 0, RMSE = 0.03, correlation = 0.99, coverage = 1}. Comparing this against FiveThirtyEight's forecasting model {bias = -0.04, RMSE = 0.05, correlation = 0.99, coverage = 0.88}, we can be confident our approach can compete with state-of-the-art polling aggregators under optimal sampling. It's worth noting that a uniform-swing model using the true national swing also outperforms the FiveThirtyEight forecast. This is in line with existing literature about the predictive power of the uniform swing [36], though we note it performs exceedingly well here largely due to the extremely high correlation between the 2016 and 2020 elections.
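For clarity, the four evaluation metrics reported in Table 5 can be computed per party as in the following R sketch, where `pred`, `truth`, `lo` and `hi` are hypothetical state-level vectors of posterior-mean predictions, certified results, and credible-interval bounds:

```
# Bias, RMSE, Pearson correlation and coverage for state-level predictions.
eval_metrics <- function(pred, truth, lo, hi) {
  c(
    bias        = mean(pred - truth),              # signed average error
    rmse        = sqrt(mean((pred - truth)^2)),    # root-mean-squared error
    correlation = cor(pred, truth),                # Pearson correlation
    coverage    = mean(truth >= lo & truth <= hi)  # share of states whose
  )                                                # interval contains the truth
}
```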
_(2). Bias-corrected structured MrP achieves satisfactory performance on all selected samples_. Estimates of the Democratic vote for selected samples are generally satisfactory, performing within the following ranges {bias = [0,0.04], RMSE = [0.03,0.04], correlation = [0.93,0.99], coverage = [0.22,0.84]}; Republican vote estimates present a similar performance range {bias = [-0.05,0], RMSE = [0.03,0.06], correlation = [0.93,0.99], coverage = [0.08,0.86]}; Turnout estimation performance is also of interest {bias = [-0.07,0.02], RMSE = [0.02,0.08], correlation = [0.11,0.73], coverage = [0.45,0.88]}. Third-party predictions generally have low bias and RMSE, high coverage, but also lower correlation compared to the two main parties. We observe the following patterns in results:
a. Twitter data under-sampled Republicans, and over-sampled Democrats to a degree that was not entirely accounted for by the King & Zeng bias-correction. This was true whether the data was annotated by humans or machine. Amazon Mechanical Turks on the other hand displayed no residual systematic bias;
b. Performance on turnout for selected samples was variable: with the exception of the model trained on human-labeled Twitter users, turnout predictions were generally satisfactory, showing a maximum RMSE of 0.04. The best-performing model was that trained on the small-context AI-labeled Twitter data, which achieved a correlation of 0.73 - superior to a turnout model trained on ANES data. Human annotators tended to under-estimate the propensity for Twitter users to show up on election day, generating a severe bias in the turnout estimates of \(-0.07\);
c. As predicted by the simulation study, coverage worsened as sample size increased on selected samples. The worst coverage was associated with the small-context AI-labeled data, which had the largest sample size (\(n>30,000\)). The best coverage was associated with the Amazon Mechanical Turk survey, which had the smallest sample size (\(n<3,000\));
d. Performance on the R-D margin on selected samples tended to be worse due to compounding biases - {bias = [-0.09,0], RMSE = [0.06,0.11], correlation = [0.93,0.99], coverage = [0.14,0.84]} - though the best performing models were still in line with state-of-the-art MrP applications [48].
_(3). AI-annotated social-media surveys with large context (10 tweets) outperform other selected samples and achieve state-of-the-art performance._ The performance of the AI-annotated polls with large context on the R-D margin {bias = -0.05, RMSE = 0.06, correlation = 0.99, coverage = 0.65 } is extremely close to the FiveThirtyEight forecasting model performance, generally displaying similar levels of bias, RMSE and correlation per choice-model, and losing out primarily in coverage. The small-context AI-annotated polls performed on the whole slightly worse, with compounding biases leading to a relatively large R-D RMSE of 0.1. This provides some evidence as to the optimal prompting style for generating social-media polls using LLMs, in an MrP application of this sort. The human-labeled social-media polls performed worst out of the selected samples, with an R-D RMSE of 0.11. The survey of Amazon Mechanical Turks has variable performance: a relatively large RMSE of 0.09 is paired with a completely unbiased prediction on the R-D margin. Given the unbiased nature of the estimates and relatively high coverage, it's possible that under larger sample sizes, the sample of Turk workers might have rivaled the performance of models trained on large-context AI-annotated social-media data.
_(4). Bias-corrected models trained on AI-annotated social-media surveys can explain a substantial portion of the non-uniform swing for each party._ Figures D.9 and D.10 showcase the ability of the models to capture non-uniform swings from the 2016 election. To get a sense of this, we square the correlation coefficients to gain an interpretation in terms of the proportion of the cross-states variance in swings explained by our models. The \(R^{2}\) for the swing in Democratic vote across top-performing models are { gpt_10_tweets = 22.1%, ANES + WaPo = 72.3%, FiveThirtyEight = 62.4% }; for the Republicans we have { gpt_10_tweets = 42.3%, ANES + WaPo = 60.8%, FiveThirtyEight = 60.8% }; values for third-parties are omitted as the correlations are generally close to 1 for most models; for turnout we have { gpt_10_tweets = 9.6%, gpt_5_tweets = 23.0%, ANES = 17.6%, ANES + WaPo = 7.3% }. Selected samples generally explain a lower portion of the non-uniform swing from 2016 than random samples. These proportions are nonetheless substantial, and broadly justify a non-uniform modeling approach. Amongst selected samples, large-context AI-annotated social-media polls provide the best predictions for the Republican and Democratic swings, whilst the low-context AI annotations provide the better explanations for third-party and turnout swings.
The national-level election-day prediction of the vote from the best performing AI poll is in line with that produced by traditional polling aggregators, despite the AI being trained on extremely unrepresentative samples.
### National Campaign Trends
_Bias-corrected models trained on AI-annotated social-media surveys provide reasonable estimates of the national vote and campaign trends_. If we are to use AI polls to monitor changes in support over the course of the election campaign, they should show national trends over the last 30 days of the campaign similar to the FiveThirtyEight polling averages over the same period. Figure 7 presents this comparison.
Unlike election-day state-level vote shares, daily polling averages are uncertain quantities. For ease of analysis, here we take them as 'truth', though there are generally substantial disagreements about day-to-day swings in the vote across pollsters. The temporal variance in the last 30 days of the campaign is extremely small by definition, hence metrics such as bias and RMSE are not very meaningful for cross-model comparisons. The Pearson correlation coefficient seems like a better metric, capturing the extent to which two pollsters can 'order' the daily preferences. But here again it is easier to 'order' quantities which are extremely different - such as the election-day state-level vote-shares - as opposed to quantities which are extremely similar - such as daily national estimates of voter preferences. Republicans win around 5% of the vote in Washington D.C., and around 70% of the vote in Wyoming - whilst their national vote varies from 41.8% to 43.4% in the last 30 days of the campaign. These differences, though potentially very consequential, are very challenging to capture. Literature further suggests that, whilst campaign events matter, minor daily swings in voting preferences show a high degree of stochasticity [83, 52]. For all these reasons, we should not expect correlation levels comparable to the state-level election-day estimates when it comes to comparing daily national swing estimates.
We are most interested in the ability of the model to replicate trends observable in traditional polls. The metric which captures this ability is the correlation coefficient. Large-prompt AI-annotated social-media polls perform well on this metric for the Democratic vote, with a correlation of 0.83 with the FiveThirtyEight average, whilst the pattern around the Republican vote is more muddled - with a relatively low correlation of 0.2.
### Human-Machine Disagreement
We seek to understand the strong performance of the Artificially Intelligent Polls. The better performance of the gpt-3.5-turbo model, prompted with 10 tweets, relative to the human-annotated sample is especially meaningful. The two samples deal with the same Twitter users. This would then suggest the annotations from the LLM were more useful to the MrP framework than those produced by humans. This does not necessarily mean the AI labels are more accurate. It is plausible that gpt-3.5-turbo annotates the sample of Twitter users to be more congruent with the election result. gpt-3.5-turbo was trained with data up to September 2021 9, hence it is in some sense aware of the election result. It could therefore be applying some implicit raking [20], and generating individual-level labels which are more consistent with the marginal distribution of the 2020 vote. Here we will show the degree, and patterns, of agreement/disagreement between humans and LLMs.
Footnote 9: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
#### 5.3.1 Krippendorff's \(\alpha\)
Figure 8: Bootstrap distribution of Krippendorff’s \(\alpha\) for the annotated variables used in the model.
We use the package icr[71] for efficient computation.
Krippendorff's \(\alpha\) is the state-of-the-art metric of global agreement across raters. Figure 8 (and D.1a) shows statistically significant agreement between humans and the LLM across variables, with the exception of the variable 'household income', despite performing the adjusted computation to account for its ordinal nature. The magnitude of the degree of agreement is highly variable across variables, with only the '2020 vote', 'gender' and 'state' touching Krippendorff's minimal arbitrary threshold of \(\alpha=\frac{2}{3}\) for _'the data under consideration [to be] at least similarly interpretable by other scholars (as represented by different coders)'_[44]. We should note here that this threshold is minimally relevant for our purposes: whilst Krippendorff's \(\alpha\) tells us that there are substantial differences in how humans and LLMs annotate Twitter users' profiles, there is evidence that LLMs may provide better annotations than crowd-workers [26], and especially so in the realm of political data [74]. Moreover, the context of Krippendorff's quote is that of reliability data, whereby strict thresholds may make sense depending on the consequences of disagreement. Finally, we note that the level of analysis of interest for us is the 'aggregate' level - namely the cell-level or the stratified-level - and there are many sets of annotations that would be consistent with optimal performance at those levels.
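To make the metric concrete, the following R sketch computes nominal Krippendorff's \(\alpha\) for two raters with no missing data and bootstraps it over users as in Figure 8. The vectors `human` and `llm` are hypothetical label vectors for the same Twitter users; in practice we rely on icr [71] for the full (including ordinal) computation.

```
kripp_alpha_nominal <- function(rater1, rater2) {
  r1 <- as.character(rater1); r2 <- as.character(rater2)
  cats <- sort(unique(c(r1, r2)))
  o <- matrix(0, length(cats), length(cats), dimnames = list(cats, cats))
  for (u in seq_along(r1)) {                # coincidence matrix: each unit
    o[r1[u], r2[u]] <- o[r1[u], r2[u]] + 1  # contributes both ordered pairs
    o[r2[u], r1[u]] <- o[r2[u], r1[u]] + 1  # of its values
  }
  n   <- sum(o)                             # total pairable values (2 per unit)
  n_c <- rowSums(o)                         # category marginals
  D_o <- (n - sum(diag(o))) / n             # observed disagreement
  D_e <- (n^2 - sum(n_c^2)) / (n * (n - 1)) # expected disagreement (nominal)
  1 - D_o / D_e
}

# Bootstrap distribution over users, as displayed in Figure 8
boot_alpha <- replicate(1000, {
  idx <- sample(seq_along(human), replace = TRUE)
  kripp_alpha_nominal(human[idx], llm[idx])
})
```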
#### 5.3.2 (Dis)Agreement Network
We complement Krippendorff's \(\alpha\) with a fine-grained analysis of the agreement per variable. We do not have access to an underlying 'truth' for the characteristics of Twitter users, hence this analysis cannot confirm or disprove the absolute accuracy of the AI labels. We can however identify patterns of disagreement between AIs and humans. From these patterns we can assess the directions of bias affecting each annotated sample of Twitter users. This in turn can inform us on the reasons for the relative success of AI polls compared to relevant alternatives.
We borrow from the network-science literature and treat contingency-tables of annotations for each rater-pair as generations from a bipartite network [87]. Table 6 shows
\begin{table}
\begin{tabular}{c|r||rrrrr}
\hline \hline
 & & \multicolumn{5}{c}{gpt-3.5-turbo} \\
 & & D & G & L & R & stay home \\
\hline \hline
\multirow{5}{*}{Humans} & D & 1906 & 3 & 4 & 50 & 234 \\
 & G & 3 & 5 & 0 & 0 & 0 \\
 & L & 0 & 0 & 7 & 0 & 2 \\
 & R & 40 & 0 & 6 & 864 & 59 \\
 & stay home & 185 & 2 & 1 & 95 & 71 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Rater-agreement matrix \(A\) for the variable '2020 vote'.
an example of the agreement matrix for the dependent variable 'vote_2020'. To each of these matrices we fit a Bayesian Poisson mixture-model of the following form:
\[A_{ij}\sim\text{Poisson}(\mu_{ij});\] \[\log(\mu_{ij})=\beta^{0}+\beta_{i}^{1}+\beta_{j}^{2}+\log(1+rB_{ ij});\] \[B_{ij}\sim\text{Bernoulli}(\pi_{ij}), \pi_{ij}\sim\text{Beta}\left(\frac{1}{2},\frac{1}{2}\right), r\sim\text{Exp}(0.01);\] \[\beta^{0}\sim N(0,10), \beta_{i}^{1}\sim N(0,\sigma_{1}), \beta_{j}^{2}\sim N(0,\sigma_{2}), \boldsymbol{\sigma}\sim\text{Unif}(0,5);\]
where \(A\) is the agreement matrix; \(i,j\in\{1,\ldots,L\}\) index the category of a given variable the two raters are annotating (e.g. for the variable '2020 vote', each rater can choose any one of \(L=5\) categories); \(\beta^{0}\) is the global abundance term, representing the overall sampling effort; \(\beta^{1}\) and \(\beta^{2}\) model the relative propensity of each rater to classify someone in each category; \(r\) is the sampling premium if the two raters are 'linked' in the latent network; and \(B\) is the incidence matrix, representing the _latent (dis)agreement network_ of annotations across raters.
This model is a 'twist' on the classic saturated log-linear model for contingency tables, where the 'twist' is that the interaction term has a latent-variable parametrisation which allows the estimation of a network structure. \(B_{ij}=1\) represents a link between the annotations produced by the two raters, meaning that when rater 1 chooses annotation \(i\), rater 2 will preferentially choose annotation \(j\), net of any inherent tendency for any rater to choose any other option. \(rB_{ij}\) is responsible for the counts in the agreement matrix not explained by rating propensity. Upon estimating the model, we can generate \(S\) plausible network patterns \(B^{\star}\) from the posterior predictive distribution of \(B\). The Monte Carlo mean reveals the posterior predictive probability of a link between annotation levels across raters:
\[\bar{B^{\star}}_{ij}=\frac{1}{S}\sum_{s}^{S}B^{\star}_{sij}.\]
We fit the model using JAGS [62], which offers the ability to sample from latent parameters without having to re-parametrise the model. Figure 10 presents the relative frequency of posterior predictive incidences for humans versus gpt-3.5-turbo with 10 tweets (additional comparisons are provided in Figures D.3 and D.5). These comparisons generally display broad agreement between the annotators, as can be seen by the mostly orange tone of the diagonals of the posterior incidence matrices. Where there is disagreement, this is typically informative of relative 'biases' of annotators - disagreement seems rarely 'random', and more often 'systematic' in a specific direction. Each panel of Figure 10 identifies the predominant disagreements for the variable in question. A summary of the disagreements between models is tallied in Table D.1. Note that for the variable 'state', the context does not play a role (see Figure 5). Hence any differences between gpt-3.5-turbo (10 tweets) and (5 tweets) are ultimately due to noise in the LLM output to the location prompt.
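For reference, a minimal sketch of the latent-network agreement model is given below using the rjags interface (an assumption; the paper only states that JAGS [62] was used). It is an illustrative implementation rather than the exact listing behind the results: JAGS parameterises dnorm() by precision, and we read the \(N(0,10)\) prior above as a standard deviation of 10.

```
library(rjags)  # requires a working JAGS installation

agreement_model <- "
model {
  for (i in 1:L) {
    for (j in 1:L) {
      A[i, j] ~ dpois(mu[i, j])
      log(mu[i, j]) <- beta0 + beta1[i] + beta2[j] + log(1 + r * B[i, j])
      B[i, j] ~ dbern(pi[i, j])
      pi[i, j] ~ dbeta(0.5, 0.5)
    }
    beta1[i] ~ dnorm(0, pow(sigma1, -2))   # rater-1 propensity for category i
    beta2[i] ~ dnorm(0, pow(sigma2, -2))   # rater-2 propensity for category i
  }
  beta0  ~ dnorm(0, pow(10, -2))           # global abundance term
  sigma1 ~ dunif(0, 5)
  sigma2 ~ dunif(0, 5)
  r      ~ dexp(0.01)                      # sampling premium for linked cells
}"

# 'A' is the L x L rater-agreement matrix, e.g. Table 6
jm   <- jags.model(textConnection(agreement_model),
                   data = list(A = A, L = nrow(A)), n.chains = 4)
post <- coda.samples(jm, variable.names = c("B", "r"), n.iter = 10000)
# Posterior means of the monitored B[i, j] give the incidence probabilities
# displayed in Figure 10.
```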
Figure 9: Perfect Agreement
Figure 10: Posterior predictive incidence across annotations: humans versus gpt-3.5-turbo model prompted with 10 tweets
From the annotations in Figure 10 and the summaries in Table D.1 we see the large-context LLM tended to believe potential Democrats would stay home in 2020 at higher rates than humans did. The LLM also tended to believe far more Twitter users would have stayed home in 2016, independent of their other characteristics, whilst humans tended to be more pessimistic about the chances that potential Democratic voters would cast a ballot on election day. Due to the relative importance of past vote as a predictor of current vote, we can tentatively attribute at least part of the relatively strong performance of models fit to large-context AI-annotated data to the AI's ability to annotate samples such that the joint distribution of the 2020 and 2016 vote would be more compatible with the true swing. At a macro level, this can also be seen in Figures D.9 and D.10, where the change in vote shares since the last election is substantially better predicted by the large-context AI. The large-context LLM further tended to believe Twitter users were generally somewhat older, richer, more highly educated and more likely to have not voted in 2016. It is difficult to say whether these biases contributed to the better performance. Linking Twitter user data to an auxiliary ground-truth dataset, such as Voter Registration Files [14], could provide a strategy to better understand whether these disagreements represent higher-quality AI labels, or are simply noise which ends up having no meaningful effect on the predictions due to the relatively low magnitude of the demographic effects, net of past vote.
## 6 Discussion
We have presented _Artificially Intelligent Opinion Polling_: a novel methodology to produce fully-automated high-quality pre-election polls from social-media data. We introduce the use of AI, in the form of Large Language Models, to extract pre-election polling features from the self-reported preferences and socio-demographics of Twitter users. We show LLMs tend to broadly agree with humans in their annotations of social-media users. We further propose a modification to the traditional Multilevel Regression and Post-stratification methodology to account for _online selection_, a general selection framework applicable to many kinds of data where self-selection onto a given medium plays a role. We show via a simulation study that this amendment should generate substantial improvements in bias, RMSE, Pearson correlation and coverage. We further show that applying bias-corrected structured MrP to the AI-extracted social-media surveys can produce state-of-the-art estimates of the vote in an application to the 2020 US election. Our methodology could reduce the cost of pre-election public opinion polling, relative to random-digit-dial, by making it anywhere between 500 and \(2,500\) times cheaper (see Table 4). The methodology we outline here, and the results reported, suggest three broad areas for further research: increasing the automation of variable selection and modeling; improving on, and accounting for, the uncertainty in feature extraction from social media; and sampling multi-media content from more diverse social-media platforms.
**Automation.** We have outlined the prospect of a fully-automated polling machine. Data-collection for pre-election polls can already be fully automated, as the implementation of APIs to download social-media data and convert these into survey-formats is logistically straightforward, as well as being computationally and economically feasible for research. The major pain-points for the implementation of such a machine lie with the modeling framework, and in particular variable selection. In prior versions of this manuscript we experimented with using the horseshoe prior [13, 61] to select an optimal area-time level predictor. Unfortunately, though not outlandish, the predictions produced under this approach were substantially weaker than those produced by carefully selecting the predictor using topic-specific knowledge and expertise. Moreover, the use of the horseshoe negatively affected the coverage of the estimates. There are other approaches which have been suggested to automate variable selection within MrP [9, 57, 10], though each presents trade-offs, especially when it comes to accounting for uncertainty, and some are not compatible with the linear-modeling specification necessary to thoughtfully account for online selection. In our view, this is a crucial area of research: whilst in the context of pre-election polls we benefit from knowing, at least partially, the functional form of the vote and its most relevant predictors, we do not benefit from the same knowledge in other areas of relevant opinion and behaviour. If one were able to successfully automate variable selection and mitigate selection bias, we could see a world in which real-time, population-representative monitoring of a variety of interesting variables - happiness, psychometrics, consumption, attitudes, pathogen
spread, etc. - could happen effortlessly, in real time, and at a granular level of analysis.
**Feature Extraction.** In our application to the 2020 US election we have concentrated exclusively on four types of self-reported variables: location, name, description and posts. All of these inputs to the LLM were text-based. Within these boundaries, we have attempted to test the power of context. We looked at how the quality of post-modeling predictive outputs would change under differing amounts of context. We also explored how the annotations would change, under different amounts of context. Our analysis suggests more context is better for predictive power. Future work should focus on a more formal investigation, controlling for the quality of the context, and exploring heterogeneity across opinion-domains. It is possible pre-processing tweets to extract only those with the highest proportion of 'useful' context could improve the quality of the annotations.
Whatever the quality of the annotations might be, it is unlikely we will ever be able to extract individual-level features from social media without adding some measure of noise and/or systematic bias into our sample. Future work should further consider enhancing the Hierarchical Bayesian model to account for such impurities. Simple extensions to the Bayesian logistic regression framework proposed here are available to account for measurement error [55] as well as contamination in the dependent variable [14]. Proper accounting of uncertainty at the annotation level is liable to make the coverage of bias-corrected structured MrP estimates more robust. Ultimately we would benefit from having a probabilistic LLM which provides uncertainty estimates. In the absence of this, future work should look at tuning the temperature hyper-parameter to produce desirable levels of uncertainty around the labels. Based on this parameter we could then sample multiple labels per user, essentially generating quasi-posterior samples from the LLM, as sketched below. Proper accounting for annotation uncertainty is an important area of further research.
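As an illustration of the idea, one could draw repeated annotations per user at a non-zero temperature and treat the label frequencies as quasi-posterior weights. The snippet below assumes the create_chat_completion() interface of the openai R wrapper [65]; the function and return-structure field names are assumptions and should be checked against the installed version.

```
library(openai)  # R wrapper for the OpenAI API [65]; interface assumed

quasi_posterior_labels <- function(user_prompt, K = 20, temperature = 1) {
  draws <- vapply(seq_len(K), function(k) {
    out <- create_chat_completion(
      model       = "gpt-3.5-turbo",
      temperature = temperature,
      messages    = list(list(role = "user", content = user_prompt))
    )
    out$choices$message.content[1]   # field names assumed; check your version
  }, character(1))
  table(draws) / K                   # empirical label distribution for this user
}
```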
Another concern specific to the study of vote-choice is what understanding the LLM has of the underlying social context. We use gpt-3.5-turbo, which was trained with data encompassing the 2020 US election, and hence we could assume it was in some sense aware of the _quality_ of the candidates, and their specific relationships with various types of voters. Would the LLM produce judgments of vote-choice which are equally valid if its training data did not include information about a specific candidate? For example, if in the 2024 US election the Democratic or Republican party were to run a relatively unknown candidate, would the LLM be able to account for this? How about an insurgent third-party candidate with national appeal? If the LLM limited itself to looking at the party of the candidate - which is what would remain stable in terms of relationships with voters from election to election - it would surely produce low-quality labels. This is where the work of Argyle et al. [2] on conditioning using specific context-cues may be most relevant. The pollster would need to provide extra context in the prompts to refine the LLM labels on new candidates, or to update it on
specific changes in political dynamics since the end of its training data. How best to do this in the context of pre-election opinion polling remains an open area of research.
As a final comment on feature extraction, we note the future of this methodology clearly lies in Multimodal AI [56, 63], which would enable feature-extraction from images, videos, recordings, or other media. A version of this feature extraction machine is already implementable: image-to-text [80], speech-to-text [29] and video-to-text [79] models exist, and state-of-the-art versions are typically accessible through various APIs10. Their outputs can be distilled and passed on to an LLM for a more holistic feature extraction, which is not solely limited to text-inputs.
Footnote 10: See the Multimodal Models, and associated APIs at [https://huggingface.co/docs/transformers](https://huggingface.co/docs/transformers). OpenAI makes their speech to text model ‘_Whisper_’ available via their API - [https://openai.com/research/whisper](https://openai.com/research/whisper), and gpt-4 accepts images as inputs and is able to use internally formulated descriptions of these images to answer specific prompts and complete classification tasks.
**Social Media Data.** In our US 2020 application we focus on Twitter, which at the time of collection we considered to be the primary platform for varied online political discourse. It was also conveniently accessible via a free-to-use streaming API. Since then the social media landscape has been evolving. We cannot fail to mention a trend in making API basic-usage tiers more expensive. Both Twitter 11 and reddit 12 have reduced basic-usage API access and partially implemented subscription plans. It's too early to tell what impact this may have on our proposed modeling strategy, though we have shown a moderate amount of posts (in the order of 50,000) from a moderate number of users (in the order of 5,000) can be sufficient to make high-quality predictions. This amount of Tweets and users is still relatively feasible to collect over a few months for aspiring pollsters under a 'hobbyist' Twitter plan, which comes at a cost of $100 per month 13. A second important trend concerns the impact of polarisation on social-media usage. Though little has been done to systematically review this phenomenon, there is some evidence that the social-media space is fracturing according to partisanship. Facebook and Twitter have historically had a partisan and demographic bent [53] - though this may be changing. The well documented flight of users from Twitter to Mastodon [88] has coincided with a general perception of Twitter becoming more right-aligned [1]. Platforms such as Truth-Social and Gab [38] appeal primarily to a subset of conservative and libertarian users. Interestingly, Mastodon and Gab's APIs remain free. This offers a unique opportunity for researchers, namely to fully embrace the sampling design advocated by King & Zeng [43], and sample partisans from fully siloed social-media platforms. Provided we can use the bias-correction to address the sampling protocol, we believe our approach to be robust to both changes in pricing and partisan siloing of social-media platforms - though ultimately these are empirical
questions.
### Conclusion
Advances in artificial intelligence will radically change how we conduct public opinion polling. Our contribution is first to suggest how AI can be leveraged to transform digital traces into public opinion data. Secondly, we propose a robust strategy for modeling public opinion preferences based on these transformed digital traces data. The major modeling challenge we address is accounting for the non-representative nature of the online-generated digital trace sample. Our work makes it clear that with the judicious use of AI and social-media data that builds on a robust inferential framework, we can significantly advance claims regarding the representativeness of digital trace data. Hence, the research agenda for _Artificially Intelligent Opinion Polling_ is clear: to build automated pipelines, founded on flexible and interpretable models, that enable representative inference from easily obtainable, high-frequency unrepresentative samples. We hope others can build on this work.
## References
* [1] M. Anderson. After musk's takeover, big shifts in how republican and democratic twitter users view the platform. 2023.
* [2] L. P. Argyle, E. C. Busby, N. Fulda, J. R. Gubler, C. Rytting, and D. Wingate. Out of one, many: Using language models to simulate human samples. _Political Analysis_, 31(3):337-351, 2023.
* [3] P. Barbera. Birds of the same feather tweet together: Bayesian ideal point estimation using twitter data. _Political analysis_, 23(1):76-91, 2015.
* [4] C. Barrie and J. C.-t. Ho. academictwitter: an r package to access the twitter academic research product track v2 api endpoint. _Journal of Open Source Software_, 6(62):3272, 2021.
* [5] J. Baumgartner, S. Zannettou, B. Keegan, M. Squire, and J. Blackburn. The pushshift reddit dataset. In _Proceedings of the international AAAI conference on web and social media_, volume 14, pages 830-839, 2020.
* [6] J. Besag, J. York, and A. Mollie. Bayesian image restoration, with two applications in spatial statistics. _Annals of the institute of statistical mathematics_, 43(1):1-20, 1991.
* [7] S. Bills, N. Cammarata, D. Mossing, H. Tillman, L. Gao, G. Goh, I. Sutskever, J. Leike, J. Wu, and W. Saunders. Language models can explain neurons in language models. _URL: [https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html](https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html) (Date accessed: 14.05.2023)_, 2023.
* [8] M. Binz and E. Schulz. Using cognitive psychology to understand gpt-3. _Proceedings of the National Academy of Sciences_, 120(6):e2218523120, 2023.
* [9] J. Bisbee. Barp: Improving mister p using bayesian additive regression trees. _American Political Science Review_, 113(4):1060-1065, 2019.
* [10] P. Broniecki, L. Leemann, and R. Wuest. Improved multilevel regression with poststratification through machine learning (automrp). _The Journal of Politics_, 84(1):597-601, 2022.
* [11] M. K. Buttice and B. Highton. How does multilevel regression and poststratification perform with conventional national surveys? _Political analysis_, 21(4), 2013.
* [12] B. Carpenter, A. Gelman, M. D. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. _Journal of statistical software_, 76(1), 2017.
* [13] C. M. Carvalho, N. G. Polson, and J. G. Scott. Handling sparsity via the horseshoe. In _Artificial intelligence and statistics_, pages 73-80. PMLR, 2009.
* [14] R. Cerina and R. Duch. Measuring public opinion via digital footprints. _International Journal of Forecasting_, 36(3):987-1002, 2020.
* [15] R. Cerina, C. Barrie, N. Ketchley, and A. Y. Zelin. Explaining recruitment to extremism: A bayesian hierarchical case-control approach. _Political Analysis_, 2023.
* [16] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_, 2018.
* [17] F. Diaz, M. Gamon, J. M. Hofman, E. Kiciman, and D. Rothschild. Online and social media data as an imperfect continuous panel survey. _PloS one_, 11(1):e0145406, 2016.
* [18] C. Donegan. Flexible functions for icar, bym, and bym2 models in stan. _GitHub_, 2022. URL [https://github.com/ConnorDonegan/Stan-IAR](https://github.com/ConnorDonegan/Stan-IAR).
* [19] C. Donegan. geostan: An r package for bayesian spatial analysis. _Journal of Open Source Software_, 7(79):4716, 2022.
* [20] S. E. Fienberg et al. An iterative procedure for estimation in contingency tables. _The Annals of Mathematical Statistics_, 41(3):907-917, 1970.
* [21] Y. Gao, L. Kennedy, D. Simpson, and A. Gelman. Improving multilevel regression and poststratification with structured priors. _Bayesian Analysis_, 16(3):719, 2021.
* [22] A. Gelman. Mrp (multilevel regression and poststratification; mister p): Clearing up misunderstandings about. _Statistical Modeling, Causal Inference, and Social Science_, 2019. URL [https://statmodeling.stat.columbia.edu/2019/01/10/mrp-multilevel-regression-poststratification-mister-p-clearing-misunderstandings](https://statmodeling.stat.columbia.edu/2019/01/10/mrp-multilevel-regression-poststratification-mister-p-clearing-misunderstandings).
* [23] A. Gelman. Prior choice recommendations. _Stan Developer Wiki_, 2020. URL [https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations](https://github.com/stan-dev/stan/wiki/Prior-Choice-Recommendations).
* [24] A. Gelman and T. C. Little. Poststratification into many categories using hierarchical logistic regression. 1997.
* [25] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin. _Bayesian data analysis_. Chapman and Hall/CRC, 2013.
* [26] F. Gilardi, M. Alizadeh, and M. Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. _arXiv preprint arXiv:2303.15056_, 2023.
* [27] R. M. Groves, F. J. Fowler Jr, M. P. Couper, J. M. Lepkowski, E. Singer, and R. Tourangeau. _Survey methodology_. John Wiley & Sons, 2011.
* [28] D. Gunning, M. Stefik, J. Choi, T. Miller, S. Stumpf, and G.-Z. Yang. Xai--explainable artificial intelligence. _Science robotics_, 4(37):eaay7120, 2019.
* [29] A. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, et al. Deep speech: Scaling up end-to-end speech recognition. _arXiv preprint arXiv:1412.5567_, 2014.
* [30] C. Hanretty. 2019 General Election MRP predictions. [https://www.survation.com/2019-general-election-mrp-predictions-survation-and-dr-chris-hanretty/](https://www.survation.com/2019-general-election-mrp-predictions-survation-and-dr-chris-hanretty/), 2019.
* [31] C. Hanretty, B. E. Lauderdale, and N. Vivyan. Comparing strategies for estimating constituency opinion from national survey samples. _Political Science Research and Methods_, 6(3):571-591, 2018.
* [32] M. Heidemanns, A. Gelman, and G. E. Morris. An updated dynamic bayesian forecasting model for the us presidential election. _Harvard Data Science Review_, 2(4):10-1162, 2020.
* [33] L. Hemphill, A. Culotta, and M. Heston. # polar scores: Measuring partisanship using social media content. _Journal of Information Technology & Politics_, 13(4):365-377, 2016.
* [34] M. D. Hoffman and A. Gelman. The no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo. _J. Mach. Learn. Res._, 15(1):1593-1623, 2014.
* [35] S. Jackman. Pooling the polls over an election campaign. _Australian Journal of Political Science_, 40(4):499-517, 2005.
* [36] S. Jackman. The predictive power of uniform swing. _PS: Political Science & Politics_, 47(2):317-321, 2014.
* [37] S. Jackman and B. Spahn. Why does the american national election study overestimate voter turnout? _Political Analysis_, 27(2):193-207, 2019.
* [38] G. Jasser, J. McSwiney, E. Pertwee, and S. Zannettou. 'welcome to# gabfam': Far-right virtual community on gab. _New Media & Society_, 25(7):1728-1745, 2023.
* [39] W. Jennings and C. Wlezien. Election polling errors across time and space. _Nature Human Behaviour_, 2(4):276-283, 2018.
* [40] M. W. Kearney. rtweet: Collecting and analyzing twitter data. _Journal of open source software_, 4(42):1829, 2019.
* [41] C. Kennedy, M. Blumenthal, S. Clement, J. D. Clinton, C. Durand, C. Franklin, K. McGeeney, L. Miringoff, K. Olson, D. Rivers, et al. An evaluation of the 2016 election polls in the united states. _Public Opinion Quarterly_, 82(1):1-33, 2018.
* [42] R. Kennedy, S. Clifford, T. Burleigh, P. D. Waggoner, R. Jewell, and N. J. Winter. The shape of and solutions to the mturk quality crisis. _Political Science Research and Methods_, 8(4):614-629, 2020.
* [43] G. King and L. Zeng. Logistic regression in rare events data. _Political analysis_, 9 (2):137-163, 2001.
* [44] K. Krippendorff. Reliability in content analysis: Some common misconceptions and recommendations. _Human communication research_, 30(3):411-433, 2004.
* [45] K. Krippendorff. Computing krippendorff's alpha-reliability. 2011.
* [46] B. E. Lauderdale. How YouGov's 2019 General Election model works. [https://yugov.co.uk/topics/politics/articles-reports/2019/11/27/how-youogovs-2019-general-election-model-works](https://yugov.co.uk/topics/politics/articles-reports/2019/11/27/how-youogovs-2019-general-election-model-works), 2019.
* [47] B. E. Lauderdale and J. Blumenau. Constructing and assessing seat level estimates. Reading the 2019 Election Polls: Event by the London School of Economics, Department of Methodology, and the British Polling Council, 27/11/2019.
* [48] B. E. Lauderdale, D. Bailey, J. Blumenau, and D. Rivers. Model-based pre-election polling for national and sub-national outcomes in the us and uk. _International Journal of Forecasting_, 36(2):399-413, 2020.
* [49] J. R. Lax and J. H. Phillips. How should we estimate sub-national opinion using mrp? preliminary findings and recommendations. In _annual meeting of the Midwest Political Science Association, Chicago_, 2013.
* [50] Y. LeCun. Do large language models need sensory grounding for meaning and understanding? In _Workshop on Philosophy of Deep Learning, NYU Center for Mind, Brain, and Consciousness and the Columbia Center for Science and Society_, 2023.
* [51] L. Leemann and F. Wasserfallen. Extending the use and prediction precision of subnational public opinion estimation. _American journal of political science_, 61 (4):1003-1022, 2017.
* [52] D. A. Linzer. Dynamic bayesian forecasting of presidential elections in the states. _Journal of the American Statistical Association_, 108(501):124-134, 2013.
* [53] J. Mellon and C. Prosser. Twitter and facebook are not representative of the general population: Political attitudes and demographics of british social media users. _Research & Politics_, 4(3):2053168017720008, 2017.
* [54] M. Morris, K. Wheeler-Martin, D. Simpson, S. J. Mooney, A. Gelman, and C. DiMaggio. Bayesian hierarchical spatial models: Implementing the besag york mollie model in stan. _Spatial and spatio-temporal epidemiology_, 31:100301, 2019.
* [55] S. Muff, A. Riebler, L. Held, H. Rue, and P. Saner. Bayesian analysis of measurement error models using integrated nested laplace approximations. _Journal of the Royal Statistical Society Series C: Applied Statistics_, 64(2):231-252, 2015.
* [56] J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, and A. Y. Ng. Multimodal deep learning. In _Proceedings of the 28th international conference on machine learning (ICML-11)_, pages 689-696, 2011.
* [57] J. T. Ornstein. Stacked regression and poststratification. _Political Analysis_, pages 1-9, 2019.
* [58] J. T. Ornstein, E. N. Blasingame, and J. S. Truscott. How to train your stochastic parrot: Large language models for political texts. Technical report, Working Paper, 2022.
* [59] O. Papaspiliopoulos, G. O. Roberts, and M. Skold. A general framework for the parametrization of hierarchical models. _Statistical Science_, pages 59-73, 2007.
* [60] D. K. Park, A. Gelman, and J. Bafumi. Bayesian multilevel estimation with poststratification: State-level estimates from national polls. _Political Analysis_, 12(4):375-385, 2004.
* [61] J. Piironen and A. Vehtari. On the hyperprior choice for the global shrinkage parameter in the horseshoe prior. In _Artificial Intelligence and Statistics_, pages 905-913. PMLR, 2017.
* [62] M. Plummer et al. Jags: A program for analysis of bayesian graphical models using gibbs sampling. In _Proceedings of the 3rd international workshop on distributed statistical computing_, volume 124, pages 1-10. Vienna, Austria, 2003.
* [63] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_, pages 8748-8763. PMLR, 2021.
* [64] A. Riebler, S. H. Sorbye, D. Simpson, and H. Rue. An intuitive bayesian spatial model for disease mapping that accounts for scaling. _Statistical methods in medical research_, 25(4):1145-1165, 2016.
* [65] I. Rudnytskyi. openai: R Wrapper for OpenAI API, 2023. URL [https://github.com/irudnyts/openai](https://github.com/irudnyts/openai). R package version 0.4.1.
* [66] N. Silver. 2020 Election Forecast. _FiveThirtyEight_, 2020. URL [https://projects.fivethirtyeight.com/2020-election-forecast/](https://projects.fivethirtyeight.com/2020-election-forecast/).
* [67] D. Simpson, H. Rue, A. Riebler, T. G. Martins, S. H. Sorbye, et al. Penalising model component complexity: A principled, practical approach to constructing priors. _Statistical science_, 32(1):1-28, 2017.
* [68] Stan Development Team. RStan: the R interface to Stan, 2023. URL [https://mc-stan.org/](https://mc-stan.org/). R package version 2.21.8.
* [69] C. Stan Development Team et al. Rstan: the r interface to Stan. _R package version_, 2(3), 2018.
* [70] Standard Definitions. Final dispositions of case codes and outcome rates for surveys. _The American Association for Public Opinion Research_, 2023. URL [https://aapor.org/wp-content/uploads/2023/05/Standards-Definitions-10th-edition.pdf](https://aapor.org/wp-content/uploads/2023/05/Standards-Definitions-10th-edition.pdf).
* [71] A. Staudt and P. L'Ecuyer. Package 'icr'. R package, 2023.
* [72] K. Stock. Mining location from social media: A systematic review. _Computers, Environment and Urban Systems_, 71:209-240, 2018.
* [73] P. Sturgis, B. Nick, C. Mario, F. Stephen, G. Jane, W. Jennings, K. Jouni, L. Ben, and S. Patten. Report of the inquiry into the 2015 british general election opinion polls. 2016.
* [74] P. Tornberg. Chatgpt-4 outperforms experts and crowd workers in annotating political twitter messages with zero-shot learning. _arXiv preprint arXiv:2304.06588_, 2023.
* [75] Z. Tufekci. Big questions for social media big data: Representativeness, validity and other methodological pitfalls. In _Proceedings of the international AAAI conference on web and social media_, volume 8, pages 505-514, 2014.
* [76] J. Twyman. Getting it right: Yougov and online survey research in britain. _Journal of Elections, Public Opinion and Parties_, 18(4):343-354, 2008.
* [77] S. Van Buuren. _Flexible imputation of missing data_. CRC press, 2018.
* [78] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. _Advances in neural information processing systems_, 30, 2017.
* [79] S. Venugopalan, M. Rohrbach, J. Donahue, R. Mooney, T. Darrell, and K. Saenko. Sequence to sequence-video to text. In _Proceedings of the IEEE international conference on computer vision_, pages 4534-4542, 2015.
* [80] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 3156-3164, 2015.
* [81] W. Wang, D. Rothschild, S. Goel, and A. Gelman. Forecasting elections with non-representative polls. _International Journal of Forecasting_, 31(3):980-991, 2015.
* [82] Z. Wang, S. Hale, D. I. Adelani, P. Grabowicz, T. Hartman, F. Flock, and D. Jurgens. Demographic inference and representative population estimates from multilingual social media data. In _The World Wide Web Conference_, pages 2056-2067, 2019.
* [83] C. Wlezien and R. S. Erikson. The timeline of presidential election campaigns. _The Journal of Politics_, 64(4):969-993, 2002.
* [84] S. Wolfram. _What Is ChatGPT Doing... and Why Does It Work?_ Stephen Wolfram, 2023.
* [85] S. Wood and M. S. Wood. Package'mgcv'. _R package version_, 1(29):729, 2015.
* [86] J. M. Wooldridge. _Introductory econometrics: A modern approach_. Cengage learning, 2015.
* [87] J.-G. Young, F. S. Valdovinos, and M. Newman. Reconstruction of plant-pollinator networks from observational data. _Nature Communications_, 12(1):3911, 2021.
* [88] H. B. Zia, J. He, A. Raman, I. Castro, N. Sastry, and G. Tyson. Flocking to mastodon: Tracking the great twitter migration. _arXiv preprint arXiv:2302.14294_, 2023.
## Appendix
### Table of Contents
* A Stan listings
* B Simulation Study
* B.1 Examining Model Behaviour
* C Feature Extraction via LLMs
* D Rater Agreement
## Appendix A Stan listings
```
data{

  int<lower=1> N; // n. observations
  int Y[N]; // binary choice
  real offset; // king-zeng offset parameter

  int<lower=1> P; // n. state- and day-level fixed effects
  matrix[N, P] X; // state- and day-level covariate matrix

  int gender_id[N]; // gender level id
  int<lower=1> gender_N; // number of gender levels
  int ethnicity_id[N]; // ethnicity level id
  int<lower=1> ethnicity_N; // number of ethnicity levels

  int age_id[N]; // age level id
  int<lower=1> age_N; // number of age levels
  int edu_id[N]; // education level id
  int<lower=1> edu_N; // number of education levels
  int income_id[N]; // income level id
  int<lower=1> income_N; // number of income levels
  int vote2016_id[N]; // 2016 vote level id
  int<lower=1> vote2016_N; // number of 2016 vote levels

  int dte_id[N]; // days-to-election id
  int<lower=1> dte_N; // max number of days-to-election

  // SPATIAL-COMPONENT DATA
  int area_id[N]; // index of areas in the observed data
  int<lower=1> area_N; // n. of spatial units
  int<lower=1> k; // no. of separate connected groups
  int group_size[k]; // observational units per group
  int group_idx[area_N]; // index of observations, ordered by group

  int<lower=1> N_edges; // number of adjacency instances
  int<lower=1, upper=area_N> node1[N_edges]; // node1[i] adjacent to node2[i]
  int<lower=1, upper=area_N> node2[N_edges]; // node1[i] < node2[i]
  int<lower=1, upper=k> comp_id[area_N]; // ids of groups by areas
  vector[k] inv_sqrt_scaling_factor; // BYM2 scale factor, with singletons represented by 1

}

transformed data{
  int<lower=0, upper=1> has_phi = 1; // turn phi on to include unstructured random effect in BYM2 spec.
}
```
Listing 1: Stan 'Data' and 'Transformed Data' Declaration Blocks.
```
transformed parameters{

  vector[N] nu; // latent propensity for choice j

  // NON-CENTRED PARAMETRISATION

  vector[gender_N] gamma_gender_star = gamma_gender * gamma_gender_scale;

  vector[ethnicity_N] gamma_ethnicity_star = gamma_ethnicity * gamma_ethnicity_scale;

  vector[age_N] gamma_age_star = gamma_age * gamma_age_scale;

  vector[edu_N] gamma_edu_star = gamma_edu * gamma_edu_scale;

  vector[income_N] gamma_income_star = gamma_income * gamma_income_scale;

  vector[vote2016_N] gamma_vote2016_star = gamma_vote2016 * gamma_vote2016_scale;
```
```
model{

  // IID PRIORS

  alpha_star ~ std_normal();

  to_vector(beta_star) ~ std_normal();

  // UNSTRUCTURED RANDOM EFFECTS

  to_vector(gamma_gender) ~ std_normal();
  gamma_gender_scale ~ std_normal();

  to_vector(gamma_ethnicity) ~ std_normal();
  gamma_ethnicity_scale ~ std_normal();

  to_vector(gamma_edu) ~ std_normal();
  gamma_edu_scale ~ std_normal();

  to_vector(gamma_vote2016) ~ std_normal();
  gamma_vote2016_scale ~ std_normal();

  // STRUCTURED AUTOREGRESSIVE PRIORS

  sum(gamma_income) ~ normal(0, 0.01 * income_N); // sum-to-0 constraint
  for (i in 2:income_N) {
    gamma_income[i] ~ normal(gamma_income[i-1], 1);
  }
  gamma_income_scale ~ std_normal();

  sum(gamma_age) ~ normal(0, 0.01 * age_N); // sum-to-0 constraint
  for (i in 2:age_N) {
    gamma_age[i] ~ normal(gamma_age[i-1], 1);
  }
  gamma_age_scale ~ std_normal();

  sum(delta) ~ normal(0, 0.01 * dte_N); // sum-to-0 constraint
  for (i in 2:dte_N) {
    delta[i] ~ normal(delta[i-1], 1);
  }
  delta_scale ~ std_normal();

  psi ~ icar_normal(spatial_scale, node1, node2, k, group_size, group_idx, has_phi);
  phi ~ std_normal();
  omega ~ beta(0.5, 0.5);
  spatial_scale ~ std_normal();

  // LIKELIHOOD

  Y ~ bernoulli_logit(nu);

}
```
Listing 4: Stan 'Model' Declaration Block.
## Appendix B Simulation Study
Notes: The y-axis represents the score of a structured-priors MrP model without bias-correction fit to a random sample from the population (S.0); the x-axis represents performance of all other models, for a given simulated population. The legend reports the performance relative to (S.0). Smooth curves are fit using the mgcv package [85].
Figure B.2: Comparing quality of estimates of \(\mathbf{\pi}_{j}\) under different scenarios.
### Examining Model Behaviour
The following emerges from a detailed look at the properties of each scenario relative to specific stimuli:
_i. **sample size**\(n\): Figures 1 and B.3 demonstrate that more is better in terms of sample size for RMSE reduction and increases in point-estimate correlations. Regarding the stratified preferences \(\mathbf{\theta}_{j}\), under random sampling we see decreasing returns and a plateau around \(n=8,000\). For selected samples we never enter the plateau phase in the examined range. Coverage is negatively affected by increasing sample size under online selection, degrading roughly logarithmically. At the cell-level, estimates of \(\mathbf{\pi}_{j}\) behave similarly, though we see one major difference - namely models endowed with Multinomial likelihood outperform Bernoulli models. Models with Bernoulli likelihood experience substantial coverage degradation at larger sample sizes, whilst Multinomial models remain robust at any sample size;
_ii. **population prevalence**\(\pi_{j}\): Figures 2 and B.4 show that with respect to RMSE there is a general degradation moving away from _rare_ events - a property attributable to the larger scale of the prevalence. The DGP rarely produces prevalence close to 1, so we cannot confidently assess properties for \(\pi_{j}>0.8\). However, behaviour from estimates of both \(\mathbf{\theta}_{j}\) and \(\mathbf{\pi}_{j}\) suggests a symmetric improvement in performance away from \(\pi_{j}\approx 0.5\). Regarding correlation, here models show similar behaviour, with correlation peaking at \(\pi_{j}\approx 0.2\), again showing hints of symmetry around \(\pi_{j}=0.5\) for selected samples. Coverage behaviour is similar to that which we see for sample size, namely selected samples experience quasi-logarithmic decrease in coverage as prevalence increases, with bias-corrected models seemingly more robust to changes in prevalence. We further note that at the cell-level, Multinomial models outperform Bernoulli models, especially as it pertains to coverage - though we see no evidence of this at the stratified level;
_iii. **central selection penalty**\(\mu_{j}^{\Upsilon}\): Figures 3 and B.5 show that the online selection penalty exponentially degrades estimates from uncorrected selected samples at \(\mu_{j}^{\Upsilon}>0.5\), across all metrics. Bias-corrected models appear broadly unbiased despite crippling penalty levels, only showing signs of degradation around \(\mu_{j}^{\Upsilon}>0.8\). RMSE starts degrading earlier, around \(\mu_{j}^{\Upsilon}>0.6\), though the rate of degradation in RMSE is far slower for bias-corrected models than for uncorrected models. Note further that, whilst the bias-corrected models are robust to bias introduced via relative over-sampling, the uncorrected models under online selection show an increase in positive bias proportional to the degree of over-sampling relative to the other choices. This can be seen in spikes of bias and RMSE when \(\mu_{j}^{\Upsilon}=0\) and \(\mu_{j^{\prime}}^{\Upsilon}>0,\ \forall\,j^{\prime}\neq j\). The impact of the penalty is especially severe on correlation estimates when \(\mu_{j}^{\Upsilon}>0.8\), at which point there is a massive drop in correlation between the estimates and the true values. Though coverage of bias-corrected models is not as good as under random sampling, we see relatively robust coverage at any penalty level compared to otherwise abysmal levels (approaching 0 as the penalty approaches 1) under uncorrected selection.
_iv. **sample prevalence bias**\(\hat{\pi}_{j}-\pi_{j}\): this is a measure of the severity of the bias which is effectively faced by the sample. Note that random samples can also suffer -
albeit more rarely than selected samples - from severely biased prevalence. Figures 4 and B.6 suggest bias-correction is robust to bias in sample prevalence. Only under extraordinary sample prevalence bias \(|\hat{\pi}_{j}-\pi_{j}|>0.1\) do we see a significant degradation in performance for bias-corrected models. The Figures clearly display the highly damaging impact of sample prevalence bias, which, if unaccounted for, translates almost linearly to increases in bias and RMSE, as well as exponential loss of coverage. The smooth curves imply some evidence that bias-correction further outperforms random samples in the unlikely event of highly-unrepresentative random draws, though because these extreme draws are extremely rare, the intervals around these tail-events are large and we caution against over-interpreting these few limiting observations.
Figure B.3: Effect of sample size \(n\) on estimation performance for \(\mathbf{\pi}_{j}\).
Figure B.4: Effect of population prevalence \(\pi_{j}\) on estimation performance for \(\mathbf{\pi}_{j}\).
Figure B.5: Effect of online-selection penalty \(\mu_{j}^{\Upsilon}\) on estimation performance for \(\mathbf{\pi}_{j}\).
Figure B.6: Effect of sample bias \((\hat{\pi}_{j}-\pi_{j})\) on estimation performance for \(\mathbf{\pi}_{j}\).
## Appendix C Feature Extraction via LLMs
Figure C.1: Format of the survey-like categories used for classification. Each socio-demographic and political category, along with its respective levels, is an element within the vector demos_string. The elements of this vector are randomised and passed to the LLM within the prompt in Figure 6. The \n are necessary to ensure appropriate spacing in the prompt.
## Appendix D Rater Agreement
Figure D.2: Graphical representation of the relative frequency of posterior predictive incidence across annotations between humans and the gpt-3.5-turbo model prompted with 10 tweets, for the variable ‘state’.
Figure D.3: Graphical representation of the relative frequency of posterior predictive incidence across annotations between humans and the gpt-3.5-turbo model prompted with 5 tweets. See Figure 9 for understanding scale and colour-coding. See Figure D.4 for the variable ‘state’.
Figure D.4: Graphical representation of the relative frequency of posterior predictive incidence across annotations between humans and the gpt-3.5-turbo model prompted with 5 tweets, for the variable ‘state’.
Figure D.5: Graphical representation of the relative frequency of posterior predictive incidence across annotations between the gpt-3.5-turbo model prompted with 10 tweets and the same model prompted with 5 tweets. See Figure 9 for understanding scale and colour-coding. See Figure D.6 for the variable ‘state’.
Figure D.6: Graphical representation of the relative frequency of posterior predictive incidence across annotations between gpt-3.5-turbo model prompted with 10 tweets and the same model prompted with 5 tweets, for the variable ‘state’.
\begin{table}
\begin{tabular}{c||l|l|l}
 & humans v. & humans v. & gpt-3.5-turbo: \\
 & gpt-3.5-turbo (10 tweets) & gpt-3.5-turbo (5 tweets) & (10 tweets) v. (5 tweets) \\ \hline \hline
2020 _vote_ & \(\bullet\) Human \(\rightarrow\) _stay home_ bias & \(\bullet\) _stay home_ disagreement & \(\bullet\) (5 tweets) \(\rightarrow\) stay-home bias \\
 & \(\bullet\) LLM \(\rightarrow\) Democrats _stay home_ bias & \(\bullet\) LLM \(\rightarrow\) third-party mixing & \(\bullet\) (5 tweets) \(\rightarrow\) stay-home bias \\ \hline
_age bins_ & \(\bullet\) _middle-aged_ confusion & \(\bullet\) see humans v. & \(\bullet\) see humans v. \\
 & \(\bullet\) LLM \(\rightarrow\) some _old-age_ bias & gpt-3.5-turbo (10 tweets) & \(\bullet\) minor _middle-age_ noise \\ \hline
_hh income_ & \(\bullet\) LLM \(\rightarrow\) _higher-income_ bias & \(\bullet\) noisy agreement & \(\bullet\) noisy agreement \\ \hline
_gender_ & & & \\ \hline
_education_ & \(\bullet\) LLM \(\rightarrow\) _high-edu._ bias & \(\bullet\) see humans v. & \\
 & & gpt-3.5-turbo (10 tweets) & \(\bullet\) (5 tweets) \(\rightarrow\) _low-edu_ bias \\ \hline
\multirow{2}{*}{2016 _vote_} & \(\bullet\) humans \(\rightarrow\) Democrats _stay home_ bias & \(\bullet\) see humans v. & \\
 & \(\bullet\) LLM \(\rightarrow\) general _stay home_ & gpt-3.5-turbo (10 tweets) & \(\bullet\) (5 tweets) \(\rightarrow\) Democrats _stay home_ bias \\ \hline
\multirow{3}{*}{_state_} & \(\bullet\) extremely minor disagreement & \(\bullet\) extremely minor disagreement & \(\bullet\) extremely minor disagreement \\
 & \(\bullet\) LLM = DC \(\rightarrow\) humans = PA & \(\bullet\) LLM = MO \(\rightarrow\) humans = KS & \(\bullet\) (5 tweets) = NY \(\rightarrow\) (10 tweets) = VT \\
 & \(\bullet\) LLM = WA \(\rightarrow\) humans = DC & & \(\bullet\) (5 tweets) = WA \(\rightarrow\) (10 tweets) = DC \\ \hline
\end{tabular}
\end{table}
Table D.1: A qualitative summary of deviations from perfect agreement emergent from the (Dis)Agreement Network analysis. Any ‘bias’ in this context should be interpreted as relative to the opposite rater.
|
2301.13560 | Efficiency at maximum power of a Carnot quantum information engine | Optimizing the performance of thermal machines is an essential task of
thermodynamics. We here consider the optimization of information engines that
convert information about the state of a system into work. We concretely
introduce a generalized finite-time Carnot cycle for a quantum information
engine and optimize its power output in the regime of low dissipation. We
derive a general formula for its efficiency at maximum power valid for
arbitrary working media. We further investigate the optimal performance of a
qubit information engine subjected to weak energy measurements. | Paul Fadler, Alexander Friedenberger, Eric Lutz | 2023-01-31T11:18:12Z | http://arxiv.org/abs/2301.13560v1 | # Efficiency at maximum power of a Carnot quantum information engine
###### Abstract
Optimizing the performance of thermal machines is an essential task of thermodynamics. We here consider the optimization of information engines that convert information about the state of a system into work. We concretely introduce a generalized finite-time Carnot cycle for a quantum information engine and optimize its power output in the regime of low dissipation. We derive a general formula for its efficiency at maximum power valid for arbitrary working media. We further investigate the optimal performance of a qubit information engine subjected to weak energy measurements.
Heat engines convert thermal energy into mechanical work by running cyclicly between two heat baths at different temperatures. They have been widely used to generate motion, from ancient steam engines to modern internal combustion motors [1]. Information engines, on the other hand, extract energy from a single heat bath by processing information, for instance, via cyclic measurement and feedback operations [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. They thus exploit information gained about the state of a system to produce useful work [15; 16]. Such machines may be regarded as interacting with one heat reservoir and one information reservoir which only exchanges entropy, but no energy, with the device [17; 18; 19]. Information engines are possible owing to a fundamental connection between information and thermodynamics, as exemplified by Maxwell's celebrated demon [20; 21; 22]. Successful information-to-work conversion has been reported in a growing number of classical experiments [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34].
At low enough temperatures, typical nonclassical effects, such as coherent superposition of states and measurement back-action that randomly perturbs the state of a system, come into play [35]. They deeply affect the work extraction mechanism and impact the performance of measurement controlled quantum machines [36; 37; 38; 39; 40; 41; 42; 43; 44]. In this context, quantum measurements, in either their strong (projective) or weak (nonprojective) forms [35], may be considered as an unconventional thermodynamic resource [36; 37; 38; 39; 40; 41; 42; 43; 44]. Experimental investigations of the thermodynamic properties of a quantum Maxwell's demon, based on quantum measurement and feedback control of a qubit system, have recently been performed using nuclear magnetic resonance [45] as well as superconducting [46; 47; 48] and cavity quantum electrodynamical [49] setups.
Two central performance measures of heat engines are efficiency, defined as the ratio of work output and heat input, and power that characterizes the work-output rate [1]. The efficiency of any heat engine coupled to thermal baths is bounded from above by the Carnot efficiency, \(\eta_{\rm C}=1-T_{\rm c}/T_{\rm h}\), where \(T_{\rm c,h}\) are the respective temperatures of the cold and hot heat reservoirs [1]. This value is usually only reachable in the ideal reversible limit, which corresponds to vanishing power. However, real thermal machines operate in finite time with finite power, and far from reversible conditions. Their efficiency is hence reduced by irreversible losses [50; 51]. Optimizing the cyclic operation of heat engines is therefore crucial. A practical figure of merit is the efficiency at maximum power which has been extensively studied for classical [52; 53; 54; 55; 56; 57] and quantum [58; 59; 60; 61; 62] heat engines. A general example of such an efficiency at maximum power is the Curzon-Ahlborn formula, \(\eta_{\rm CA}=1-\sqrt{T_{\rm c}/T_{\rm h}}\), which bears a striking resemblance to the Carnot expression, except for the square root [63]. The Curzon-Ahlborn efficiency appears to be universal for finite-time Carnot machines that operate under conditions of low, symmetric dissipation [55]. While information engines also run in finite time and with finite power, no generic expression for their efficiency at maximum power is currently known, owing to the difficulty to properly optimize them [11; 12; 13].
We here introduce a generalized Carnot cycle for a quantum information engine by replacing the cold heat bath of a finite-time quantum Carnot heat engine by an information reservoir. This cycle is fully reversible in the infinite-time limit. We optimize its power output and derive a general formula for the efficiency at maximum power for arbitrary working media within the framework of nonequilibrium thermodynamics in the weak dissipation regime. We obtain a Curzon-Ahlborn-like expression where the optimal cold coupling time is replaced by a new dissipation time that characterizes irreversible losses. We further illustrate our findings with the example of a qubit information engine, and obtain a microscopic expression of its efficiency at maximum power.
_Reversible information engine cycle._ The reversible Carnot cycle describes the most efficient heat engine, and is thus of fundamental importance. It consists of two adiabatic and of two isothermal (expansion and compression) branches [1]. Its realization requires two heat baths: a hot bath from which heat is absorbed during the hot isotherm and a cold bath which takes on heat during the cold isotherm. Finite-time quantum Carnot cycles have been theoretically studied in Refs. [64; 65; 66; 67; 68]. The first experimental implementation of a classical finite-time Carnot engine has been presented in Ref. [69]. We here construct a finite-time generalization of the Carnot cycle for a quantum information engine by substituting the cold heat bath (and the corresponding isotherm) by an information bath that involves measurement and subsequent outcome-dependent feedback (Fig. 1).
An important feature of this information cycle is that
it is thermodynamically reversible for infinitely long cycle durations, like its thermal counterpart. In other words, each branch, including measurement and feedback, does not dissipate any irreversible entropy in that limit. We concretely impose the following three conditions on the engine cycle: (a) both measurement and feedback control are reversible, (b) the cycle is independent of the measurement outcome, meaning that measurement and feedback operation always lead to the same state, irrespective of the measurement result, and (c) the state \(\rho_{\text{after}}\) after measurement and feedback is a thermal state at temperature \(T_{\text{after}}\) with the same Hamiltonian \(H\) as that of the state \(\rho_{\text{before}}\) before the measurement.
We measure the state of the working medium of the information engine with a generalized measurement described by a set of positive operators \(\{M_{i}\}\) that satisfy \(\sum_{i}M_{i}^{\dagger}M_{i}=I\). The state after a measurement is \(\rho_{i}=M_{i}\rho_{\text{before}}M_{i}^{\dagger}/p_{i}\) with probability \(p_{i}=\text{Tr}[M_{i}\rho_{\text{before}}M_{i}^{\dagger}]\)[35]. We denote by \(S_{i}=-k\,\text{Tr}[\rho_{i}\ln\rho_{i}]\) the entropy and by \(E_{i}=\text{Tr}[\rho_{i}H]\) the energy of that state (\(k\) is the Boltzmann constant). Such a generalized measurement usually leads to a classical mixture of states, implying that entropy is irreversibly produced during the process, \(S(\rho_{\text{meas}})>S(\rho_{\text{before}})\), where \(\rho_{\text{meas}}=\sum_{i}p_{i}\rho_{i}\) is the density operator averaged over all the measurement outcomes, unless \([M_{i},\rho_{\text{before}}]=0\)[38]. In order to make the measurement thermodynamically reversible, \(S(\rho_{\text{meas}})=S(\rho_{\text{before}})\), we accordingly require that the operators \(M_{i}\) commute with the state of the system before the measurement, \([M_{i},\rho_{\text{before}}]=0\). Since the latter state is diagonal in the energy basis after the adiabatic compression branch, the operators \(M_{i}\) describe a nonprojective measurement of the energy of the working fluid. We next apply reversible feedback control [5] to transform each state \(\rho_{i}\) into the thermal state \(\rho_{\text{after}}\). To that end, depending on the measurement outcome, we reversibly reorder the populations of \(\rho_{i}\) so that they decrease monotonically with increasing energy, while keeping the entropies \(S_{i}\) constant. We further shift the energy levels in order to obtain, after completion of the feedback operation, the same Hamilton operator as that of the initial state \(\rho_{\text{before}}\). The explicit measurement-plus-feedback protocol for the case of a two-level system is detailed below.
The average entropy change provided by the measurement is \(\langle\Delta S\rangle=\sum_{i}p_{i}S_{i}-S_{\text{before}}\leq 0\), where \(S_{\text{before}}\) is the entropy of state \(\rho_{\text{before}}\) before the measurement [35]. Noting that after feedback control, \(\rho_{i}=\rho_{\text{after}}\) and, therefore,
Figure 1: Generalized finite-time Carnot cycle for the quantum information engine. a) Polarization-frequency diagram for an arbitrary working medium with Hamiltonian \(H_{t}=\omega_{t}\mathcal{P}\). The cycle consists of one isochore during which a reversible measurement-plus-feedback protocol is implemented (1-2), one adiabatic expansion (2-3), one isothermal compression (3-4), and one adiabatic compression (4-1). The work \(\langle W_{\text{wm}}\rangle\) produced by the working medium during one cycle is given by the enclosed area and the reversible feedback work \(\langle W_{\text{fb}}\rangle\) is extracted during step (1-2). The total work done is equal to the sum \(\langle W\rangle=\langle W_{\text{wm}}\rangle+\langle W_{\text{fb}}\rangle\). b) Entropy-temperature diagram of the same cycle. It reduces to a Carnot cycle for vanishing feedback frequency, \(\omega_{\text{fb}}=0\) (dashed lines). c) Explicit realization of the four steps of the cycle for a qubit information engine. The blue (red) dot represents the occupation probability of the ground (excited) state of the two-level system. The two outcomes of the reversible generalized energy measurement with Kraus operators (7) occur with respective probabilities \((p_{0},p_{1})\).
\(S_{i}=S_{\rm after}\) for all measurement outcomes \(i\), we simply have \(\left\langle\Delta S\right\rangle=S_{\rm after}-S_{\rm before}=\Delta S\). The average work extracted by the reversibly operating feedback controller is additionally \(\left\langle W_{\rm fb}\right\rangle=\sum_{i}p_{i}(E_{i}-E_{\rm after})\), since the individual entropies \(S_{i}\) remain constant during the feedback process. Furthermore, since \(\left[M_{i},\rho_{\rm before}\right]=0\), and hence \(\sum_{i}p_{i}E_{i}=E_{\rm before}\), we have \(\left\langle W_{\rm fb}\right\rangle=E_{\rm before}-E_{\rm after}\).
Let us now evaluate the work associated with the engine cycle shown in Fig. 1. For that purpose, it is useful to distinguish, on the one hand, the measurement and feedback part (step 1-2) in Fig. 1), as discussed above, and, on the other hand, the engine cycle seen from the standpoint of the working medium (steps (1-4) in Fig. 1) [70]. During adiabatic expansion and compression, the system is isolated from the bath. In order to make these steps reversible and avoid quantum friction [71; 72; 73], the Hamiltonian is chosen to commute with itself at all times, \(\left[H_{t},H_{\nu}\right]=0\), as in the standard quantum Carnot cycle [64; 65; 66; 67; 68]. As a result, nonadiabatic transitions do not occur for all driving times while work is performed. For concreteness, and without loss of generality, we consider a Hamilton operator of the scaling form \(H_{t}=\omega_{t}\mathcal{P}\), with time-dependent frequency \(\omega_{t}\)[68]. From the point of view of the working medium, the cycle then consists of four branches (Fig. 1): (1-2) one isochore at constant frequency \(\omega_{\rm fb}\), (2-3) one adiabat with frequency variation from \(\omega_{\rm fb}\) to \(\omega_{3}\), (3-4) one isotherm with frequency change from \(\omega_{3}\) to \(\omega_{4}\) at constant bath temperature \(T_{\rm h}\), and (4-1) one adiabat with frequency decrease from \(\omega_{4}\) to \(\omega_{\rm fb}\). The average produced work \(\left\langle W_{\rm wm}\right\rangle\) is simply given by the area enclosed by the cycle. According to the first law applied to the working medium, we have \(\left\langle W_{\rm wm}\right\rangle=\left\langle Q_{\rm h}\right\rangle+ \left\langle Q_{\rm c}\right\rangle\), where \(\left\langle Q_{\rm h,c}\right\rangle\) are the respective heat contributions from the isotherm and the isochore. In the long-time limit, the heat absorbed from the hot reservoir may be written in leading order (low dissipation regime) as \(Q_{\rm h}=T_{\rm h}(\Delta S-\Sigma/\tau_{\rm h})\), where \(\Sigma\) is a coefficient that characterizes the entropy production during time \(\tau_{\rm h}\) along the isotherm [56]. Moreover, the heat exchanged by the working medium during the cold isochore can be evaluated by purely thermodynamic means (without involving the measurement and feedback aspect) [58; 59; 60]. It is given by \(\left\langle Q_{\rm c}\right\rangle=\omega_{\rm fb}\Delta(\mathcal{P})=E_{\rm after }-E_{\rm before}\).
The total work \(\left\langle W\right\rangle\) done during the complete information engine cycle is the sum of the work extracted by the feedback controller, \(\left\langle W_{\rm fb}\right\rangle\), and the work produced by the working medium, \(\left\langle W_{\rm wm}\right\rangle\). We hence obtain
\[\left\langle W\right\rangle=\left\langle W_{\rm fb}\right\rangle+\left\langle W _{\rm wm}\right\rangle=T_{\rm h}\left(\Delta S-\frac{\Sigma}{\tau_{\rm h}} \right). \tag{1}\]
We note that \(\left\langle Q_{\rm c}\right\rangle\) and \(\left\langle W_{\rm fb}\right\rangle\) exactly cancel. In other words, the information reservoir only exchanges entropy but no energy with the system. We are now in the position to investigate the phenomenological finite-time performance of the generalized Carnot information engine.
_Efficiency at maximum power._ The efficiency at which information is converted into work in the cyclic quantum information engine is defined as [36; 37; 38; 39; 40; 41; 42; 43; 44]
\[\eta=\frac{\left\langle W\right\rangle}{T_{\rm h}\Delta S}=1-\frac{\Sigma}{ \Delta S\tau_{\rm h}}, \tag{2}\]
where we have used Eq. (1). Unit efficiency \(\left(\eta_{\rm max}=1\right)\) is achieved for \(\tau_{\rm h}\rightarrow\infty\), when the cycle is reversible. In this regime, information about the state of the system, gained through the measurement, is fully converted into work by the cyclic engine. For finite-time operation, the efficiency is reduced \(\left(\eta<1\right)\) owing to dissipative processes associated with irreversible entropy production.
The power of the information engine further reads [1]
\[P=\frac{\left\langle W\right\rangle}{\tau_{\rm h}+\tau_{\rm fb}}=\frac{T_{\rm h }\left(\Delta S-\frac{\Sigma}{\tau_{\rm h}}\right)}{\tau_{\rm h}+\tau_{\rm fb}}, \tag{3}\]
where \(\tau_{\rm fb}\) denotes the time of the measurement and feedback protocol. The time spent along the two adiabats can be set to zero since they are reversible irrespective of their duration [58; 59]. By contrast, the feedback time \(\tau_{\rm fb}\) is determined by the measurement-feedback process and we take it to be fixed [74]. Setting the derivative of the power \(P\) with respect to \(\tau_{\rm h}\) to zero, we find the optimal coupling time to the hot heat reservoir
\[\tau_{\rm h}^{*}=\frac{\Sigma}{\Delta S}\left(1+\sqrt{1+\frac{\Delta S}{\Sigma} \tau_{\rm fb}}\right). \tag{4}\]
The corresponding efficiency at maximum power \(\eta^{*}\) of the quantum information engine then follows as
\[\eta^{*}=1-\frac{1}{1+\sqrt{1+\tau_{\rm fb}/\tau_{\rm h}^{\emptyset}}}=1-\frac{ \tau_{\rm h}^{\emptyset}}{\tau_{\rm h}^{*}}, \tag{5}\]
where we have used Eq. (4) and introduced the typical dissipation time \(\tau_{\rm h}^{\emptyset}=\Sigma/\Delta S\) associated with irreversible losses along the hot isotherm: \(\tau_{\rm h}^{\emptyset}\) is small (resp. large) when the entropy production is small (resp. large). Expression (5) is reminiscent of the Curzon-Ahlborn formula [63], which can be written in terms of the optimal cold and hot coupling times, \(\tau_{\rm c}^{*}\) and \(\tau_{\rm h}^{*}\), as \(\eta_{\rm CA}=1-\tau_{\rm c}^{*}/\tau_{\rm h}^{*}\)[58]. The optimal time of the cold isotherm \(\tau_{\rm c}^{*}\) is here simply replaced by the new dissipation time \(\tau_{\rm h}^{\emptyset}\). We moreover observe from Eq. (5) that in general \(\eta_{\rm max}/2<\eta^{*}<\eta_{\rm max}=1\), the lower (upper) bound being reached when the feedback time is much smaller (larger) than the dissipation time \(\tau_{\rm fb}\ll\tau_{\rm h}^{\emptyset}\) (\(\tau_{\rm fb}\gg\tau_{\rm h}^{\emptyset}\)).
With the help of the above expressions, the maximum power \(P^{*}\) may furthermore be written as,
\[P^{*}=\frac{\eta^{*}T_{\rm h}\Delta S}{\tau_{\rm h}^{*}+\tau_{\rm fb}}, \tag{6}\]
with the optimal produced work \(\left\langle W\right\rangle^{*}=\eta^{*}T_{\rm h}\Delta S\). These results generically hold for any working medium.
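As a quick numerical check of Eqs. (3)-(6), the following minimal Python sketch scans the power over the coupling time \(\tau_{\rm h}\) and compares the numerical optimum with the analytic expressions. The values chosen for \(\Sigma\), \(\Delta S\), \(T_{\rm h}\) and \(\tau_{\rm fb}\) are arbitrary illustrative assumptions, not parameters taken from this work.

```
import numpy as np

# Illustrative parameter values (arbitrary units); assumptions, not from the paper.
Sigma = 0.2     # dissipation coefficient of the hot isotherm
dS = 1.0        # entropy change Delta S
T_h = 1.0       # hot bath temperature
tau_fb = 0.3    # fixed duration of the measurement-plus-feedback step

def power(tau_h):
    """Power output P of Eq. (3) as a function of the isotherm duration tau_h."""
    return T_h * (dS - Sigma / tau_h) / (tau_h + tau_fb)

# Numerical maximization by a brute-force scan over tau_h.
tau_grid = np.linspace(0.01, 10.0, 200001)
P_grid = power(tau_grid)
tau_num = tau_grid[np.argmax(P_grid)]

# Analytic results, Eqs. (4)-(6).
tau_star = (Sigma / dS) * (1.0 + np.sqrt(1.0 + dS * tau_fb / Sigma))
eta_star = 1.0 - (Sigma / dS) / tau_star
P_star = eta_star * T_h * dS / (tau_star + tau_fb)

print(f"optimal tau_h: scan = {tau_num:.4f}, Eq. (4) = {tau_star:.4f}")
print(f"efficiency at maximum power, Eq. (5): eta* = {eta_star:.4f}")
print(f"maximum power: scan = {P_grid.max():.4f}, Eq. (6) = {P_star:.4f}")
```

For these parameters the scan reproduces the analytic optimum, and the resulting \(\eta^{*}\approx 0.61\) lies between \(\eta_{\rm max}/2\) and \(\eta_{\rm max}=1\), as required by the general bound quoted above.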
_Qubit information engine._ We proceed by illustrating our findings with the case of a spin-1/2 information engine with Hamilton operator \(H_{t}=\omega_{t}\sigma_{z}/2=\omega_{t}\mathcal{P}\), where
\(\sigma_{z}\) is the usual Pauli operator and \(\mathcal{P}=\sigma_{z}/2\) is the polarization. The knowledge of the precise quantum dynamics of this system allows for the microscopic evaluation of the efficiency at maximum power of the information engine.
We begin by specifying the measurement-feedback protocol of the generalized finite-time Carnot cycle (Fig. 1). In order to satisfy the conditions (a)-(c) stated above (measurement and feedback should be reversible, all measurement results should be mapped onto the thermal state \(\rho_{\rm after}\) with the same Hamilton operator as \(\rho_{\rm before}\)), we construct a generalized quantum measurement such that the first measurement outcome (\(i=0\)) is \(\rho_{\rm after}\) (that is, \(\rho_{0}=\rho_{\rm after}\) with energy \(E_{0}=E_{\rm after}\)) and the second measurement outcome (\(i=1\)) is equal to its spin-flipped counterpart (that is, \(\rho_{1}=\sigma_{x}\rho_{\rm after}\sigma_{x}\) with energy \(E_{1}=-E_{\rm after}\)). The corresponding measurement operators are explicitly given by (Supplemental Material [75])
\[M_{0} =\sqrt{\frac{1-e^{(\beta_{\rm b}+\beta_{\rm a})\omega_{\rm th}}} {1-e^{2\beta_{\rm a}\omega_{\rm th}}}}\left|1\right\rangle\left\langle 1 \right|+\sqrt{\frac{1-e^{-(\beta_{\rm b}+\beta_{\rm a})\omega_{\rm th}}}{1-e^ {-2\beta_{\rm a}\omega_{\rm th}}}}\left|0\right\rangle\left\langle 0\right|\] \[M_{1} =\sqrt{\frac{1-e^{(\beta_{\rm b}-\beta_{\rm a})\omega_{\rm th}}} {1-e^{-2\beta_{\rm a}\omega_{\rm th}}}}\left|1\right\rangle\left\langle 1 \right|+\sqrt{\frac{1-e^{-(\beta_{\rm b}-\beta_{\rm a})\omega_{\rm th}}}{1-e^ {2\beta_{\rm a}\omega_{\rm th}}}}\left|0\right\rangle\left\langle 0\right| \tag{7}\]
where \(\beta_{\rm b}=\beta_{\rm before}\) and \(\beta_{\rm a}=\beta_{\rm after}\) are the respective inverse temperatures of the states \(\rho_{\rm before}\) and \(\rho_{\rm after}\). The kets \(\left|0\right\rangle\) and \(\left|1\right\rangle\) denote the (ground and excited) energy eigenstates of the qubit. The Kraus operators (7) describe a nonprojective energy measurement of the spin-1/2 (it becomes weak in the high-temperature limit).
We next apply outcome-dependent feedback control to transform all the measurement results (\(i=0,1\)) into the same state \(\rho_{\rm after}\). For outcome \(0\), we apply the identity \(I\), since \(\rho_{0}=\rho_{\rm after}\) by construction; we hence trivially have \(H_{0}=H\). For outcome \(1\), we unitarily rearrange the states with the transformation \(H_{1}=-H+(E_{1}-E_{\rm after})I\), which leaves the energy of the state unchanged, \({\rm Tr}[\rho_{1}H_{1}]={\rm Tr}[\rho_{1}H]\). We finally shift the energy level to obtain the Hamiltonian of the state \(\rho_{\rm before}\). In doing so, we extract the feedback work \(\langle W_{\rm fb}\rangle=E_{\rm before}-E_{\rm after}\).
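The properties claimed for this measurement-plus-feedback protocol can be verified numerically. The sketch below uses arbitrary illustrative values of \(\beta_{\rm b}\), \(\beta_{\rm a}\) and of the qubit frequency (chosen such that the post-measurement state is colder, \(\beta_{\rm a}>\beta_{\rm b}\)); it checks that the Kraus operators of Eq. (7) are complete, commute with the thermal pre-measurement state, map outcome \(0\) onto \(\rho_{\rm after}\) and outcome \(1\) onto its spin-flipped counterpart, and it evaluates the extracted feedback work \(\langle W_{\rm fb}\rangle=E_{\rm before}-E_{\rm after}\).

```
import numpy as np

# Illustrative parameters (assumptions): beta_a > beta_b so that all square roots are real.
bb, ba, w = 0.4, 0.9, 1.0    # beta_before, beta_after, qubit frequency

P1 = np.diag([1.0, 0.0])     # |1><1| (excited state)
P0 = np.diag([0.0, 1.0])     # |0><0| (ground state)
H = 0.5 * w * np.diag([1.0, -1.0])    # H = omega * sigma_z / 2

def thermal(beta):
    """Thermal (Gibbs) state of the qubit at inverse temperature beta."""
    p = np.exp(-beta * np.diag(H))
    return np.diag(p / p.sum())

rho_b, rho_a = thermal(bb), thermal(ba)

# Kraus operators of Eq. (7), diagonal in the energy basis.
M0 = (np.sqrt((1 - np.exp((bb + ba) * w)) / (1 - np.exp(2 * ba * w))) * P1
      + np.sqrt((1 - np.exp(-(bb + ba) * w)) / (1 - np.exp(-2 * ba * w))) * P0)
M1 = (np.sqrt((1 - np.exp((bb - ba) * w)) / (1 - np.exp(-2 * ba * w))) * P1
      + np.sqrt((1 - np.exp(-(bb - ba) * w)) / (1 - np.exp(2 * ba * w))) * P0)

# Completeness and reversibility (commutation with the pre-measurement state).
print(np.allclose(M0.conj().T @ M0 + M1.conj().T @ M1, np.eye(2)))    # True
print(np.allclose(M0 @ rho_b - rho_b @ M0, 0.0))                      # True

# Post-measurement states: outcome 0 gives rho_after, outcome 1 its spin flip.
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
p0 = np.trace(M0 @ rho_b @ M0.conj().T)
p1 = np.trace(M1 @ rho_b @ M1.conj().T)
rho0 = M0 @ rho_b @ M0.conj().T / p0
rho1 = M1 @ rho_b @ M1.conj().T / p1
print(np.allclose(rho0, rho_a), np.allclose(rho1, sx @ rho_a @ sx))   # True True

# Extracted feedback work <W_fb> = E_before - E_after (positive here).
W_fb = np.trace(rho_b @ H) - np.trace(rho_a @ H)
print(f"p0 = {p0:.3f}, p1 = {p1:.3f}, <W_fb> = {W_fb:.3f}")
```

Note that the square roots in Eq. (7) are real for \(\beta_{\rm a}>\beta_{\rm b}\), consistent with the entropy-reducing role of the measurement step discussed above.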
The interaction of the two-level system with the hot heat bath may be microscopically described with the help of a usual quantum master equation of the form [58; 59]
\[\dot{\mathcal{P}}_{t} =\gamma_{+}\left(\sigma_{-}[\mathcal{P}_{t},\sigma_{+}]+[\sigma_{ -},\mathcal{P}_{t}]\sigma_{+}\right)\] \[+\gamma_{-}\left(\sigma_{+}[\mathcal{P}_{t},\sigma_{-}]+[\sigma_{ +},\mathcal{P}_{t}]\sigma_{-}\right)+\frac{\partial\mathcal{P}_{t}}{\partial t}, \tag{8}\]
for the polarization \(\mathcal{P}_{t}\) in the Heisenberg picture and the operators \(\sigma_{\pm}=\sigma_{x}\pm i\sigma_{y}\). Assuming that the damping coefficients satisfy the detailed-balance condition \(\gamma_{-}/\gamma_{+}=\exp(\beta_{\rm b}\omega_{t})\), by choosing, for instance, the concrete parametrization \(\gamma_{+}=a\exp(q\beta_{\rm b}\omega_{t})\) and \(\gamma_{-}=a\exp((1+q)\beta_{\rm b}\omega_{t})\) (with \(a>0\) and \(0>q>-1\) constant parameters), Eq. (8) can be rewritten as [58; 59]
\[\langle\dot{\mathcal{P}}_{t}\rangle=-ae^{q\beta_{\rm b}\omega_{t}}\big[2(1+e^{\beta_{\rm b}\omega_{t}})\,\langle\mathcal{P}_{t}\rangle+(e^{\beta_{\rm b}\omega_{t}}-1)\big]. \tag{9}\]
The parameter \(a\) characterizes the magnitude of the damping coefficients and, thus, the rate of change of the average polarization. Solving the above equation for time [58; 59], the duration of the isotherm in the high-temperature limit (\(\beta_{\rm b}\omega_{3,4}\ll 1\)) is found to read [75]
\[\tau_{\rm h}=\frac{\ln\left(\omega_{3}/\omega_{4}\right)}{4a\left(1-\beta_{\rm b }/\beta^{\prime}\right)}, \tag{10}\]
where the effective inverse temperature \(\beta^{\prime}\) of the qubit is determined via \(\langle\mathcal{P}_{t}\rangle=-\tanh(\beta^{\prime}\omega_{t}/2)/2\)[58; 59]. Due to the finite-time relaxation of the system, the temperature \(T^{\prime}\) is not necessarily equal to the bath temperature \(T_{\rm h}\), when thermalization is not complete; we have \(\tau_{\rm h}\rightarrow\infty\) when \(T^{\prime}\to T_{\rm h}\) (or \(a\to 0\)). Noting further that the work
Figure 2: Optimal performance of the quantum information engine. a) Reduced power \(P/P^{*}\), Eq. (3), as a function of the duration of the hot isotherm \(\tau_{\rm h}\), Eq. (4), for different values of the feedback time \(\tau_{\rm fb}\) (both in units of the dissipation time \(\tau_{\rm h}^{\emptyset}\)). Maximum power \(P^{*}\) is reached at the optimal time \(\tau_{\rm h}^{*}\). b) Power versus efficiency curves, for the same parameters, that exhibit the characteristic shape of an endoreversible engine. The general inequality \(\eta_{\rm max}/2<\eta^{*}<\eta_{\rm max}=1\) is verified.
\(\langle W\rangle=T_{\rm h}(\Delta S-\Sigma/\tau_{\rm h})\) produced by the irreversible engine cycle with bath temperature \(T_{\rm h}\) is equal to the work \(T^{\prime}\Delta S\) produced by a reversible cycle with effective bath temperature \(T^{\prime}\)[58], we find the dissipation time,
\[\tau_{\rm h}^{\emptyset}=\frac{\Sigma}{\Delta S}=\frac{\ln(\omega_{3}/\omega_{4})}{4a}. \tag{11}\]
Equation (11) is solely determined by the beginning and end frequencies \(\omega_{3,4}\) of the isotherm and the bath coupling parameter \(a\). We therefore obtain the microscopic expression for the efficiency at maximum power (5):
\[\eta^{*}=1-\frac{\tau_{\rm h}^{\emptyset}}{\tau_{\rm h}^{*}}=1-\frac{1}{1+\sqrt{1+4a\tau_{\rm fb}/\ln(\omega_{3}/\omega_{4})}}. \tag{12}\]
Figure 2a) displays the reduced power \(P/P^{*}\) of the qubit information engine as a function of the duration of the hot isotherm \(\tau_{\rm h}\) for different values of the feedback time \(\tau_{\rm fb}\) (both in units of \(\tau_{\rm h}^{\emptyset}\)). We identify a clear maximum at the optimal time \(\tau_{\rm h}^{*}\) given by Eq. (4). Figure 2b) moreover shows the corresponding power versus efficiency curves that are typical for an endoreversible engine [52]. Such machines are internally reversible and irreversible losses only occur via thermal contact with the external bath. They hence outperform fully irreversible engines and have played for this reason a central role in finite-time thermodynamics [50; 51]. We note that the general inequality \(\eta_{\rm max}/2<\eta^{*}<\eta_{\rm max}=1\) is satisfied.
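Curves of the kind shown in Fig. 2a) follow directly from Eqs. (3), (4), (6) and (11). The short sketch below uses arbitrary illustrative values of the coupling parameter \(a\), the isotherm frequencies \(\omega_{3,4}\) and the feedback time (none of them taken from the figure) to tabulate the reduced power \(P/P^{*}\) and the efficiency at maximum power of Eq. (12).

```
import numpy as np

# Illustrative microscopic parameters (assumptions, not the values behind Fig. 2).
a, w3, w4 = 0.5, 2.0, 1.0    # bath coupling and isotherm frequencies
T_h, dS = 1.0, 1.0           # hot temperature and entropy change (arbitrary units)

tau_diss = np.log(w3 / w4) / (4.0 * a)    # dissipation time, Eq. (11)
for tau_fb in (0.5 * tau_diss, 1.0 * tau_diss, 2.0 * tau_diss):
    tau_star = tau_diss * (1.0 + np.sqrt(1.0 + tau_fb / tau_diss))   # Eq. (4)
    eta_star = 1.0 - tau_diss / tau_star                             # Eq. (12)
    P_star = eta_star * T_h * dS / (tau_star + tau_fb)               # Eq. (6)
    # Reduced power along the hot isotherm, Eq. (3) with Sigma = dS * tau_diss.
    tau_h = np.linspace(1.0, 5.0, 5) * tau_diss
    P = T_h * dS * (1.0 - tau_diss / tau_h) / (tau_h + tau_fb)
    print(f"tau_fb/tau_diss = {tau_fb / tau_diss:.1f}:  eta* = {eta_star:.3f},  "
          f"P/P* at tau_h/tau_diss = 1..5: {np.round(P / P_star, 3)}")
```

The three values of \(\eta^{*}\) obtained in this way all fall between \(1/2\) and \(1\), in agreement with the general inequality noted above.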
_Conclusions._ We have proposed a generalized finite-time Carnot cycle for a quantum information engine. Like the standard Carnot cycle for heat engines, it is thermodynamically reversible for large cycle durations. This cycle thus describes the most efficient quantum information engine with unit information efficiency. We have optimized its power output in the regime of low dissipation and derived a Curzon-Ahlborn-like formula for its efficiency at maximum power. This generic expression only depends on the optimal time of the hot isotherm and a new dissipation time associated with irreversible entropy production. The efficiency at maximum power was further shown to obey the general inequality \(1/2<\eta^{*}<1\), independent of the microscopic details of the engine. Our results provide a theoretical basis for the optimization of information engines. We hence expect them to be important for the study of optimal quantum machines in finite-time information thermodynamics.
###### Acknowledgements.
We acknowledge financial assistance from the German Science Foundation (DFG) (under project FOR 2724) and thank Florian Marquardt for his support.
|
2309.03151 | Stochastic bra-ket interpretation of quantum mechanics | The stochastic nature of quantum mechanics is more naturally reflected in a
bilinear two-process representation of density matrices rather than in squared
wave functions. This proposition comes with a remarkable change of the
entanglement mechanism: entanglement effects do not originate from
superpositions of wave functions, but result from the bilinear structure of
density matrices. Quantum interference appears as a multiplicative phenomenon
rather than an additive superposition mechanism. We propose two general
requirements such that the bilinear representation of density matrices is given
in terms of two uniquely defined, identically distributed, Markovian stochastic
jump processes. These general ideas are illustrated for the
Einstein-Podolsky-Rosen and double-slit experiments. The expression of the
stochastic nature of quantum mechanics in terms of random variables rather than
their probability distributions facilitates an ontological viewpoint and leads
us to a bra-ket interpretation of quantum mechanics. | Hans Christian Öttinger | 2023-09-06T16:48:51Z | http://arxiv.org/abs/2309.03151v2 | # Two-Worlds Interpretation of Quantum Mechanics
###### Abstract
The stochastic nature of quantum mechanics is more naturally reflected in a bilinear two-process representation of density matrices rather than in squared wave functions. This proposition comes with a remarkable change of the entanglement mechanism: entanglement does not originate from superpositions of wave functions, but results from the bilinear structure of density matrices. Quantum interference is not an additive superposition mechanism, but rather a multiplicative phenomenon. The proposed bilinear representation of density matrices is given in terms of two stochastic jump processes. These ideas are illustrated for the Einstein-Podolsky-Rosen and double-slit experiments. The expression of the stochastic nature of quantum mechanics in terms of random variables rather than their probability distributions facilitates an ontological viewpoint and leads us to a two-worlds interpretation of quantum mechanics.
## I Introduction
The stochastic nature of quantum mechanics calls for an appropriate setting for describing randomness. The most natural description of quantum states is given by a density matrix \(\rho\) on a Hilbert space for a system of interest, by which the average of any observable \(A\), which is a self-adjoint operator on the Hilbert space, can be obtained as a trace, \(\langle A\rangle=\mathrm{tr}(\rho A)\). In the Schrodinger picture, the evolution of the density matrix \(\rho_{t}\) describing the time-dependent state of a quantum system is given by the von Neumann equation
\[i\hbar\frac{d\rho_{t}}{dt}=[H,\rho_{t}], \tag{1}\]
where \(\hbar\) is the reduced Planck constant, \(H\) is the Hamiltonian of the system, and the square brackets denote the commutator. Except for its greater flexibility in choosing initial conditions, which are not restricted to pure states, the von Neumann equation is equivalent to the Schrodinger equation.
As an alternative probabilistic tool, Schrodinger's complex wave function \(\psi_{t}\) is less directly associated with the stochastic nature of quantum mechanics than the density matrix \(\rho_{t}\). One needs the additional rule that the squared modulus of the wave function, \(|\psi_{t}|^{2}\), should be interpreted as a probability density, whereas (1) is a linear evolution equation for the primary probabilistic quantity of interest, \(\rho_{t}\).
Quantum master equations for density matrices have the further advantage that they are perfectly suited not only for describing reversible dynamics, but also for dissipative quantum systems [1; 2]. A class of robust quantum master equations for dissipative systems has been obtained by quantizing the geometric structures behind classical nonequilibrium thermodynamics [3; 4]. The same type of thermodynamic master equations has also been found by means of projection-operator methods for coarse-graining reversible systems [5].
The most fundamental equations of nature are generally believed to be reversible, whereas irreversibility is considered to be an emergent phenomenon. However, as we presently do not know the most fundamental equations of nature and, even if we happened to know them at some point, we could never be sure because new observations could always be made, it might be more appropriate to build all physical theories by default on irreversible equations. One should actually consider reversible equations as an idealization that needs to be justified whenever it might be appropriate. Therefore, density matrices governed by quantum master equations appear to provide the most natural setting for describing the evolution of quantum systems.
Meaningful deterministic equations for the evolution of probabilistic tools should characterize stochastic processes. We here propose to consider stochastic processes, which provide the most common indeterministic models [6], as the more fundamental level of description. This is particularly appealing because quantum physics involves ontic randomness, contrary to the epistemic randomness associated with coarse graining, say in classical statistical mechanics.
When one solves the linear Schrodinger equation for the wave function of an interacting system, superposition of states occurs inevitably. In the subsequent section, we introduce a bilinear representation of quantum master equations in terms of two independently evolving stochastic jump processes, for which superposition no longer is an essential feature (Section II). This representation is further motivated by discussing serious limitations of quantum mechanics, which can be overcome in the quantum field theoretic setting of particle physics (Section III). We then discuss two experiments, the standard interpretation of which relies heavily on superposition states, in the proposed two-process representation of quantum mechanics: the Einstein-Podolsky-Rosen Gedankenexperiment (Section IV) and the double-slit experiment (Section V). The final conclusions (Section VI) focus on the interpretation of quantum mechanics, ontological considerations, and the elimination of paradoxes by natural limitations for the applicability of quantum mechanics.
## II Two-process unravelings
The general strategy of so-called unravelings is to introduce stochastic processes in Hilbert space such that a density matrix evolving according to the von Neumann equation (1) can be extracted in terms of suitable averages. This idea of passing from probabilistic tools like density matrices to equivalent stochastic processes was originally motivated by computer simulations for dissipative quantum systems [1], but it is useful also for reversible systems and conceptual clarifications. The passage from probabilities to stochastic objects is particularly relevant to ontology [7].
For dissipative systems, unravelings typically employ continuous evolution to represent reversible dynamics and a combination of stochastic jump processes with continuous correction terms to reproduce dissipative dynamics [1]. In the context of dissipative quantum field theory [8], it has been proposed to treat also reversible interactions by jumps. The usefulness of this idea has been elaborated in detail for simulating purely reversible quantum dynamics in [9].
The treatment of interactions as jumps requires a splitting of the full Hamiltonian into free and interacting contributions, \(H=H^{\text{free}}+H^{\text{int}}\). We further assume that there exists a distinguished basis \(\mathcal{B}\) of orthonormal eigenstates of the free Hamiltonian \(H^{\text{free}}\). The basis vectors are denoted by \(b_{n}\) and the corresponding eigenvalues of \(H^{\text{free}}\) are given by \(E_{n}\). For unravelings in terms of two stochastic processes \(\ket{\phi}_{t}\) and \(\ket{\psi}_{t}\) in Hilbert space, one wishes to reproduce the density matrix \(\rho_{t}\) by the following expectation,
\[\rho_{t}=E\Big{(}\ket{\phi}_{t}\bra{\psi}_{t}\Big{)}, \tag{2}\]
where we have used Dirac's bra-ket notation. In this setting we can impose the strict superselection rule that the states \(\ket{\phi}_{t}\) and \(\ket{\psi}_{t}\) can only be multiples of basis vectors from \(\mathcal{B}\). Reasons for believing in the existence of such a distinguished basis and a strict superselection rule emerge from the origins of quantum mechanics in elementary particle physics, or quantum field theory, and will be discussed in Section III.
The strict superselection rule has the useful side-effect of reducing the enormous number of possibilities for constructing unravelings. It naturally guides us to the following construction of piecewise continuous trajectories with interspersed jumps.
The stochastic processes for the bra-vector \(\ket{\psi}_{t}\) and the ket-vector \(\ket{\phi}_{t}\) in (2) evolve independently, but according to the same rules. Therefore, it is sufficient to describe the construction of \(\ket{\phi}_{t}\). At any time \(t\), the stochastic state vector \(\ket{\phi}_{t}\) is a multiple of some basis vector \(b_{n}\in\mathcal{B}\). Between jumps, the process is governed by the free Schrodinger equation,
\[i\hbar\frac{d}{dt}\ket{\phi}_{t}=H^{\text{free}}\ket{\phi}_{t}, \tag{3}\]
which simply introduces a phase factor \(e^{-iE_{n}\Delta t/\hbar}\) if \(\ket{\phi}_{t}\) is a multiple of \(b_{n}\) and \(\Delta t\) is the time between jumps. Quantum jumps reproduce the effect of the interaction Hamiltonian \(H^{\text{int}}\). They are characterized by a rate parameter \(r\) for the occurrence of jumps and by the transition probabilities \(p_{ln}\) for jumping from a given state \(b_{n}\) to any particular state \(b_{l}\) (with \(\sum_{l}p_{ln}=1\)). If the process before a jump at time \(t\) is in the state \(\ket{\phi}_{t}=c_{t}b_{n}\), then the state \(\ket{\phi^{\prime}}_{t}\) after the jump is given by the stochastic jump rule
\[\ket{\phi}_{t}=c_{t}b_{n}\rightarrow\ket{\phi^{\prime}}_{t} = \frac{1}{p_{ln}}\left(\frac{1}{i\hbar r}\bra{b_{l}}H^{\text{int}} \ket{b_{n}}+\delta_{ln}\right)c_{t}b_{l} \tag{4}\] \[\text{with probability }p_{ln}.\]
The rate parameter \(r\) should be chosen such that \(\hbar r\) is a characteristic interaction energy of the system and the transition probabilities \(p_{ln}\) are given by the proportionality
\[p_{ln}\propto\big{|}\bra{b_{l}}H^{\text{int}}\ket{b_{n}}+i\hbar r\,\delta_{ln }\big{|}^{J}. \tag{5}\]
For \(J=1\) the changes of the prefactors in (4) are as uniform as possible, whereas the choice \(J=2\) can be motivated by Fermi's golden rule.
We still need to specify the initial conditions for the stochastic processes \(\ket{\phi}_{t}\) and \(\ket{\psi}_{t}\). The proper initial conditions for the Einstein-Podolsky-Rosen and double-slit experiments are discussed in Sections IV and V. In both cases, also the initial conditions for the two processes of the unraveling are independent random variables. Methods for constructing stochastic unravelings of equilibrium initial states at given temperatures, with special emphasis on the ground state at zero temperature, can be found in [10].
The continuous free evolution shows the importance of allowing for scalar multiplication. The jumps eliminate the need for any addition of states. Superpositions of base vectors do not occur in the two-process unraveling. The discussion of concrete experiments clarifies how entanglement nevertheless arises in the two-process unraveling.
In summary, we have constructed two stochastic processes \(\ket{\phi}_{t}\) and \(\ket{\psi}_{t}\), each satisfying a strict superselection rule. The density matrix obtained as a correlation function of these processes according to (2) satisfies the von Neumann equation (1), that is, the very same equation as obtained from the Schrodinger equation for density matrices associated with pure states. The stochastic representation of the density matrix is not restricted to pure states and, as it is based on two stochastic processes instead of one deterministic wave function, it offers an alternative interpretation of quantum mechanics based on a new implementation of entanglement.
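A minimal Monte Carlo sketch of such a two-process unraveling is given below for a two-level system; the free and interaction Hamiltonians, the jump rate \(r\), the choice \(J=2\) and the pure initial state are illustrative assumptions. Each process is a phase-evolving multiple of a basis vector, interrupted by Poisson-distributed jumps according to Eqs. (4) and (5), and the bra-ket average (2) is compared with the exact solution of the von Neumann equation (1).

```
import numpy as np

rng = np.random.default_rng(1)
hbar = 1.0

# Illustrative two-level example (assumptions): distinguished basis b_0, b_1.
E = np.array([-0.5, 0.5])                           # eigenvalues of H_free
H_free = np.diag(E)
H_int = 0.3 * np.array([[0.0, 1.0], [1.0, 0.0]])    # off-diagonal coupling
H = H_free + H_int
r, J, t_final = 1.0, 2, 1.5                         # jump rate, exponent of Eq. (5), time

def trajectory(n0, c0):
    """One stochastic process: returns (basis index, complex prefactor) at t_final."""
    t, n, c = 0.0, n0, c0
    while True:
        dt = rng.exponential(1.0 / r)               # waiting time to the next jump
        if t + dt > t_final:                        # free evolution until t_final, Eq. (3)
            return n, c * np.exp(-1j * E[n] * (t_final - t) / hbar)
        c *= np.exp(-1j * E[n] * dt / hbar)         # free evolution until the jump
        t += dt
        amp = H_int[:, n] / (1j * hbar * r)         # jump amplitudes of Eq. (4)
        amp[n] += 1.0
        p = np.abs(amp) ** J                        # transition probabilities, Eq. (5)
        p /= p.sum()
        l = rng.choice(len(E), p=p)
        c *= amp[l] / p[l]
        n = l

# Bra and ket processes evolve independently; both start in the basis state b_0.
samples = 200000
rho = np.zeros((2, 2), dtype=complex)
for _ in range(samples):
    n_ket, c_ket = trajectory(0, 1.0 + 0j)
    n_bra, c_bra = trajectory(0, 1.0 + 0j)
    ket = np.zeros(2, dtype=complex); ket[n_ket] = c_ket
    bra = np.zeros(2, dtype=complex); bra[n_bra] = c_bra
    rho += np.outer(ket, bra.conj())
rho /= samples                                      # bra-ket average, Eq. (2)

# Exact solution of the von Neumann equation for the pure initial state b_0.
w, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * w * t_final / hbar)) @ V.conj().T
rho_exact = U @ np.diag([1.0, 0.0]) @ U.conj().T
print(np.round(rho, 2))
print(np.round(rho_exact, 2))
```

The sampled bra-ket average agrees with the exact density matrix within Monte Carlo error; since the transition probabilities cancel in the expectation value of the jump rule (4), the choice of \(J\) only affects the statistical variance of the estimate, not its mean.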
## III Existence of distinguished bases
In conventional quantum mechanics one is free to do basis changes, which typically involve superpositions. Therefore, we need to justify our assumption of the existence of a distinguished basis \(\mathcal{B}\) in the superposition-free formulation of quantum mechanics based on two-process unravelings. To this end one needs to recognize the limitations and the approximate nature of quantum mechanics for systems consisting of a fixed number of particles and to look at its deeper origin in quantum field theory.1
Footnote 1: We here use the term _quantum mechanics_ for systems with a fixed, finite number of degrees of freedom, whereas _quantum field theory_ deals with a variable number of degrees of freedom, including the limit of infinitely many degrees of freedom. _Quantum theory_ includes both mechanics and field theory.
Dirac's marvelous quantization recipe formally suggests that we can quantize any classical Hamiltonian system, preferably in canonical coordinates, by replacing classical Poisson brackets by quantum commutators. However, the quantization of say planetary motion can hardly be particularly meaningful. A rigid restriction of quantum theory to elementary particles, which would be the other extreme, is unpractical because we can never be sure that we know the truly elementary particles. For example, nobody hesitates to apply quantum mechanics to protons although they are not elementary particles.
A revealing textbook example for the application of quantum mechanics is the hydrogen atom, which leads to the famous prediction for the line spectrum of atomic hydrogen. Note that the Hamiltonian for this problem contains the Coulomb potential of classical electrostatics. In that sense, quantum mechanics possesses a semi-classical character, which is usually not pointed out. Particularly questionable is the occurrence of electron-proton interactions at a distance.
In a more rigorous treatment of the hydrogen atom, the Coulomb interaction should result from an exchange of soft photons with momenta of the order of the electron momentum and wave lengths of the order of the atomic size between the electron and the proton, or between the electron and the quarks in a proton. We then are in the domain of quantum field theory, allowing for the creation and annihilation of photons. Quantum mechanics for systems with a fixed number of particles can only be an approximate theory that arises in the low-energy limit of quantum field theory, when particle creation and annihilation are suppressed (which is hardly possible for massless particles). Quantum mechanics should emerge from the quantum field theory of elementary particles and their interactions after making suitable approximations. This remark implies that special relativity can be properly taken into account at the starting point for justifying quantum mechanics. Of course, it may turn out that there are more fundamental theories of elementary particles from which one could obtain both quantum mechanics and quantum field theory by suitable approximations.
In quantum field theory, Fock space comes with a natural basis \(\mathcal{F}\)[8; 11; 12]. The Fock basis vectors describe states with well-defined free particle content, where free particles are in momentum eigenstates (we assume regularization by a momentum lattice to avoid the problems associated with a continuous basis; physically this means that space is finite). In quantum field theory, the strict superselection rule implies that a system can never be in a superposition state of different particle contents, where quantum particles can be distinguished by a number of intrinsic properties. In making approximations for passing from quantum field theory to quantum mechanics one should make sure that a distinguished basis \(\mathcal{B}\) emerges from the natural Fock basis \(\mathcal{F}\) of quantum field theory. For example, the treatment of a proton as an elementary particle on the atomic length scale (or low energies) in quantum mechanics relies on color confinement. Having an eye on what makes sense from the perspective of particle physics restricts the situations in which quantum mechanics can be applied in a deep and meaningful way.
Different options for choosing a distinguished basis may arise even in quantum field theory. For example, in the Schwinger model for electrodynamics in \(1+1\) dimensions [13; 14; 15], one can switch from the standard Fock space to an alternative Fock space based on photons and bound electron-positron pairs (see Section 3.3.7 of [8]), which is similar to regarding protons as elementary particles and actually illustrates the emergence of a new distinguished basis via confinement.
When quantum field theory is developed on the Fock space of momentum eigenstates, one cannot know the position of a free quantum particle at a given time. However, whenever collisions between typically three or four particles occur, we know that all the colliding particles are at the same position. This remark explains the characteristic particle tracks observed in collider experiments when a high-energy collision is followed by many low-energy collisions in a detector ruled by a certain correlation structure of collision events [7].
In the next two sections, we discuss the Einstein-Podolsky-Rosen and double-slit experiments in terms of two-process unravelings in order to illustrate the basic ideas. For these discussions we use basis vectors inspired by the Fock space of particle physics with the corresponding superselection rule. In the Einstein-Podolsky-Rosen experiment, the allowed states consist of pairs of photons with well-defined wave lengths and helicities; in the double-slit experiment, the allowed states consist of electrons with well-defined momenta.
## IV Einstein-Podolsky-Rosen Gedankenexperiment
The Einstein-Podolsky-Rosen (EPR) Gedankenexperiment [16] was designed to reveal the occurrence of
actions-at-a-distance as a sign of the incompleteness of quantum mechanics. We consider the EPR experiment in the version for photons, which is appealing for both theoretical arguments and experimental realization [17; 18; 19; 20; 21; 22]. We here prefer photons over electrons because their circular polarization states are characterized with respect to the unambiguous direction of their motion, whereas the electron spin states are usually characterized with respect to a preferred direction in space (because their direction of motion is not Lorentz invariant and is actually undefined in the rest frame). The arbitrariness of the chosen direction of space would have to be addressed with undesirable changes of bases.
Pairs of photons with different wave lengths (\(\lambda_{1}=551.3\,\mathrm{nm}\) and \(\lambda_{2}=422.7\,\mathrm{nm}\)) moving with the same circular polarization in opposite directions can be created in the decay of properly excited calcium atoms [17; 18]. Whereas the actual decay occurs through an intermediate state within some \(5\,\mathrm{ns}\), we here use the idealization of a single collision event in which a calcium excitation is annihilated and two photons are created.
In the discussion of the EPR experiment, one usually focuses entirely on polarizations states and neglects spatial information. Spatial information is associated with the annihilation of a calcium excitation into two photons and with the detection of the photons by collisions in a photomultiplier. In between, we do not have any spatial information. However, one may assume a certain correlation structure for collision events, for example, implying that the two photons from a single excitation hit two equally distant detectors at equal times (in the laboratory). Filters can be introduced such that photons with wave length \(\lambda_{1}\) arrive only in one detector, say the left one, whereas photons with wave length \(\lambda_{2}\) arrive only in the other detector, say the right one.
The following orthonormal basis vectors for the two-photon states are natural: \(b_{1}=\left|5513,1\right\rangle\,\left|4227,1\right\rangle\) and \(b_{2}=\left|5513,-1\right\rangle\,\left|4227,-1\right\rangle\), where each photon is characterized by its wave length in Angstrom and by its helicity. The eigenvalues for these eigenstates of the free Hamiltonian are independent of helicity, \(E_{1}=E_{2}\). When a calcium excitation decays, the two processes of the stochastic unraveling are independently initialized with probability \(1/2\) in the states \(\sqrt{2}\,b_{1}\) or \(-\sqrt{2}\,b_{2}\). According to the basic idea of unravelings in (2), this implies the initial density matrix
\[\rho_{\mathrm{EPR}}=\frac{1}{2}\,\begin{pmatrix}1&-1\\ -1&1\end{pmatrix}, \tag{6}\]
resulting from the independence of the bra- and ket-states. Note that this density matrix is of rank \(1\), that is, it corresponds to a pure state in standard quantum mechanics. The pure state associated with the density matrix (6) is the maximally entangled Bell singlet state \((b_{1}-b_{2})/\sqrt{2}\), which is the usual starting point for the discussion of the EPR experiment. In the unraveling, this initial condition is produced without any superposition of the natural basis states. In the language of Schrodinger's famous dramatization, both the bra-cat and the ket-cat of the two-worlds interpretation are either dead or alive, but they may be in different states.
The representation of the density matrix (6) in the two-process unraveling appears at the center of the schematic illustration of the EPR experiment in Figure 1. The photons moving in opposite directions with wave lengths \(\lambda_{1}\) and \(\lambda_{2}\) are indicated by left and right arrows, respectively. The free motion leads to complex phase factors that do not depend on the helicity states, so that the bra-ket average for the density matrix of the helicity states remains unchanged during the free evolution. When the photons hit the detectors at equal distances from the site of their creation, they still have their original, equal helicities: \(+1\) and \(-1\) with equal probabilities, independently in the two worlds.
Each of the detectors consists of a beam-splitting linear polarizer and two photomultipliers counting the photons in the two linear polarization states. These polarizers are cubes made of two glass prisms with suitable dielectric thin films on the sides stuck together. The polarizers can be rotated by the angles \(\theta_{1}\), \(\theta_{2}\) around the optical axis with respect to a reference direction. As the photons are moving in opposite directions, also the rotations are performed in opposite directions, so that \(\theta_{1}+\theta_{2}\) actually is the angle between the two polarizers. We are interested in the probabilities for finding the photons in the two optical units in parallel (\(\parallel\)) and perpendicular (\(\perp\)) polarization states. In the natural, intrinsic basis of circular polarization states, these probabilities are the averages of the following observables, which are obtained by transformation from linear to circular polarization states,
\[A_{\parallel\,\parallel}(\theta_{1},\theta_{2})=A_{\perp\,\perp}(\theta_{1}, \theta_{2})=\frac{1}{4}\,\begin{pmatrix}1&e^{2i(\theta_{1}+\theta_{2})}\\ e^{-2i(\theta_{1}+\theta_{2})}&1\end{pmatrix} \tag{7}\]
and
\[A_{\parallel\,\perp}(\theta_{1},\theta_{2})=A_{\perp\,\parallel}(\theta_{1},\theta_{2})=\frac{1}{4}\,\begin{pmatrix}1&-e^{2i(\theta_{1}+\theta_{2})}\\ -e^{-2i(\theta_{1}+\theta_{2})}&1\end{pmatrix}. \tag{8}\]
These averages performed with the density matrix (6) are used to detect a violation of Bell's inequalities [23; 24],
Figure 1: Schematic illustration of the EPR experiment in the two-world interpretation (see text for details).
which are nowadays no longer interpreted as actions-at-a-distance but rather as nonlocal correlations in quantum systems [18; 19]. Such correlations arise from the multiplicative interplay of the two independent stochastic processes of the two-world interpretation. As the density matrices obtained from the maximally entangled Bell singlet state of standard quantum mechanics and from the two-process unraveling coincide, so do the respective predictions for the correlations.
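To make these correlations explicit, the following sketch evaluates the averages of the observables (7) and (8) in the state (6) and combines them into the standard CHSH quantity; the particular polarizer angles used at the end are an illustrative assumption (they are not specified in the text), chosen so that the classical bound of 2 is exceeded:

```python
import numpy as np

rho = 0.5 * np.array([[1, -1], [-1, 1]], dtype=complex)   # Eq. (6)

def A(theta1, theta2, sign):
    # Eq. (7) for sign=+1 (parallel/parallel), Eq. (8) for sign=-1 (parallel/perpendicular)
    ph = np.exp(2j * (theta1 + theta2))
    return 0.25 * np.array([[1, sign * ph], [sign * ph.conjugate(), 1]])

def E(theta1, theta2):
    # correlation = p(par,par) + p(perp,perp) - p(par,perp) - p(perp,par)
    p_same = 2 * np.trace(rho @ A(theta1, theta2, +1)).real
    p_diff = 2 * np.trace(rho @ A(theta1, theta2, -1)).real
    return p_same - p_diff    # equals -cos(2*(theta1 + theta2))

# Illustrative CHSH angles (an assumption made here, not taken from the text):
a, a2, b, b2 = 0.0, np.pi / 4, -np.pi / 8, -3 * np.pi / 8
S = E(a, b) + E(a2, b) + E(a2, b2) - E(a, b2)
print(abs(S))   # about 2.83 = 2*sqrt(2) > 2, i.e. the Bell-CHSH bound is violated
```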
It is important to note that the definition of the observables (7) and (8) implies the following strong _postulate_: When photons pass through macroscopic polarizers, all polarization and phase shift effects coincide with those for classical electromagnetic waves. Also filters must act on individual photons in the same way as on electromagnetic waves.
## V Double-slit experiment
The double-slit experiment provides most convincing evidence in favor of wave-like properties of quantum particles. As for waves of water or electromagnetic waves, one likes to add amplitudes and to square the result to obtain intensities exhibiting interference effects. We discuss some details in the spirit of Sections 1-3 to 1-5 in Volume III of the Feynman Lectures on Physics [25]. A similar discussion, but from a more philosophical perspective, can be found in Section 7.1 of [22]. An idealized experimental setup for the double-slit experiment with electrons is sketched in Figure 2.
The electron gun in Figure 2 could simply be a tungsten wire emitting electrons when heated by an electric current, surrounded by a metal box with a hole in it. A voltage between the wire and the box accelerates the electrons, where all electrons leaving the box through the hole possess nearly the same kinetic energy \(E\). The wall with two parallel slits of equal width typically is a thin metal plate. Attached to the absorbing wall, there is a movable electron detector, say a Geiger counter, to measure the distribution of electron hits as a function of position on the absorbing wall.
The electrons are found to hit the absorbing wall as individual events occurring at random positions \(x\). If only Slit I (or II) is open, one finds the probability density \(P_{\rm I}\) (or \(P_{\rm II}\)) for the spatial distribution of events on the absorber, which may also be regarded as intensities (see Figure 2). If both slits are open, one observes the startling interference pattern \(P_{\rm I,\,II}\) for the distribution of discrete electron hits on the screen, which is taken as evidence for a "particle-wave duality." Electrons do not simply go through either one or the other slit, so that a wave-like pattern with constructive and destructive interference arises. Note that the single-slit probability densities \(P_{\rm I}\) and \(P_{\rm II}\) are also affected by wave-like diffraction.
### Standard interpretation
Following [25], the spin of the electron is neglected in our discussion of the double-slit experiment. The probability density \(P_{\rm I,\,II}\) obtained with both slits open is clearly not the sum of the probability densities \(P_{\rm I}\) and \(P_{\rm II}\) for each slit alone. As for waves of water or electromagnetic waves, interference effects can be described by adding amplitudes rather than adding probability densities or intensities,
\[P_{\rm I,\,II}=|\psi_{\rm I}+\psi_{\rm II}|^{2}\quad\neq\quad|\psi_{\rm I}|^{ 2}+|\psi_{\rm II}|^{2}=P_{\rm I}+P_{\rm II}, \tag{9}\]
where the complex amplitudes \(\psi_{\rm I}\) and \(\psi_{\rm II}\) are the wave functions for the single-slit experiments, obtained by solving the corresponding Schrodinger equations. The probability densities sketched in Figure 2 actually correspond to the intensity profiles obtained from the Fraunhofer diffraction equation of classical wave optics for large distances from the slits. Such a calculation is based on Huygens' idea that every point on a wavefront acts as a source of a spherical wave and that the sum of all these spherical waves determines the further propagation of the wavefront.
Experimental data for cold neutrons can be found in [26]. Less detailed experimental data for electrons are available in a much older paper [27] (in German), or in its partial translation into English [28].
Explanations of the type sketched above are generally presented and readily accepted as a sound and convincing story of quantum interference by both physicists and philosophers. However, such explanations may be considered as rather symbolic, raising a number of questions. For example, note that \(\psi_{\rm I}\) and \(\psi_{\rm II}\) in (9) are time-independent wave functions evaluated on the absorbing wall, that is, on a boundary of the domain. On what domain should one actually solve the time-independent Schrodinger equation? Between the walls or also around the electron gun? What kind of boundary conditions are required to solve the second-order time-independent
Figure 2: Double-slit experiment with electrons explained in terms of wave functions (symbolically illustrated by the interference of waves).
Schrodinger equation? Is the wave function on the absorbing wall needed as a boundary condition or is it a predictive result of the full solution? Is the sum \(\psi_{\mathrm{I}}+\psi_{\mathrm{II}}\) consistent with the boundary conditions for the double-slit experiment? Furthermore note that the phase shifts, which are at the heart of interference, are in direct correspondence to the flight times for the electrons. How can then a time-independent solution provide the whole story? Why shouldn't small differences in arrival times reveal through which slit an electron has passed? Does a discrete electron hit on the absorbing wall occur at one instant in time or is it smeared in time due to contributions from the two slits? A detailed theoretical discussion that goes far beyond the usual textbook arguments can be found in [29].
Maybe the above questions could be addressed most convincingly within Bohmian mechanics [30; 31; 32; 33; 34]. Whereas all these questions are nonchalantly ignored for the standard interpretation based on wave functions, this is no longer possible for the two-process unraveling.
### Two-process interpretation
Each of the independent jump processes of the two-process unraveling consists of the following five steps: creation of an electron at the exit hole of the electron gun, free evolution of the electron to one of the two slits, effective collision of the electron in the corresponding narrow slit, free evolution of the electron to the absorbing wall, and absorption of the electron by the wall. As in the standard interpretation, we neglect the electron spin, and we assume that all electrons leave the gun with the same (nonrelativistic) energy \(E\). The magnitude of the electron momentum \(\mathbf{k}\) is then restricted by \(\mathbf{k}^{2}=2mE\), where \(m\) is the electron mass. We further assume that there is a large but finite number of possible orientations of \(\mathbf{k}\) and that both the set of possible momentum states and their frequencies of occurrence respect the up-down symmetry of the experiment illustrated in Figure 3. Finally, all relevant momenta possess a positive component to the right.
To elaborate the stochastic jump processes in more detail, we introduce the sets \(\mathcal{K}_{\mathrm{I}},\mathcal{K}_{\mathrm{II}}\) which consist of the momenta pointing from the exit of the electron gun to slit I or II, respectively, and the set \(\mathcal{K}\) containing all momenta that can occur in an effective elastic collision in a slit. Simultaneously but independently for the two processes of the unraveling, an electron is initiated with a momentum \(\mathbf{k}\in\mathcal{K}_{\mathrm{I}}\cup\mathcal{K}_{\mathrm{II}}\) so that it can pass through one of the slits. After the time determined by the distance of the electron gun from the wall with slits and the component of the momentum \(\mathbf{k}\) normal to the wall, the electron passes a slit. As a result of a high interaction rate with macroscopic matter at the narrow slit, the electron jumps into a state \(\mathbf{k}^{\prime}\in\mathcal{K}\), where we make the simplifying assumption of a single effective elastic collision, that is \(|\mathbf{k}^{\prime}|=|\mathbf{k}|\). The flight time to the absorbing wall is determined by the distance between the parallel walls and by the normal component of \(\mathbf{k}^{\prime}\). When the electron hits the absorbing wall, it is assumed to be stopped by an inelastic collision and its momentum jumps to \(\mathbf{0}\).
Equipped with the construction of the independent jump processes of the stochastic unraveling, how can we find the probability for an electron to hit the detector? As the jump process involves only momentum states, we need to consider also the geometry of the experimental setup. For a given position of the detector, we can construct the set \(\mathcal{D}\) of all pairs of momenta \((\mathbf{k},\mathbf{k}^{\prime})\) for which the electron ends up in the detector. The number of possible momentum states should be so large that, even for a detector with diameter small compared to the width of the slits, the set \(\mathcal{D}\) contains many pairs. Figure 3 shows four such \((\mathbf{k},\mathbf{k}^{\prime})\) pairs and the corresponding piece-wise linear "trajectories" that end in the detector. We put the term "trajectories" in quotation marks because the position of an electron is known only when collisions occur, that is, in the electron gun, at a slit, and on the absorbing wall, thus defining a \((\mathbf{k},\mathbf{k}^{\prime})\) pair; in between, the free evolution of the electrons comes with a certain correlation structure between collision events implying flight times and phase factors expressed by piece-wise linearity, just as in the EPR experiment. Two "trajectories" per slit is the minimum number required to account for single-slit diffraction. Any finite number of "trajectories" per slit can be treated in the same way and even the limit to infinitely many "trajectories" may be considered. If, and only if, the "trajectories" of both jump processes are contained in \(\mathcal{D}\), there can be a nonzero contribution to the probability for the electron to end up in the detector. Note that the absorption of the electron for the two processes does not occur at exactly the same time; a non-vanishing contribution to the density matrix arises only after both processes have reached the absorbing wall.
The relevance of \((\mathbf{k},\mathbf{k}^{\prime})\) pairs shows that the calculation of probability densities depends on the sequences of events occurring in the jump processes. Such an analysis
Figure 3: Double-slit experiment with electrons explained by two-process interpretation.
goes beyond the evolution of the density matrix according to the von Neumann equation (1) and can only be performed in the context of the more detailed stochastic unravelings of the density matrix. An appealing framework for analyzing such histories of events is provided in [35].
We are now in a position to evaluate the probability for an electron to hit the detector. For the pair \(j=(\mathbf{k},\mathbf{k}^{\prime})\in\mathcal{D}\), the probability \(p_{j}\) is given by the product of the probabilities for generating and scattering the electron in the corresponding momentum states \(\mathbf{k}\) and \(\mathbf{k}^{\prime}\), respectively. The complex phase factor \(\phi_{j}\) is determined by the length of the corresponding piece-wise linear "trajectory" or, equivalently, by the flight time. The probability for a pair of independent "trajectories" \(j,l\) is given by the product \(p_{j}\,p_{l}\), and the overall probability for electron detection suggested by the density matrix (2) is given by
\[\sum_{j,l\in\mathcal{D}}p_{j}p_{l}\,\phi_{j}\phi_{l}^{*}=\bigg{|}\sum_{j\in \mathcal{D}}p_{j}\phi_{j}\,\bigg{|}^{2}. \tag{10}\]
What looks like the square of a sum actually results from the product of all independent combinations of possibilities. The independent processes of the unraveling imply the same result as the corresponding superposition of properly weighted states [in (9), equal weights are assumed]. Without loss of generality, the phase factors at the hole of the electron gun have been taken as unity [random phase factors would average out in (10)].
It is remarkable that the numerical calculation performed in [26] for comparison to experimental diffraction data for neutrons is performed in a fully analogous way. It is also based on the phase factors resulting from the sum of the lengths of the linear paths from the electron source to a point in one of the slits and from there to the detector, and their final result corresponds to the right-hand side of (10), thus confirming the assumed correlation structure between collision events. The left-hand side reveals that we here do not obtain this result by superposition, but from the bilinear representation of the density matrix in terms of stochastically independent bra- and ket-vectors.
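The following sketch illustrates a calculation of this type for the right-hand side of (10): each "trajectory" runs from the source through a sampled point in one of the slits to a detector position, its phase factor is fixed by the corresponding path length, and the weights \(p_{j}\) are taken uniform. All numerical values for the geometry and the wavelength are illustrative assumptions; they are not taken from the text or from the experiments of [26; 27].

```python
import numpy as np

lam = 5e-9            # de Broglie wavelength in metres (illustrative assumption)
D1, D2 = 0.5, 1.0     # source-to-slits and slits-to-screen distances (assumed)
d, w = 2e-6, 0.5e-6   # slit separation and slit width (assumed)
n_pts = 200           # sampled "trajectories" per slit, uniform weights p_j

slit_I = np.linspace(d/2 - w/2, d/2 + w/2, n_pts)
slit_II = np.linspace(-d/2 - w/2, -d/2 + w/2, n_pts)
screen = np.linspace(-3e-3, 3e-3, 1001)       # detector positions x

def slit_amplitude(slit_y, x):
    # sum over trajectories through one slit of p_j * phi_j, as in Eq. (10)
    L = np.sqrt(D1**2 + slit_y**2) + np.sqrt(D2**2 + (x - slit_y)**2)
    return np.exp(2j * np.pi * L / lam).sum() / n_pts

amp_I = np.array([slit_amplitude(slit_I, x) for x in screen])
amp_II = np.array([slit_amplitude(slit_II, x) for x in screen])

P_I = np.abs(amp_I)**2               # only Slit I open (single-slit diffraction)
P_II = np.abs(amp_II)**2             # only Slit II open
P_I_II = np.abs(amp_I + amp_II)**2   # both slits open: interference fringes
print(P_I_II.max(), (P_I + P_II).max())
```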
## VI Conclusions
The proposed two-worlds interpretation of quantum mechanics leads to the same equation for density matrices as standard quantum mechanics, which is the von Neumann equation (1). In standard quantum mechanics, this equation for density matrices is inferred from the Schrodinger equation for wave functions. In the two-worlds interpretation, the von Neumann equation is obtained from two stochastic jump processes in Hilbert space. Although the same equation arises in both approaches, they nevertheless are significantly different. The two-worlds interpretation introduces a new entanglement mechanism and heavily restricts the quantum systems to which the von Neumann equation can be applied, thus reducing the risk of paradoxes.
We have proposed to use unravelings of density matrices in terms of stochastic jump processes to account for the stochastic nature of quantum mechanics in the most direct way. Continuous free Schrodinger evolution is interrupted by stochastic quantum jumps that reproduce the proper interaction effects. The bilinear representation (2) of density matrices in terms of two independently evolving stochastic processes is the most general, natural and versatile option. The enormous freedom in constructing two-process unravelings is restricted by a strong superselection rule, which essentially eliminates superposition states from quantum mechanics. This strict superselection rule, which can be justified by obtaining quantum mechanics as a limit of quantum field theory, requires that the content of fundamental particles in a quantum system is well-defined; superpositions of particle states with different characteristic properties are forbidden. In the proposed formulation of quantum mechanics, entanglements no longer arise from superposition, but rather from the dyadic pairing and averaging of the two stochastic state vectors of our unravelings in the fundamental representation (2) of the density matrix as a correlation function.
Superposition states are at the origin of many paradoxes in quantum mechanics. Therefore, a superposition-free implementation of quantum entanglement offers the possibility to eliminate paradoxes. This is a consequence of restricting the applicability of quantum mechanics by the strong superselection rule that results from the origin of quantum mechanics in the quantum field theory of elementary particle physics.
For example, in Schrodinger's cat version of the EPR experiment, the superposition state involves a complex macroscopic system. In the tradition of "Wigner's friend" [36], Frauchiger and Renner [37] assume the applicability of quantum mechanics to even more complex superposition states that include agents who are themselves using quantum theory, and they then reveal inconsistencies in conventional quantum mechanics. However, for these situations the applicability of quantum mechanics cannot be justified by making a connection to quantum field theory and verifying the strong superselection rule.
The stochastic jumps of our unravelings reproduce the interaction of standard quantum mechanics. Being equivalent to the interaction part of the Schrodinger equation, they reflect the intrinsic stochastic nature of quantum mechanics, not an additional feature. This situation is fundamentally different from the GRW approach [38], in which spontaneous collapses of wave functions are an additional stochastic jump feature on top of the full Schrodinger dynamics [39]. In the GRW approach, superpositions are suppressed only in the passage from microscopic to macroscopic systems [40].
Despite the linguistic similarity between the two-worlds and many-worlds [41; 42] interpretations of quantum mechanics, they are very different views. Unlike in the many-worlds interpretation with a branching structure into many separate worlds, a combination or overlay of two worlds, or maybe better of two semi-worlds, determines the behavior of a single full world in the present interpretation. In the many-worlds view, "world" is often replaced by "universe" whereas, in the two-worlds interpretation, we deal with two stochastic processes in the Hilbert space of a quantum "system." These two processes are rigorously ruled by classical probability theory, where probability is an ontic feature of quantum theory. The Markov property of its stochastic processes suggests considering the two-worlds interpretation as a hidden-variable theory. Note that the nature of this hidden-variable theory is very different from deterministic Bohmian mechanics [30; 31; 32; 33; 34].
Although it is beyond the scope of this paper to develop a general theory of measurement, it should be pointed out that a theory of measurable multitime correlation functions has been developed in [1]. For the semilinear quantum master equations of thermodynamic origin, which are linear for scalar multiplication but, in general, nonlinear for addition, this theory of measurable correlations has been generalized in [8]. One can further ask the question whether density matrices can be measured. This question can, for example, be addressed by quantum-state tomography [43; 44]. Concerning the measurement problem formulated in a concise way by Maudlin [45] as the mutual inconsistency of three claims associated with the standard formulation of quantum mechanics in terms of wave functions, none of these three claims is made in the two-worlds interpretation.
"Density-matrix realism" as an appealing alternative to "wave-function realism" has been discussed in great detail in [46]. We here go a step further and consider the stochastic unraveling of a density matrix as the fundamental representation of a quantum system. Unravelings provide an opportunity of a new interpretation of quantum mechanics, and this new form of realism may be called "unravelism." Complete knowledge of a quantum state requires two stochastic state vectors. These two "semi-worlds" are what there ultimately exists, and they play together to characterize the quantum state of the "full world," including entanglements. In the context of dissipative quantum field theory, this opportunity has already been explored in [7; 8]. The magic number "two" in the two-world interpretation corresponds to the number "2" in the expression \(|\psi|^{2}\), which relates the wave function to probability. The role of Born's rule is taken over by the bilinear expression (2) for the probabilistic tool density matrix in terms of two more fundamental stochastic objects.
It might be desirable to couple the semi-worlds of the two-worlds interpretation (bringing two views of the world together is a challenging task, not only for drunkards). Such a coupling indeed arises as soon as we pass from reversible to irreversible quantum systems [1; 8]. As we argued in the introduction, irreversible dynamics should be considered as the most natural choice, whereas reversible dynamics can be justified only in exceptional situations. For thermodynamic equilibrium states, it has been observed that the two processes of the unraveling may not be independent [9; 10]. This should not be surprising because thermodynamic equilibrium states are reached by some kind of irreversible process, where the precise form or strength of the process is irrelevant.
A quantum field theory that incorporates dissipative smearing at very short length scales has been elaborated in [7; 8]. It has been argued that the length scale at which dissipative smearing sets a limit to physical resolvability could be identified with the Planck length. As the Planck length involves Newton's gravitational constant, it may be concluded that irreversibility is associated with gravity and, therefore, that the coupling between the two worlds of an unraveling can also be a gravitational effect. The foundations for including gravity into dissipative quantum field theory have been laid in [47; 48].
###### Acknowledgements.
I would like to thank Michael Esfeld, Carlo Rovelli, Andrea Oldofredi, Simon Friederich, Alexei Bazavov, Romain Chessex and Alexander Weyman for helpful comments on a first draft of this paper. I gratefully acknowledge inspiring discussions with Jurg Frohlich and Amine Rusi El Hassani on the operator-algebraic ETH approach (Events-Trees-Histories) to quantum theory.
|
2309.13342 | Evolve the Model Universe of a System Universe | Uncertain, unpredictable, real time, and lifelong evolution causes
operational failures in intelligent software systems, leading to significant
damages, safety and security hazards, and tragedies. To fully unleash the
potential of such systems and facilitate their wider adoption, ensuring the
trustworthiness of their decision making under uncertainty is the prime
challenge. To overcome this challenge, an intelligent software system and its
operating environment should be continuously monitored, tested, and refined
during its lifetime operation. Existing technologies, such as digital twins,
can enable continuous synchronisation with such systems to reflect their most
updated states. Such representations are often in the form of prior knowledge
based and machine learning models, together called model universe. In this
paper, we present our vision of combining techniques from software engineering,
evolutionary computation, and machine learning to support the model universe
evolution. | Tao Yue, Shaukat Ali | 2023-09-23T11:30:26Z | http://arxiv.org/abs/2309.13342v1 | # Evolve the Model Universe of a System Universe
###### Abstract
Uncertain, unpredictable, real-time, and lifelong evolution causes operational failures in intelligent software systems, leading to significant damages, safety and security hazards, and tragedies. To fully unleash such systems' potential and facilitate their wider adoption, ensuring the trustworthiness of their decision-making under uncertainty is the prime challenge. To overcome this challenge, an intelligent software system and its operating environment should be continuously monitored, tested, and refined during its lifetime operation. Existing technologies, such as digital twins, can enable continuous synchronisation with such systems to reflect their most up-to-date states. Such representations are often in the form of prior-knowledge-based and machine-learning models, together called 'model universe'. In this paper, we present our vision of combining techniques from software engineering, evolutionary computation, and machine learning to support the model universe evolution.
Index Terms--Model Universe, System Universe, Coevolution, Epigenetics, Machine Learning
## I Motivation
Intelligent software systems are transforming business, life, and the global economy. Machine learning (ML) techniques are often employed in such systems to enable nontrivial autonomous decision-making under uncertainties, thereby being intelligent [1]. Such systems are prone to unforeseen situations in operation due to several factors, including 1) various degrees of uncertainty in physical environments and networks; 2) the probabilistic, non-backwards-traceable nature of the inner workings of the ML techniques employed; 3) unpredictable or design-time-unknown operating environments; and 4) the systems' own continuous and lifelong learning/evolution.
Such uncertain, unpredictable, real-time, and lifelong evolution causes operational failures in intelligent software systems, leading to significant damages, safety and security hazards, and tragedies. Hence, ensuring the dependability of such systems at design and development time alone is insufficient to ensure their dependability in real-world operation. Current approaches (e.g., testing) are insufficient before such systems are deployed since it is impossible to know all the critical situations these systems will experience in the real world. Some of these situations appear only during their operations, and it is also hard (if even possible) to predict when and why. To fully unleash intelligent software systems' potential and facilitate their wider adoption, ensuring the trustworthiness of their decision-making under uncertainty is the prime challenge.
Hence, intelligent software systems should be continuously monitored, tested, and refined with real-world data to ensure they can gracefully handle all uncertain and unknown situations during their lifetime. Current technologies, such as digital twins (digital and live representations of systems), can enable continuous synchronisation with the systems to reflect their most up-to-date states [2]. Such representations are often in the form of prior-knowledge-based and ML models (i.e., model universe). The former is widely used to represent software systems; however, such models have a limited capability to support the runtime analyses, reasoning, verification, and validation of intelligent software systems during their operations in uncertain environments. This is simply because the prior knowledge required to create these models is only partially available and, in some cases, has yet to be discovered. Even worse, soon after their creation, these models become obsolete and useless. This obsolescence is accelerated when ML techniques are employed since ML models face performance degradation over time due to, for example, data drift, and they must inevitably evolve when more data becomes available during the operation of such systems. To stay alive and, therefore, valid and functional, the model universe must continuously evolve to faithfully represent the system of interest and its environment (i.e., system universe).
## II Concept Formulation and State-of-the-art
We present the key concepts and their relationships in Figure 1. In the rest of the section, we discuss them in detail.
1) Model and system universes: A model is considered an 'informative representation of an object, person or system' [3]. System models are simplified representations of reality's essential or relevant entities and their properties, at particular points in time and/or space, that are of particular interest, importance, and concern and serve specific purposes. Models are prevailingly used in software/system engineering and are often classified into two categories regarding their construction: prior-knowledge-based and data-driven models. The former comprises models such as
3D models created with simulators (e.g., for virtual surgical planning), Simulink, and Systems Modeling Language (SysML) models for model-based system engineering [4], [5]; the latter mainly refers to ML models, e.g., AlexNet for image classification [6] and YOLO for object detection [7].
The model universe of a system universe provides the proper basis for reasoning about the system universe and enables decision-making of all kinds. Here, we use the term 'universe' to emphasise that our universe is 95% unknown [8]; similarly, the system and model universes contain many unknowns. Furthermore, knowing the universe is about understanding its formation and evolution, and such an understanding is essential for building a theory based on which scientific extrapolations can be made about the future of the universe.
2) Uncertainties: The concept of 'uncertainty' can be traced back to the philosophical question about the certainty of knowledge, debated by the ancient Greek philosophers, including Aristotle. Uncertainty has attracted significant attention since it is inherent in intelligent software systems and their operating environments [9]-[13]. One representative example is the uncertain operating environments of autonomous driving vehicles (i.e., external uncertainties) and inherent uncertainties of their behaviours due to the use of ML models for perception and path planning, among other decision-making tasks (i.e., internal uncertainties). Moreover, models aimed at understanding, reasoning, and predicting system behaviors make assumptions of all kinds. Therefore, a model universe devised for a system universe contains two types of uncertainties: objective uncertainties, which refer to phenomena whose existence and nature are independent of any observing agency, and subjective uncertainties, which refer to information existing within some agency derived from that agency's observations and/or reasoning (i.e., belief agents) [14]. When gaining more knowledge, subjective uncertainties can evolve into objective ones.
Uncertainties can also be classified into shallow uncertainties and deep uncertainties. Being shallow means that the probabilities of the outcomes are well known; therefore, future events can be reasonably predicted by the past. On the other hand, deep uncertainties refer to contexts in which the probabilities of the outcomes are poorly known, unknown, or unknowable, such that past events can give little insight into future ones [8]. Though deep uncertainties have been discussed in, for example, natural hazard risk assessment [15] and financial investments in climate change [16], they are rarely recognised in software engineering.
Uncertainties in ML refer to the lack of confidence in an ML model's output. Estimating them is essential to determine if they are low enough that the output can be trusted. Typically, they are classified into (irreducible) aleatory uncertainties and (reducible) epistemic uncertainties, referring to the inherent stochasticity of the observations and the lack of training data. The software engineering community has only recently studied uncertainty in ML. It, therefore, still primarily focuses on, for example, applying uncertainty quantification methods to supervise ML systems [17]-[20] with tool support [21]-[23]. All these works limit the scope to uncertainties caused by the (inherent) limitations of learned ML models, that is, not covering data quality uncertainty (e.g., where the quality of the input data is lower than the training data's) and scope compliance uncertainty (concerning differences between a modelled context and its intended application context), as classified in [24]. The gap between uncertainties understood and captured in prior-knowledge-based models of the model universe and uncertainties recognised and quantified in its ML models is also not yet bridged.
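As a minimal illustration of the aleatory/epistemic split discussed above (our own sketch of a common ensemble-based quantification scheme, not a reproduction of any of the cited tools), the total predictive entropy of an ensemble can be decomposed into an expected data-uncertainty term and a model-disagreement term:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_split(member_probs):
    """member_probs: (n_members, n_classes) softmax outputs for one input,
    e.g. from MC dropout samples or a deep ensemble."""
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)                   # total predictive uncertainty
    aleatoric = entropy(member_probs).mean()  # expected data (aleatory) uncertainty
    epistemic = total - aleatoric             # disagreement between ensemble members
    return total, aleatoric, epistemic

# Members that agree -> small epistemic term; members that disagree -> large one.
agree = np.array([[0.9, 0.1], [0.88, 0.12], [0.92, 0.08]])
disagree = np.array([[0.95, 0.05], [0.5, 0.5], [0.1, 0.9]])
print(uncertainty_split(agree))
print(uncertainty_split(disagree))
```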
3) Evolution of Prior-knowledge-based and ML models: The common practice in software engineering is to evolve models created from prior knowledge manually and offline, with methods such as inference engines (e.g., Daikon [25]). For instance, Zhang et al. proposed data-augmented model evolution methods supported by model execution and simulation techniques [26] to semi-automatically evolve Unified Modeling Language (UML) state machines and uncertainty measurements. Considering the multi-paradigm modelling (MPM) nature of the model universe, its evolution involves the evolution of models belonging to the same modelling paradigm (e.g., SysML block definition diagrams and state machines) and the coevolution of models across MPM (e.g., Modelica models and 3D CAD models). In the literature, solutions have been proposed for MPM and co-simulations [27], [28], but not for the coevolution of such models. For instance, UncerTolve [29] advanced the state of the art by using real operational data from CPSs to evolve test models in UML and subjective uncertainties offline, and DeepCollision [30] and LiveTCM [31] evolve test scenarios of autonomous driving vehicles in a 3D virtual environment with reinforcement learning.
ML models are often statically learned from historical data with ML techniques. Most domain adaptation and lifelong learning methods address data drift offline, requiring the availability of both source and target domain information beforehand, an assumption that prevents them from being applied in real contexts. For instance, RISE-DT [18] is one such approach: it automatically evolves a digital twin (with its model captured in automata and its capability enabled with an ML model) to be applied to a different application context of industrial elevator systems with transfer learning offline. However, online methods must fit the real-time context of evolving the model universe. Online domain adaptation has recently been proposed to continuously handle data drift for semantic segmentation under ever-changing conditions during deployment [32]. In addition, to learn from non-stationary (where data become incrementally available over time) real-world data, lifelong or continual learning - that is, continually evolving (via acquiring, tuning and transferring) knowledge throughout lifespans across domains - seems promising [33]. The most recent advance in this field is online lifelong and continual learning [34], [35], suitable for model universe evolution. Still, we need to see applications and empirical studies, which are currently largely unavailable. Despite these efforts, a solution enabling the holistic coevolution of the model universe is still missing.
4) Coevolution of model and system universes: Due to uncertainties, a system universe is naturally an evolutionary reality. Therefore, its corresponding model universe should be an evolutionary model of the evolutionary reality. Any mechanical model (without the dynamics of changing for 'good') is doomed to be useless. As well put by George Box in [36], 'all models are wrong, but some are useful'. This statement initially referred to statistical models but now generally applies to all models. During modelling, assumptions are made to understand and predict the system universe. When the system universe evolves, to be valid, the model universe needs to evolve itself accordingly by 1) validating knowns (captured in the model universe) with new information obtained from the system universe, 2) refining subjective uncertainties, 3) discovering unknowns to invalidate captured assumptions, and 4) recognising the unknowable.
Without a suitable evolution mechanism, the difference between the model and system universes becomes prominent, and decisions based on an outdated model universe are prone to errors. To maintain the model universe's usefulness, it must be continuously evolved to remain alive. We, therefore, define evolution in the model universe as its progression towards a direction from an uncertain or worse state to a more certain or better one in terms of supporting the system universe's development, operation, and maintenance. We consider coevolution in universes as an evolution involving interactions of more than one model type in the model universe or interactions across the model and system universes.
5) Coevolutionary algorithms: Evolutionary computation has been applied to solve many optimisation problems, e.g., test optimisation [37]-[39], product configuration optimisation [40], [41], and optimisations in requirements engineering [42], [43]. A subset of evolutionary algorithms, coevolutionary algorithms evaluate an individual's fitness based on how the individual performs against others in the population [44], known as indirect fitness. Relationships such as competition and cooperation among individuals are vital in designing coevolutionary algorithms, which can help simulate real-world scenarios such as pedestrian detection [44] and search for game-playing strategies [45].
Coevolutionary algorithms are often implemented in different metaheuristic algorithms - such as genetic algorithms (GA), genetic programming, and differential evolution [46] - and evidence shows that coevolutionary and conventional evolutionary algorithms can complement each other very well. In software engineering, although coevolution has been leveraged in addressing software development and testing challenges - as in test case generation by coevolving test inputs and test oracles [47], automatic programming [48] and software correction [49], and the coevolution of models and tests [50] - it is still largely unexplored for addressing complex and practical problems requiring novel coevolution strategies implemented in suitable evolutionary algorithms.
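As a minimal illustration of the 'indirect fitness' idea (a toy sketch of competitive coevolution between candidate solutions and test cases, not a reproduction of any of the cited algorithms), each individual below is scored only by how it performs against the members of the other, coevolving population:

```python
import random

random.seed(0)
solutions = [random.uniform(0, 10) for _ in range(20)]
tests = [random.uniform(0, 10) for _ in range(20)]

def evolve(pop, fitness, sigma=0.5):
    # truncation selection followed by Gaussian mutation
    ranked = sorted(pop, key=fitness, reverse=True)[: len(pop) // 2]
    return [x + random.gauss(0, sigma) for x in ranked for _ in (0, 1)]

for generation in range(30):
    # indirect fitness: individuals are evaluated only against the other population
    solution_fitness = lambda s: sum(s >= t for t in tests)   # tests passed
    test_fitness = lambda t: sum(s < t for s in solutions)    # solutions failed
    solutions = evolve(solutions, solution_fitness)
    tests = evolve(tests, test_fitness)

# both populations escalate over the generations (an "arms race")
print(round(max(solutions), 2), round(max(tests), 2))
```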
6) Epigenetics and epigenetic algorithms: 'Epigenetics studies heritable changes in gene expression that occur without changes in DNA sequence' in response to environmental changes [51]. An example is an octopus temporarily changing colour to one not encoded in its DNA in response to environmental threats [52]. Several recent reviews have studied the role of epigenetics in domesticated animals [53], plants [54], and humans [55]. For instance, diverse environmental behaviours (e.g. stress, exercise, and exposure to a toxic environment) bring epigenetic changes (both positive and adverse) in humans during their life spans.
We postulate that epigenetic algorithms fit model universe evolution well because 1) genes passed down from parents (genetic inheritance) cannot react to sudden changes in the environment by themselves, but epigenetic inheritance, by controlling how genes work, allows for fast adaptation when appropriate, increasing the speed of convergence while maintaining stability in a changing environment; 2) epigenetic mechanisms have the potential to respond - and probably in most cases positively - to all types of uncertainties if we constrain the direction of evolution; and 3) epigenetic and coevolutionary algorithms are both nature-inspired and hence can be naturally incorporated.
Only a few epigenetic algorithms have been proposed: epiGA [56] implements gene silencing and integrates it into GA to control how genes are expressed or turned on or off in response to environmental uncertainties; EpiLearn [57] encodes dynamic environmental changes as an epigenetic layer in a learning process to allow for adaptive and efficient learning; and RELEpi [57] supports the coevolving
decision making of groups of agents (swarms) in uncertain environments, although it has not yet proven effective for real-world problems.
Though these works demonstrate that epigenetic algorithms are a promising direction for coping with uncertainty and unknowns, we are far from being able to apply them to handle uncertainties in model universe evolution due to 1) rarely seeing implementations of epigenetic mechanisms from biology, 2) the lack of real-world applications, and 3) the unavailability of experimental frameworks, empirical data, and research communities. This is because epigenetics in biology is relatively new and complex, and epigenetic encoding for real applications requires a deep understanding of application contexts, problems to be solved, and epigenetic mechanisms.
Concluding remark. There is no holistic method for evolving the model universe of a system universe under unknown uncertainties because the model universe's evolution: 1) needs to be online and autonomous, 2) is triggered at different times and in different spaces, 3) is tightly entangled with uncertainties of various types and degrees, 4) needs to coordinate diverse modelling paradigms, and 5) needs to interact with the system universe during its operation efficiently.
## III The Way forward
1) Identify universe coevolution patterns: We first need to understand primary coevolution triggers (e.g., discovering unknowns in prior-knowledge-based models, obtaining new data for adapting ML models), the time points that trigger each coevolution (e.g., upon receiving a new batch of data, as soon as an uncertain event occurs), and the conditions under which each model needs to be evolved to keep themselves alive and the data requirements (e.g., quality and quantity). Following the common practice of model-based engineering, these understandings can be specified as a metamodel, based on which methodologies and tools to specify, characterise, and automatically identify each pattern can be proposed.
2) Inspired by coevolution mechanisms in nature:
Coevolution mechanisms in nature can provide inspiration for designing and applying coevolutionary algorithms; however, not all coevolution mechanisms can be directly implemented in coevolutionary algorithms, mainly because these mechanisms are complex and we have only a limited understanding of them. It is therefore important to only implement their essential features. For instance, some interactions between the model and system universes might be mapped to commensalism, where the model universe benefits from the interactions with the system universe (e.g., evolving itself with data received from the system universe) while the system universe remains unaffected. Future research is needed to systematically map universe coevolution patterns (representing the problems to be solved) to coevolution mechanisms in nature, which will leverage the identification of existing coevolutionary algorithms and the development of novel coevolutionary algorithms to enable model universe evolution.
3) Develop uncertainty taxonomy, metamodel, quantification, and management methods: We need to develop a comprehensive uncertainty taxonomy to support end-to-end uncertainty management, characterising and quantifying uncertainties (with the uncertainty metamodel to be devised) which results in uncertainty models being managed as part of the model universe. We will also need to integrate various uncertainty quantification methods for ML and prior-knowledge-based models to efficiently enable holistic end-to-end uncertainty quantification that involves more than one quantification method, for example, connecting subjective uncertainties from prior-knowledge models to objective uncertainties from sensory data and, further, to uncertainties in ML models' predictions, which could lead to uncertain decision making and the actuation of physical devices. A solution needs to systematically select and base itself on various uncertainty-related theories, such as probability theory for quantifying the likelihood of known outcomes, possibility theory for situations in which the probability of an event is unknown or unknowable, Bayesian decision theory and Bayesian updating to update the prior knowledge of belief agents about given events, prospect theory for (subjective) human decision making under uncertainty and risk (e.g., pedestrians crossing roads), chaos theory for modelling, analysing, and improving system robustness, Dempster-Shafer theory in handling situations lacking complete information, and combinations of theories, powering end-to-end uncertainty-aware model universe evolution.
4) Propose coevolutionary algorithms and epigenetics-inspired algorithms: An envisioned solution must be autonomous, data-augmented, and online, which puts high requirements on its efficiency. To achieve this, we must first rely on coevolutionary algorithms by developing optimal encoding mechanisms for each coevolution pattern, defining subjective internal measures for fitness, defining adaptive problem decomposition structures, and overcoming challenges such as avoiding local optima, scaling, and measuring performance. In response to uncertainties, we need to develop epigenetics-inspired algorithms that mimic biological adaptations in species by encoding each model universe's evolution pattern in the form of genomes and epigenomes (both responsible for the regulation and expression of genetic information), implementing generic epigenetic operators based on three epigenetic mechanisms (DNA methylation, histone modification, and RNA editing), and simulating uncertainties in gene expressions through epigenetic changes by applying epigenetic operators to genes in the model universe through mechanisms such as the introduction of epigenetic drift (i.e., where the epigenetic marks change over time) and combining different epigenetic operators.
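To make the genome/epigenome encoding idea above concrete, here is a deliberately simplified, illustrative sketch (our own simplification, not the design of epiGA or EpiLearn) in which a methylation-style mask silences genes and an environmental signal toggles that mask without modifying the genome itself:

```python
import random

random.seed(1)

class Individual:
    def __init__(self, n_genes=8):
        self.genome = [random.uniform(-1, 1) for _ in range(n_genes)]
        self.methylation = [False] * n_genes   # epigenome: True means the gene is silenced

    def expressed(self):
        # phenotype: silenced genes do not contribute; the genome stays unchanged
        return [0.0 if m else g for g, m in zip(self.genome, self.methylation)]

    def epigenetic_response(self, stress_level):
        # an "epigenetic operator": environmental stress flips methylation marks
        for i in range(len(self.methylation)):
            if random.random() < stress_level:
                self.methylation[i] = not self.methylation[i]

ind = Individual()
print(ind.expressed())
ind.epigenetic_response(stress_level=0.5)   # an uncertain environment perturbs expression
print(ind.expressed())                      # expression changed, genome identical
```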
5) Design multi-agent evolutionary reinforcement learning methods: We need to introduce coevolutionary algorithms to multi-agent reinforcement learning by treating each model in the model universe as a learning agent, with its environment given by the system universe and the other models (i.e., agents) in the model universe. For instance, for policy-based reinforcement learning, one can let agents compete or cooperate (or other coevolution mechanisms) to learn from their interactions, evolve the agents' policies via the coevolutionary process, and design the agents' rewards to encourage or discourage their behaviours. We will introduce epigenetic mechanisms to coevolutionary algorithms based on uncertainties under study, for example, by controlling gene expressions in response to the environmental changes of individuals in an agent's population, influencing the evolution of individuals by controlling genetic operators such as mutation rates, and controlling the selection of individuals for reproduction. When integrating epigenetic and coevolutionary algorithms with multi-agent reinforcement learning, we can use coevolutionary algorithms to evolve the policies of multiple agents in a reinforcement learning setting and use the epigenetic mechanisms to modulate the agents' policies based on their experiences and interactions with other agents/models and the environment (i.e., the system universe).
Impact. Intelligent software systems are used in many applications, such as healthcare, agriculture, transportation, and manufacturing; they have an enormous impact on our lives and demand a high degree of dependability in their operation. With the envisioned solution, the dependability of the current and future intelligent software systems will be significantly improved through fully-fledged model universes capable of robustly dealing with uncertainties in real time.
|
2309.15055 | Triple delooping for multiplicative hyperoperads | Using techniques developed by Batanin and the first author, we extend the
Turchin/Dwyer-Hess double delooping result to further iterations of the
Baez-Dolan plus construction. For $0 \leq m \leq n$, we introduce a notion of
$(m,n)$-bimodules which extends the notions of bimodules and infinitesimal
bimodules over the terminal non-symmetric operad. We show that a double
delooping always exists for these bimodules. For the triple iteration of the
Baez-Dolan construction starting from the initial $1$-coloured operad, we
provide a further reduceness condition to have a third delooping. | Florian De Leger, Maroš Grego | 2023-09-26T16:35:17Z | http://arxiv.org/abs/2309.15055v1 | # Triple delooping for multiplicative hyperoperads
###### Abstract.
Using techniques developed in [4], we extend the Turchin/Dwyer-Hess double delooping result to further iterations of the Baez-Dolan plus construction. For \(0\leq m\leq n\), we introduce a notion of \((m,n)\)-bimodules which extends the notions of bimodules and infinitesimal bimodules over the terminal non-symmetric operad. We show that a double delooping always exists for these bimodules. For the triple iteration of the Baez-Dolan construction starting from the initial \(1\)-coloured operad, we provide a further reduceness condition to have a third delooping.
The first author is supported by RVO:67985840 and Praemium Academiae of Martin Markl.
## 1. Introduction
The goal of this paper is to extend the Turchin/Dwyer-Hess double delooping result to further iterations of the Baez-Dolan plus construction.
The Turchin/Dwyer-Hess theorem [22, 12] concerns multiplicative (non-symmetric) operads. The notion of non-symmetric operad will be recalled in Example 2.2. A non-symmetric operad \(\mathcal{O}\) is called _multiplicative_ when it is equipped with an operad map \(Ass\to\mathcal{O}\), where \(Ass\) is the terminal non-symmetric operad. Such a map endows the collection \((\mathcal{O}_{n})_{n\geq 0}\) with a structure of cosimplicial object [22], which we will write \(\mathcal{O}^{\bullet}\). The theorem states that if a multiplicative operad \(\mathcal{O}\) is _reduced_, that is \(\mathcal{O}_{0}\) and \(\mathcal{O}_{1}\) are contractible, then there is a double delooping
\[\Omega^{2}\mathrm{Map}_{\mathrm{NOp}}(Ass,u^{*}(\mathcal{O}))\sim\mathrm{ Tot}(\mathcal{O}^{\bullet}), \tag{1}\]
where Map is the homotopy mapping space, taken in the category NOp of non-symmetric operads, \(u^{*}\) is the forgetful functor from multiplicative to non-symmetric operads and Tot is the homotopy totalization. This result is especially remarkable because of an earlier result of Sinha [20] which states that the space of _long knots modulo immersions_ [12] is equivalent to the totalization of the Kontsevich operad. The double delooping (1) has been extended to general deloopings in higher dimensions in [10, 11]. Our goal is also to extend (1) but in another direction.
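For the reader's convenience, let us recall one standard convention for this cosimplicial structure (indexing conventions vary slightly in the literature; see [22]). Writing \(\mu\in\mathcal{O}_{2}\) and \(e\in\mathcal{O}_{0}\) for the images of the binary and nullary operations of \(Ass\), the cofaces and codegeneracies act on \(x\in\mathcal{O}_{n}\) by
\[d^{0}(x)=\mu\circ_{2}x,\qquad d^{i}(x)=x\circ_{i}\mu\ \ (1\leq i\leq n),\qquad d^{n+1}(x)=\mu\circ_{1}x,\qquad s^{j}(x)=x\circ_{j+1}e\ \ (0\leq j\leq n-1).\]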
Non-symmetric operads appear when we iterate the Baez-Dolan plus construction, as we will now explain. This construction was first introduced in [1] in order to define weak \(n\)-categories. It is a construction which associates to a (symmetric coloured) operad \(P\) a new operad \(P^{+}\) for _operads over \(P\)_. More explicitly, if \(P\) is an \(S\)-coloured operad, \(P^{+}\) is the operad whose algebras are \(S\)-coloured operads equipped with an operad map to \(P\). For example, if \(I\) is the initial \(1\)-coloured operad, a \(1\)-coloured operad is equipped with an operad map to \(I\) if and only if it is a monoid, so \(I^{+}\) is the operad for monoids \(Ass\) (as a symmetric operad this time). Iterating this construction, one gets the operad \(I^{++}\) for non-symmetric operads. This construction can of course be iterated infinitely many times. The question that naturally arises and that we will try to answer in this paper is whether there are delooping results analogous to (1) for the next iterations of the plus construction.
We will work with polynomial monads, which are equivalent to symmetric coloured operads with freely acting symmetric groups [14]. We will recall the notion in Section 2, as well as the description of the plus construction for polynomial monads from [15]. We iterate this construction from the identity monad on the category of sets, which corresponds to the initial \(1\)-coloured operad. This gives us a sequence of polynomial monads which we call the _opetopic sequence_. Our main tool in order to get delooping results such as (1) is the extension of some homotopy theory results from small categories to polynomial monads. Therefore we will recall an important notion of [4], namely the notion of _homotopically cofinal_ map of polynomial monads.
In Section 3, we introduce the notion of \((m,n)\)_-bimodule_, for \(0\leq m\leq n\). Let us motivate this notion now. The proofs of the double delooping (1) presented in
[22, 12], as well as in [4], all proceed in two steps. Indeed, we have the deloopings
\[\Omega\mathrm{Map}_{\mathrm{NOp}}(Ass,u^{*}(\mathcal{O}))\sim\mathrm{Map}_{ \mathrm{Bimod}}(Ass,v^{*}(\mathcal{O})) \tag{2}\]
if \(\mathcal{O}_{1}\) is contractible, and
\[\Omega\mathrm{Map}_{\mathrm{Bimod}}(Ass,v^{*}(\mathcal{O}))\sim\mathrm{Map}_{ \mathrm{IBimod}}(Ass,w^{*}(\mathcal{O})) \tag{3}\]
if \(\mathcal{O}_{0}\) is contractible, where Bimod and IBimod are the categories of bimodules over \(Ass\) and of infinitesimal bimodules over \(Ass\) respectively, \(Ass\) is seen as both a bimodule and infinitesimal bimodule over itself and \(v^{*}\) and \(w^{*}\) are the appropriate forgetful functors. Since infinitesimal bimodules over \(Ass\) are known to be equivalent to cosimplicial objects [21], we do indeed get the double delooping (1). Our notion of \((m,n)\)-bimodule is an extension of bimodule over \(Ass\) and infinitesimal bimodule over \(Ass\) to further iterations of the plus construction.
In Section 4, we construct a map of polynomial monads whose algebras involve these \((m,n)\)-bimodules. We prove in Theorem 4.1 that this map is homotopically cofinal, which extends a result of [4].
In Section 5, we investigate a general delooping for \((m,n)\)-bimodules. More precisely, we try to exhibit reduceness conditions for a delooping
\[\Omega\mathrm{Map}_{\mathrm{Bimod}_{m,n}}(\zeta,u^{*}(\mathcal{O}))\sim \mathrm{Map}_{\mathrm{Bimod}_{m-1,n}}(\zeta,v^{*}(\mathcal{O})),\]
where \(\zeta\) is our notation for the terminal object in the category \(\mathrm{Bimod}_{m,n}\) of \((m,n)\)-bimodules. We prove in Theorem 5.6 that in the cases \(m=n\) and \(m=n-1\), the reduceness conditions are analogous to the ones for the Turchin/Dwyer-Hess theorem. Theorem 5.6 gives us in particular the deloopings (2) and (3). We further investigate the third iteration of the plus construction, starting from the identity monad. We start by recalling the definition of the category \(\Omega_{p}\), the version of the dendroidal category for planar trees [18]. We prove that \(\Omega_{p}\) plays the same role as the simplex category \(\Delta\) does in the Turchin/Dwyer-Hess theorem. We define a notion of _functor equipped with retractions_ for a covariant presheaf over \(\Omega_{p}\), which we use to exhibit a reduceness condition for a third delooping. The triple delooping we get in Corollary 5.22 is the analogue of the double delooping (1) for the next iteration of the plus construction. Finally, we apply our triple delooping to a non-trivial example, namely the desymmetrisation of the Kontsevich operad.
In a future work, we would like to investigate the geometric meaning of our triple delooping and try to find out if there is an analogue to Sinha's result in this case. We would also like to explore the connections between our direction of delooping results and the one from [11].
### Acknowledgement
Both authors are deeply grateful to Michael Batanin who suggested this project, for his guidance and the many illuminating discussions during our weekly meetings.
## 2. Preliminaries
### Polynomial monads
Recall [13] that a polynomial monad \(T\) is a cartesian monad whose underlying functor is given by the composite \(t_{!}p_{*}s^{*}\) for some diagram in Set, the category of sets, of the form
\[I\xleftarrow{\;s\;}E\xrightarrow{\;p\;}B\xrightarrow{\;t\;}I \tag{4}\]
where \(p^{-1}(b)\) is finite for all \(b\in B\). The elements of the sets \(I\), \(B\) and \(E\) will be called _colours_, _operations_ and _marked operations_ respectively. The maps \(s\), \(p\) and \(t\) will be called _source map_, _middle map_ and _target map_ respectively. The diagram (4) will be called the _polynomial_ for the monad.
An algebra of such a polynomial monad in a symmetric monoidal category \((\mathcal{E},\otimes,e)\) is given by a collection \((A_{i})_{i\in I}\) together with, for all \(b\in B\), structure maps
\[m_{b}:\bigotimes_{e\in p^{-1}(b)}A_{s(e)}\to A_{t(b)}\]
satisfying associativity and unitality axioms.
**Example 2.1**.: The free monoid monad **Mon** is a polynomial monad [4, Example 2.6] given by the polynomial with a single colour
\[1\xleftarrow{s}Ltr^{*}\xrightarrow{p}Ltr\xrightarrow{t}1,\]
where \(Ltr\) is the set of (isomorphism classes of) linear trees, \(Ltr^{*}\) is the set of linear trees with one vertex marked, the middle map forgets the marking. Multiplication is given by insertion of a linear tree inside a vertex.
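For instance, since **Mon** has a single colour, an algebra of **Mon** in \((\mathcal{E},\otimes,e)\) is a single object \(A\) together with one structure map for each linear tree; for the linear tree with \(k\) vertices this is a map
\[A^{\otimes k}\to A,\qquad k\geq 0,\]
and the associativity and unitality axioms make \(A\) a monoid in \(\mathcal{E}\).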
**Example 2.2**.: Recall that a non-symmetric operad \(A\) in a symmetric monoidal category \((\mathcal{E},\otimes,e)\) is given by a collection of objects \(A_{n}\in\mathcal{E}\) for \(n\geq 0\) together with maps
\[A_{k}\otimes A_{n_{1}}\otimes\ldots\otimes A_{n_{k}}\to A_{n_{1}+\ldots+n_{k}}.\]
and a map \(e\to A_{1}\) satisfying associativity and unitality axioms. The polynomial **NOp** for non-symmetric operads [4, Example 2.7] is given by
\[\mathbb{N}\xleftarrow{s}Ptr^{*}\xrightarrow{p}Ptr\xrightarrow{t}\mathbb{N},\]
where \(Ptr\) is the set of (isomorphism classes of) planar trees, \(Ptr^{*}\) is the set of planar trees with one vertex marked, the source map returns the number of edges directly above the marked vertex, the middle map forgets the marking, the target map returns the number of leaves. Multiplication is given by insertion of a planar tree inside a vertex.
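For instance, for the planar tree obtained by grafting corollas with \(n_{1},\ldots,n_{k}\) leaves onto the \(k\) leaves of a corolla, the corresponding structure map of an algebra \(A\) of **NOp** is exactly the composition map recalled above,
\[A_{k}\otimes A_{n_{1}}\otimes\ldots\otimes A_{n_{k}}\to A_{n_{1}+\ldots+n_{k}},\]
so that algebras of **NOp** are non-symmetric operads.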
### Baez-Dolan plus construction
The Baez-Dolan plus construction was first introduced in [1]. Let us recall this construction for polynomial monads from [15]. Assume \(T\) is a polynomial monad whose underlying polynomial is given by diagram \(4\). We construct a new polynomial monad \(T^{+}\):
\[B\xleftarrow{s}tr(T)^{*}\xrightarrow{p}tr(T)\xrightarrow{t}B,\]
where \(tr(T)\) is the set of \(T\)-trees, that is trees whose vertices are decorated with elements of \(B\) and edges are decorated with elements of \(I\), satisfying the coherence condition that if a vertex is decorated with \(b\in B\), then its outgoing edge is decorated with \(t(b)\) and its incoming edges are decorated with \(s(e)\), for \(e\in p^{-1}(b)\).
\(tr(T)^{*}\) is the set of \(T\)-trees with one vertex marked. The source map returns the element which decorates the marked vertex, the middle map forgets the marking, the target map is given by composition of all the elements which decorate the vertices. Multiplication is given by insertion of a tree inside a vertex.
### Opetopic sequence
For \(n\geq 0\), let us define the polynomial monad \(\mathbf{Id}^{+n}\) by induction. \(\mathbf{Id}^{+0}=\mathbf{Id}\), that is the identity monad on Set. For \(n>0\), \(\mathbf{Id}^{+n}=\left(\mathbf{Id}^{+(n-1)}\right)^{+}\). One can check that \(\mathbf{Id}^{+1}=\mathbf{Mon}\) and \(\mathbf{Id}^{+2}=\mathbf{NOp}\), the polynomial monads defined in Example 2.1 and Example 2.2 respectively. We will write the underlying polynomial of \(\mathbf{Id}^{+n}\) as follows:
\[I_{n}\xleftarrow{s_{n}}E_{n}\xrightarrow{p_{n}}B_{n}\xrightarrow{t_{n}}I_{n} \tag{5}\]
Note that for \(n>0\), \(I_{n}=B_{n-1}\). To avoid overly heavy notation, we will simply write \(s\), \(t\) and \(p\) instead of \(s_{n}\), \(t_{n}\) and \(p_{n}\) respectively when \(n\geq 0\) is clear from the context.
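As an illustration of the first step of this induction: the identity monad \(\mathbf{Id}\) has a single colour and a single unary operation, so an \(\mathbf{Id}\)-tree is a tree all of whose vertices are unary, that is a linear tree. The plus construction therefore yields the polynomial
\[1\xleftarrow{s}Ltr^{*}\xrightarrow{p}Ltr\xrightarrow{t}1,\]
which is the one for \(\mathbf{Mon}\) of Example 2.1, in accordance with \(\mathbf{Id}^{+1}=\mathbf{Mon}\).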
### Homotopically cofinal maps of polynomial monads
Since a polynomial monad \(T\) is in particular cartesian, it makes sense to talk about lax morphisms of categorical \(T\)-algebras.
**Definition 2.3**.: Let \(f:S\to T\) be a map of polynomial monads and \(A\) a categorical \(T\)-algebra. An _internal \(S\)-algebra_ in \(A\) is a lax morphism of \(T\)-algebras \(1\to f^{*}(A)\), where \(1\) is the terminal \(T\)-algebra and \(f^{*}\) is the restriction functor.
We have the following theorem [2]:
**Theorem 2.4**.: _Let \(f:S\to T\) be a map of polynomial monads. The \(2\)-functor_
\[Int_{S}:\operatorname{Alg}_{T}(\operatorname{Cat})\to\operatorname{Cat},\]
_which sends a categorical \(T\)-algebra \(A\) to the category of internal \(S\)-algebras in \(A\), is representable. We will write \(T^{S}\) for the representing object and call it the classifier induced by \(f\)._
**Remark 2.5**.: Let \(f:S\to T\) be a map of polynomial monads given by \(\phi:I\to J\) on colours. It was proved in [2] that the classifier induced by \(f\) can be computed as a truncated simplicial \(T\)-algebra
(6)
where \(1\) is the terminal \(I\)-collection, \(F_{T}\) is the free \(T\)-algebra functor and \(\phi_{\dagger}\) is the left adjoint of the restriction \(\phi^{*}\), given by coproduct over fibres of \(\phi\).
**Remark 2.6**.: The classifier \(T^{S}\) being a categorical \(T\)-algebra, it has an underlying collection of categories. We will consider classifiers as categories, implying implicitly that we are talking about an arbitrary category of the underlying collection. Any statement made about a classifier, seen as a category, will imply that it is true for any category of the underlying collection.
**Example 2.7**.: For a polynomial monad \(T\) given by diagram 4, we describe the classifier induced by the identity on the polynomial monad \(T^{+}\). The set of objects is the set of \(T\)-trees \(tr(T)\). Morphisms are contractions of edges and multiplication of the elements of the set \(B\) of operations of the polynomial monad \(T\) accordingly, or insertion of unary vertices decorated with the unit. In particular, this gives us a category structure on \(B_{n}\).
We now recall an important notion of [4]:
**Definition 2.8**.: A map of polynomial monads \(f:S\to T\) is _homotopically cofinal_ if \(N(T^{S})\) is contractible.
## 3. Bimodules in the opetopic sequence
### \(\downarrow\)-construction
**Definition 3.1**.: For \(n\geq 0\), we call _tree with white vertices_ a pair \((b,W)\), where \(b\in B_{n}\) and \(W\) is a subset of the set of vertices of \(b\). We call _white vertices_ the elements of \(W\). The other vertices of \(b\) will be called _black vertices._
**Definition 3.2**.: For \(n>0\), we associate to a tree with white vertices \((b\in B_{n},W)\), a tree with white vertices \((b^{\downarrow}\in B_{n-1},W^{\downarrow})\) as follows. Let \(b_{0}\) be the maximal subtree of \(b\) containing the root edge and only black vertices. We take \(b^{\downarrow}=t(b_{0})\in I_{n}=B_{n-1}\). Note that the vertices of \(b^{\downarrow}\) correspond to the leaves of \(b_{0}\) which are also edges of \(b\). We define \(W^{\downarrow}\) as the set of vertices of \(b^{\downarrow}\) which correspond to an internal edge in \(b\).
**Example 3.3**.: Let \((b,W)\) be the planar tree in the following picture:
Then \(b_{0}\) will be the planar tree on the left of the following picture and \((b^{\downarrow},W^{\downarrow})\) will be the linear tree on the right:
Indeed, \(b^{\downarrow}\) has four vertices which correspond to leaves of \(b_{0}\). The first and last vertices of \(b^{\downarrow}\) are white because they correspond to leaves in \(b_{0}\) which are internal edges in \(b\).
The following lemma will be useful later:
**Lemma 3.4**.: _Let \((b,W)\) be a tree with white vertices and \((b^{\downarrow},W^{\downarrow})\) be the tree obtained by applying the construction of Definition 3.2. Let \(\tilde{b}^{\downarrow}\) be a tree obtained from \(b^{\downarrow}\) by adding unary vertices and \(\tilde{W}^{\downarrow}\) be the set \(W^{\downarrow}\) plus the unary vertices which have been added. Then there is a tree with white vertices \((\tilde{b},\tilde{W})\) such that \((\tilde{b}^{\downarrow},\tilde{W}^{\downarrow})\) is the tree obtained from it by applying the construction of Definition 3.2._
Proof.: Recall from Example 2.7 that \(B_{n}\) has a category structure where morphisms are contractions of edges and insertion of unary vertices. Let us assume that the maximal subtree \(b_{0}\) of \(b\) containing the root and only black vertices has been contracted to a corolla. This can be done without loss of generality because the tree obtained by applying the construction of Definition 3.2 remains the same. Assume \(\tilde{b}^{\downarrow}\) is obtained from \(b^{\downarrow}\) by adding a unary vertex decorated with \(\eta(c)\in B_{n-2}\), where
\(c\in I_{n-2}\) is the decoration of the edge where the unary vertex is added. Then we take \(\tilde{b}\) as the tree obtained from \(b_{0}\) by adding a trunk above the root vertex. The vertex of the trunk is decorated with \(\nu(c)\in B_{n-1}\), where \(\nu(c)\) is the free living edge decorated with \(c\). The edge of the trunk is decorated with \(t\nu(c)=\eta(c)\). The tree \(b^{\downarrow}\) which decorates the root vertex of \(b\) is replaced by \(\tilde{b}^{\downarrow}\).
For example, in the following picture, the tree on the left is \(\tilde{b}\), and the inserted trunk is dotted. The tree on the right is \(\tilde{b}^{\downarrow}\), and the inserted unary vertex is dotted. It has four vertices, including the added unary vertex, since the root vertex of \(\tilde{b}\) has four edges above it, including the added trunk:
This can of course be done for multiple unary vertices added.
### \(m\)-dimensional sets of vertices
For \(0\leq i\leq n\) and a tree with white vertices \((b\in B_{n},W)\), we will write \((b^{\downarrow i},W^{\downarrow i})\) for the tree with white vertices obtained by iterating \(i\) times the construction of Definition 3.2.
**Definition 3.5**.: Let \(0\leq m\leq n\) and let \((b,W)\) be a tree with white vertices. We say that \(W\) is _\(m\)-dimensional_ if
* for \(0\leq i<n-m\), \(W^{\downarrow i}\) does not contain any pairs of vertices one above the other,
* \(W^{\downarrow(n-m)}\) is the set of vertices of \(b^{\downarrow(n-m)}\).
**Remark 3.6**.: It is immediate from Definition 3.5 that if \((b,W)\) is \(m\)-dimensional, then \((b^{\downarrow i},W^{\downarrow i})\) is \(m\)-dimensional for all \(0\leq i\leq n-m\).
**Remark 3.7**.: Let \((b,W)\) be a tree with white vertices such that \(W\) is \(m\)-dimensional. Then all the sets \(W^{\downarrow i}\) for \(0\leq i\leq n-m\) are in bijection with each other. Indeed, the construction of Definition 3.2 induces a surjection from \(W^{\downarrow i}\) to \(W^{\downarrow(i+1)}\) for \(0\leq i<n-m\). The first condition of Definition 3.5 implies that these surjections are also injective.
**Remark 3.8**.: Let \(n>0\) and \((b\in B_{n},W)\) be a tree with white vertices. \(W\) is \(n\)-dimensional if it is the set of vertices of \(b\). It is \(0\)-dimensional if it is a singleton. It is \(n-1\)-dimensional if each path from the root to a leaf in \(b\) contains exactly one vertex of \(W\). Indeed, the first condition of Definition 3.5 ensures that any path contains at most one vertex, while the second one ensures that any path contains at least one vertex.
### \((m,n)\)-bimodules
**Definition 3.9**.: For \(0\leq m\leq n\), let \(B_{m,n}\) be the set of trees with white vertices \((b,W)\) where \(b\in B_{n}\), \(W\) is \(m\)-dimensional and the tree does not contain any pairs of adjacent black vertices or any unary black vertices.
**Definition 3.10**.: Let \(\mathbf{Bimod}_{m,n}\) be the polynomial monad represented by
\[I_{n}\xleftarrow{s}E_{m,n}\xrightarrow{p}B_{m,n}\xrightarrow{t}I_{n},\]
where \(E_{m,n}\) is the set of elements of \(B_{m,n}\) with a marked white vertex. The source, target and middle map are defined as in Subsection 2.2. Multiplication is given by inserting a tree inside a white vertex, then contracting all edges between two black vertices and removing all unary black vertices.
**Definition 3.11**.: For \(0\leq m\leq n\), an \((m,n)\)_-bimodule_ in a symmetric monoidal category \((\mathcal{E},\otimes,e)\) is an algebra of \(\mathbf{Bimod}_{m,n}\) in \(\mathcal{E}\).
For an \(I_{n}\)-collection \(A\) in \(\mathcal{E}\), \(b\in B_{n}\) and \(V\subset p^{-1}(b)\), we will write
\[A_{s(V)}=\bigotimes_{v\in V}A_{s(v)}.\]
**Remark 3.12**.: Explicitly, an \((m,n)\)_-bimodule_ in a symmetric monoidal category \((\mathcal{E},\otimes,e)\) is given by
* a collection of objects \((A_{i})_{i\in I_{n}}\) in \(\mathcal{E}\),
* for all \((b,V)\in B_{m,n}\), a map \[\mu_{b,V}:A_{s(V)}\to A_{t(b)},\]
satisfying associativity and unitality axioms.
**Remark 3.13**.: According to Remark 3.8, \(B_{n,n}=B_{n}\), so the polynomial monad \(\mathbf{Bimod}_{n,n}\) is just \(\mathbf{Id}^{+n}\). Also according to Remark 3.8, the elements of \(B_{0,n}\) are trees with exactly one white vertex. We deduce that \(E_{0,n}=B_{0,n}\) and the middle map for the polynomial monad \(\mathbf{Bimod}_{0,n}\) is the identity, so this polynomial monad is actually a small category.
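In particular, an algebra of \(\mathbf{Bimod}_{0,n}\) in \(\mathcal{E}\) is simply a covariant functor from this small category to \(\mathcal{E}\): by Remark 3.12, for an element \((b,\{v\})\in B_{0,n}\) the corresponding structure map is just a map
\[A_{s(v)}\to A_{t(b)}.\]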
**Remark 3.14**.: According to Remark 3.8, the sets \(B_{2,2}\), \(B_{1,2}\) and \(B_{0,2}\) are, respectively, the set of planar trees, the set of planar trees whose white vertices, called _beads_ in [22], lie on the same horizontal line, and the set of planar trees with one distinguished bead. One recognises the polynomial monad \(\mathbf{Bimod}_{m,2}\), when \(m=2\), \(m=1\) and \(m=0\), as the monad for non-symmetric operads, bimodules over the terminal non-symmetric operad \(Ass\) and infinitesimal bimodules over \(Ass\) described in [4], respectively.
### Universal classifier for \((m,n)\)-bimodules
Let us now describe the classifier induced by the identity on the polynomial monad \(\mathbf{Bimod}_{m,n}\). The set of objects is \(B_{m,n}\). The morphisms can be given by nested trees, that is trees \((b,W)\in B_{m,n}\), where each vertex \(v\in W\) has itself a tree inside it, called a _nest_. The following picture is an example of such a nested tree:
The source of the nested tree is obtained by inserting the nests into each corresponding vertex and then contracting the edges connecting black vertices if necessary. The target is obtained by forgetting the nests. For example, the nested tree
of the previous picture represents the following morphism:
### Bimodules over \((m,n)\)-bimodules
**Lemma 3.15**.: _Let \((b,W)\in B_{m,n}\) with an \(m-1\)-dimensional \(V\subset W\). Then the complement of \(V\) in \(W\) is the canonical union of two sets \(V_{-}\) and \(V_{+}\)._
Proof.: If \(m=n\), then according to Remark 3.8, \(W\) is the set of vertices of \(b\) and each path from a leaf to the root in \(b\) contains exactly one vertex of \(V\). Then we can take \(V_{-}\) and \(V_{+}\) as the set of vertices in \(W\) that are below and above a vertex of \(V\), respectively.
If \(m<n\), let \((b_{V}^{\downarrow},V^{\downarrow})\) and \((b_{W}^{\downarrow},W^{\downarrow})\) be the pairs obtained by applying the construction of Definition 3.2 to \((b,V)\) and \((b,W)\), respectively. According to Remark 3.6, \(V^{\downarrow}\) and \(W^{\downarrow}\) are \(m-1\)-dimensional and \(m\)-dimensional, respectively. It is easy to see from the construction that, since \(V\subset W\), \(b_{W}^{\downarrow}\) can be obtained from \(b_{V}^{\downarrow}\) by contracting some of its edges, which connect black vertices. So \(V^{\downarrow}\) can be seen as a set of vertices of \(b_{W}^{\downarrow}\) and it is also \(m-1\)-dimensional as a set of vertices of this tree. By induction, the complement of \(V^{\downarrow}\) in \(W^{\downarrow}\) is the canonical union of two sets. According to Remark 3.7, \(W\) and \(W^{\downarrow}\), as well as \(V\) and \(V^{\downarrow}\), are in bijection. This concludes the proof.
**Definition 3.16**.: For \(0<m\leq n\) and \(A\) and \(B\) two \((m,n)\)-bimodules in \(\mathcal{E}\), an \(A-B\)_-bimodule_\(C\) is given by
* a collection of objects \((C_{i})_{i\in I_{n}}\),
* for all \((b,W)\in B_{m,n}\) with an \(m-1\)-dimensional \(V\subset W\), a map \[A_{s(V_{-})}\otimes C_{s(V)}\otimes B_{s(V_{+})}\to C_{t(b)}, \tag{7}\]
where \(V_{-}\) and \(V_{+}\) are given by Lemma 3.15, satisfying associativity and unitality axioms.
**Lemma 3.17**.: _For \(0<m\leq n\), let \(\zeta\) be the terminal \((m,n)\)-bimodule. If \(n-2\leq m\), the category of \((m-1,n)\)-bimodules is isomorphic to the category of \(\zeta-\zeta\)-bimodules._
Proof.: Let \((b,V)\in B_{m-1,n}\). We will construct \((\tilde{b},\tilde{W})\in B_{m,n}\) such that \(\tilde{W}\) contains an \(m-1\)-dimensional subset which is in bijection with \(V\).
* If \(m=n\), we take \(\tilde{b}=b\) and \(\tilde{W}\) as the set of vertices of \(b\).
* If \(m=n-1\), \(\tilde{b}\) is the tree obtained from \(b\) by adding a unary vertex on each leaf which is not above a vertex of \(V\). \(\tilde{W}\) is the union of \(V\) and all the unary vertices which have been added.
* If \(m=n-2\), let \((b^{\downarrow},V^{\downarrow})\) be the pair obtained from \((b,V)\) by applying the construction of Definition 3.2. Let \((\tilde{b}^{\downarrow},\tilde{W}^{\downarrow})\) be the tree obtained from \((b^{\downarrow},V^{\downarrow})\) by adding unary vertices as in the case \(m=n-1\). We take \((\tilde{b},\tilde{W})\) given by Lemma 3.4.
It is easy to see that the \(\zeta-\zeta\)-bimodule structure map induced by \((\tilde{b},\tilde{W})\) corresponds to the \((m-1,n)\)-bimodule structure induced by \((b,V)\).
### Pointed bimodules over \((m,n)\)-bimodules
**Definition 3.18**.: For \(0<m\leq n\) and \(A\) and \(B\) two \((m,n)\)-bimodules in \(\mathcal{E}\), a _pointed \(A-B\)-bimodule_\(C\) is given by
* a collection of objects \((C_{i})_{i\in I_{n}}\),
* for all \((b,W)\in B_{m,n}\) and partitions of \(W\) into \(V_{-}\), \(V_{+}\) and \(V\) such that there is an \(m-1\)-dimensional subset \(U\subset W\) satisfying \(U_{-}\subset V_{-}\) and \(U_{+}\subset V_{+}\), a map \[A_{s(V_{-})}\otimes C_{s(V)}\otimes B_{s(V_{+})}\to C_{t(b)},\]
satisfying associativity and unitality axioms.
**Lemma 3.19**.: _For \(0<m\leq n\) and \(A\) and \(B\) two \((m,n)\)-bimodules in \(\mathcal{E}\), there is an \(A-B\)-bimodule \(\alpha\) such that the category of pointed \(A-B\)-bimodules is isomorphic to the comma category of \(A-B\)-bimodules under \(\alpha\)._
Proof.: It is obvious that there is a forgetful functor from pointed \(A-B\)-bimodules to \(A-B\)-bimodules. Indeed, if one takes \(V=U\) in the definition of pointed \(A-B\)-bimodule, one gets the structure maps for \(A-B\)-bimodules. Let \(\alpha\) be the image of the initial pointed \(A-B\)-bimodule through this forgetful functor. If \(C\) is a pointed \(A-B\)-bimodule, then it is an \(A-B\)-bimodule equipped with a map from \(\alpha\). Now assume that \(C\) is an \(A-B\)-bimodule equipped with a map from \(\alpha\). Note that \(\alpha\), being a pointed \(A-B\)-bimodule, is equipped with maps \(A\to\alpha\gets B\) of collections, therefore \(C\) is also equipped with such maps. So \(C\) has a pointed bimodule structure given by the composite
\[A_{s(V_{-})}\otimes C_{s(V)}\otimes B_{s(V_{+})}\to A_{s(U_{-})}\otimes C_{s( U)}\otimes B_{s(U_{+})}\to C_{t(b)},\]
where the first map is given from the maps \(A\to C\gets B\) and the second is from the \(A-B\)-bimodule structure.
## 4. Cofinality
### Statement of the result
In this section, we fix \(0<m\leq n\). We will closely follow [9, Section 3]. Let \(\mathbf{Bimod}_{m,n}^{\bullet+\bullet}\) be the polynomial monad for triples \((A,B,C)\) where \(A\) and \(B\) are \((m,n)\)-bimodules and \(C\) is a pointed \(A-B\)-bimodule. Let \(\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet}\) be the polynomial monad for cospans \(A\to C\gets B\) of \((m,n)\)-bimodules.
**Theorem 4.1**.: _There is a homotopically cofinal map of polynomial monads_
\[f:\mathbf{Bimod}_{m,n}^{\bullet+\bullet}\to\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet}. \tag{8}\]
### Description of the map of polynomial monads
The polynomial monad \(\mathbf{Bimod}_{m,n}^{\bullet+\bullet}\) is given by
\[I_{n}\times\{A,B,C\}\xleftarrow{s}E_{m,n}^{\bullet+\bullet}\xrightarrow{p}B_{m,n}^{\bullet+\bullet}\xrightarrow{t}I_{n}\times\{A,B,C\} \tag{9}\]
where elements of \(B_{m,n}^{\bullet+\bullet}\) are pairs \((b,W)\in B_{m,n}\), equipped with a label in \(\{A,B,C\}\) called _target label_ and for each white vertex a label in \(\{A,B,C\}\) called _source label_, subject to the following restrictions. If the target label is \(A\) (resp. \(B\)), then all the source labels are also \(A\) (resp. \(B\)). If the target label is \(C\), there must be an \(m-1\)-dimensional subset \(V\subset W\) such that all the vertices in \(V_{-}\) have label \(A\) and all the vertices in \(V_{+}\) have label \(B\). \(E_{m,n}^{\bullet+\bullet}\) is the set of pairs \((b,W)\) of \(B_{m,n}^{\bullet+\bullet}\) with one white vertex marked. Note that there is a projection from 9 to 5. The
source map is given by the source label of the marked vertex and an element in \(I_{n}\) thanks to this projection. Similarly, the target map is given by the target label and an element in \(I_{n}\) thanks to this projection. The middle map forgets the marking. Multiplication is given by insertion of a tree inside a vertex, if the source and target labels correspond.
The description of the polynomial monad \(\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet}\) is completely similar. The difference is that we drop the condition that there must be an \(m-1\)-dimensional subset \(V\subset W\) such that all the vertices in \(V_{\!-}\) have label \(A\) and all the vertices in \(V_{\!+}\) have label \(B\). The map \(f\) in \(8\) is given by inclusion of sets.
### Construction of a smooth functor
For a functor \(F:\mathcal{X}\rightarrow\mathcal{Y}\) between categories and \(y\in\mathcal{Y}\), we will write \(F_{y}\) for the fibre of \(F\) over \(y\).
**Definition 4.2**.: A functor \(F:\mathcal{X}\rightarrow\mathcal{Y}\) is _smooth_ if, for all \(y\in\mathcal{Y}\), the canonical functor
\[F_{y}\to y/F\]
induces a weak equivalence between nerves.
Let us state the Cisinski lemma [8, Proposition 5.3.4]:
**Lemma 4.3**.: _A functor \(F:\mathcal{X}\rightarrow\mathcal{Y}\) is smooth if and only if for all maps \(f_{1}:y_{0}\to y_{1}\) in \(\mathcal{Y}\) and objects \(x_{1}\) in \(\mathcal{X}\) such that \(F(x_{1})=y_{1}\), the nerve of the lifting category of \(f_{1}\) over \(x_{1}\), whose objects are arrows \(f:x\to x_{1}\) such that \(F(f)=f_{1}\) and morphisms are commutative triangles_
_with \(g\) a morphism in \(F_{y_{0}}\), is contractible._
We have a commutative square of polynomial monads
where \(u\) is given by projection. This square induces a morphism of algebras [4, Proposition 4.7]
\[F:(\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet})^{ \mathbf{Bimod}_{m,n}^{\bullet+\bullet}}\to u^{*}\left((\mathbf{Bimod}_{m,n} )^{\mathbf{Bimod}_{m,n}}\right). \tag{10}\]
To simplify the notations, we will write \(\mathcal{X}\) and \(\mathcal{Y}\) for the domain and codomain of \(F\) respectively.
### Proof of cofinality
**Lemma 4.4**.: _For a tree \(b\in B_{m}\), let us consider the category \(\mathcal{C}(b)\) whose objects are decorations of the vertices of \(b\) by labels in \(\{A,B,C\}\), such that there is an \(m-1\)-dimensional subset for which the vertices below have label \(A\) and above have label \(B\). The morphisms turn vertices with label \(A\) or \(B\) to vertices with label \(C\). This category has contractible nerve._
Proof.: We proceed by induction on the number of vertices of \(b\). If \(b\) has no vertices, that is the free living edge, then \(\mathcal{C}(b)\) is the terminal category. Now assume \(b\) has at least one vertex. Let \(\mathcal{A}\) be the full subcategory of \(\mathcal{C}(b)\) containing the trees for which the root vertex has label \(A\). Let \(\mathcal{B}\) be the full subcategory of \(\mathcal{C}(b)\) containing the trees for which all the vertices which are not the root have label \(B\). The union of \(\mathcal{A}\) and \(\mathcal{B}\) is the whole category, their intersection is the terminal category, and \(\mathcal{B}\) consists of a cospan. The category \(\mathcal{A}\) is isomorphic to \(\prod_{e\in E}\mathcal{C}(b(e))\), where \(E\) is the set of edges directly above the root and \(b(e)\) is the maximal subtree of \(b\) having \(e\) as root. So the nerve of \(\mathcal{A}\) is contractible by induction, the nerve of \(\mathcal{B}\) is contractible since \(\mathcal{B}\) is a cospan, and gluing them along their contractible intersection shows that the nerve of \(\mathcal{C}(b)\) is contractible.
Note that an object of \(\mathcal{X}\) is an element of the set of operations of the polynomial monad \(\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet}\), which in particular gives us a pair \((b,W)\in B_{m,n}\).
**Lemma 4.5**.: _Let \(f_{1}:y_{0}\to y_{1}\) be a map in \(\mathcal{Y}\) and \(x_{1}=(b_{1},W_{1})\in\mathcal{X}\) such that \(F(x_{1})=y_{1}\). If \(W_{1}\) is a singleton, then the nerve of the lifting category \(\mathcal{X}(x_{1},f_{1})\) of \(f_{1}\) over \(x_{1}\) is contractible._
Proof.: Let us prove that the lifting category is trivial or isomorphic to a category of Lemma 4.4. First, we will describe the functor \(F:\mathcal{X}\to\mathcal{Y}\) more explicitly. As mentioned above, the objects of \(\mathcal{X}\) are operations of the polynomial monad \(\mathbf{Bimod}_{m,n}^{\bullet\rightarrow\bullet\leftarrow\bullet}\). So, they are pairs \((b,W)\in B_{m,n}\), equipped with a target label and, for each white vertex, a source label in \(\{A,B,C\}\) such that if the target label is \(A\) (resp. \(B\)), then all the source labels are also \(A\) (resp. \(B\)). The morphisms are given by nested trees, as in Subsection 3.4. It is important to note that there are morphisms which turn vertices with label \(A\) or \(B\) to vertices with label \(C\). The set of objects of \(\mathcal{Y}\) is just \(B_{m,n}\). Its set of morphisms can again be described in terms of nested trees. The functor \(F\) forgets all the labels.
Now let us describe the lifting category. If \(W_{1}\) is a singleton, \(x_{1}\) only depends on the label of the unique element of \(W_{1}\). Let us write \(y_{0}=(b_{0},W_{0})\in B_{m,n}\). The lifting category has as objects the pairs \((b,W)\in\mathcal{X}\) together with a morphism to \(x_{1}\). We must have \((b_{0},W_{0})=(b,W)\) as elements of \(B_{m,n}\), so the only degree of freedom is in the labels of the white vertices. Since there is a morphism to \(x_{1}\), it means by definition of the category \(\mathcal{X}\) that there is an \(m-1\)-dimensional subset of \(W\) for which the vertices below have label \(A\) and above have label \(B\). The morphisms in the lifting category can only be the morphisms which turn vertices with label \(A\) or \(B\) to vertices with label \(C\). If the label of \(W_{1}\) is \(A\) or \(B\), then the lifting category is the terminal category. So the only non-trivial case is when the label of \(W_{1}\) is \(C\). If \(W_{0}\) is a singleton, then the lifting category consists of a cospan. Otherwise, by definition of \(m\)-dimensional subset of vertices, there is \(b^{\prime}\in B_{m}\) obtained by iterating Construction 3.2 from \((b_{0},W_{0})\) such that \(p^{-1}(b_{0})\) is in bijection with \(W_{0}\). Then the lifting category is isomorphic to \(\mathcal{C}(b^{\prime})\). This concludes the proof.
**Lemma 4.6**.: _The functor \(F\) of 10 is smooth._
Proof.: The argument is the same as in [9, Lemma 3.15]. Let \(f_{1}:y_{0}\to y_{1}\) be a map in \(\mathcal{Y}\) and \(x_{1}=(b_{1},W_{1})\) be an object in \(\mathcal{X}\) such that \(F(x_{1})=y_{1}\). We want to prove that the lifting category of \(f_{1}\) over \(x_{1}\) has contractible nerve. For a white vertex \(v\in W_{1}\), let \(x_{1}^{v}\) be given by the corolla in \(B_{n}\) corresponding to \(v\). Let \(y_{1}^{v}\) be the same corolla but without the labels and \(f_{1}^{v}:y_{0}^{v}\to y_{1}^{v}\) be the restriction of \(f_{1}\) for this corolla. The lifting category of \(f_{1}\) over \(x_{1}\) is isomorphic to the product over the
white vertices \(v\) of the lifting categories of \(f_{1}^{v}\) over \(x_{1}^{v}\). It has therefore contractible nerve thanks to Lemma 4.5. We conclude the proof using Lemma 4.3.
Proof of Theorem 4.1.: The functor \(F\) of \(10\) is smooth according to Lemma 4.6. Its fibres have contractible nerve, since they have a terminal object, which is the object where all the source labels are the same as the target label. Using Quillen's Theorem A, we deduce that \(F\) induces a weak equivalence between nerves. Again, the nerve of \(\mathcal{Y}\) is contractible since this category has a terminal object. So the nerve of \(\mathcal{X}\) is contractible, which concludes the proof.
## 5. Delooping theorems
### General delooping for \((m,n)\)-bimodules
For \(0\leq m\leq n\), we will write \(\mathrm{Bimod}_{m,n}\) for the category of \((m,n)\)-bimodules. We will write \(\zeta\) for the terminal object in this category.
**Definition 5.1**.: For \(0\leq m\leq n\), we will say \(X\in\mathrm{Bimod}_{m,n}\) is _multiplicative_ if it is equipped with a map \(\zeta\to X\).
For \(0<m\leq n\) fixed, let \(\kappa\) be the composite \(I_{m-1}\to B_{m}\to B_{n}\to I_{n}\) where the first map picks the free living edge, the second map is obtained by applying the unit \(n-m\) times, the last map is the target map \(t_{n}\).
The objective for the rest of this paper is to determine whether, for a multiplicative \((m,n)\)-bimodule \(X\), there is a fibration sequence
\[\Omega\mathrm{Map}_{\mathrm{Bimod}_{m,n}}(\zeta,u^{*}X)\to\mathrm{Map}_{\mathrm{Bimod}_{m-1,n}}(\zeta,v^{*}X)\to\prod_{i\in I_{m-1}}X_{\kappa(i)}, \tag{11}\]
where \(u^{*}\) and \(v^{*}\) are the appropriate forgetful functors.
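For instance, for \((m,n)=(2,2)\) the terminal object \(\zeta\) is the terminal non-symmetric operad \(Ass\), the set \(I_{1}\) is a singleton and \(\kappa\) picks out the arity one colour \(1\in I_{2}\), so 11 specialises to
\[\Omega\mathrm{Map}_{\mathrm{Bimod}_{2,2}}(Ass,u^{*}X)\to\mathrm{Map}_{\mathrm{Bimod}_{1,2}}(Ass,v^{*}X)\to X_{1}\]
(compare Remark 5.7 below).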
**Lemma 5.2**.: _The following categories are left proper:_
* _the category of_ \((m,n)\)_-bimodules,_
* _the category of triples_ \((A,B,C)\)_, where_ \(A\) _and_ \(B\) _are_ \((m,n)\)_-bimodules and_ \(C\) _is an_ \(A-B\)_-bimodule,_
Proof.: We want to prove that the polynomial monad for \((m,n)\)-bimodules is tame [3, Definition 6.19]. By definition, we have to prove that the classifier for semi-free coproducts is a coproduct of categories with terminal object. The set of objects for the classifier is the set \(B_{m,n}\) of Definition 3.9, where the white vertices are also coloured with \(X\) or \(K\). The morphisms can be given by nested trees, as it was done in Subsection 3.4. For each vertex of the nested tree, if the vertex is \(X\)-coloured, the tree inside it can be any tree with all vertices \(X\)-coloured. If the vertex is \(K\)-coloured, the tree inside it must be the corolla with the only vertex \(K\)-coloured. When \(m=n\), the local terminal objects are trees in \(B_{m,n}\) with white vertices coloured by \(X\) and \(K\) such that adjacent vertices have different colours, and such that vertices incident to the root or to the leaves are \(X\)-coloured, as in [3, Subsection 9.2]. If \(m<n\), let us pick an object of the classifier, that is a tree \((b,W)\in B_{m,n}\) with white vertices coloured by \(X\) and \(K\). Let \((b^{\downarrow},W^{\downarrow})\) be the tree obtained by applying the construction of Definition 3.2, with white vertices coloured by \(X\) and \(K\) as the corresponding vertices of \(b\). A morphism from \((b,W)\) corresponds to the contraction of edges and insertion of unary vertices in \(b^{\downarrow}\). If \(m=n-1\), the tree \((b,W)\in B_{m,n}\) is again terminal when in \((b^{\downarrow},W^{\downarrow})\), the \(X\)-vertices and \(K\)-vertices alternate. If \(m<n-1\), it is terminal when, in \((b^{\downarrow},W^{\downarrow})\), there are no vertices
above an \(X\)-vertex and all vertices which are below an \(X\)-vertex are also below a \(K\)-vertex.
Now let us prove that the polynomial monad for triples \((A,B,C)\), where \(A\) and \(B\) are \((m,n)\)-bimodules and \(C\) is an \(A-B\)-bimodule, is quasi-tame [6, Definition 4.11]. For \(m=n\) and \(m=n-1\), this can be done using the fact that the polynomial monad for \((m,n)\)-bimodules is tame and applying [6, Theorem 4.22]. The strategy is completely similar to the proof of [6, Proposition 4.26]. If \(m<n-1\), the strategy is slightly different. Unfortunately, the subcategory of trees which do not have vertices above an \(X\)-vertex and such that all vertices which are below an \(X\)-vertex are also below a \(K\)-vertex, is not always discrete, because there might be non-trivial morphisms which turn vertices with label \(A\) or \(B\) to vertices with label \(C\). However, it is still a final subcategory. Indeed, the inclusion functor of this subcategory has a left adjoint given by the functor which automatically contracts the necessary edges. It also has a contractible nerve, because the only degree of freedom is in the labels of the vertices, so we get a category as in Lemma 4.4.
Note that any tame polynomial monad is also quasi-tame [6, Proposition 4.20]. We get the conclusion from [6, Theorem 4.17], which states that the category of algebras over a quasi-tame polynomial monad is left proper.
**Lemma 5.3**.: _Let \(D^{0}\) be a cofibrant replacement of \(\zeta\). There is a Quillen equivalence between the category of pointed \(D^{0}-D^{0}\)-bimodules and the category of pointed \(\zeta-\zeta\)-bimodules._
Proof.: Let \(\mathcal{C}\) be the category of pairs of \((m,n)\)-bimodules and \(\operatorname{PBimod}_{-,-}:\mathcal{C}^{op}\to\operatorname{CAT}\) the functor which sends a pair \((A,B)\) to the category of pointed \(A-B\)-bimodules. The Grothendieck construction over this functor is left proper according to Lemma 5.2. The desired Quillen adjunction is induced by the unique map \(\tau:(D^{0},D^{0})\to(\zeta,\zeta)\) in \(\mathcal{C}\). To prove that it is indeed a Quillen equivalence, we need to prove that the unique map \(0\to\tau^{*}(0)\) is a weak equivalence. The initial \(D^{0}-D^{0}\)-bimodule is the left adjoint of the projection from the Grothendieck construction to the base \(\mathcal{C}\) applied to the pair \((D^{0},D^{0})\). The projection is the restriction functor induced by a map of polynomial monads, and the initial \(D^{0}-D^{0}\)-bimodule can be computed as the nerve of the classifier induced by this map. The objects of this classifier are pairs \((b,W)\in B_{m,n}\) with white vertices labelled with \(A\) or \(B\). There must be an \(m-1\)-dimensional subset \(V\subset W\) such that all the vertices in \(V_{-}\) have label \(A\) and all the vertices in \(V_{+}\) have label \(B\). The morphisms can be given by nested trees, as it was done in Subsection 3.4. The trees inside a nested tree must have all vertices with the same label. If \(m=n\) or \(m=n-1\), this category is a coproduct of categories with terminal objects. A typical terminal object for \(m=n\) has the root vertex labelled with \(A\) and each edge above the root vertex is the root of a corolla with label \(B\). For \(m=n-1\), an object given by \((b,W)\in B_{m,n}\) is terminal when the tree \((b^{\downarrow},W^{\downarrow})\) has the same description as for the case \(m=n\). For \(m<n-1\), the classifier is not a coproduct of categories with terminal object. Let us consider the subcategory of objects given by \((b,W)\in B_{m,n}\) such that in \(b^{\downarrow}\), there are no black vertices above a white vertex and a black vertex is below an \(A\)-vertex if and only if it is also below a \(B\)-vertex. It is a final subcategory because the inclusion functor has a left adjoint. It is a coproduct of categories with initial object because the only degree of freedom for the morphisms is in the colours of the vertices. The local initial objects are when the number of black vertices is maximal.
The initial \(\zeta-\zeta\)-bimodule is discrete; it is given by the set of connected components of the initial \(D^{0}-D^{0}\)-bimodule. This proves that the unique map \(0\to\tau^{*}(0)\) is a weak equivalence. Therefore the conditions of [5, Theorem 3.22] are satisfied, which means that \(\tau\) indeed induces a Quillen equivalence.
Let \(\alpha\) be the \(\zeta-\zeta\)-bimodule such that the category of pointed \(\zeta-\zeta\)-bimodules is equivalent to the comma category of \(\zeta-\zeta\)-bimodules under \(\alpha\), given by Lemma 3.19.
**Lemma 5.4**.: _For any multiplicative \((m,n)\)-bimodule \(X\), there is a fibration sequence_
\[\Omega\mathrm{Map}_{\mathrm{Bimod}_{m,n}}(\zeta,u^{*}X)\to\mathrm{Map}_{ \mathrm{Bimod}_{\zeta,\zeta}}(\zeta,v^{*}X)\to\mathrm{Map}_{\mathrm{Bimod}_{ \zeta,\zeta}}(\alpha,v^{*}X),\]
_where \(\mathrm{Bimod}_{\zeta,\zeta}\) is the category of \(\zeta-\zeta\)-bimodules._
Proof.: According to Lemma 5.2, \(\mathrm{Bimod}_{m,n}\) is left proper. We can therefore apply [9, Theorem 4.5] to get the delooping
\[\Omega\mathrm{Map}_{\mathrm{Bimod}_{m,n}}\left(\zeta,u^{*}X\right)\to\mathrm{ Map}_{S^{0}/\mathrm{Bimod}_{m,n}}\left(\zeta,h^{*}X\right), \tag{12}\]
where \(S^{0}:=D^{0}\amalg D^{0}\).
The map \(f\) of polynomial monads \(8\) induces a Quillen adjunction \(f_{!}\dashv f^{*}\) between categories of algebras. Note that there is a Quillen adjunction \(g_{!}\dashv g^{*}\) between \(\mathrm{P}\mathrm{Bimod}_{D^{0},D^{0}}\) and \(S^{0}/\mathrm{Bimod}_{m,n}\), such that \(f_{!}(D^{0},D^{0},C)=(D^{0},D^{0},g_{!}(C))\) and \(f^{*}(D^{0},D^{0},C)=(D^{0},D^{0},g^{*}(C))\). Thanks to Theorem 4.1, \(f\) is homotopically cofinal. According to [9, Remark 4.8], this means that \(f_{!}\) is _a left cofinal Quillen functor_, that is, it preserves cofibrant replacements of the terminal objects [9, Definition 4.7]. Therefore, \(g_{!}\) is also left cofinal. We deduce by adjunction that there is a weak equivalence
\[\mathrm{Map}_{S^{0}/\mathrm{Bimod}_{m,n}}\left(\zeta,h^{*}X\right)\to\mathrm{ Map}_{\mathrm{P}\mathrm{Bimod}_{D^{0},D^{0}}}\left(\zeta,g^{*}h^{*}X\right). \tag{13}\]
According to Lemma 5.3, there is a weak equivalence
\[\mathrm{Map}_{\mathrm{P}\mathrm{Bimod}_{\zeta,\zeta}}(\zeta,w^{*}X)\to\mathrm{ Map}_{\mathrm{P}\mathrm{Bimod}_{D^{0},D^{0}}}(\zeta,g^{*}h^{*}X). \tag{14}\]
Using Lemma 5.2, \(\mathrm{Bimod}_{\zeta,\zeta}\) is left proper. According to Lemma 3.19, \(\mathrm{P}\mathrm{Bimod}_{\zeta,\zeta}\) is isomorphic to \(\alpha/\mathrm{Bimod}_{\zeta,\zeta}\). This means that we can apply [9, Theorem 4.13] and [19, Proposition 2.7] to get the fibration sequence
\[\mathrm{Map}_{\mathrm{P}\mathrm{Bimod}_{\zeta,\zeta}}\left(\zeta,w^{*}X\right) \to\mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}}\left(\zeta,v^{*}X\right)\to \mathrm{Map}_{\mathrm{Bimod}_{\zeta,\zeta}}(\alpha,v^{*}X). \tag{15}\]
We get the desired result by combining 12, 13, 14 and 15.
### The cases \(m=n\) and \(m=n-1\)
**Lemma 5.5**.: _Let \(0<m\leq n\) and let \((A,B,C)\) be a triple where \(A\) and \(B\) are \((m,n)\)-bimodules in \((\mathcal{E},\otimes,e)\) and \(C\) is an \(A-B\)-bimodule. Let us assume that \(m=n\) or \(m=n-1\). Then \(C\) is pointed if and only if it is equipped with a map \(e\to C_{\kappa(i)}\) for all \(i\in I_{m-1}\)._
Proof.: First let us assume that \(C\) is pointed. Let \(\lambda\) be the composite \(I_{m-1}\to B_{m}\to B_{n}\) where the first map picks the free living edge and the second map is obtained by applying the unit \(n-m\) times. For \(i\in I_{m-1}\), \((\lambda(i),\varnothing)\), where \(\varnothing\) is the empty set, is \(m\)-dimensional. The pointed \(A-B\)-bimodule map induced by \((\lambda(i),\varnothing)\) is a map \(e\to C_{\kappa(i)}\).
Now let us assume that \(C\) is equipped with a map \(e\to C_{\kappa(i)}\) for all \(i\in I_{m-1}\). Let \((b,W)\in B_{m,n}\) and partitions of \(W\) into \(V_{-}\), \(V_{+}\) and \(V\) such that there is an \(m-1\)-dimensional subset \(U\subset W\) satisfying \(U_{-}\subset V_{-}\) and \(U_{+}\subset V_{+}\). We want to construct a map \(7\).
If \(m=n\), according to Remark 3.8, each path in \(b\) from the root to a leaf meets vertices in \(V_{-}\), then at most one vertex in \(V\), then vertices in \(V_{+}\). Let \(\tilde{b}\) be the tree obtained from \(b\) by adding a unary vertex on each edge between a vertex in \(V_{-}\) and a vertex in \(V_{+}\). Let \(\tilde{V}\) and \(\tilde{W}\) be the sets \(V\) and \(W\), respectively, plus the set of unary vertices which have been added. Then \((\tilde{b},\tilde{W})\in B_{m,n}\) and \(\tilde{V}\subset\tilde{W}\) is \(m-1\)-dimensional. The desired map is given by the composite
\[A_{s(V_{-})}\otimes C_{s(V)}\otimes B_{s(V_{+})}\to A_{s(\tilde{V}_{-})} \otimes C_{s(\tilde{V})}\otimes B_{s(\tilde{V}_{+})}\to C_{t(b)}, \tag{16}\]
where the first map is using the maps \(e\to C_{\kappa(i)}\) for \(i\in I_{m-1}\) and the second map is from the \(A-B\)-bimodule structure.
If \(m=n-1\), let \((b^{\downarrow},W^{\downarrow})\) be the tree obtained by applying the construction of Definition 3.2 to \((b,W)\). We can construct \((\tilde{b}^{\downarrow},\tilde{W}^{\downarrow})\) as in the case \(m=n\). Let \((\tilde{b},\tilde{W})\) be given by Lemma 3.4. The desired map is given by the composite 16 again.
**Theorem 5.6**.: _We do have the fibration sequence 11 if \(m=n\) or \(m=n-1\)._
Proof.: We want to prove that the desired fibration sequence is equivalent to the fibration sequence of Lemma 5.4. The first terms of both fibration sequences are the same. According to Lemma 3.17, the category \(\mathrm{Bimod}_{\zeta,\zeta}\) of \(\zeta-\zeta\)-bimodules is isomorphic to the category \(\mathrm{Bimod}_{m-1,n}\), so the second terms are also equivalent. We deduce from Lemma 5.5 that \(\alpha\) is the image of the terminal \(I_{m-1}\)-collection through the left adjoint of the forgetful functor from \(\zeta-\zeta\)-bimodules to \(I_{m-1}\)-collections. So, by an adjunction argument, the third terms are also equivalent.
**Remark 5.7**.: Recall from Remark 3.14 what \((m,n)\)-bimodules are in the case \(n=2\). We deduce that Theorem 5.6 in this case gives us the Turchin/Dwyer-Hess theorem. Interestingly, the fibration sequence 11 does not seem to hold in general for \(m<n-1\) without extra assumptions. The rest of this paper will be devoted to investigating the case \((m,n)=(1,3)\).
### Dendroidal category for planar trees \(\Omega_{p}\)
Let \(\Omega_{p}\) be the version of the dendroidal category for planar trees [18, Definition 2.2.1]. The objects are isomorphism classes of planar trees and the morphisms are generated by:
* _inner face maps_ of the form \(\partial_{e}:T/e\to T\), where \(e\) is an internal edge of \(T\) and \(T/e\) is the tree obtained from \(T\) by contracting \(e\);
* _outer face maps_ of the form \(\partial_{v}:T/v\to T\), where \(v\) is a vertex of \(T\), possibly the root, with exactly one inner edge attached to it and \(T/v\) is the tree obtained from \(T\) by removing the vertex \(v\) and all the outer edges incident to it;
* _degeneracy maps_ of the form \(\sigma_{v}:T\to T\backslash v\), where \(v\) is a unary vertex of \(T\) and \(T\backslash v\) is the tree obtained from \(T\) by removing the vertex \(v\) and merging the two edges incident to it into one.
labels correspond bijectively to a connected component since it is invariant in each connected component. They also correspond bijectively with the set of strata of \(t(b)\). This proves that \(\alpha\) is indeed the initial pointed \(\zeta-\zeta\)-bimodule.
### Functors equipped with retractions
**Definition 5.12**.: We will say that a morphism in \(\Omega_{p}\)_consists of blowing up a vertex to add a trunk_ when it is an inner face map \(\partial_{e}:T/e\to T\), where the vertex directly above \(e\) has no inputs:
**Definition 5.13**.: We will say that a functor \(\mathcal{K}:\Omega_{p}\to\mathrm{Top}\) is _equipped with retractions_ if for all morphisms \(\partial_{e}:T/e\to T\) in \(\Omega_{p}\) consisting of blowing up a vertex to add a trunk, there is a retraction \(r_{T}:\mathcal{K}(T)\to\mathcal{K}(T/e)\) of \(\mathcal{K}(\partial_{e})\). Moreover, the retractions are natural, that is, for all \(h:S\to T\) in \(\Omega_{p}\), the following square commutes:
### Computation of homotopy limit
For \(n\geq 0\), we will write \([n]\) for the set \(\{0,\ldots,n\}\) and \(\mathbf{P}[n]\) for the category of subsets of \([n]\) and inclusions.
**Definition 5.14**.: For \(n\geq 0\), let \(\mathbf{C}[n]\) be the full subcategory of
\[\{A\stackrel{{\sigma}}{{\leftarrow}}B\stackrel{{ \tau}}{{\rightarrow}}C\stackrel{{\upsilon}}{{\leftarrow}}D \}\times\mathbf{P}[n]\]
of pairs \((l,S)\) where
* \(S=[i]\) for some \(i\in[n]\) if \(l=A\),
* \(S\) is non-empty if \(l\in\{B,C\}\),
* \(S\) is empty if \(l=D\).
For example, here is a picture of \(\mathbf{C}[1]\):
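Concretely, unwinding Definition 5.14 for \(n=1\), the objects of \(\mathbf{C}[1]\) are the nine pairs
\[(A,\{0\}),\ (A,\{0,1\}),\quad(B,\{0\}),\ (B,\{1\}),\ (B,\{0,1\}),\quad(C,\{0\}),\ (C,\{1\}),\ (C,\{0,1\}),\quad(D,\varnothing),\]
the morphisms being induced by \(\sigma\), \(\tau\), \(\upsilon\) and the inclusions of subsets.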
Recall that for \(n\geq 0\), the _topological \(n\)-simplex_ is the topological space:
\[\Delta^{n}=\left\{(x_{0},\ldots,x_{n})\in\mathbb{R}^{n+1}\middle|\sum_{i=0}^{n} x_{i}=1\text{ and }x_{i}\geq 0\text{ for }0\leq i\leq n\right\}\]
**Lemma 5.15**.: _For \(n\geq 0\), there is a canonical isomorphism between the realisation of the nerve of \(\mathbf{C}[n]\) and \(\Delta^{n+1}\)._
Proof.: We describe a map \(f:ob(\mathbf{C}[n])\to\Delta^{n+1}\), where \(ob(\mathbf{C}[n])\) is the set of objects of \(\mathbf{C}[n]\). Let \((e_{0},\ldots,e_{n+1})\) be the standard basis of \(\mathbb{R}^{n+2}\). For \(S\subset[n]\), let \(\max(S)\) be the maximum and \(|S|\) be the number of elements of \(S\). We define
\[f(l,S)=\begin{cases}e_{\max(S)}&\text{if }l=A,\\ \frac{2}{3|S|}\sum_{i\in S}e_{i}+\frac{1}{3}e_{n+1}&\text{if }l=B,\\ \frac{1}{3|S|}\sum_{i\in S}e_{i}+\frac{2}{3}e_{n+1}&\text{if }l=C,\\ e_{n+1}&\text{if }l=D.\end{cases}\]
It is easy to check that this map induces the desired isomorphism.
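For instance, for \(n=0\) the category \(\mathbf{C}[0]\) is the zigzag \((A,\{0\})\leftarrow(B,\{0\})\rightarrow(C,\{0\})\leftarrow(D,\varnothing)\), whose nerve realises to an interval subdivided into three segments, and the map \(f\) sends these four objects to
\[e_{0},\qquad\tfrac{2}{3}e_{0}+\tfrac{1}{3}e_{1},\qquad\tfrac{1}{3}e_{0}+\tfrac{2}{3}e_{1},\qquad e_{1}\]
respectively, exhibiting the isomorphism \(|N(\mathbf{C}[0])|\cong\Delta^{1}\).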
**Definition 5.16**.: Let \(\Omega_{p}^{*}\) be the category of elements of the presheaf \(\alpha:\Omega_{p}\to\operatorname{Set}\) which sends a tree to its set of strata. So \(\Omega_{p}^{*}\) is the category of trees with a chosen stratum.
In the following lemma, we will consider \([n]\) as a category, where there is a morphism \(i\to j\) if \(i<j\). It is a subcategory of \(\mathbf{C}[n]\) through the inclusion \(i\mapsto(A,[i])\). Also, we will write \(\gamma_{0}\in\Omega_{p}^{*}\) for the trunk, with the unique choice of stratum.
**Lemma 5.17**.: _For \(n\geq 0\), any functor \(T:[n]\to\Omega_{p}^{*}\) can be naturally extended to a functor \(\bar{T}:\mathbf{C}[n]\to\Omega_{p}^{*}\) such that, for \(S\subset[n]\) non-empty, the map \(\bar{T}(\tau,id_{S}):\bar{T}(B,S)\to\bar{T}(C,S)\) consists of blowing up a vertex to add a trunk and \(\bar{T}(D,\varnothing)=\gamma_{0}\)._
Proof.: Let \(T:[n]\to\Omega_{p}^{*}\) be a functor. We will extend it to a functor \(\bar{T}:\mathbf{C}[n]\to\Omega_{p}^{*}\). By assumption, we should have \(\bar{T}(A,[i])=T(i)\) for \(i\in[n]\), and \(\bar{T}(D,\varnothing)\) should be the trunk. It remains to define \(\bar{T}(B,S)\) and \(\bar{T}(C,S)\) for \(S\subset[n]\) non-empty. For a non-trivial map \(i\to j\) in \([n]\), using Remark 5.8, there is \(T_{ij}\in\Omega_{p}^{*}\) such that \(T(i)\to T(j)\) factorises as \(T(i)\twoheadrightarrow T_{ij}\to T(j)\), where the first map is active and the second map is inert. Observe that we can add a circle \(c_{ij}\) on the tree \(T(j)\) such that the tree inside this circle is \(T_{ij}\):
We define \(\bar{T}(B,S)\) as the tree obtained from \(T(\max(S))\) by contracting all the internal edges except the ones that are crossed by a circle \(c_{ij}\) for \(i,j\in S\) and \(j=\max(S)\). Note that in particular, \(\bar{T}(B,\{i\})\) is the tree obtained from \(T(i)\) by contracting all the internal edges. In the case of the free living edge, we add a unary vertex. For example, if we start with \(T:[2]\to\Omega_{p}^{*}\) as in the previous picture, we will get the following trees \(\bar{T}(B,S)\) for \(S\subset[2]\) non-empty (forgetting about the
chosen strata):
Proof.: Let us write \(\mathcal{L}=\pi^{*}(\mathcal{K})\) and let \(\gamma_{0}\in\Omega_{p}^{*}\) be the trunk. We will construct two maps
\[q:\operatorname{holim}\mathcal{L}\to\mathcal{L}(\gamma_{0}),\qquad j:\mathcal{L}(\gamma_{0})\to\operatorname{holim}\mathcal{L},\]
and prove that they are homotopy inverses of each other. The map \(q\) is the projection which sends \(\beta\in\operatorname{holim}\mathcal{L}\) to the point given by the map \(\beta(\gamma_{0}):\Delta^{0}\to\mathcal{L}(\gamma_{0})\) of Lemma 5.18. We can construct the map \(j\) using Lemma 5.17 and the assumption that \(\mathcal{K}\) is equipped with retractions. Let \(\xi\in\mathcal{L}(\gamma_{0})\). A functor \(T:[n]\to\Omega_{p}^{*}\) can be extended to \(\bar{T}:\mathbf{C}[n]\to\Omega_{p}^{*}\) according to Lemma 5.17. We have the zigzag
\[\gamma_{0}=\bar{T}(D,\varnothing)\to\bar{T}(C,[n])\leftarrow\bar{T}(B,[n])\to\bar{T}(A,[n])=T(n) \tag{18}\]
where the map in the middle consists of blowing up a vertex to add the trunk. Applying \(\mathcal{L}\) to this zigzag gives us another zigzag, where the map in the middle has a retraction, since \(\mathcal{K}\) is equipped with retractions. Then \(j(\xi)(T)\) is given by the composite \(\Delta^{n}\to\mathcal{L}(\gamma_{0})\to\mathcal{L}T(n)\), where the first map is constant at \(\xi\) and the second map is the composite obtained by applying \(\mathcal{L}\) to 18 and replacing the map in the middle by its retraction. The composite \(qj\) is the identity. We will now prove that the composite \(jq\) is also homotopic to the identity.
Let us fix \(\beta\in\operatorname{holim}\mathcal{L}\) and a functor \(T:[n]\to\Omega_{p}^{*}\), which can again be extended to \(\bar{T}:\mathbf{C}[n]\to\Omega_{p}^{*}\). We will describe a map \(H(\beta)(T):\Delta^{n+1}\to\mathcal{L}T(n)\). Let \(i:[n+1]\to\mathbf{C}[n]\) be non-degenerate, that is, \(i(f)\neq id\) if \(f\neq id\). We define \(H(\beta)(T)|_{i}:\Delta^{n+1}\to\mathcal{L}T(n)\) as follows. Using Lemma 5.18, we can associate to the functor \(\bar{T}i\) a map \(\beta(\bar{T}i):\Delta^{n+1}\to\mathcal{L}\bar{T}i(n+1)\). Note that since \(i\) is non-degenerate, \(i(n+1)=(A,[n])\) or \(i(n+1)=(C,[n])\). So \(\bar{T}i(n+1)=T(n)\) or \(\bar{T}i(n+1)=\bar{T}(C,[n])\). We define
\[H(\beta)(T)|_{i}=\begin{cases}\beta(\bar{T}i)&\text{if }i(n+1)=(A,[n]),\\ \mathcal{L}\bar{T}(\sigma,id_{[n]})\mathcal{L}\bar{T}(\tau,id_{[n]})^{-1}\beta (\bar{T}i)&\text{if }i(n+1)=(C,[n]),\end{cases}\]
where \(\mathcal{L}\bar{T}(\tau,id_{[n]})^{-1}\) is the retraction of \(\mathcal{L}\bar{T}(\tau,id_{[n]})\). According to Lemma 5.15, \(i\) induces a canonical map \(|N|(i):\Delta^{n+1}\to\Delta^{n+1}\). We can define the restriction of \(H(\beta)(T)\) to the image of \(|N|(i)\) to be given by \(H(\beta)(T)|_{i}\). It remains to check that \(H(\beta)(T)\) is well-defined. Let \(i_{A},i_{C}:[n+1]\to\mathbf{C}[n]\) be two non-degenerate functors such that \(i_{A}(n+1)=(A,[n])\), \(i_{C}(n+1)=(C,[n])\) and \(i:=i_{A}d_{n+1}=i_{C}d_{n+1}\), where \(d_{n+1}:[n]\to[n+1]\) is the inclusion. Note that \(i_{A}(n\to n+1)=(\sigma,id_{[n]})\) and \(i_{C}(n\to n+1)=(\tau,id_{[n]})\). Using the naturality 17, we have
\[H(\beta)(T)|_{i_{A}}\delta_{n+1} =\beta(\bar{T}i_{A})\delta_{n+1},\] \[=\mathcal{L}\bar{T}(\sigma,id_{[n]})\beta(\bar{T}i),\] \[=\mathcal{L}\bar{T}(\sigma,id_{[n]})\mathcal{L}\bar{T}(\tau,id_{ [n]})^{-1}\mathcal{L}\bar{T}(\tau,id_{[n]})\beta(\bar{T}i),\] \[=\mathcal{L}\bar{T}(\sigma,id_{[n]})\mathcal{L}\bar{T}(\tau,id_{ [n]})^{-1}\beta(\bar{T}i_{C})\delta_{n+1},\] \[=H(\beta)(T)|_{i_{C}}\delta_{n+1},\]
where \(\delta_{n+1}=\Delta^{d_{n+1}}:\Delta^{n}\to\Delta^{n+1}\). This proves that \(H(\beta)(T)\) is well-defined.
Finally, the desired homotopy \(H:[0,1]\times\operatorname{holim}\mathcal{L}\to\operatorname{holim}\mathcal{L}\) is given by \(H(t,\beta)(T)(x)=H(\beta)(T)((1-t)x,t)\).
### The case \((m,n)=(1,3)\)
**Theorem 5.20**.: _We do have the fibration sequence 11 for \((m,n)=(1,3)\) with the extra condition that \(X\) is such that \(v^{*}X\) is equipped with retractions._
Proof.: Since \(\pi:\Omega_{p}^{*}\to\Omega_{p}\) is a discrete fibration, the left Kan extension can be computed as a coproduct over fibres. In case of the terminal presheaf, we get the coproduct of \(1\) over the fibres of \(\pi\), that is the fibres of \(\pi\) themselves. So, \(\pi_{!}(1)=\alpha\) and by adjunction,
\[\operatorname{Map}_{[\Omega_{p},\operatorname{SSet}]}(\alpha,Y)\sim \operatorname{Map}_{[\Omega_{p}^{*},\operatorname{SSet}]}(1,\pi^{*}Y)= \operatorname{holim}_{\Omega_{p}^{*}}\pi^{*}Y,\]
where \(Y:=v^{*}(X)\). We get the desired result by combining Lemma 3.17, Lemma 5.4 and Lemma 5.19.
**Definition 5.21**.: We will call _hyperoperads_ algebras of the polynomial monad \(\operatorname{\mathbf{Id}}^{+3}\). We will write \(\operatorname{HOp}\) for the category of hyperoperads.
**Corollary 5.22**.: _Let \(\mathcal{O}\) be a multiplicative hyperoperad. Assume that for all planar trees \(T\) with zero or one vertices, that is, the free living edge or the corollas, \(\mathcal{O}_{T}\) is contractible. Assume further that \(\mathcal{O}^{\bullet}\) is equipped with retractions. Then we have a weak equivalence_
\[\Omega^{3}\operatorname{Map}_{\operatorname{HOp}}(\zeta,u^{*}(\mathcal{O})) \sim\operatorname{holim}_{\Omega_{p}}\mathcal{O}^{\bullet}.\]
### Example: desymmetrisation of the Kontsevich operad
Recall [6, Section 3.5] that for any polynomial monad \(T\), there is a canonical map of polynomial monads from \(T^{+}\) to the polynomial monad for symmetric operads. In particular, for \(T=\operatorname{NOp}\), this map of polynomial monads induces a _desymmetrisation_ functor \(des:\operatorname{SOp}\to\operatorname{HOp}\), where \(\operatorname{SOp}\) is the category of symmetric operads. Explicitly, it is given, for a planar tree \(T\), by \(des(\mathcal{P})(T)=\mathcal{P}(|T|)\), where \(|T|\) is the set of vertices of \(T\).
Let \(\mathcal{O}\) be a multiplicative hyperoperad. If \(u^{*}(\mathcal{O})\) is the desymmetrisation of a reduced symmetric operad, then \(\mathcal{O}^{\bullet}\) is equipped with retractions. Indeed, let \(\mathcal{P}\) be the symmetric operad such that \(des(\mathcal{P})=u^{*}(\mathcal{O})\). The retraction, for a morphism \(\partial_{e}:T/e\to T\) in \(\Omega_{p}\), is given by the map
\[\mathcal{O}(T)=\mathcal{P}(|T|)\otimes\mathcal{P}(\varnothing)\stackrel{{ \circ_{v}}}{{\longrightarrow}}\mathcal{P}(|T|\setminus\{v\})=\mathcal{O}(T/e),\]
where \(v\in|T|\) is the vertex directly above \(e\), and \(\circ_{v}\) is the multiplication of the symmetric operad given in terms of partial operations [17]. We will now give a non-trivial example of a multiplicative hyperoperad.
For \(m\geq 2\), let \(\tilde{C}_{m}(n)\) be the quotient of the configuration space
\[\operatorname{Conf}_{n}(\mathbb{R}^{m})=\{(x_{1},\dots,x_{n})\in(\mathbb{R}^{ m})^{n},\ x_{i}\neq x_{j}\text{ if }i\neq j\}\]
with respect to the action of the group \(G_{m}=\{x\mapsto\lambda x+v|\lambda>0,v\in\mathbb{R}^{m}\}\).
**Definition 5.23**.: [16, Definition 12] The _Kontsevich operad_ \(\mathcal{K}_{m}(n)\) is the closure of the image of \(\tilde{C}_{m}(n)\) in \((S^{m-1})^{\binom{n}{2}}\) under the map
\[G_{m}\cdot(x_{1},\dots,x_{n})\mapsto\left(\frac{x_{j}-x_{i}}{|x_{j}-x_{i}|} \right)_{1\leq i<j\leq n}\]
Set-theoretically, the operad \(\mathcal{K}_{m}\) is the same as the free operad generated by the symmetric collection of sets \((\tilde{C}_{m}(n))_{n\geq 0}\).
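The map in Definition 5.23 is concrete enough to evaluate numerically. The following short sketch (an illustrative Python/NumPy check added here for concreteness; it is not part of the original argument) computes the image of a random configuration and verifies that it is unchanged under the action of \(G_{m}\) by positive rescalings and translations.

```python
import numpy as np

def kontsevich_image(points):
    """Send a configuration (x_1, ..., x_n) in R^m to the tuple of unit
    vectors (x_j - x_i)/|x_j - x_i| indexed by the pairs i < j."""
    n = len(points)
    return np.array([(points[j] - points[i]) / np.linalg.norm(points[j] - points[i])
                     for i in range(n) for j in range(i + 1, n)])

rng = np.random.default_rng(0)
m, n = 3, 4
x = rng.normal(size=(n, m))                  # a point of Conf_n(R^m)

lam, v = 2.7, rng.normal(size=m)             # an element of G_m: x -> lam*x + v
assert np.allclose(kontsevich_image(x), kontsevich_image(lam * x + v))
print(kontsevich_image(x).shape)             # (n choose 2, m) = (6, 3) unit vectors
```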
**Lemma 5.24**.: _The hyperoperad obtained by desymmetrisation of the Kontsevich operad \(\mathcal{K}_{m}\) has a multiplicative structure._
Proof.: We write \(x=(x_{ij})_{i\neq j\in|T|}\) for an element of \(\mathcal{K}_{m}(T)\). Let \(e_{1},e_{2}\in S^{m-1}\) be given by the standard inclusion of \(\mathbb{R}^{2}\) into \(\mathbb{R}^{m}\). Let \(x(T)\in\mathcal{K}_{m}(T)\) be given by
\[x(T)_{ij}=\begin{cases}e_{1}&\text{if $i$ is below $j$},\\ e_{2}&\text{if $i$ is to the left of $j$}.\end{cases}\]
It is easy to check that this does give a multiplicative structure.
In particular we can apply Corollary 5.22 to the desymmetrisation of the Kontsevich operad.
|
2309.05246 | Deep photonic reservoir computing recurrent network | Deep neural networks usually process information through multiple hidden
layers. However, most hardware reservoir computing recurrent networks only have
one hidden reservoir layer, which significantly limits the capability of
solving real-world complex tasks. Here we show a deep photonic reservoir
computing (PRC) architecture, which is constructed by cascading
injection-locked semiconductor lasers. In particular, the connection between
successive hidden layers is all optical, without any optical-electrical
conversion or analog-digital conversion. The proof of concept is demonstrated
on a PRC consisting of 4 hidden layers and 320 interconnected neurons. In
addition, we apply the deep PRC in the real-world signal equalization of an
optical fiber communication system. It is found that the deep PRC owns strong
ability to compensate the nonlinearity of fibers. | Cheng Wang | 2023-09-11T05:40:44Z | http://arxiv.org/abs/2309.05246v1 | # Deep photonic reservoir computing recurrent network
###### Abstract
Deep neural networks usually process information through multiple hidden layers. However, most hardware reservoir computing recurrent networks only have one hidden reservoir layer, which significantly limits the capability of solving real-world complex tasks. Here we show a deep photonic reservoir computing (PRC) architecture, which is constructed by cascading injection-locked semiconductor lasers. In particular, the connection between successive hidden layers is all optical, without any optical-electrical conversion or analog-digital conversion. The proof of concept is demonstrated on a PRC consisting of 4 hidden layers and 320 interconnected neurons. In addition, we apply the deep PRC in the real-world signal equalization of an optical fiber communication system. It is found that the deep PRC owns strong ability to compensate the nonlinearity of fibers.
## 1 Introduction
Deep neural networks with multiple hidden layers have been substantially advancing the development of artificial intelligence. In comparison with digital electronic computing based on the von Neumann architecture, optical computing can boost the energy efficiency while reducing the computation latency [1-3]. In recent years, a large variety of optical computing architectures have been proposed, most of which focus on the linear multiply-accumulate operation [4-8]. Together with the nonlinear activation function in the digital domain, optical convolutional neural networks and multilayer perceptrons have been extensively demonstrated. In contrast to the above two feedforward neural networks, recurrent neural networks (RNNs) have an inherent memory effect and are favorable for solving time-dependent tasks such as natural language processing and temporal signal processing [9]. Reservoir computing (RC) is such a kind of RNN, but with fixed weights in the input layer and in the hidden reservoir layers [10,11]. Only the weights in the readout layer require training, which leads to a simple training algorithm and a fast training speed. Optoelectronics-based [12-14] and memristor-based [15-17] RCs have been intensively investigated, while various types of hardware RCs have been discussed as well [18]. However, most hardware RCs only have one hidden reservoir layer, which substantially limits the capability of dealing with real-world problems. A comprehensive theoretical analysis by C. Gallicchio _et al._ has pointed out that the deep hierarchy of RCs owns multiple time scales and frequency components, and thereby boosts the richness of dynamics and the diversity of representations [19,20]. Several paradigms of combining multiple reservoirs have been theoretically compared in the literature, and it
was found that a unidirectional coupling scheme of hidden reservoirs was beneficial to improve the performance of RCs [21, 22]. Indeed, the deep configuration raises both the linear and the nonlinear memory capacities of RCs [23, 24]. Interestingly, Penkovsky _et al._ showed that a deep RC with time-delay loops was equivalent to a deep convolutional neural network [25]. In experiments, Nakajima constructed a deep RC based on a Mach-Zehnder modulator associated with an optoelectronic feedback loop [26]. However, there is only one piece of hardware, which is reused in each hidden layer. The interconnection between successive layers requires optical-electrical conversion (OEC) and analog-digital conversion (ADC), as well as the inverse conversions. The above four conversion processes consume high power and introduce a large amount of latency, which significantly counteracts the merits of optical computing. Lupo _et al._ recently proposed a two-layer RC based on two groups of frequency combs, which were produced by the phase modulation of light [27]. The interconnection between the two layers is implemented in the electrical domain through the OEC. Nevertheless, the scalability of the RC depth is limited by its tradeoff with the width.
This work presents a deep PRC based on cascading injection-locked semiconductor lasers. The hidden-layer interconnections are fully optical, without any OEC or ADC. The deep PRC architecture with 4 hidden layers and 320 neurons is successfully demonstrated in experiment. In particular, the PRC depth is highly scalable without any power or coherence limitation. The deep PRC is applied to the signal equalization of an optical fiber communication system. It is proved that the deep PRC has a strong ability to mitigate the Kerr nonlinearity of optical fibers, and hence to improve the signal quality at the optical receiver.
## 2 Deep PRC architecture and experimental setup
Figure 1(a) illustrates the architecture of the deep PRC. A single-mode master laser uni-directionally injects into the slave laser (Laser 1) in the first hidden layer of the reservoir. The optical injection is operated in the stable regime, which is bounded by the Hopf bifurcation and the saddle-node bifurcation [28, 29]. Partial light of Laser 1 goes to the second layer of the reservoir and locks Laser 2 through optical injection. In the same way, Laser 2 locks Laser 3 in the third layer, and then Laser 3 locks Laser 4 in the fourth layer. As a result, the lasing frequencies of all the four slave lasers are locked to be the same as that of the master laser. Besides, the phases of all the slave lasers are synchronized with the master laser as well. In each hidden layer, the laser is subject to an optical feedback loop, which produces a large number of virtual neurons through nonlinear laser dynamics [14, 30]. The optical feedback is also operated in the stable regime, which is separated from the unstable regime through a critical feedback level [28, 31]. In the input layer, the input signal is multiplied by a random mask, and this pre-processed signal is superimposed onto the carrier wave of the master laser through an optical modulator. The masking process plays a crucial role in the PRC system. On one hand, the fast varying mask sequence maintains the instantaneous state of all the time-delay reservoirs [30, 32]. On the other hand, the mask interval defines the interval of virtual neurons. The neuron number in each hidden layer is determined by the clock cycle divided by the neuron interval.
Figure 1: (a) Schematic architecture of the deep PRC. (b) Experimental setup of the deep PRC. AWG: arbitrary waveform generator; OSC: oscilloscope; PD: photodiode; EDFA: erbium-doped fiber amplifier; Ch: channel. The hidden layers are interconnected by the optical injection. The optical feedback loops provide virtual neurons.
In the readout layer, the neuron states in all the four hidden layers are tracked simultaneously. The target value is obtained through the weighted sum of all the neuron states, and the weights are trained through the algorithm of ridge regression [32]. Based on the deep PRC scheme, Fig. 1(b) shows the corresponding experimental setup. A tunable external cavity laser (Santec TSL-710) serves as the master laser, and its output power is amplified by an erbium-doped fiber amplifier (EDFA). The polarization of the light is aligned with a Mach-Zehnder intensity modulator (EOSPACE, 40 GHz bandwidth) through a polarization controller. The input signal is multiplied by a random binary mask consisting of \(\{1,0\}\). This pre-processed signal is generated from an arbitrary waveform generator (AWG, Keysight 8195A, 25 GHz bandwidth), which then drives the modulator. The polarization of the modulated light is re-aligned with the polarization of the slave laser in the first hidden layer. The four slave lasers in the hidden layers are commercial Fabry-Perot lasers with multiple longitudinal modes. In each layer, the optical feedback loop is formed by an optical circulator and two 90:10 couplers. The feedback strength is adjusted by an optical attenuator. At the output of each hidden layer (except the fourth layer), 70% of the light is uni-directionally injected into the subsequent layer to lock the slave laser, and the polarization of the light is re-aligned. Between the second and the third layers, the laser power is amplified by using another EDFA. The neuron states of all the four layers are detected by broadband photodiodes (PD), and then recorded simultaneously on the four channels (Ch) of a high-speed digital oscilloscope (OSC, Keysight DSAZ594A, 59 GHz bandwidth). The optical spectrum is measured by an optical spectrum analyzer with a resolution of 0.02 nm (Yokogawa). In the experiment, the time interval of neurons in each hidden layer is fixed at \(\theta\)=0.05 ns, which is determined by the modulation rate of the optical modulator at 20 Gbps. The number of neurons in each layer is set at N=80, resulting in a total neuron number of 320 in the deep PRC of four hidden layers. Consequently, the clock cycle of the PRC system is \(T_{c}\)=4.0 ns (\(T_{c}\)=\(\theta\times N\)). The sampling rate of the AWG is 60 GSa/s and that of the OSC is 80 GSa/s.
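Since only the readout weights are trained, the training step described at the start of this section reduces to a linear regression. A minimal sketch, assuming the recorded neuron states of the four layers have already been sampled and stacked into a single state matrix (hypothetical variable names and ridge parameter), could read:

```python
import numpy as np

def train_readout(states, target, ridge=1e-4):
    """Ridge regression of the output weights. `states` has shape
    (n_samples, 4*N) after concatenating the neuron states of the four
    hidden layers; `target` holds the desired output samples."""
    X = np.hstack([states, np.ones((states.shape[0], 1))])   # add a bias column
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ target)
    return W

def readout(states, W):
    X = np.hstack([states, np.ones((states.shape[0], 1))])
    return X @ W
```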
## 3 Experimental Results
In the experiment, all the four FP lasers in the hidden layers exhibit an identical lasing threshold of \(I_{th}\)=8.0 mA. The pump currents and the corresponding output powers of all the lasers are listed in Table 1. The delay times of the four optical feedback loops are fixed in the range of 63 to 68.5 ns, without any optimization. It is stressed that the delay times are more than 15 times longer than the clock cycle of the computing system, unlike the common synchronous case. Our recent work has proved that this asynchronous architecture helps to improve the PRC performance [29, 33], owing to the rich neuron interconnections [34]. The feedback ratio is defined as the power ratio of the reflected light to the emitted light, which is set around -30 dB for all the four layers. The critical feedback level of the lasers is about -19.3 dB, and hence the optical feedback is operated in the stable regime. The injection ratio is defined as the ratio of the injected power from the laser in the previous layer to the emission power of the laser in the subsequent layer. As shown in Table 1, the injection ratios of each layer vary from about 2.0 up to 4.0. In addition, the detuning frequency is defined as the lasing frequency difference between the two lasers. All the detuning frequencies in Table 1 are set within the stable locking regime without optimization. Figure 2 shows the optical spectra of the FP lasers of multiple longitudinal modes in all the four reservoir layers. The spectrum peaks of the lasers are around 1550.98, 1542.63, 1548.91, and 1540.86 nm, respectively. Meanwhile, the free spectral ranges are 154.6, 154.8, 172.7, and 171.9 GHz, respectively. When applying optical injection from the master laser at 1546.5 nm, only the one mode of each slave laser closest to the injection wavelength is locked in the stable regime. All side modes are suppressed and the suppression ratio is more than 50 dB. This is because the optical injection reduces the gain of the laser medium [35].
The performance of the deep PRC is tested on the real-world task of nonlinear channel equalization in optical fiber communications. The optical signal in optical fibers is distorted by the linear chromatic dispersion and by the Kerr nonlinearity [36]. The linear distortion is usually mitigated by the feedforward equalizer (FFE) in the digital signal processing (DSP) of the optical receiver [37, 38]. On the other hand, the Kerr nonlinearity can be compensated by solving the nonlinear Schrodinger equation [36]. However, common solving algorithms like the digital back propagation are too complex for the DSP implementation [38, 39]. An alternative solution is deploying neural networks to compensate the fiber nonlinearity with reduced computational complexity [39, 40, 41]. In particular, several works have experimentally demonstrated that shallow PRCs are capable of compensating the linear impairments of optical fibers instead of FFEs [42, 43, 44, 45, 33]. Here we show that the deep PRC has a strong ability to mitigate the nonlinear impairments of optical fibers. The nonlinear Schrodinger equation describing the propagation of light in an optical fiber reads [36]:
\[\frac{\partial E}{\partial z}+\frac{\alpha}{2}\,E+j\,\frac{\beta_{2}}{2}\, \frac{\partial^{2}E}{\partial t^{2}}=j\gamma\left|E\right|^{2}\,E \tag{1}\]
where \(E(z,t)\) is the slowly varying envelope of the electric field, \(z\) is the transmission distance (50 km), \(\alpha\) is the attenuation constant (\(\alpha\)=0.2 dB/km), \(\beta_{2}\) is the fiber dispersion coefficient (-21.4 ps\({}^{2}\)/km), and \(\gamma\) is the fiber nonlinearity coefficient (1.2 /(W\(\cdot\)km)) [46]. The signal under investigation is a non-return-to-zero (NRZ) signal with a modulation rate of 25 Gbps. The training set consists of 35000 random symbols of \(\{0,1\}\), and the testing set consists of 15000 symbols. Each symbol consists of 8 samples, and the tap number of the nonlinear equalizer is set at 21. In the experiment, each measurement is repeated four times, and the mean bit error rate (BER) and the standard deviation are recorded.
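The distorted waveform at the receiver can be generated numerically from Eq. (1). A standard route is a symmetric split-step Fourier integration with the fiber parameters quoted above; the sketch below is an illustrative reconstruction under that assumption, not the authors' code.

```python
import numpy as np

def propagate(E, dt, L=50e3, dz=100.0, alpha_db_km=0.2,
              beta2=-21.4e-27, gamma=1.2e-3):
    """Split-step Fourier integration of
    dE/dz + (alpha/2) E + j (beta2/2) d^2E/dt^2 = j gamma |E|^2 E,
    with E sampled at interval dt over a fiber of length L (SI units)."""
    alpha = alpha_db_km / 4.343 / 1e3                   # dB/km -> 1/m
    w = 2 * np.pi * np.fft.fftfreq(E.size, dt)          # angular frequency grid
    half_lin = np.exp((-alpha / 2 + 1j * beta2 / 2 * w**2) * dz / 2)
    for _ in range(int(L / dz)):
        E = np.fft.ifft(half_lin * np.fft.fft(E))       # half linear step
        E = E * np.exp(1j * gamma * np.abs(E)**2 * dz)  # full nonlinear step
        E = np.fft.ifft(half_lin * np.fft.fft(E))       # half linear step
    return E
```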
Figure 3(a) shows an example of the random NRZ signal sequence sent at the transmitter, with a launch power of 4.0 mW. After a transmission distance of 50 km, nevertheless, the signal received at the receiver in Fig. 3(b) is substantially distorted. Generally, increasing the launch power raises the nonlinear effect, and the signal distortion becomes stronger [36]. The task aims to reproduce the original signal in Fig. 3(a) based on the degraded one in Fig. 3(b). When applying the shallow 1-layer PRC to equalize the received
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Parameters & Layer 1 & Layer 2 & Layer 3 & Layer 4 \\ \hline Laser current & 6.3\(\times I_{th}\) & 2.8\(\times I_{th}\) & 6.5\(\times I_{th}\) & 2.5\(\times I_{th}\) \\ Laser power & 9.0 mW & 2.1 mW & 11.2 mW & 2.4 mW \\ Feedback delay & 63.0 ns & 68.4 ns & 63.7 ns & 68.0 ns \\ Feedback ratio & -30.2 dB & -30.2 dB & -29.7 dB & -30.8 dB \\ Injection ratio & 4.0 & 2.2 & 3.8 & 1.9 \\ Detuning freq. & -55.0 GHz & -15.0 GHz & -33.7 GHz & -13.7 GHz \\ \hline \hline \end{tabular}
\end{table}
Table 1: Operation conditions of the deep PRC
Figure 2: Optical spectra of the four FP lasers with and without optical injection.
signal in Fig. 3(c), the BER first decreases from 5.0\(\times\)10\({}^{-3}\) at 1.0 mW down to the minimum value of 3.4\(\times\)10\({}^{-3}\) at 100 mW. Above 100 mW, the BER increases with the launch power nonlinearly. Meanwhile, the BERs for launch powers ranging from 80 to 120 mW are below the hard-decision forward error correction (FEC) threshold (3.8\(\times\)10\({}^{-3}\), dashed line) [47]. This is because the PRC inherently owns both a linear memory effect and a nonlinear memory effect, which are commonly quantified by the linear memory capacity (MC) and the nonlinear MC, respectively [32, 24]. For low launch powers (see 1.0 mW), the Kerr nonlinearity of the optical fiber is negligible and the signal distortion is mainly induced by the linear chromatic dispersion. Therefore, the impairment compensation only requires the linear memory effect of the PRC, while the nonlinear memory effect plays a negative role. When increasing the launch power (see 20-100 mW), the Kerr nonlinearity appears and hence the nonlinear memory effect of the PRC becomes beneficial to mitigate this nonlinear distortion. The BER reaches the minimum value when the inherent nonlinear memory effect of the PRC matches with the strength of the Kerr nonlinearity of the fiber (see 100 mW). On the other hand, the BER increases when the nonlinear memory capacity is not high enough to compensate the strong fiber nonlinearity (see 120-200 mW). Therefore, the nonlinear equalization ability of the PRC is limited by its maximum nonlinear memory effect. For the deep PRC with two reservoir layers, the BER reduces from 4.4\(\times\)10\({}^{-3}\) at 1.0 mW down to the minimum of 1.5\(\times\)10\({}^{-3}\) at 120 mW. The BERs for launch powers ranging from 20 to 160 mW are below the FEC threshold. The PRC performance further improves when we increase the PRC depth to three. It is shown that the corresponding BER declines from 4.2\(\times\)10\({}^{-3}\) at 1.0 mW down to the minimum of 1.0\(\times\)10\({}^{-3}\) at 120 mW. The BERs of the 3-layer PRC are better than those of the 2-layer PRC, for all the studied launch powers ranging from 1.0 mW up to 200 mW. However, the performance of the 4-layer PRC is similar to or slightly worse than that of the 3-layer PRC. This suggests that the PRC performance saturates at a depth of three for this nonlinear signal equalization task. In comparison with the shallow PRC, all three deep PRCs exhibit better performance over the whole launch power range. In particular, the BERs are significantly reduced in the power range of 80 to 160 mW. Therefore, unlike the shallow PRC, the deep PRCs have a very strong ability to mitigate the nonlinearity of optical fibers and hence to improve the transmission signal quality. This compensation ability can be attributed to the strengthened nonlinear memory effect of the deep PRCs, which is discussed in the next section.
Figure 4 explores the contribution of each reservoir layer in the 3-layer PRC. For each evaluation, only one of the three reservoirs is used for the signal equalization. Therefore, the virtual neuron number becomes 80 instead of 240, both for the training and for the test. It is found that the performance of the second-layer reservoir is generally better than the first-layer one. In particular, the
Figure 3: Time sequences of the signal at (a) the transmitter and (b) the receiver. The launch power is 4.0 mW. (c) Performance of the PRCs with different depths. The error bars stand for the standard deviation of the measurement. The dashed line indicates the FEC threshold.
minimum BER of the second-layer reservoir achieved at 120 mW is 2.0\(\times\)10\({}^{-3}\). This value further goes down to 1.4\(\times\)10\({}^{-3}\) for the third-layer reservoir, which is 2.6 times smaller than the first-layer case (3.7\(\times\)10\({}^{-3}\)). The different performance of the three hidden layers suggests that the neuron dynamics are different from one layer to another. Generally, the neuron states at the deeper layer are richer than those at the shallower one, which results in the better performance in the former case. This behavior is different from the parallel PRC, where several reservoirs are connected in parallel instead of in series. Our recent experimental work demonstrated that the neuron states in every parallel reservoir were similar to each other [33]. Owing to the rich neuron dynamics in each layer, the nonlinear memory effect in the deep PRC is improved, and thereby the performance of the 3-layer PRC is boosted. In comparison, the FFE commonly used in the DSP of optical receivers only compensates the linear chromatic dispersion of optical fibers [37]. Figure 4 shows that the BER of the FFE increases nonlinearly from 5.2\(\times\)10\({}^{-3}\) at 1.0 mW up to 3.8\(\times\)10\({}^{-2}\) at 200 mW. For the launch power of 120 mW, the BER of the FFE (7.7\(\times\)10\({}^{-3}\)) is 7.7 times larger than that of the 3-layer PRC (1.0\(\times\)10\({}^{-3}\)). This comparison proves that the deep PRC can indeed compensate strong nonlinearity of optical fibers. For low launch powers (1 mW), nevertheless, the BER of the 3-layer PRC is only slightly better than that of the FFE. This suggests that the deep PRC has a similar ability to compensate chromatic dispersion as the FFE.
## 4 Discussion
The experimental results in Fig. 3 and in Fig. 4 have shown that raising the depth of the PRC substantially improves the performance of nonlinearity compensation at high launch powers. However, the deep PRC shows similar performance of linearity compensation as the shallow PRC at low launch powers. In order to understand this behavior, we numerically analyze both the linear MC and the nonlinear MC of the PRC. The deep PRC model includes four hidden reservoir layers as in the experiment. We assume that the slave lasers in the four layers are all identical to simplify the simulation. The carrier dynamics, the photon dynamics, and the phase of the electric field are taken into account through the framework of rate equations. Both the optical feedback effect and the optical injection are characterized through the classical Lang-Kobayashi model [48, 49]. The main simulation parameters are listed in Table 2. The detailed deep PRC model and other simulation parameters can be found in [24]. The linear MC (LMC) measures the ability of the PRC to reproduce the past input signal, which is quantified by [50, 51]:
\[MC_{L}=\sum_{i=1}^{\infty}\frac{\left\langle u(k-i)y(k)\right\rangle^{2}}{ \sigma^{2}\left[u(k)\right]\sigma^{2}\left[y(k)\right]} \tag{2}\]
where the input signal \(u(k)\) is a random sequence uniformly distributed in the range [-1, 1]. \(y(k)\) is the corresponding output of the PRC at the step \(k\). The aim of the evaluation is to reproduce the input signal \(u(k\)-\(i)\) shifted \(i\) steps backward using \(y(k)\). \(\sigma^{2}\) represents
Figure 4: Performance comparison between the 3-layer PRC (dots) and the FFE (squares). The open symbols represent the BERs of each reservoir layer, respectively. The dashed line indicates the FEC threshold.
the variance operation and \(\langle\cdot\rangle\) stands for the average operation. On the other hand, the nonlinear MC characterizes the ability of reproducing high-order Legendre polynomials of the input signal, which is defined as:
\[MC_{{}_{NL}}=\sum_{i=1}^{\infty}\frac{\left\langle p(k-i)y(k)\right\rangle^{2}} {\sigma^{2}\left[p(k)\right]\sigma^{2}\left[y(k)\right]} \tag{3}\]
where the polynomial is \(p(k)\)=[3\(u^{2}\)(\(k\))-1]/2 for the quadratic MC (QMC), and \(p(k)\)=[5\(u^{3}\)(\(k\))-3\(u\)(\(k\))]/2 for the cubic MC (CMC), respectively. The aim of the evaluation is to reproduce the polynomial \(p(k\)-\(i\)) using the PRC output \(y(k)\). In addition to the LMC, QMC, and CMC, the PRC also has higher-order memory effects and cross memory effects, which are not considered in this work. Figure 5 shows that both the linear MC and the nonlinear MCs rise with the increasing depth of the PRC. The LMC increases from 8.47 for the 1-layer PRC up to 16.87 for the 4-layer PRC. However, the deep PRC in Fig. 3 only slightly reduces the BER at low launch powers. This suggests that the LMC of the shallow PRC is already high enough for compensating the chromatic dispersion. On the other hand, the nonlinear QMC increases from 5.25 to 11.27, while the CMC increases from 3.33 to 6.57. The enhanced nonlinear MC can be attributed to the rich neuron states of the deep reservoir layers, as proved in Fig. 4. As a result, the deep PRC exhibits a strong ability to mitigate the nonlinearity of optical fibers. On the other hand, all three MCs almost saturate at a depth of three, resulting in the performance saturation of the nonlinear signal equalization in Fig. 3.
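For concreteness, the capacities in Eqs. (2) and (3) can be estimated from simulated input and reservoir-state sequences by training one readout per delay and summing the squared correlations. The sketch below is schematic (the truncation `i_max` and the ridge parameter are our choices, not values from the paper).

```python
import numpy as np

def capacity(states, u, p=lambda s: s, i_max=30, ridge=1e-6):
    """Estimate sum_i corr^2( p(u(k-i)), y_i(k) ), where y_i is a ridge
    readout trained to reproduce the delayed (polynomial of the) input."""
    X = np.hstack([states, np.ones((states.shape[0], 1))])
    mc = 0.0
    for i in range(1, i_max + 1):
        target = p(np.roll(u, i))[i_max:]       # p(u(k-i)), wrap-around region discarded
        Xi = X[i_max:]
        W = np.linalg.solve(Xi.T @ Xi + ridge * np.eye(Xi.shape[1]), Xi.T @ target)
        mc += np.corrcoef(Xi @ W, target)[0, 1] ** 2
    return mc

LMC = lambda S, u: capacity(S, u)                                      # linear MC
QMC = lambda S, u: capacity(S, u, p=lambda s: (3 * s**2 - 1) / 2)      # quadratic MC
CMC = lambda S, u: capacity(S, u, p=lambda s: (5 * s**3 - 3 * s) / 2)  # cubic MC
```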
## 5 Conclusion
In summary, we have experimentally demonstrated a deep PRC architecture based on cascading injection-locked lasers. The connection between successive reservoir layers is all optical, without any OEC or ADC. In addition, this scheme is highly scalable because the laser in each layer provides optical power. The deep PRC with a depth of four is used to solve the real-world problem of nonlinear signal equalization of optical fibers. It is proved that the deep PRC exhibits a strong ability to compensate the Kerr nonlinearity of optical fibers and hence to improve the quality of the received signal. In comparison with the linear FFE, the deep PRC reduces the
\begin{table}
\begin{tabular}{c c} \hline \hline Parameters & Values \\ \hline Laser current & 1.6\(\times I_{\mathrm{th}}\) \\ Feedback delay & 3.6/28/2.0/12 ns \\ Feedback ratio & -30 dB \\ Injection ratio & -5 dB \\ Detuning freq. & 0 GHz \\ Neurons per layer & 80 \\ Neuron interval & 10 ps \\ \hline \hline \end{tabular}
\end{table}
Table 2: Main parameters of the deep PRC in the simulation
Figure 5: Memory capacity of the PRCs with different depth.
BER of the transmission link by as much as 7.7 times. In comparison with the shallow PRC, the improved performance of the deep PRC is due to the rich neuron dynamics of the deep reservoir layers, which in turn boost the nonlinear memory effect. Future work will optimize the operation parameters of the deep PRC, including the injection ratio, the detuning frequency, the feedback ratio, and the feedback delay time.
**Funding** National Science Foundation of China (NSFC) (61804095). ShanghaiTech University (2022X0203-902-01).
|
2308.00017 | Jet Bundle Geometry of Scalar Field Theories | For scalar field theories, such as those EFTs describing the Higgs, it is
well-known that the 2-derivative Lagrangian is captured by geometry. That is,
the set of operators with exactly 2 derivatives can be obtained by pulling back
a metric from a field space manifold $M$ to spacetime $\Sigma$. We here
generalise this geometric understanding of scalar field theories to higher-
(and lower-) derivative Lagrangians. We show how the entire EFT Lagrangian with
up to 4-derivatives can be obtained from geometry by pulling back a metric to
$\Sigma$ from the 1-jet bundle that is (roughly) associated with maps from
$\Sigma$ to $M$. More precisely, our starting point is to trade the field space
$M$ for a fibre bundle $\pi:E \to \Sigma$, with fibre $M$, of which the scalar
field $\phi$ is a local section. We discuss symmetries and field redefinitions
in this bundle formalism, before showing how everything can be `prolongated' to
the 1-jet bundle $J^1 E$ which, as a manifold, is the space of sections $\phi$
that agree in their zeroth and first derivatives above each spacetime point.
Equipped with a notion of (spacetime and internal) symmetry on $J^1 E$, the
idea is that one can write down the most general metric on $J^1 E$ consistent
with symmetries, in the spirit of the effective field theorist, and pull it
back to spacetime to build an invariant Lagrangian; because $J^1 E$ has
`derivative coordinates', one naturally obtains operators with more than
2-derivatives from this geometry. We apply this formalism to various examples,
including a single real scalar in 4d and a quartet of real scalars with $O(4)$
symmetry that describes the Higgs EFTs. We show how an entire non-redundant
basis of 0-, 2-, and 4-derivative operators is obtained from jet bundle
geometry in this way. Finally, we study the connection to amplitudes and the
role of geometric invariants. | Mohammad Alminawi, Ilaria Brivio, Joe Davighi | 2023-07-31T17:54:59Z | http://arxiv.org/abs/2308.00017v2 | # Jet Bundle Geometry of Scalar Field Theories
###### Abstract
For scalar field theories, such as those EFTs describing the Higgs, it is well-known that the 2-derivative Lagrangian is captured by geometry. That is, the set of operators with exactly 2 derivatives can be obtained by pulling back a metric from a field space manifold \(M\) to spacetime \(\Sigma\). We here generalise this geometric understanding of scalar field theories to higher- (and lower-) derivative Lagrangians. We show how the entire EFT Lagrangian with up to 4-derivatives can be obtained from geometry by pulling back a metric to \(\Sigma\) from the _1-jet bundle_ that is (roughly) associated with maps from \(\Sigma\) to \(M\). More precisely, our starting point is to trade the field space \(M\) for a fibre bundle \(\pi:E\to\Sigma\), with fibre \(M\), of which the scalar field \(\phi\) is a local section. We discuss symmetries and field redefinitions in this bundle formalism, before showing how everything can be 'prolongated' to the 1-jet bundle \(J^{1}E\) which, as a manifold, is the space of sections \(\phi\) that agree in their zeroth and first derivatives above each spacetime point. Equipped with a notion of (spacetime and internal) symmetry on \(J^{1}E\), the idea is that one can write down the most general metric on \(J^{1}E\) consistent with symmetries, in the spirit of the effective field theorist, and pull it back to spacetime to build an invariant Lagrangian; because \(J^{1}E\) has 'derivative coordinates', one naturally obtains operators with more than 2-derivatives from this geometry. We apply this formalism to various examples, including a single real scalar in 4d and a quartet of real scalars with \(O(4)\) symmetry that describes the Higgs EFTs. We show how an entire non-redundant basis of 0-, 2-, and 4-derivative operators is obtained from jet bundle geometry in this way. Finally, we study the connection to amplitudes and the role of geometric invariants.
* 1 Introduction
* 2 Preliminaries
* 2.1 Two-derivative Lagrangians from field space geometry
* 2.2 Other terms in the scalar EFT
* 2.3 What is a (non-derivative) field redefinition?
* 2.3.1 Change of coordinates
* 2.3.2 Change of maps
* 2.3.3 Field space diffeomorphisms
* 2.3.4 Local field space diffeomorphisms, and HEFT _vs._ SMEFT
* 2.4 Geometrizing amplitudes
* 3 From Field Space to Bundles
* 3.1 Fields as sections of fibre bundles
* 3.2 Field redefinitions are changes of section
* 3.3 Morphisms on the field space bundle
* 3.3.1 Internal _vs_ spacetime symmetries
* 3.3.2 Non-derivative field redefinitions (again)
* 3.3.3 More general field redefinitions?
* 3.3.4 Example: \(\phi^{4}\) theory
* 3.3.5 A limiting lemma
* 4 Jet Bundles
* 5 Higher-derivative EFTs from Jet Bundle Geometry
* 5.1 Defining geometry on the 1-jet bundle
* 5.2 Our first four-derivative Lagrangian
* 5.3 General geometries for general Lagrangians
* 5.4 Field redefinitions are (prolonged) changes of section
* 5.5 Morphisms on the jet bundle
* 5.5.1 Prolongation of bundle morphisms
* 5.5.2 Prolonging internal and spacetime symmetries to the 1-jet bundle
* 5.5.3 Non-derivative field redefinitions (last time!)
* 5.5.4 Example: free particle quantum mechanics
* 6 Examples
* 6.1 Quantum mechanics on the line
* 6.2 Real scalar in 4d
* 6.3 Two real scalars in 4d, without internal symmetries
* 7
Four scalars in 4d, with \(O(4)\) internal symmetry * 7.1 The EFT Lagrangian up to 4 derivatives * 7.2 The EFT Lagrangian from 1-jet bundle geometry * 7.3 Beyond the custodial limit * 7.4 Topological terms
* 8 From Jets to Amplitudes
* 8.1 Normal coordinates on the jet bundle (and why they fail)
* 8.2 Expansion of the jet bundle metric
* 8.3 Geometrizing amplitudes on the jet bundle
* 8.4 Comment on the role of geometric invariants
* 9 Conclusion
* A Diagnosing redundancies in the metric using invariants
* B Higher jet bundles for higher derivatives: a completeness proof
* B.1 Proof of point 1.
* B.2 Proof of point 2.
## 1 Introduction
Effective Field Theories (EFTs) have long played a crucial role in guiding our understanding of fundamental physics to higher and higher energies. In the past decade, EFTs have been increasingly regarded as essential tools to explore physics beyond the Standard Model (SM), becoming prominent in both the theoretical and experimental programs of new physics searches [1; 2]. Two EFTs are particularly pertinent in this context: the so-called Standard Model EFT (SMEFT) [3; 4] and the Higgs EFT (HEFT) [5; 6; 7; 8; 9; 10]. Both theories extend the SM Lagrangian with higher-dimensional interactions, but they differ in the choice of scalar fields and in the power-counting adopted to organize the EFT series. While the SMEFT is formulated in terms of the \(SU(2)\) Higgs doublet, that transforms linearly under the (gauge) symmetries, the HEFT builds upon the formalism of the electroweak chiral Lagrangian, whereby the three Goldstone bosons are embedded in a chiral field that transforms non-linearly [11; 12; 13; 14], and the physical Higgs particle is treated as a singlet excitation [15]. This difference is expected to capture different dynamics underlying the electroweak symmetry breaking (EWSB) mechanism, which gives great significance to the question of which EFT should be used to go beyond the SM [16].
The phenomenological characterization of SMEFT and HEFT is a long-standing open challenge (see _e.g._[17; 18; 19; 20; 21; 22; 23; 24]). The main obstacle to overcome in realising this program stems from ambiguities in the formulation of these EFTs; namely, the fact that the two can be partially mapped onto each other via field redefinitions, indicating that at least a fraction of the phenomenological differences emerging in an order-by-order comparison might be
unphysical. In particular, some of these differences might be washed out upon (partially) resumming the infinite tower of EFT operators.
Geometrical formulations of these EFTs [25; 26; 27; 28; 29; 30] were introduced in recent years partly to address this issue, and have already proven very powerful in characterizing HEFT and SMEFT at a more fundamental level [31; 32; 33; 34; 35; 36; 37; 38; 39]; see [40] for a pedagogical overview. The core idea is that the (generalised) kinetic term of the scalar sector, by which we mean the set of all 2-derivative operators, could be seen as defining a metric on a real 4-manifold \(M\) spanned (locally) by the four scalar fields. In this language, the invariance of physical scattering amplitudes under field redefinitions has been interpreted in terms of the invariance of geometric quantities (specifically, curvature invariants) of the field space under coordinate transformations. This approach allowed a more rigorous (and representation-independent) formulation of the differences between HEFT and SMEFT, and of the conditions under which each EFT is appropriate. Because the scalar manifold geometry emerges as a remnant of the decoupling of heavy BSM particles, these conditions can also be related to specific properties of the UV sector [27; 33; 41]. For instance, a SMEFT expansion cannot be constructed if the scalar geometry is singular at the point where the EW symmetry is restored. This can be associated to the presence of so-called 'loryons', heavy particles that have been integrated out in the EFT but which take at least 'half' of their mass from the Higgs mechanism [42].
Arguably, this geometric approach has two important limitations when applied to EFTs: (i) the scalar manifold geometry can only capture interaction terms with exactly 2 derivatives, and (ii) there is no geometric understanding of invariance under field redefinitions with derivatives, which, from the field theory perspective, are entirely reasonable'symmetries' of the path integral. Point (i) is problematic because higher-derivative operators obviously play an important role in EFTs, and carry information that is in principle independent of that encoded in the 2-derivative terms. Point (ii) implies that geometric quantities are in fact basis-dependent. For instance, upon applying derivative field redefinitions, the curvature on \(M\) changes, in general. First steps towards obviating these issues were taken recently in [43; 44].
Of particular relevance to limitation (i), and to the present paper, Ref. [44] recently introduced the idea of 'Lagrange spaces' as a generalisation of the field space geometry, wherein directional derivatives of the field space coordinates are treated as independent coordinates. Lagrangian functions can be built with these coordinates, which encode a subset of operators with more than 2 derivatives in a covariant formalism - in particular, because different spacetime derivatives \(\partial_{\mu}\) cannot be distinguished, only operators with specific 'flavour' symmetries can be realised using these derivative coordinates. The present paper is in a similar spirit, in that we also seek a geometric formulation of scalar field theories that extends to higher-derivatives. We approach the problem in a largely orthogonal direction, using, instead, pure geometry on 'jet bundles' to formulate scalar EFT Lagrangians.
In differential geometry, _jet bundles_ provide a coordinate-free way to describe higher-derivatives of maps between manifolds; in the context of our EFTs, jet bundles are spaces on which derivatives of the scalar field are treated as extra coordinates in a consistent
fashion. Roughly speaking, given a spacetime \(\Sigma\), a field space \(M\), and an integer \(r\), one can construct an '\(r\)-jet bundle' which is itself a manifold, with local coordinates corresponding to spacetime coordinates \(x^{\mu}\), field space coordinates \(u^{i}\), and '\(r\)-derivative coordinates' \(u^{i}_{\mu}\), \(u^{i}_{\mu_{1}\mu_{2}}\),..., \(u^{i}_{\mu_{1}\ldots\mu_{r}}\). This set of manifolds offers a promising arena for extending the geometric approach to EFTs to Lagrangians with more than 2 derivatives - suggesting a route for overcoming limitation (i) described above, which we aim to explore in this paper.
To that end, we formulate scalar field theory Lagrangians using geometry on the 1-jet bundle. The geometries of the various manifolds involved in our discussion, and maps between them, are illustrated in Figure 1. Our strategy is to write down the most general metric on the 1-jet bundle consistent with the symmetries of the EFT, which are themselves naturally extended to (or, more precisely, 'prolongated' to) the jet bundle. An invariant EFT Lagrangian function is then formed by pulling back this metric all the way to spacetime \(\Sigma\) along a section of the jet bundle (which is itself obtained by prolonging a scalar field configuration \(\phi\)), and contracting with the inverse spacetime metric. One immediately obtains terms with more than 2 derivatives from this recipe; for example, if we pull back a metric component \(\delta_{ij}\eta^{\mu\nu}du^{i}_{\mu}\,du^{j}_{\nu}\) along a prolongation of \(\phi\), and then contract with \(\eta^{\mu\nu}\), we obtain the 4-derivative term \((\partial_{\mu}\partial_{\nu}\phi)^{2}\). We show that, in the case of the SMEFT and HEFT, one obtains a _complete basis_ of non-redundant EFT operators with 0-, 2-, and 4-derivatives in this way, starting from a 1-jet bundle geometry that is invariant under Poincare and \(O(4)\) custodial symmetry.
The conclusion, that one can construct a complete EFT basis with up to 4-derivatives
Figure 1: A schematic illustration of three key players in this paper: a spacetime manifold \((\Sigma,\eta)\) with local coordinates \(x^{\mu}\) on a patch thereof; a ‘field space bundle’ \(E\) whose base space is \(\Sigma\) and fibre is \(M\), which has fibre coordinates \(u^{i}\); and lastly the 1-jet manifold \(J^{1}E\) of the bundle \(E\), which is itself a fibre bundle over both \(E\) and over \(\Sigma\). A ‘scalar field’ configuration is a section \(\phi\) of the bundle \(E\), and the composition \(u^{i}\circ\phi=\phi^{i}(x)\) returns its ‘field value’, given the choice of local fibre coordinates \(u^{i}\), at \(x\). From \(\phi\) one can construct a section \(j^{1}\phi\) of the bundle \(j^{1}E\to\Sigma\) by ‘prolongation’. The 1-jet bundle also admits a local fibred coordinate system \((x^{\mu},u^{i},u^{i}_{\mu})\), where the extra \(\{u^{i}_{\mu}\}\) are ‘derivative coordinates that, when evaluated on a section \(j^{1}\phi\), agree with the first derivatives \(\partial_{\mu}\phi(x)\). Finally, metrics \(g^{(0)}\) and \(g^{(1)}\) can be defined on \(E\) and \(J^{1}E\) such that, upon pulling back to \(\Sigma\) along \(\phi\) or \(j^{1}\phi\) respectively, one can obtain the most general EFT Lagrangian for that scalar field theory with up to 2- or 4-derivative operators (and any number of fields).
from a jet bundle metric, remains unchanged upon allowing a breaking of custodial symmetry (§7.3), as occurs in the real-world electroweak theory. In fact, it generalises to scalar Lagrangians with arbitrary 'flavour' structure, relying in no way on the assumption of an internal symmetry. Nor is this an accident that occurs only at the 1-jet bundle order. We prove in Appendix B that, following a similar recipe, the most general, Poincare invariant \(r\)-jet bundle metric can be used to construct an EFT Lagrangian that contains a complete basis of operators with up to \(2(r+1)\) derivatives and arbitrary number of field insertions. In this sense, we suggest that jet bundles offer a natural way of constructing generic scalar EFTs (barring only topological terms) using only geometry - albeit on an increasingly high-dimensional space if we want to cover an increasingly large number of derivatives.
As well as extending the geometric formulation of scalar field theories to higher-derivatives, we try to develop an understanding of field redefinitions in this picture that includes derivative field redefinitions. An important realisation is that the scalar field \(\phi(x)\) is not really identified with coordinates on the field space; rather, as the notation suggests, \(\phi(x)\) is the _map_ from spacetime \(\Sigma\) to field space \(M\) (or, better still, \(\phi\) is a local section of a bundle \(\pi:E\to\Sigma\) with fibre \(M\), which we take as our starting point in this work). The image of this map (or section) is of course evaluated in the coordinate chart, for each \(x\), to return the 'field value' at \(x\). A field redefinition is then understood to be a change of map, or better a change of section, rather than just a change of coordinates (the latter leaves any metric \(g\) trivially invariant, simply by virtue of \(g\) being a tensor). Given that the maps (sections) 'know about' the spacetime source manifold \(\Sigma\), it is not surprising that they can be differentiated with respect to \(x\), whereas a coordinate on the target space cannot. Therefore a change of section can depend on derivatives of the original section, offering a more general understanding of field redefinitions - and one that is, in essence, geometric.
In certain cases, in particular those a physicist would call 'non-derivative field redefinitions', the change of section can be equivalently described by doing a morphism on the bundle (or field space). All these notions lift to the 1-jet bundle formalism in a natural and consistent way. Lastly, derivative field redefinitions can, in a very limited range of examples, be replicated by (jet) bundle morphisms, but these are essentially accidents; indeed, there are mathematical arguments (see §3.3.5) that suggest 'derivative changes of section' cannot be induced by a (jet) bundle morphism. We try to elucidate these various points throughout this paper.
We remark that jet bundles have been recently used in the study of scalar field theories in Ref. [45] by Gripaios and Tooby-Smith, but in an altogether different context. There, the authors use jet manifolds to rigorously formulate so-called 'inverse Higgs constraints'. The use of jet bundles is natural (and fruitful) in that context, because the inverse Higgs constraint equations relate fields to their derivatives, and so can be embedded as submanifolds of the 1-jet bundle that are invariant under the action of the symmetry. While we were completing this paper, we became aware of the forthcoming work [46] by Craig and Lee, that builds on [44] and also utilizes the jet bundle formalism in the context of the Higgs EFTs.
#### Summary of results
In this work we elucidate a geometric description of EFTs with up to four derivatives, in terms of geometry on the 1-jet bundle associated to our original field space maps. In doing so, here are some things we achieve (the various concepts and terminology we use will be duly explained in the main text):
1. We construct a map from the space of 1-jet bundle metrics \(g\) to 4-derivative Lagrangians \(\mathcal{L}\), in a natural way. Namely, given a section \(\phi\) of the field space bundle, _a.k.a._ a scalar field configuration, one pulls back the jet bundle metric \(g\) to spacetime along the associated section \(j^{1}\phi\) of the jet bundle, and then contracts with the inverse metric \(\eta^{-1}\) on spacetime to get an ordinary function or 'Lagrangian': \[\mathcal{L}[g]=\frac{1}{2}\langle\eta^{-1},\,(j^{1}\phi)^{*}g\rangle\] (1) This map does not inject to the space of physically inequivalent Lagrangians, as one might expect given the target space does not know about _e.g._ integration by parts. So, lastly (and somewhat naively in our formalism), one can implement integration by parts (IBP) identities to reduce the Lagrangian to a particular basis.
2. We prove that this map surjects, in the sense that every Lagrangian with up to 4-derivatives can be obtained by pulling back a metric on the 1-jet bundle. We emphasize that all terms with 0-, 2-, or 4-derivatives are obtained in this way, so that even the potential is captured 'geometrically'. We show this result explicitly in a number of toy examples, and we also prove that it holds at any order, for a map from the space of \(r\)-jet bundle metrics into the space of scalar Lagrangians with up to \(2(r+1)\) derivatives.
3. Along the way, we discuss how the symmetries of, say, the Higgs EFTs, can be implemented in this (jet) bundle formalism. The internal \(O(4)\) symmetry and the spacetime \(SO(1,3)\) Poincare symmetry are described by different kinds of bundle morphisms in this picture (depending on whether they move points in the base space or not). Both kinds of symmetry naturally carry over to the 1-jet bundle, and we enforce these symmetries on our metrics (hence Lagrangians).
4. We give a geometric meaning to the notion of general (including derivative) field redefinitions in terms of a (suitably smooth) change of section, which induces a change of section of the 1-jet bundle after prolongation.
5. In doing so, we also hope to clarify the geometric meaning of _non_-derivative field redefinitions. These are a special class of change of section that can be equivalently described by doing a bundle morphism (under which the metric changes), before pulling back to spacetime along the _same_ section. Such an equivalent description is not available for changes of section that involve derivative maps.
6. Thanks to the components of the metric being smooth functions on the jet bundle, we are able to expand the metric around a point, which after specifying a section
allows us to directly extract _n-point amplitudes_, here done for \(n=2,\,3,\,4\), in terms of products of momenta and the components of the metric evaluated at that point.
Here are some things that we do _not_ do in this paper and that are left for future work:
1. We do not describe how to implement gauging of a subgroup of the global symmetry in this jet bundle formulation.
2. We do not show how to couple this jet bundle geometry to fermions. For 2-derivative Lagrangian terms with two fermion insertions, this issue has been tackled recently in Refs. [47; 48; 49].
3. We do not go into any details concerning phenomenology.
The paper is structured as follows: in §2 we review and refine the formulation of 2-derivative EFT Lagrangians using field space geometry. In particular, we hope to clarify a little the notion of (non-derivative) field redefinitions. In §3 we recast this formalism using a field space bundle, which is more general (and motivated by locality). We again revisit the issue of field redefinitions, now equipped with the notion of a bundle morphism. Jet bundles are introduced in §4 and their relation to scalar Lagrangians is discussed in great detail in §5. Section 6 contains a few toy examples that will be useful to see how 1-jet bundle geometry captures operator bases including up to 4 derivatives; these examples also invite us to take a first look at the redundancies present in the description. In §7 we turn to the SMEFT/HEFT case of four scalars respecting a custodial \(O(4)\) symmetry. Finally, in §8 we comment on the interplay with scattering amplitudes, before concluding in §9.
## 2 Preliminaries
Our goal is to eventually describe the higher-derivative expansion of the Higgs EFTs, namely 'HEFT' or 'SMEFT', using geometry. We begin by reviewing the basic idea for geometrically formulating the Higgs EFTs.
The Higgs sector of the electroweak theory, by which we refer to the part of the Lagrangian involving _only_ scalar fields, is described by a 4d sigma model with a target space \((M,g)\), which is a smooth manifold of real dimension 4, with Riemannian metric \(g\). This means that, given also a 4d spacetime manifold with a (pseudo-)Riemannian metric, \((\Sigma,\eta)\), the degrees of freedom of the QFT are smooth (\(C^{\infty}\)) maps
\[\phi(x):\Sigma\to M\,, \tag{1}\]
with dynamics described by an action that is a particular functional \(S[\phi]\). On a patch (open set) \(\mathcal{F}_{M}\) of \(M\) one can provide local coordinates \(u^{i}:\mathcal{F}_{M}\to\mathbb{R}^{4}\), where \(i=1,\ldots,4\), and on a patch \(\mathcal{U}_{\Sigma}\) of \(\Sigma\) one can provide local coordinates \(x^{\mu}\). In this paper, we always take \(\Sigma\) to be a fixed non-dynamical background manifold, with a fixed metric - usually flat Minkowski space \(\mathbb{R}^{3,1}\), with \(\eta=\text{diag}(1,-1,-1,-1)\).1
Of course this geometric picture is not reserved for the particular 4d theories that describe the Higgs, but equally describes scalar fields in any spacetime dimension \(d\),2 with any target space geometry. In §6.1, for example, we consider a quantum mechanical example, _i.e._ a scalar field theory in \(0+1\) dimensions (related examples have been recently discussed in [51, 52]). An important class of examples in physics is those (pseudo-)scalars arising from spontaneous symmetry breaking \(G\to H\), for \(d>2\) [53]. In this case the scalar fields, here pseudo Nambu Goldstone bosons (or pNGBs), are known to parametrize the coset space \(M\cong G/H\). In addition to the chiral Lagrangian [54] that describes QCD pions, arguably a pillar of theoretical particle physics, there exist physically important examples of such pNGB Lagrangians in condensed matter physics, _e.g._ describing fluids [55, 56] and more exotic phases [57, 58], and even cosmology [59, 60].
Footnote 2: The source manifold \(\Sigma\) need not be identified with spacetime, either; more generally it is the ‘world-volume’ of the theory. The most familiar example in which \(\Sigma\) is not identified with spacetime occurs in string theory, where one usually identifies \(\Sigma\) with the 2d string worldsheet, and the target space \(M\) with the physical spacetime in which the string moves. A similar description can be used for any extended object.
Example: Standard Model (SM). For the special case of the SM Higgs sector, we can take \(M\cong\mathbb{R}^{4}\) with globally defined Cartesian coordinates \(u^{i}\), and a flat metric.
#### Symmetries of SMEFT and HEFT
An essential bit of data required to define these Higgs EFTs is an _internal symmetry_, by which we mean (for now) that the target space \(M\) _is equipped with the smooth action \(\sigma\) by a Lie group \(G\)_. This induces a group action on the fields \(\phi(x)\) which are (for now) maps into \(M\), under which we require that the field theory action functional \(S[\phi]\), or more precisely the phase \(e^{iS[\phi]}\), be invariant. It is in this Lie group action where the difference between HEFT and SMEFT lies. For both cases the group is the same, namely \(G=O(4)\) in the custodial limit3 (which we assume throughout most of this paper), but its action \(\sigma\) on \(M\) is different.
Footnote 3: Actually, in neither SMEFT nor HEFT do we know the global structure of the custodial symmetry group \(G\), only that its Lie algebra is \(\mathfrak{so}(4)\). We take \(SO(4)\) to be enlarged by a parity symmetry _e.g._\(u^{4}\to-u^{4}\) to an \(O(4)\cong SO(4)\rtimes\mathbb{Z}/2\) symmetry; in other literature, the symmetry is taken to be the universal cover \(SU(2)_{L}\times SU(2)_{R}\cong\text{Spin}(4)\), which is a non-isomorphic central extension of \(SO(4)\) also by \(\mathbb{Z}/2\). This global ambiguity is unsurprising given that even the electroweak _gauge_ symmetry is ambiguous, being either \(SU(2)\times U(1)\) or \(U(2)\) (see _e.g._[61, 62]).
This distinction is easy to express given a particular set of local coordinates on a patch, but is harder to formulate in a coordinate-free (or indeed field-redefinition-invariant) way. One of the successes of the geometric formulation of Higgs EFTs has been to provide coordinate-free criteria for detecting an \(O(4)\) action that is decisively non-SMEFT [16, 33]. For now let us use particular local coordinates to get started.
In the case of SMEFT, and for a particular set of local coordinates \(u^{i}=h^{i}\) on some open set \(\mathcal{F}_{M}\subset M\), the group action \(\sigma_{\text{S}}\) is simply
\[\sigma_{\text{S}}:O(4)\times M\to M:\;(O,h^{i})\mapsto Oh^{i}\,, \tag{2}\]
where \(O\in O(4)\) acts on 4-component real vectors by matrix multiplication.4 Evidently, this means that electroweak symmetry \(SU(2)_{L}\times U(1)_{Y}\subset G\) acts _linearly_. A consequence of this linear group action is that the patch \({\cal F}_{M}\) always contains a _fixed point_ of the group action, namely the origin \(o\) (\(h^{i}=\vec{0}\)), meaning that \(\sigma_{\rm S}:(O,o)\mapsto o\ \forall O\in O(4)\). This is the electroweak symmetry preserving point. As shown in [27], the converse is also true, namely if \(M\) contains an \(O(4)\) fixed point then, in the vicinity of that point, there is always a coordinate chart \(u^{i}=h^{i}\) in which \(O(4)\) acts linearly on \(h^{i}\). Moving away from this fixed point, any other point on \({\cal F}_{M}\), written in the local coordinates \(h^{i}\), preserves an \(O(3)\subset O(4)\) subgroup. The vacuum of our Universe is such a symmetry breaking point \(h^{i}\neq\vec{0}\), with length squared \(\delta_{ij}h^{i}h^{j}=v^{2}\).
Footnote 4: We are being a little cavalier in expressing the group action, which is well-defined on all of \(M\), only by its action on points in the patch \({\cal F}_{M}\) that is coordinatized by \(\{h^{i}\}\). In principle, one should provide an atlas of coordinate charts covering \(M\) and define the group action everywhere. Also note that the fixed patch \({\cal U}\) will itself be moved by the group action, and so there can be points ‘either side of the edge’ of the patch for which Eq. (2.2) cannot be used. We invite the reader to look past such subtleties here.
For HEFT, the symmetry breaking is made manifest by the choice of group action. Let us write our local coordinates in the form \(u^{i}=(h,\pi^{a}/v)\), where \(a=1,\ldots,3\). HEFT is characterised by a _non-linear_ group action \(\sigma_{\rm H}\) in these local coordinates
\[\sigma_{\rm H}:O(4)\times M\to M:\ (O,h,\pi^{a}/v)\mapsto\left(h,O\left(\pi^{a} /v,\sqrt{1-\pi\cdot\pi/v^{2}}\right)^{T}\right)\,. \tag{2.3}\]
In words, the radial mode \(h\) is invariant under the \(O(4)\) group action, while the Goldstone modes \(\pi^{a}\) transform non-linearly, via the usual action of \(O(4)\) on a unit vector spanning \(S^{3}\cong O(4)/O(3)\), the coset space associated with the electroweak symmetry breaking.
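As a concrete sanity check (ours, not part of the original discussion), one can verify numerically that (2.3) really does define an \(O(4)\) action in these coordinates: the embedded Goldstone vector stays on the unit \(S^{3}\), the radial mode \(h\) is untouched, and acting with two group elements in succession agrees with acting with their product, at least on the patch where the fourth component of the embedded vector remains positive. The helper names below are ours.

```python
import numpy as np

v = 1.0  # electroweak scale; set to 1 for this numerical check

def rot(i, j, theta, n=4):
    """An O(4) element: rotation by angle theta in the (i, j) plane of R^n."""
    O = np.eye(n)
    O[i, i] = O[j, j] = np.cos(theta)
    O[i, j], O[j, i] = -np.sin(theta), np.sin(theta)
    return O

def embed(pi):
    """Goldstones pi^a -> unit vector (pi/v, sqrt(1 - pi.pi/v^2)) spanning S^3."""
    return np.append(pi / v, np.sqrt(1.0 - pi @ pi / v**2))

def sigma_H(O, h, pi):
    """The non-linear group action of Eq. (2.3): h is inert, pi transforms through S^3."""
    n4 = O @ embed(pi)
    return h, v * n4[:3]   # valid while the fourth component stays positive (same chart)

O1 = rot(0, 3, 0.10) @ rot(1, 2, 0.20)
O2 = rot(0, 1, 0.15) @ rot(2, 3, 0.05)
h, pi = 0.3, np.array([0.10, -0.05, 0.20])

h1, pi1 = sigma_H(O2, h, pi)
h2, pi2 = sigma_H(O1, h1, pi1)
h12, pi12 = sigma_H(O1 @ O2, h, pi)

print(np.isclose(embed(pi) @ embed(pi), 1.0))   # True: the embedded vector lies on S^3
print(h2 == h12, np.allclose(pi2, pi12))        # True True: composition law holds on this patch
```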
Spacetime \((\Sigma,\eta)\) is also equipped with a symmetry, which is a smooth action \(\rho\) by the Poincare group \(\cong\mathbb{R}^{1,3}\rtimes O(1,3)\). For our coordinates \(x^{\mu}\) on Minkowski spacetime \(\mathbb{R}^{1,3}\), the (non-linear) group action is
\[\rho:\text{Poinc}\times\Sigma\to\Sigma\ :\ (a^{\mu},L_{\nu}^{\mu},x^{\mu}) \mapsto L_{\nu}^{\mu}x^{\nu}+a^{\mu} \tag{2.4}\]
Throughout this paper we only consider Poincare-invariant theories (including, but not only, SMEFT and HEFT).
### Two-derivative Lagrangians from field space geometry
The dynamics of such scalar field theories lend themselves to a geometric description. This means that certain terms in the Lagrangian can be built using geometric structures on the target space, which we have already stipulated is a Riemannian manifold (and sometimes, more basically, a topology alone is sufficient).
The most important and natural geometric construction is of the kinetic term or, more generally, the Lagrangian terms featuring exactly two derivatives. This requires a geometry on \(M\), in the form of the metric \(g\), which recall is a symmetric and everywhere non-degenerate \((0,2)\)-tensor on \(M\). The two-derivative Lagrangian terms are then
\[{\cal L}_{2\partial}[\phi,g]=\frac{1}{2}\left\langle\eta^{-1},\,\phi^{*}(g) \right\rangle. \tag{2.5}\]
That is, we pull back the metric \(g\) along the map \(\phi\),5 to obtain a \((0,2)\) tensor on spacetime, which we contract with the inverse spacetime metric to get an ordinary function. Note that this formula for the Lagrangian is 'coordinate-free'; the map \(\phi:\Sigma\to M\) is defined irrespective of our choices of coordinate chart, and both \(\eta^{-1}\) and \(g\) are tensors. The Lagrangian will be moreover invariant under the symmetry of our theory if the metric \(g\) is \(\sigma\)-invariant,6 assuming also that \(\eta\) is Poincare invariant. Our notation \(\mathcal{L}_{2\partial}[\phi,g]\) emphasizes that the Lagrangian is a functional of the map (field) \(\phi\), given \(g\). To obtain the action \(S[\phi,g]\), we integrate \(\mathcal{L}_{2\partial}[\phi,g]\) over \(\Sigma\) using the canonical volume form \(\omega_{\Sigma}:=\sqrt{|\eta|}d^{4}x\), as we would for any Lagrangian 'term' that is a function (rather than a form).
Footnote 5: We use the notation \(f^{*}(\cdot)\), standard in differential geometry, to denote the pullback of an object along the map \(f\). Likewise, pushforwards are denoted \(f_{*}(\cdot)\). In order to avoid confusion, any notion of ‘complex conjugation’ _e.g._ of a scalar Higgs field, will be denoted with a dagger, _viz._ \(\phi^{\dagger}\).
Footnote 6: Note that the stipulation of ‘invariance’ refers to the specific group _action_, which recall differs for HEFT _vs_ SMEFT, and which we emphasize by writing ‘\(\sigma\)-invariant’ rather than just ‘\(G\)-invariant’.
The compact, coordinate-free formula (2.5) can be written in a notation more familiar to physicists by using our local coordinates \(x^{\mu}\) (on \(\mathcal{U}_{\Sigma}\subset\Sigma\)) and \(u^{i}\) (on \(\mathcal{F}_{M}\subset M\)). In these local coordinates, the metrics are
\[\eta =\eta_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}\,,\qquad\text{(flat)}, \tag{2.6}\] \[g =g_{ij}(u^{i})\,du^{i}\otimes du^{j}\,, \tag{2.7}\]
and then the Lagrangian (2.5) is
\[\mathcal{L}_{2\partial}[\phi,g]=\frac{1}{2}\,\eta^{\mu\nu}\,g_{ij}\left(\phi^{i}(x)\right)\,\partial_{\mu}\phi^{i}(x)\partial_{\nu}\phi^{j}(x)\,, \tag{2.8}\]
where
\[\phi^{i}(x):=(u^{i}\circ\phi)(x)\,. \tag{2.9}\]
In words, the quantity \(\phi^{i}(x)\) returns the values of the local coordinates \(u^{i}\) evaluated at the image of spacetime point \(x\) under the map \(\phi\) - that is, the 'field value' at \(x\).
Example (ctd.): SM. Take \(M=\mathbb{R}^{4}\) and \(u^{i}\) to be global Cartesian coordinates thereon. The SM construction takes only the subset of Lagrangian terms that are renormalisable. With this extra condition, symmetry dictates that \(g=\delta_{ij}du^{i}du^{j}\) is just the flat metric on \(\mathbb{R}^{4}\), as anticipated above. This pulls back under \(\phi(x)\) to give the usual Higgs kinetic term,
\[\mathcal{L}_{\text{kin}}=\frac{1}{2}\eta^{\mu\nu}\delta_{ij}\partial_{\mu} \phi^{i}(x)\partial_{\nu}\phi^{j}(x)\,,\]
upon evaluating (2.5).
Example: SMEFT. In the case of SMEFT the most general \(\sigma_{\text{S}}\)-invariant metric can be expressed in terms of two arbitrary functions \(A(z)\) and \(B(z)\), via [26, 27]
\[g_{ij}(u^{i})=A\left(\frac{u\cdot u}{\Lambda^{2}}\right)\,\delta_{ij}+B\left( \frac{u\cdot u}{\Lambda^{2}}\right)\,\delta_{ik}\delta_{jl}\frac{u^{k}u^{l}}{ \Lambda^{2}}\,,\]
in our chosen SMEFT coordinate chart, where \(v\cdot w:=\delta_{ij}v^{i}w^{j}\). We have introduced appropriate factors of the formal power counting parameter \(\Lambda\), which will be identified with
the EFT cut-off scale in a physical theory. Here, \(A(0)=1\) and \(B(0)=0\) in order to recover the SM in the limit \(\Lambda\to\infty\). Substituting this metric into (2.8), or equivalently (2.5), yields the most general set of \(\sigma_{\mathrm{S}}\)-invariant SMEFT operators with exactly two derivatives.
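As an aside (this numerical sketch is ours; the functions \(A\) and \(B\) below are arbitrary choices satisfying \(A(0)=1\), \(B(0)=0\)), the claimed \(\sigma_{\mathrm{S}}\)-invariance of this two-function metric is easy to verify: for any \(O\in O(4)\) one should find \(O^{T}g(Ou)\,O=g(u)\).

```python
import numpy as np

rng = np.random.default_rng(0)
Lam = 2.5                                  # the formal power-counting scale Lambda
A = lambda z: 1.0 + 0.3*z - 0.1*z**2       # arbitrary test functions with A(0) = 1
B = lambda z: 0.7*z                        # and B(0) = 0

def g(u):
    """g_ij(u) = A(u.u/Lam^2) delta_ij + B(u.u/Lam^2) u_i u_j / Lam^2."""
    z = (u @ u) / Lam**2
    return A(z)*np.eye(4) + B(z)*np.outer(u, u)/Lam**2

O, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a random orthogonal matrix, i.e. an O(4) element
u = rng.normal(size=4)

# the pullback of g under the linear action u -> O u equals g itself
print(np.allclose(O.T @ g(O @ u) @ O, g(u)))   # True
```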
### Other terms in the scalar EFT
We emphasize that this procedure constructs only a subset of the non-renormalisable EFT Lagrangian, namely all operators with exactly two derivatives (but any number of field insertions), from the metric \(g\) on field space \(M\). Other terms in the EFT are, in this picture, included 'by hand', and can be considered as extra structures on \(M\) beyond the metric. These other interactions fall into three classes:
1. The 'potential' is the part of the Lagrangian with _zero derivatives_; it is the pullback of a \(\sigma\)-invariant function \(V(\phi):M\to\mathbb{R}\) from \(M\) to \(\Sigma\), which is again integrated on \(\Sigma\) using \(\omega_{\Sigma}\) to obtain the action contribution; that is, \(S[\phi]\supset\int_{\Sigma}\phi^{*}(V)\omega_{\Sigma}\).
2. There are all the local operators appearing in the EFT construction of Callan, Coleman, Wess, and Zumino [11; 63] with _more than two derivatives._ Capturing these operators systematically using geometry is a primary purpose of the present paper.
3. The action might also admit topological terms like the Wess-Zumino-Witten term [64; 65], that are the pullback of (possibly locally-defined) differential forms from \(M\) to \(\Sigma\), then integrated directly on \(\Sigma\). These require only a topology on each space. Invariance under \(\sigma\) is rather more subtle here than for local operators [66], especially when the map \(\phi\) is topologically non-trivial; a large class of such terms are given by \(\sigma\)-invariant differential cocycles [67] on \(M\), of degree \(d+1\) where \(d=\dim(\Sigma)\) in general. Examples of such terms abound in Composite Higgs models (which are SMEFT) [68; 69], and _e.g._ in the condensed matter examples mentioned previously [70; 71].
We largely neglect terms of type (3) in the rest of this paper, focussing rather on the non-topological (_i.e._ metric-dependent) sector of the EFT. We briefly comment on topological terms in the Higgs EFTs in §7.4.
### What is a (non-derivative) field redefinition?
One central feature of the geometric formulation of Higgs EFTs has been the identification of (some subset of) field redefinitions, which are symmetries of the partition function and thence of observables (see _e.g._[72]), with coordinate transformations on field space \(M\). Specifically, 'non-derivative field redefinitions' of the form \(\phi(x)\mapsto\psi(\phi(x))\) have been identified with changes of coordinates. It is not clear how to think about derivative field redefinitions, whereby \(\phi\mapsto\psi(\phi,\partial\phi,\dots)\), in this geometric picture, since the target space \(M\) knows nothing about spacetime derivatives. At this stage, we want to revisit this established lore concerning non-derivative field redefinitions. Our discussion will be fairly pedantic, but in doing so will open the way to a simple geometric picture for more general smooth field redefinitions, even before we consider the jet bundle extension of field space in later Sections.
To begin, we emphasize that we have been careful, so far, to distinguish local coordinates \(u^{i}\) on \({\cal F}_{M}\) from the _field variable_\(\phi\), which is a map from \(\Sigma\) into \(M\). This is again distinguished from \(\phi^{i}=u^{i}\circ\phi\), which evaluates the local coordinate values of the field for each spacetime point. A 'change of coordinates', _a.k.a._ a 'coordinate transformation' or a 'change of chart', is, when restricted to \({\cal F}_{M}\), a map \(u^{i}\to u^{i\prime}(u^{i})\). This is not obviously related to a redefinition of the _field_, which we would more readily identify with some transformation on the map \(\phi\) (inducing a transformation on the \(\phi^{i}\)). In this Subsection we contrast how these two kinds of transformation, namely a change in coordinates \(u^{i}\to u^{i\prime}\)_vs._ a change in map \(\phi\to\psi\), affect the Lagrangian that we construct from the metric on \(M\) via (5). The main concepts presented in this Subsection are illustrated in Figure 2.
#### 2.3.1 Change of coordinates
The first key point is simply that the field space metric \(g=g_{ij}(u^{i})du^{i}\otimes du^{j}\)_does not change at all_ under a change of coordinates, when evaluated at the same point \(m\) in \(M\). The components change via appropriate factors of the Jacobian matrix, _viz._
\[g_{ij}=\frac{\partial u^{k\prime}}{\partial u^{i}}\frac{\partial u^{l\prime}}{\partial u^{j}}g^{\prime}_{kl}\,, \tag{2.10}\]
which is compensated by the variation of the 1-forms
\[du^{\prime i}=\frac{\partial u^{\prime i}}{\partial u^{k}}du^{k}\,. \tag{2.11}\]
This is just the statement that the metric is a tensor, and so \(g(m)\) evaluated at any point \(m\in M\) is independent of the choice of the coordinate chart \(\{u^{i}\}\). Once we pullback the metric along the map \(\phi(x)\) and form the Lagrangian \({\cal L}=(1/2)\langle\eta^{-1},\,\phi^{*}(g)\rangle\), one obtains exactly the same function of the field variable \(\phi(x)\), regardless of having done the change of coordinates.
If this argument sounds a little thin, we can make the point totally explicit with an example. Consider a 1-dimensional field space manifold \(\mathbb{R}\), on which we define a local coordinate \(\{u\}\). In this coordinate chart, consider a (flat) metric \(g=du\otimes du\); in components, \(g_{uu}=1\). Choose a map \(\phi:\Sigma\to\mathbb{R}\), namely a scalar field configuration. The Lagrangian formed as in Eq. (2.5) is then \({\cal L}=\frac{1}{2}\partial_{\mu}\phi(x)\partial^{\mu}\phi(x)\), the kinetic energy for a real scalar.7 Now, let us do a change of coordinates in the target space, and send
Footnote 7: We slightly abuse notation here and let \(\phi(x)\) denote both the map itself and the coordinate value \(u\circ\phi\); there is no component index because the target space is 1-dimensional.
\[u\mapsto u^{\prime}=u^{2}\,. \tag{2.12}\]
In the illustration of Fig. 2, this corresponds to reading off coordinates using the scale on the right side of the plot, rather than the left, for the _same_ points on \(M\). The metric expressed in the new coordinate system, but at the same point, is
\[g^{\prime}=\frac{1}{4u^{\prime}}du^{\prime}du^{\prime}(=g)\,, \tag{2.13}\]
and so its components are \(g_{u^{\prime}u^{\prime}}=1/(4u^{\prime})\).
To form the 'new' Lagrangian after doing the coordinate transformation, we then pullback the metric, in the new coordinates, along the same map \(\phi(x)\). Note that the map \(\phi(x)\) is defined independently of coordinates - it takes points in \(\Sigma\) to points in \(M\) - but that we evaluate its image in the given coordinate chart. We find the Lagrangian
\[\mathcal{L}[\phi,g]\mapsto\mathcal{L}[\phi,g^{\prime}] =\frac{1}{2}\langle\eta^{-1},\,\phi^{*}(g^{\prime})\rangle\] \[=\frac{1}{2}\eta^{\rho\sigma}\left\langle\partial_{\rho}\otimes\partial_{\sigma},\,\frac{1}{4(u^{\prime}\circ\phi)}\partial_{\mu}(u^{\prime}\circ\phi)\partial_{\nu}(u^{\prime}\circ\phi)\,dx^{\mu}\otimes dx^{\nu}\right\rangle\] \[=\frac{1}{2}\eta^{\rho\sigma}\left\langle\partial_{\rho}\otimes\partial_{\sigma},\,dx^{\mu}\otimes dx^{\nu}\right\rangle\ \frac{1}{4\phi^{2}}\partial_{\mu}(\phi^{2})\partial_{\nu}(\phi^{2})\] \[=\frac{1}{2}\eta^{\mu\nu}\,\frac{1}{4\phi^{2}}\,(2\phi\partial_{\mu}\phi)(2\phi\partial_{\nu}\phi)\] \[=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi=\mathcal{L}[\phi,g]\,, \tag{2.14}\]
the same as before. The tensorial nature of the metric means that, in general, the Lagrangian (2.5) is not changed (as a functional of its argument) by any change of coordinates on \(M\). The freedom to change coordinate charts is akin to a gauge redundancy in this geometric formulation of the theory.
#### 2.3.2 Change of maps
Let us instead consider what happens to the Lagrangian (2.5) when we change field space maps, from \(\phi(x)\) to a different map \(\psi(x):\Sigma\to M\). We assume the change in map \(\phi\to\psi\) is
Figure 2: Illustration of the notion of non-derivative field redefinitions in a toy example describing a single real scalar (take \(M=\mathbb{R}\), with coordinate \(u\)) in 1d spacetime (take \(\Sigma=\mathbb{R}\), with coordinate \(x\)). A field configuration is a map from \(\Sigma\) to \(M\); here \(\phi\) (red-solid) and \(\psi\) (blue-dashed) denote two such maps. A change of field space coordinates, such as \(u\mapsto u^{\prime}=u^{2}\), simply corresponds to reading off coordinate values using the right axis rather than the left axis. It does not move points in \(M\), nor change the metric \(g\) (or any other tensor) evaluated at any point \(m\in M\); if we pull back \(g\) to \(\Sigma\) using a fixed map \(\phi\) to form a Lagrangian, we obtain the same result regardless of whether we used the \(u\) or \(u^{\prime}\) coordinate chart on \(M\). A (non-derivative) field redefinition is rather a particular change of maps such that \(u\circ\psi\) can be expressed as a function of \(u\circ\phi\). In the figure, \(u\circ\psi=(u\circ\phi)^{2}\), such that evaluating \(\psi(x)\) is equivalent to first evaluating \(\phi(x)\), then doing a diffeomorphism \(f\) on \(M\) that sends a point in \(M\) with coordinate \(u\) to a different point with coordinate \(u^{2}\), expressed in a fixed coordinate chart. Reading off the value of the curve \(\psi(x)\) on the left is equivalent to reading off the value of \(\phi(x)\) on the right. The figure also illustrates that if \(\phi\) sends multiple points on \(\Sigma\) to the same point on \(M\), the change of map can be equivalent to a diffeomorphism on \(M\) only if \(\psi\) preserves this feature.
smooth. Recall that \(\phi^{i}(x)=(u^{i}\circ\phi)(x)\) denotes the local coordinates of the image of point \(x\) under the original map \(\phi\). A new map \(\psi\) sends each \(x\) to a different point \(\psi(x)\in M\), for which \(\psi^{i}(x)=(u^{i}\circ\psi)(x)\) are its local coordinates. The two different maps are indicated in Fig. 2 by the red-solid and blue-dashed curves.
We again use the simple example of a real scalar field for illustration, with field space \(M=\mathbb{R}\), local coordinate \(u\), and flat metric \(g=du\otimes du\). Consider the particular change of map, defined in terms of coordinate values (in fixed chart \(\{u\}\)) of field space points,
\[\phi(x)\mapsto\psi(x)\quad|\quad u\circ\psi=(u\circ\phi)^{2}\,. \tag{15}\]
(If we continue to abuse notation as above, we would write this simply as \(\psi=\phi^{2}\), denoting both the map and the coordinate value by the same symbol \(\phi\).) Now, when we pull back the metric \(g\) along this new map \(\psi\), the Lagrangian (2.5) is not necessarily the same (even though the metric \(g\) is unchanged) because \(\phi^{*}(g)\neq\psi^{*}(g)\). We obtain
\[\mathcal{L}[\phi,g]\mapsto\mathcal{L}[\psi,g] =\frac{1}{2}\langle\eta^{-1},\,\psi^{*}(g)\rangle\] \[=\frac{1}{2}\eta^{\rho\sigma}\,\langle\partial_{\rho}\otimes \partial_{\sigma},\,dx^{\mu}\otimes dx^{\nu}\rangle\,\,\,\partial_{\mu}(u \circ\psi)\partial_{\nu}(u\circ\psi)\] \[=\frac{1}{2}\eta^{\mu\nu}\,\,\partial_{\mu}(\phi^{2})\partial_{ \nu}(\phi^{2})=2\phi^{2}\partial_{\mu}\phi\partial^{\mu}\phi\,. \tag{16}\]
When expressed in terms of the original field variable, the functional form of the Lagrangian has indeed changed; the key difference from the manipulations in (2.14) is that there is no compensating factor of \(1/4\phi^{2}\) coming from the Jacobian.
Intuitively, the reason we get a different answer is that we pulled back the metric from _different points in field space_, namely those in the image of \(\psi\) rather than \(\phi\); this is not true for a simple change of coordinates, which does not move points in \(M\) but only relabels them. The transformation (2.16) of \(\mathcal{L}\) under the change of map \(\phi\to\psi\) is what one would usually call a 'field redefinition'.
#### 2.3.3 Field space diffeomorphisms
The change of maps \(\phi\to\psi\) that we have performed is, in this case, precisely equivalent to doing a _(local) field space diffeomorphism_\(f\) before pulling back the transformed metric \(f^{*}(g)\) along the same section \(\phi\). Recall that a diffeomorphism is a smooth invertible function between manifolds \(M\) and \(N\):
\[f:M\to N\,. \tag{17}\]
In words, every point in the manifold \(N\) can be related uniquely to a point in the manifold \(M\); if \(f\) moves the points hit by \(\phi\) to the points hit by \(\psi\), then it exactly replicates the effect of changing maps. To relate two maps \(\phi\) and \(\psi\) we try to construct a diffeomorphism \(f\) to fill in the commutative diagram:
\[\begin{CD}M@>{f}>>N\\ @A{\phi}AA@AA{\psi}A\\ \Sigma@=\Sigma\end{CD} \tag{2.18}\]
so that
\[\psi=f\circ\phi\,. \tag{2.19}\]
When considering field redefinitions, as we do here, we can take \(M\) and \(N\) to be the same manifold, and so \(f\) becomes a map from \(M\) to itself.
It is clear that such a diffeomorphism \(f\) cannot be found for _any_ pair \(\phi\) and \(\psi\) of field space maps. So, when can a change of maps (_a.k.a._ a field redefinition) be realised as a field space diffeomorphism? Consider a map \(\phi\) that sends two spacetime points \(x\) and \(y\) to the same point in field space, _viz._\(\phi(x)=\phi(y)=m_{x}\). Then, via (2.19), we can only change to a new map \(\psi\) that satisfies \(\psi(x)=f(\phi(x))=f(m_{x})\) and \(\psi(y)=f(\phi(y))=f(m_{x})\), thus \(\psi(x)\) and \(\psi(y)\) must also agree (see Fig. 2 for illustration). This means that \(\psi\) must be a function of \(\phi\). Explicitly, evaluating Eq. (2.19) on our coordinate chart \(\{u^{i}\}\) on \(M\),
\[u^{i}\circ\psi=u^{i}\circ f\circ\phi=u^{i}\circ f(\phi)\,, \tag{2.20}\]
thus the particular change of map induced by a smooth map \(f\) can be expressed as
\[\phi^{i}\mapsto\psi^{i}=f^{i}(\phi^{j})\,. \tag{2.21}\]
This is precisely what physicists call a 'non-derivative field redefinition', and applies to the example (2.15) considered above. If we additionally can write
\[\psi^{i}\mapsto\phi^{i}=(f^{-1})^{i}(\psi^{i}) \tag{2.22}\]
then the change of maps is a diffeomorphism and the theories are equivalent at the level of the manifolds on which they live.
Indeed, if we reconsider the example considered in the previous Subsection, we can see explicitly how one recovers the shift (2.16) in the Lagrangian that we obtained by changing map \(\phi\to\psi\), by instead doing a diffeomorphism \(f:M\to M\) before pulling back along the same map \(\phi\). The required diffeomorphism8 is one that sends field space point \(m\in\mathbb{R}\) with local coordinate \(u\) to the point \(m^{\prime}\) with coordinate \(u^{2}\) (written in the same coordinate chart); in local coordinates, we might simply write this as \(f:u\mapsto u^{2}\). In Fig. 2, this can be visualised by interpreting the right axis of the plot as the image of the left axis under \(f\), such that the two axes represent _different_ sets of points on \(M\). Under this diffeomorphism the _metric changes_, as
Footnote 8: If the manifold contains a point \(m\in M\) with \(u(m)=0\), then this map is only a _local_ diffeomorphism, _i.e._ a diffeomorphism on a patch that does not include this point, because \(f^{-1}:u\to\sqrt{u}\) is not differentiable at \(m\). Alternatively, the map \(f\) can be ‘promoted’ to a proper diffeomorphism by instead taking \(M=\mathbb{R}_{>0}\), the positive half-line.
\[g=du\otimes du\mapsto f^{*}g=\left(\frac{\partial u\circ f}{\partial u}\right) ^{2}du\otimes du=4u^{2}du\otimes du\,. \tag{2.23}\]
Pulling back \(f^{*}g\) along \(\phi\) then gives the transformed Lagrangian
\[\mathcal{L}[\phi,g]\mapsto\mathcal{L}[\phi,f^{*}(g)]=\frac{1}{2}\langle\eta^{-1 },\,\phi^{*}(f^{*}(g))\rangle\]
\[=\frac{1}{2}\eta^{\mu\nu}4(u\circ\phi)^{2}\partial_{\mu}(u\circ\phi)\partial_{\nu}(u\circ\phi)\] \[=2\phi^{2}\partial_{\mu}\phi\partial^{\mu}\phi=\mathcal{L}[\psi,g]\,, \tag{2.24}\]
matching the result (2.16) that we obtained by changing maps.
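The three computations above are simple enough to verify symbolically: the invariance (2.14) under the change of coordinates, the shifted result (2.16) under the change of map \(\psi=\phi^{2}\), and the agreement (2.24) obtained by instead pulling back \(f^{*}g\) along the original map. The following sketch (ours) uses sympy with a single 'spacetime' coordinate \(x\), so that \(\partial_{\mu}\to d/dx\):

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.Function('phi')(x)

# (2.14): change of coordinates u -> u' = u^2, same map phi, metric component g'_{u'u'} = 1/(4u')
u_prime = phi**2                                   # u' evaluated on the image of phi
L_coords = sp.Rational(1, 2)*(1/(4*u_prime))*sp.diff(u_prime, x)**2
print(sp.simplify(L_coords - sp.Rational(1, 2)*sp.diff(phi, x)**2))    # 0: Lagrangian unchanged

# (2.16): change of map psi = phi^2, same flat metric g = du (x) du
psi = phi**2
L_map = sp.Rational(1, 2)*sp.diff(psi, x)**2
print(sp.simplify(L_map - 2*phi**2*sp.diff(phi, x)**2))                # 0: matches (2.16)

# (2.24): diffeomorphism f: u -> u^2, so f*g = 4u^2 du (x) du, pulled back along the original phi
L_diffeo = sp.Rational(1, 2)*4*phi**2*sp.diff(phi, x)**2
print(sp.simplify(L_diffeo - L_map))                                   # 0: matches the change of map
```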
In a similar way, any non-derivative field redefinition of the form (2.21) can be implemented as a smooth map on the field space. The transformed Lagrangian can be equivalently expressed in terms of the transformed map \(\psi\), or in terms of the smooth map \(f\):
\[\mathcal{L}\mapsto\frac{1}{2}\langle\eta^{-1},\,\psi^{*}(g)\rangle=\frac{1}{2}\langle\eta^{-1},\,\phi^{*}(f^{*}(g))\rangle\,. \tag{2.25}\]
At risk of being overly pedantic, we wish to distinguish a diffeomorphism on \(M\) from a change of coordinates, although the two notions are intimately related. A diffeomorphism involves a change of coordinates _plus_ a pushforward to the new point; equivalently, a change of coordinates means doing a diffeomorphism, but then pulling back the tensor to be evaluated at the original point.
Finally, we stress that a change of maps \(\phi(x)\to\psi(x)\) is more general than the special case (2.21) that can be captured by field space diffeomorphisms (or even general smooth maps on field space). Differentials of the smooth map \(\phi\) can be used to construct, at least infinitesimally, what one would call 'derivative field redefinitions'.
#### 2.3.4 Local field space diffeomorphisms, and HEFT _vs._ SMEFT
There is an important subtlety in this discussion that we have so far glossed over, that was hinted at in footnote 8. Namely, it may be the case that the diffeomorphism \(f\) is _only defined locally_, meaning in words that not all points in \(M\) correspond to points in \(N\). A particularly relevant physics example of this situation occurs for the map from SMEFT to HEFT, as follows. Let us start with a coordinate system \(\{u^{1},u^{2},u^{3},u^{4}\}\) on a patch \(\mathcal{F}_{M}\subset M\) in which the metric is
\[g_{ij}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\,. \tag{2.26}\]
Pulling this back to \(\Sigma\) along a section \(\phi\) whose components in the chart are \(u^{i}\circ\phi=\phi^{i}(x)\), we find the familiar 2-derivative Lagrangian
\[\mathcal{L}=\frac{1}{2}\langle\eta^{-1},\phi^{*}g\rangle=\frac{1}{2}\delta_{ij}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j} \tag{2.27}\]
of a canonically normalised set of Higgs fields.
Now consider the map \(f:\mathcal{F}_{M}\to\mathcal{F}_{N}\subset N\) defined by
\[f:\begin{pmatrix}u^{1}\\ u^{2}\\ u^{3}\\ u^{4}\end{pmatrix}\to\left(1+\frac{u^{4}}{v}\right)\begin{pmatrix}u^{1}\\ u^{2}\\ u^{3}\\ \sqrt{v^{2}-(u^{1})^{2}-(u^{2})^{2}-(u^{3})^{2}}\end{pmatrix} \tag{2.28}\]
expressed in the same coordinate chart \(\{u^{i}\}\). After doing this smooth mapping, we compute the new metric to be
\[f^{*}g_{ij}=\begin{pmatrix}\left(1+\frac{u^{4}}{v}\right)^{2}\left(1+\frac{(u^{1})^{2}}{v^{2}-\vec{u}^{2}}\right)&\left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{1}u^{2}}{v^{2}-\vec{u}^{2}}&\left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{1}u^{3}}{v^{2}-\vec{u}^{2}}&0\\ \left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{1}u^{2}}{v^{2}-\vec{u}^{2}}&\left(1+\frac{u^{4}}{v}\right)^{2}\left(1+\frac{(u^{2})^{2}}{v^{2}-\vec{u}^{2}}\right)&\left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{2}u^{3}}{v^{2}-\vec{u}^{2}}&0\\ \left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{1}u^{3}}{v^{2}-\vec{u}^{2}}&\left(1+\frac{u^{4}}{v}\right)^{2}\frac{u^{2}u^{3}}{v^{2}-\vec{u}^{2}}&\left(1+\frac{u^{4}}{v}\right)^{2}\left(1+\frac{(u^{3})^{2}}{v^{2}-\vec{u}^{2}}\right)&0\\ 0&0&0&1\end{pmatrix}\,, \tag{2.29}\]
where \(\vec{u}=(u^{1},u^{2},u^{3})\). Once again we can pullback along a section \(\phi\) and contract with \(\eta^{-1}\) to obtain the Lagrangian function
\[\mathcal{L} =\frac{1}{2}\partial_{\mu}\phi^{4}\partial^{\mu}\phi^{4}+\frac{1} {2}\left(1+\frac{\phi^{4}}{v}\right)^{2}\left(\partial_{\mu}\phi^{1}\partial^ {\mu}\phi^{1}+\partial_{\mu}\phi^{2}\partial^{\mu}\phi^{2}+\partial_{\mu} \phi^{3}\partial^{\mu}\phi^{3}\right) \tag{2.30}\] \[+\frac{(1+\phi^{4}/v)^{2}}{v^{2}-(\phi^{1})^{2}-(\phi^{2})^{2}-( \phi^{3})^{2}}\bigg{(}\phi^{1}\partial_{\mu}\phi^{1}\phi^{2}\partial^{\mu} \phi^{2}+\phi^{1}\partial_{\mu}\phi^{1}\phi^{3}\partial^{\mu}\phi^{3}+\phi^{2 }\partial_{\mu}\phi^{2}\phi^{3}\partial^{\mu}\phi^{3}\bigg{)}\] \[+\frac{1}{2}\frac{(1+\phi^{4}/v)^{2}}{v^{2}-(\phi^{1})^{2}-(\phi^ {2})^{2}-(\phi^{3})^{2}}\bigg{(}(\phi^{1})^{2}\partial_{\mu}\phi^{1}\partial^ {\mu}\phi^{1}+(\phi^{2})^{2}\partial_{\mu}\phi^{2}\partial^{\mu}\phi^{2}+( \phi^{3})^{2}\partial_{\mu}\phi^{3}\partial^{\mu}\phi^{3}\bigg{)}\,.\]
Introducing the notation \(\vec{\phi}=(\phi^{1},\phi^{2},\phi^{3})\), this can be written in the more compact form:
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi^{4}\partial^{\mu}\phi^{4}+\frac{1} {2}\left(1+\frac{\phi^{4}}{v}\right)^{2}\left[\partial_{\mu}\vec{\phi}\cdot \partial^{\mu}\vec{\phi}+\frac{1}{v^{2}-\vec{\phi}^{2}}\bigg{(}\vec{\phi} \cdot\partial_{\mu}\vec{\phi}\bigg{)}^{2}\right]\,. \tag{2.31}\]
If we identify \(\phi^{4}\) with the physical Higgs boson field and the other \(\phi^{i}\) with the Goldstones, we see that the Higgs has a canonical kinetic term and does not enter any of the scalar products. Eq. (2.31) is in agreement with the results in Refs. [22, 73]. The map from SMEFT to HEFT is known to always exist and be smooth at the level of the Lagrangian.
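The algebra behind (2.29) and (2.31) is mechanical, and can be reproduced with a computer algebra system if desired. The following sympy sketch (ours) builds the Jacobian of the map (2.28), computes \(f^{*}g=J^{T}J\) for the flat metric (2.26), and spot-checks representative entries against (2.29):

```python
import sympy as sp

v = sp.symbols('v', positive=True)
u1, u2, u3, u4 = sp.symbols('u1 u2 u3 u4', real=True)
u = sp.Matrix([u1, u2, u3, u4])
usq = v**2 - u1**2 - u2**2 - u3**2                  # v^2 - \vec{u}^2

# the smooth map f of Eq. (2.28), written in the fixed chart {u^i}
f = (1 + u4/v) * sp.Matrix([u1, u2, u3, sp.sqrt(usq)])

J = f.jacobian(u)                                   # Jacobian d(u^i o f)/du^j
g_pulled = sp.simplify(J.T * sp.eye(4) * J)         # components of f*g for the flat metric (2.26)

# spot-check entries of Eq. (2.29)
print(sp.simplify(g_pulled[0, 0] - (1 + u4/v)**2*(1 + u1**2/usq)))      # 0
print(sp.simplify(g_pulled[1, 2] - (1 + u4/v)**2*u2*u3/usq))            # 0
print(sp.simplify(g_pulled[3, 3]), [sp.simplify(g_pulled[i, 3]) for i in range(3)])  # 1 [0, 0, 0]
```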
On the other hand, the inverse map (that would take one from HEFT to SMEFT) may have a singularity. The inverse map \(f^{-1}\) would be
\[f^{-1}:\begin{pmatrix}u^{1}\\ u^{2}\\ u^{3}\\ u^{4}\end{pmatrix}\rightarrow\frac{v}{\sqrt{\mathbf{u}^{2}}}\begin{pmatrix}u^{ 1}\\ u^{2}\\ u^{3}\\ \frac{\sqrt{\mathbf{u}^{2}}}{v}\left(\sqrt{\mathbf{u}^{2}}-v\right)\end{pmatrix}\,, \tag{2.32}\]
where \(\mathbf{u}=(u^{1},u^{2},u^{3},u^{4})\). This is clearly only well defined if \(\mathbf{u}^{2}\neq 0\).
The map from this Lagrangian back to (2.27) may have a singularity at the origin, in which case \(f\) is simply a smooth map on the whole manifold and only becomes a diffeomorphism (which must be invertible) when restricted to a patch that does not contain the origin (the \(O(4)\) fixed point). Mathematically speaking, this means that the two manifolds
on which SMEFT and HEFT are formulated are indistinguishable _away_ from this point. When SMEFT is not enough [33], we really only have a 'smooth map' \(f\) from SMEFT to HEFT, which cannot be inverted.
### Geometrizing amplitudes
As we saw in §2.3.1, a change of coordinates does not change the Lagrangian (whereas, we reiterate, a smooth map does - and the latter is what we mean by a non-derivative field redefinition). The coordinate invariance of the Lagrangian nevertheless remains a useful redundancy to exploit, because it means we are free to pick any valid coordinate system on a given local patch.
In particular, this opens up the choice of 'normal coordinates' in the vicinity of some point \(m\) in patch \(\mathcal{F}_{M}\subset M\) of field space, defined by the following properties:9
Footnote 9: Normal coordinates always exist under the conditions we assume. This is because, for any pseudo-Riemannian manifold \((M,g)\) and any point \(m\in M\), there exists a neighbourhood \(U\subset M\) and a neighbourhood \(V\subset T_{m}M\) such that the map
\[\exp_{m}:V\to U \tag{2.33}\]
is a diffeomorphism. Using a torsion free connection (such as the Levi-Civita connection) we can construct an isomorphism \(T_{m}M\sim\mathbb{R}^{n}\) allowing us to consider the triple \(\{\exp_{m}^{-1},V,U\}\) as a coordinate chart, which we refer to as normal coordinates (see _e.g._[74]). The chart is unique up to the choice of isomorphism.
1. For all \(1\leq i,j\leq n\) we have \[g_{ij}(m)=\bar{g}_{ij}\,, \tag{2.34}\] which is a diagonal matrix whose entries are either \(+1\) or \(-1\), depending on the signature of the metric;
2. For all \(1\leq i,j,k\leq n\) we have \[\Gamma^{k}_{ij}(m)=0\,; \tag{2.35}\]
3. For all \(1\leq i,j,k\leq n\) we have \[\partial_{k}g_{ij}(m)=0\,. \tag{2.36}\]
Using this system of normal coordinates, the metric tensor \(g_{ij}\) admits the following expansion around point \(m\in\mathcal{F}_{M}\)
\[g=g_{ij}(u)du^{i}\otimes du^{j}=\bigg{(}\bar{g}_{ij}+\frac{1}{3}R_{iklj}(m)u^{k}u^{l}+\frac{1}{6}R_{iklj;r}(m)u^{k}u^{l}u^{r}+\ldots\bigg{)}du^{i}\otimes du^{j} \tag{2.37}\]
where the normalization of the first term in the expansion naturally leads to a canonical kinetic term once we fix the signature of the metric.
The choice of normal coordinates is useful because the expansion then naturally organizes itself based on the number of fields, thus allowing for an easy relation to be established between components of the Riemann tensor and \(n\)-point amplitudes [75; 34]:
* \(\delta_{ij}du^{i}\otimes du^{j}\) enters in the propagator;
* \(\frac{1}{3}R_{iklj}(m)u^{k}u^{l}du^{i}\otimes du^{j}\) enters in four point amplitudes;
* \(\frac{1}{6}R_{iklj;r}(m)u^{k}u^{l}u^{r}du^{i}\otimes du^{j}\) enters in five point amplitudes.
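The \(1/3\) normalisation in (2.37) can be checked directly in a two-dimensional toy example (the check below is ours): truncate the expansion at quadratic order with the single independent component \(R_{1212}=K\), recompute the curvature of the resulting metric, and confirm that it returns \(K\) at the origin.

```python
import sympy as sp

x, y, K = sp.symbols('x y K', real=True)
u = [x, y]
# 2d truncation of (2.37): g_ij = delta_ij + (1/3) R_iklj u^k u^l, with R_1212 = K
g = sp.Matrix([[1 - K*y**2/3, K*x*y/3],
               [K*x*y/3,      1 - K*x**2/3]])
ginv = g.inv()

def Gamma(a, b, c):
    """Christoffel symbols Gamma^a_{bc} of the metric g."""
    return sp.Rational(1, 2)*sum(
        ginv[a, d]*(sp.diff(g[d, b], u[c]) + sp.diff(g[d, c], u[b]) - sp.diff(g[b, c], u[d]))
        for d in range(2))

def Riem(a, b, c, d):
    """Lowered Riemann tensor R_{abcd} = g_{ae} R^e_{bcd} in the standard convention."""
    Rup = [sp.diff(Gamma(e, d, b), u[c]) - sp.diff(Gamma(e, c, b), u[d])
           + sum(Gamma(e, c, f)*Gamma(f, d, b) - Gamma(e, d, f)*Gamma(f, c, b) for f in range(2))
           for e in range(2)]
    return sum(g[a, e]*Rup[e] for e in range(2))

print(sp.simplify(Riem(0, 1, 0, 1).subs({x: 0, y: 0})))   # -> K, recovering the input curvature
```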
In this field space picture, a potential (0-derivative) contribution is added to the Lagrangian by hand (see §2.2), independently of the field space geometry. Choosing coordinates \(u^{i}\) on \(\mathcal{F}_{M}\) such that \(u^{i}\circ m_{\rm vac}=\vec{0}\), where \(m_{\rm vac}\in M\) is the vacuum point, we have
\[\begin{split}\mathcal{L}&=\bigg{(}\delta_{ij}+\frac{1}{3}R_{iklj}(0)\phi^{k}\phi^{l}+\frac{1}{6}R_{iklj;r}(0)\phi^{k}\phi^{l}\phi^{r}+\dots\bigg{)}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j}\\ &-\bigg{(}\frac{1}{2}\frac{\partial^{2}V}{\partial\phi^{k}\partial\phi^{l}}(0)\phi^{k}\phi^{l}+\frac{1}{6}\frac{\partial^{3}V}{\partial\phi^{k}\partial\phi^{l}\partial\phi^{r}}(0)\phi^{k}\phi^{l}\phi^{r}+\dots\bigg{)}\end{split} \tag{2.38}\]
It can be seen that the contribution to the three point amplitude at this point comes entirely from the potential as a result of the vanishing of the Christoffel symbols [34]. When we eventually pass to the 1-jet bundle, we will see (§8) how to adapt this story to incorporate higher-derivative terms (and also how, in the jet bundle picture, the potential contribution also comes from components of the metric).
## 3 From Field Space to Bundles
Before we get into the main business of this paper, namely our consideration of EFT terms with \(>2\) derivatives, we find it helpful to first generalise slightly the above account of EFTs in terms of field space geometry. The generalisation, which is motivated by locality, is to replace the description in terms of maps (1) to a field space \(M\) by one in terms of fibre bundles, which is almost equivalent but a bit more general.10 While the generalisation to bundles might seem like a technicality here, we will describe several advantages over the field space picture in this Section; moreover, the bundle picture provides the starting point for passing to jet bundles when we turn to describe higher-derivative Lagrangians.
Footnote 10: JD is grateful to Ben Gripaios and Joseph Tooby-Smith for past discussion and correspondence related to ideas in this Section. See also Ref. [45].
### Fields as sections of fibre bundles
The idea is simple, but first let us recap some basic concepts. Recall that a fibre bundle is a triple \((E,\Sigma,\pi)\), where
* \(E\) is called the _total space_ of the bundle;
* \(\Sigma\) is called the _base space_. Both \(E\) and \(\Sigma\) are smooth manifolds;
* The _projection map_\(\pi:E\to\Sigma\) is a surjective submersion;
* _Local trivializations:_ for any point \(x\in\Sigma\) there exists an open neighbourhood \(\mathcal{U}_{x}\) and a diffeomorphism \(\varphi:\pi^{-1}(\mathcal{U}_{x})\to\mathcal{U}_{x}\times F\), where \(F\) is itself a manifold called the 'fibre', such that the projection \(\pi\) agrees with projection from \(\mathcal{U}_{x}\times F\) onto its first factor. In other words, locally the total space \(E\) looks like a direct product space, but this need not be true globally.
A _section_ of the fibre bundle is then a smooth map \(\phi:\Sigma\to E\) such that \(\pi\circ\phi=\mathrm{id}_{\Sigma}\). A section is like an 'inverse' of the projection map \(\pi\), taking you from the base space into the total space of the bundle, while preserving the base space point. Identifying the base \(\Sigma\) with spacetime, what a physicist calls a 'field' is thus a section of some bundle over \(\Sigma\). Sections will moreover play a starring role in defining jet bundles. We denote the set of all sections of \(\pi\) by \(\Gamma(\pi)\), and the set of _local_ sections whose domains include a point \(x\in\Sigma\) by \(\Gamma_{x}(\pi)\). With these definitions, let us return to scalar field theory on target space \(M\).
The sigma model map \(\phi(x):\Sigma\to M\) that we encountered before trivially defines a _section_ \(\phi\) of a particular fibre bundle \((\Sigma\times M,\Sigma,\pi)\) with fibre \(F=M\) coinciding with the 'field space' of the previous picture, and where the projection is trivially \(\pi(x,m)=x\ \forall m\in M\), \(x\in\Sigma\). This bundle can moreover be equipped with a product metric \(\eta\oplus g\), given the metrics \(\eta\) and \(g\) on \(\Sigma\) and \(M\) respectively (2.6, 2.7). Such a bundle that is globally a direct product is called a trivial bundle, for which \(\phi\) is a global section.
#### Locality
This field space formulation described in §2 implicitly assumes that \(\phi\) is a global section of a trivial bundle, but this restriction is unnecessarily strong. Given that we, as local physicists exploring our patch of the Universe, can only know about the structure of field space on some open set \(\mathcal{U}_{\Sigma}\subset\Sigma\), there is no reason to enforce that the field space bundle, which _locally_ has a product structure \(\cong\mathcal{U}_{\Sigma}\times M\), has a direct product structure \(=\Sigma\times M\) globally. More generally, therefore, one can (and should) allow fibre bundles \((E,\Sigma,\pi)\) with fibre \(F=M\) that are topologically non-trivial.
We should therefore pass from a description in which the field \(\phi(x)\) is a map from \(\Sigma\) to \(E\) to a slightly more general picture, in which:11
Footnote 11: This viewpoint is emphasized in Ref. [45], which adopts an even more general setup, again motivated by locality arguments, whereby the fibre bundle is replaced by a _fibred manifold_. For a fibred manifold, the fibres \(\pi^{-1}(x)\) above each spacetime point \(x\), which the reader can think of as the ‘field space’ associated to the point \(x\), are not even required to be diffeomorphic to one another. We are happy to limit ourselves to plain old fibre bundles in the present work.
A scalar field \(\phi\in\Gamma_{x}(\pi)\) is a local section of a fibre bundle \((E,\Sigma,\pi)\) with fibre \(M\). (3.1)
Given a patch \(\mathcal{U}_{\Sigma}\subset\Sigma\) with local coordinates \(x^{\mu}\), one can introduce a local fibred coordinate system \((x^{\mu},u^{i})\) on \(\pi^{-1}(\mathcal{U}_{\Sigma})\subset E\), that matches our previous choice. To guide the reader, we illustrate the basic 'local product' structure of a fibre bundle and the notion of a section in Fig. 3 (left), for a toy example in which the total space \(E\) has dimension two.
#### Lower derivative Lagrangian terms from geometry
In these coordinates, and starting from metrics \(\eta\) and \(g\) on \(\Sigma\) and \(M\) respectively (in the field space picture), a natural choice of geometry on \(E\) is to take the metric on \(E\) to be locally the product metric, \(g_{E}=\eta\oplus g\). Pulling \(g_{E}\) back to \(\Sigma\) along a section \(\phi\), and contracting with the inverse metric \(\eta^{-1}\) on \(\Sigma\), via essentially the same formula as (2.5), we get the same 2-derivative Lagrangian (2.8) that we obtained from the metric \(g\) in the 'field space maps' picture of §2, up to a constant shift coming from pulling back the \(\eta_{\mu\nu}dx^{\mu}dx^{\nu}\) part of \(g_{E}\).
Even though this geometry is a natural choice, one can consider more general geometries on the field space bundle; in particular, consider a metric on the field space bundle \(E\) of the form
\[g_{E}=-\frac{2}{d}V(u)\eta_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}+g_{ij}(u)du^{i} \otimes du^{j}\,, \tag{3.2}\]
where \(d\) is the spacetime dimension (say, \(d=4\) for the Higgs EFTs). Upon pulling this metric back to spacetime \((\Sigma,\eta)\), which we emphasize is still equipped with the flat geometry of Minkowski space, and contracting with the inverse spacetime metric, we get the Lagrangian
\[\mathcal{L}[\phi,g_{E}]=\frac{1}{2}\,\langle\eta^{-1},\,\phi^{*}(g_{E})\rangle =\frac{1}{2}g_{ij}(\phi)\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j}-V(\phi)\,. \tag{3.3}\]
We thus recover the most general scalar EFT Lagrangian with _up to_ 2-derivatives, but including also the 0-derivative potential terms, by passing from geometry on a target space \(M\) to a geometry on the fibre bundle \(E\). As an example, this is already enough to capture the whole (renormalisable) Higgs sector of the SM Lagrangian via geometry. We will revisit this feature when we pass to jet bundles later in the paper.
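To make the role of the \(-\frac{2}{d}V\) factor in (3.2) completely explicit, one can carry out the pullback and contraction in components. The following symbolic sketch (ours) treats the \(\partial_{\mu}\phi^{i}\) as independent symbols and confirms that the \(\eta_{\mu\nu}\) block of \(g_{E}\) contributes exactly \(-V\) once contracted with \(\eta^{-1}\) and multiplied by \(1/2\), reproducing (3.3):

```python
import sympy as sp

d, n = 4, 2                                   # spacetime dimension and (for illustration) two fibre coordinates
eta = sp.diag(1, -1, -1, -1)                  # flat spacetime metric
u = sp.symbols('u1:3', real=True)             # fibre coordinates u^i
V = sp.Function('V')(*u)
g = sp.Matrix(n, n, lambda i, j: sp.Function(f'g{min(i, j)+1}{max(i, j)+1}')(*u))   # symmetric g_ij(u)
dphi = sp.Matrix(d, n, lambda mu, i: sp.Symbol(f'dphi{i+1}_{mu}'))                  # \partial_mu phi^i

# pullback of g_E = -(2/d) V eta_{mu nu} dx^mu dx^nu + g_ij du^i du^j along x -> (x, phi(x)):
# dx^mu pulls back to dx^mu, and du^i pulls back to (\partial_mu phi^i) dx^mu
pullback = -sp.Rational(2, d)*V*eta + dphi*g*dphi.T

L = sp.Rational(1, 2)*(eta.inv()*pullback).trace()                  # (1/2) <eta^{-1}, phi^*(g_E)>
kinetic = sp.Rational(1, 2)*(eta.inv()*(dphi*g*dphi.T)).trace()     # (1/2) g_ij d_mu phi^i d^mu phi^j

print(sp.simplify(L - (kinetic - V)))   # 0, i.e. Eq. (3.3)
```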
#### Symmetries on the field space bundle
For the case of SMEFT or HEFT, in this fibred coordinate system \((x^{\mu},u^{i})\) there is also a natural implementation of the symmetries described in §2 at the level of field space bundles \((E,\Sigma,\pi)\). The group action on the bundle is, at least locally, simply
\[(a^{\mu},L^{\mu}_{\nu},O;\,x^{\mu},u^{i})\mapsto(L^{\mu}_{\nu}x^{\nu}+a^{\mu}, \sigma(O,u^{i}))\,, \tag{3.4}\]
where one can insert the group action \(\sigma=\sigma_{\rm S}\) or \(\sigma_{\rm H}\) for SMEFT or HEFT respectively. It is, however, instructive to still think of the spacetime and internal symmetries as two distinct group actions, both in this case of Higgs EFTs and more generally. As we will see
Figure 3: Left: Illustration of a fibre bundle \((E,\Sigma,\pi)\) with fibre \(M\), which here we take to be a trivial bundle (\(E\cong\Sigma\times M\)) for ease of illustration, on which we have a fibred coordinate system \((x,u)\). This toy example describes a single real scalar (_e.g._ take \(M=\mathbb{R}\)) in 1d spacetime (_e.g._ take \(\Sigma=\mathbb{R}\)). The field \(\phi\) is a _section_ of such a fibre bundle, which in general need not be a trivial bundle. Right: bundle morphisms are pairs of maps \((f,\bar{f})\) that generically move points on \(E\) and \(\Sigma\) respectively, in a way that is compatible with the bundle projection maps. A field redefinition is a change of section. A non-derivative field redefinition is equivalent to a bundle morphism with \(\bar{f}=\mathrm{id}_{\Sigma}\), as indicated in our sketch by the map \(f_{E}\), for which the arrows are directed ‘vertically’.
in §3.3.1, we can think of these symmetries as bundle morphisms, and the identification of a symmetry as 'internal' can be given a natural definition in these terms.
### Field redefinitions are changes of section
In addition to the motivation by locality that we have described, plus the straightforward inclusion of lower-derivative Lagrangian terms, passing to this 'field space bundle' formulation of scalar EFTs affords a technical benefit when it comes to describing field redefinitions geometrically. Recall from §2.3 that, in the 'maps to field space' picture, a non-derivative field redefinition (or change of map \(\phi^{i}\mapsto\psi^{i}(\phi^{j})\)) is equivalent to doing a local diffeomorphism on the target space.
Regarding a scalar field instead as a section \(\phi\) of the bundle \((E,\Sigma,\pi)\), a general field redefinition will be implemented as a smooth change of (local) section
\[\Gamma_{x}(\pi)\ni\phi\mapsto\psi\in\Gamma_{x}(\pi) \tag{3.5}\]
This is itself a geometric notion of what it means to do a general field redefinition. A simple example of this is given in §3.3.4 below. For derivative field redefinitions, however, we are not guaranteed an equivalent description in terms of morphisms of the target space (or bundle), as we study more carefully in the next Section.
### Morphisms on the field space bundle
In this 'field space bundle' picture, a more general class of field redefinitions can be described in terms of bundle morphisms than we saw in the case of field space smooth maps above. Unpacking this statement requires a more-or-less formal definition of bundle morphisms, which also provide a natural language for describing _e.g._ symmetries of the theory.
Suppose we have a fibre bundle \(\pi:E\to\Sigma\), in the notation of §3, and \(\phi\) is a (local) section of that bundle. Given another bundle \(\rho:F\to\Omega\), a bundle morphism is a pair of maps \(f:E\to F\) and \(\bar{f}:\Sigma\to\Omega\) such that
\[\begin{CD}E@>{f}>>F\\ @V{\pi}VV@VV{\rho}V\\ \Sigma@>{\bar{f}}>>\Omega\end{CD} \tag{3.6}\]
is a commutative diagram. (We could, more generally, consider only a _local_ bundle morphism, whereby every element in the commutative diagram is replaced by open submanifolds, but such generality will not add much to the discussion.)
To describe both symmetries and field redefinitions, we are interested in fixing both bundles to be the same, \((E,\Sigma,\pi)\). Fig. 3 (right) illustrates how a bundle morphism acts in this case, in a 1d example. We are also sometimes interested in fixing the map from the base space to itself to be
\[\bar{f}=\mathrm{id}_{\Sigma}\,. \tag{3.7}\]
(The obvious exception to this will be when we discuss spacetime symmetries below.) The map \(f_{E}\) along the top is an automorphism from the bundle \((E,\Sigma,\pi)\) to itself, \(f_{E}:E\to E\). The commutative diagram becomes
\[\begin{CD}E@>{f_{E}}>>E\\ @V{\pi}VV@VV{\rho}V\\ \Sigma@=\Sigma\end{CD} \tag{3.8}\]
We thus have the relation of maps
\[\pi=\rho\circ f_{E}\,. \tag{3.9}\]
This, which follows from assumption (3.7), means that \(f_{E}\) does not move points in the base space, but only moves points in the fibre. In our local fibred coordinate system \((x^{\mu},u^{i})\) on \(E\), we have
\[f_{E}:(x^{\mu},u^{i})\mapsto(x^{\mu},f^{i}(x^{\mu},u^{j}))\,. \tag{3.10}\]
The map \(f^{i}(x^{\mu},u^{j})\) on the fibre coordinates can be regarded as a family of diffeomorphisms on fibres \(\{M_{x}\}\), for each value of the base point \(x\), that is free to vary (smoothly) with \(x\).
A pair of local sections \(\phi\in\Gamma_{x}(\pi)\) and \(\psi\in\Gamma_{x}(\rho)\) for these two bundles will also be related by the bundle morphism \(f_{E}\):
\[\psi=f_{E}\circ\phi\,. \tag{3.11}\]
In our fixed local fibred coordinate system \((x^{\mu},u^{i})\), Eq. (3.11) implies
\[\psi^{i}(x)=f^{i}(x,\phi^{j}(x))\,. \tag{3.12}\]
This is clearly a generalisation of (2.21) because now the functions \(f^{i}_{x}\) can vary continuously (and smoothly) with the base point \(x\). However, it is perhaps not an especially useful generalisation, given our primary interest is in describing Poincare-invariant quantum field theory actions - the more general change of section expressed in (3.12) allows for field redefinitions that have explicit spacetime dependence, which typically violate Poincare symmetry. Next, we discuss how this notion of bundle morphisms applies first to symmetries, and then turn to field redefinitions.
#### 3.3.1 Internal _vs_ spacetime symmetries
We already offered a description of symmetries in the 'field space bundle' formulation of Higgs EFTs, in terms of a group action by \(\mathrm{Poinc}\times O(4)\) on the bundle \(E\). Generally, a group action of Lie group \(G\) on a manifold \(E\) is equivalent to specifying a diffeomorphism of \(E\) for each group element \(g\in G\). A symmetry that can be described via such a group action can thus be naturally interpreted in terms of bundle morphisms; moreover, the bundle structure offers an obvious notion of what we mean by saying that a symmetry is 'internal' or otherwise.
In this language, we might say rather abstractly that a _symmetry_ is a particular bundle morphism \((f,\bar{f})\) such that the Lagrangian \(\mathcal{L}=(1/2)\langle\eta^{-1},\phi^{\star}(g)\rangle\) shifts by at most a total
derivative:12
Footnote 12: We also implicitly assume an invariance condition [66] is satisfied for the topological term, if present.
\[\langle\eta^{-1},\phi^{*}(f^{*}-1)g\rangle=\partial_{\mu}K^{\mu}\,. \tag{3.13}\]
Such a broad definition would account for the fact that (i) there can be many redundancies in the map from metric to Lagrangian (as will become more obvious when we pass to 1-jet bundles, especially in the examples of SS6 and SS7), and (ii) we wish to identify Lagrangians that differ by total derivatives. Both these facts mean that requiring invariance of the metric itself, _viz._
\[f^{*}g=g\,, \tag{3.14}\]
is too strong a condition for a symmetry. Nonetheless, the more general condition (3.13) is rather nasty to implement in practice - indeed, it is not even obvious why the set of morphisms satisfying (3.13) should form a group,13 whereas (diffeo)morphisms satisfying (3.14) clearly do form a group. In practice, the particular symmetries that we consider will be implemented by bundle morphisms that are symmetries of the metric (isometries), satisfying (3.14), so we are happy to proceed with this definition of (a certain class of) symmetry.
Footnote 13: Moreover, a definition of ‘symmetry’ like Eq. (3.13) is still far from being comprehensive; indeed, finding all symmetries can be an almost overwhelming task even for very simple theories, if a symmetry is broadly defined to be any ‘thing that does not change observables’. This would include transformations that act completely trivially, such as the change of coordinates considered in §2.3.1 above as well as more general redundancies of description such as gauge symmetries, right through to the huge symmetry under field redefinitions that is not seen at the Lagrangian level, but only after doing the path integral. The symmetries we consider here, namely symmetries of the metric on \(E\) that can be captured by bundle morphisms, lie somewhere between these two extremes. Finally, we remark that in recent years yet further ‘generalised’ notions of symmetry have been appreciated, which are not seen by their action on any local operators but by their action on extended objects like Wilson lines (see [76, 77] for recent surveys).
Continuing, an _internal symmetry_ is then a symmetry that is described by a 'base-point preserving' bundle morphism \((f,\bar{f})\) of type (3.8), namely for which \(\bar{f}=\mathrm{id}_{\Sigma}\), which means the group action does not move points in spacetime. More precisely, it moves points in a given fibre \(M_{x}\) to other points in that same fibre \(M_{x}\). In the case of SMEFT, say, the internal symmetry is unsurprisingly the \(O(4)\) group action:
\[\sigma_{\mathrm{S}}:O(4)\times E\to E:(O,x^{\mu},u^{i})\mapsto(x^{\mu}, Ou^{i}), \tag{3.15}\]
in our local fibred coordinates. This defines a set of base-point preserving bundle morphisms satisfying both the commutative diagram (3.8) and the group multiplication law. In general, an internal symmetry described by a Lie group action by \(G\) on the bundle \(E\) corresponds to the set
\[\{(f_{g},\bar{f}=\mathrm{id}_{\Sigma})\text{ a bundle morphism }\forall g\in G\ |\ f_{g_{1}}\circ f_{g_{2}}=f_{g_{1}g_{2}}\ \forall g_{1},\,g_{2}\in G\}\,. \tag{3.16}\]
In our SMEFT example, the morphisms are
\[f_{O}:(x^{\mu},u^{i})\to(x^{\mu},Ou^{i})\ \forall O\in O(4)\,. \tag{3.17}\]
On the other hand, a _spacetime_ symmetry is a map \(\bar{f}\) such that \((f,\bar{f})\) is a bundle morphism that is a symmetry. In our HEFT and SMEFT examples, which have \((3+1)\)-d Poincare invariance, we can take this spacetime symmetry to be defined by the group action on the bundle
\[\rho:\text{Poinc}\times E\to E:(a^{\mu},L^{\mu}_{\nu},x^{\mu},u^{i})\mapsto(L^{ \mu}_{\nu}x^{\nu}+a^{\mu},u^{i})\,. \tag{3.18}\]
Again this defines a collection of bundle morphisms, where the important thing is the map on the base of the bundle, \(\bar{f}_{a,L}:x^{\mu}\mapsto L^{\mu}_{\nu}x^{\nu}+a^{\mu}\) for every \(a^{\mu}\) and \(L^{\mu}_{\nu}\).
It is interesting that, as a collection of bundle morphisms, the internal symmetry is naturally implemented _locally_. The general formula for an 'internal morphism' in the bundle picture is \(\sigma:(x^{\mu},u^{i})\mapsto(x^{\mu},O(x).u^{i})\), where \(O(x)\) is an \(O(4)\)-valued function on spacetime. This suggests there should be a natural notion of gauging in this language (indeed, gauge fields are perhaps the most familiar example of quantum fields being sections on bundles over spacetime). We save a proper treatment of the gauging of internal symmetries, which is of course a crucial element in the electroweak theory, for future work.
#### 3.3.2 Non-derivative field redefinitions (again)
Having discussed symmetries, we now return to the issue of field redefinitions, and ask when such transformations can be described by the bundle morphisms defined in §3.3 above.
Firstly, if we take the maps \(f^{i}(x^{\mu},u^{j})\) in _e.g._ Eq. (3.10) to be independent of \(x\), then we have that \(u^{i}\circ\psi=u^{i}\circ f(\phi)\), equivalent to doing the same 'field space diffeomorphism' everywhere in spacetime, _i.e._ at every fibre. This special case of bundle morphisms can describe any non-derivative field redefinition, as considered in §2.3.2. Later, in §5.5.3 and §5.5.4, we will see how this is implemented in the more general jet bundle context.
#### 3.3.3 More general field redefinitions?
Consider again the general (base-point preserving) bundle morphism described by Eq. (3.12). We now examine, for completeness, what kind of field redefinitions can be captured by such bundle morphisms, and whether we get 'anything more' than non-derivative field redefinitions. Recall that we already have a geometric description of arbitrary derivative field redefinitions as arbitrary changes of section \(\phi\to\psi\). We see an explicit example of this below in §3.3.4.
To study this, let us start from the more general class of Lagrangian captured by our field space bundle geometry, including \(0\)-derivative (potential) terms. We take
\[g_{E}=-\frac{2}{d}V(u)\eta_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}+g_{ij}(u)du^{i} \otimes du^{j} \tag{3.19}\]
as in (3.2), which, pulling back along a section \(\phi:\Sigma\to E\), gives the general \(2\)-derivative Lagrangian \(\mathcal{L}[\phi,g_{E}]=\frac{1}{2}g_{ij}(\phi)\partial_{\mu}\phi^{i}\partial^ {\mu}\phi^{j}-V(\phi)\). If we do an 'internal diffeomorphism' on the bundle of the kind (3.8), which acts as \(f_{E}:(x^{\mu},u^{i})\to(x^{\mu},f^{i}(x,u))\) in our local fibred coordinate system, then the metric \(g_{E}\) transforms as
\[g_{E}\to f_{E}^{*}g_{E}=\ -\frac{2}{d}V(u\circ f_{E})\eta_{\mu\nu}\,dx^{\mu} dx^{\nu}+g_{ij}(u\circ f_{E})\frac{\partial(u^{i}\circ f_{E})}{\partial u^{k}} \bigg{|}_{x,u}\frac{\partial(u^{j}\circ f_{E})}{\partial u^{m}}\bigg{|}_{x,u} du^{k}du^{m}\]
\[+g_{ij}(u\circ f_{E})\frac{\partial(u^{i}\circ f_{E})}{\partial x^{\mu }}\bigg{|}_{x,u}\frac{\partial(u^{j}\circ f_{E})}{\partial x^{\nu}}\bigg{|}_{x,u }dx^{\mu}dx^{\nu} \tag{3.20}\]
Upon pulling back along the (same) section \(\phi\) and forming the new Lagrangian, we get
\[\mathcal{L}[\phi,g_{E}]\mapsto\mathcal{L}[\phi,f_{E}^{*}g_{E}] \tag{3.21}\] \[=\frac{1}{2}g_{ij}(f(x,\phi))\left[\partial_{\mu}f^{i}\bigg{|}_{x,u=\phi}\partial^{\mu}f^{j}\bigg{|}_{x,u=\phi}+\frac{\partial f^{i}}{\partial u^{k}}\bigg{|}_{x,u=\phi}\frac{\partial f^{j}}{\partial u^{m}}\bigg{|}_{x,u=\phi}\partial_{\mu}\phi^{k}\partial^{\mu}\phi^{m}\right]-V(f(x,\phi))\]
While formulae (3.20) and (3.21) may look elaborate, they are essentially the result of applying the chain rule.
The result (3.21) generalises a little the effect of 'field space smooth maps' considered in SS2. There are two main differences: first, the possible dependence of the transformed \(g_{ij}\) and \(V\) functions on spacetime coordinates \(x^{\mu}\); second, the first term on the RHS which comes from partial differentiation of the functions \(f^{i}\) with respect to spacetime. If we restrict to bundle morphisms that respect Poincare symmetry, then neither of these generalisations are relevant.
If we further assume the morphism is a 'perturbation' around the trivial morphism, _i.e._ that it takes the form
\[f^{i}(x^{\mu},u^{j})=u^{i}+\epsilon\lambda^{i}(x^{\mu},u^{j})\,,\qquad \epsilon\ll 1\,, \tag{3.22}\]
then we can expand the various functions \(g_{ij}(f)\) and \(V(f)\) around their 'old values'. For the potential, for example, we can do an ordinary Taylor expansion
\[V(u)\mapsto V(f(x,u))=V(u)+\epsilon\lambda^{i}\frac{\partial V}{\partial u^{i }}+\frac{1}{2}\epsilon^{2}\lambda^{i}\lambda^{j}\frac{\partial^{2}V}{\partial u ^{i}\partial u^{j}}+\ldots\,, \tag{3.23}\]
while the various Jacobian factors can also be expanded
\[\frac{\partial f^{i}}{\partial x^{\mu}}=\epsilon\frac{\partial\lambda^{i}}{ \partial x^{\mu}},\qquad\frac{\partial f^{i}}{\partial u^{j}}=\delta^{i}_{j}+ \epsilon\frac{\partial\lambda^{i}}{\partial u^{j}}\,, \tag{3.24}\]
before substituting into (3.21).
One might wonder whether a base-point preserving bundle morphism of this kind can ever capture the effects of a derivative field redefinition, which is a change of section \(\phi\rightarrow\psi\) such that \(u\circ\psi\) is a function of \(u\circ\phi\) and its spacetime derivatives. The first point to note is that, starting from the 2-derivative Lagrangian \(\mathcal{L}[\phi,g_{E}]\) above, the transformation (3.21) cannot generate terms with more than two derivatives. So, we straightaway learn that this kind of bundle morphism cannot generically capture a derivative field redefinition; at best, one might be able to reproduce the effects of a derivative field redefinition if we truncate the transformed Lagrangian also at two derivatives. We next consider a concrete example, for which such a diffeomorphism can capture the effects of a derivative field redefinition - but we stress that this example is far from generic.
#### 3.3.4 Example: \(\phi^{4}\) theory
Consider a 4d theory of a single real scalar field. In the bundle formulation, we take a trivial bundle \(E=\mathbb{R}^{1,3}\times\mathbb{R}_{u}\xrightarrow{\pi}\mathbb{R}^{1,3}\), on which we specify an initial fibred coordinate system \((x^{\mu},u)\). We take the metric
\[g_{E}=-\frac{1}{2}\frac{y\,u^{4}}{4!}\eta_{\mu\nu}dx^{\mu}\,dx^{\nu}+du\,du, \qquad y\in\mathbb{R} \tag{3.25}\]
which is flat in the field space coordinate \(u\), corresponding to a canonical 2-derivative kinetic term. The real parameter \(y\) is a coupling constant. Pulling back along a section \(\phi:x^{\mu}\to(x^{\mu},\phi(x))\), our initial Lagrangian is simply
\[\mathcal{L}[\phi]=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\frac{y}{4!} \phi^{4}\,, \tag{3.26}\]
that of '\(\phi^{4}\) theory' in 4d, with potential \(V(\phi)=(y/4!)\phi^{4}\).
Now, consider a derivative field redefinition that is defined by a change of section \(\phi\to\psi\) where, in our coordinate system,14
Footnote 14: We choose a field redefinition that is consistent with a \(\text{Poinc}\times O(1)_{u}\cong\text{Poinc}\times\mathbb{Z}_{2}\) symmetry. The field redefinition is consequently second-order in derivatives and ‘preserves the number of fields’. This kind of field redefinition, which shifts \(\phi\) by ‘\(\Box\phi\)’, can be used to eliminate redundancies due to the equations of motion (EOMs), and will play an important role in the examples of higher-derivative Lagrangians that we discuss in §6.
\[x^{\mu}\circ\psi=x^{\mu}\,,\qquad u\circ\psi=(1-\epsilon\partial_{\mu} \partial^{\mu})\,(u\circ\phi)\,, \tag{3.27}\]
where \(1\gg\epsilon\in\mathbb{R}\). The first condition ensures the change of section only acts 'internally' on the fibres, as is our assumption throughout. Abusing notation as before and letting \(\psi\) (\(\phi\)) also denote the component values of the section \(\psi\) (\(\phi\)) in the local fibre coordinate, and letting \(\Box:=\partial_{\mu}\partial^{\mu}\) as usual, we would write this (in notation more familiar to the physicist) as
\[\phi\mapsto\psi=\phi-\epsilon\Box\phi\,. \tag{3.28}\]
Pulling back the metric \(g\) along \(\psi\) rather than \(\phi\), we obtain the transformed Lagrangian,
\[\mathcal{L}[\psi,g] =\frac{1}{2}\eta^{\rho\sigma}\langle\partial_{\rho}\otimes\partial_{\sigma},\,dx^{\mu}\otimes dx^{\nu}\rangle\partial_{\mu}(u\circ\psi)\partial_{\nu}(u\circ\psi)-V(u\circ\psi)\] \[=\frac{1}{2}\partial_{\mu}(\phi-\epsilon\Box\phi)\partial^{\mu}(\phi-\epsilon\Box\phi)-V(\phi-\epsilon\Box\phi) \tag{3.29}\] \[=\frac{1}{2}(\partial\phi)^{2}-\epsilon\partial_{\mu}\Box\phi\,\partial^{\mu}\phi+\frac{1}{2}\epsilon^{2}\partial_{\mu}\Box\phi\,\partial^{\mu}\Box\phi-V+\epsilon\Box\phi\frac{dV}{d\phi}-\frac{1}{2}\epsilon^{2}(\Box\phi)^{2}\frac{d^{2}V}{d\phi^{2}}+\mathcal{O}(\epsilon^{3})\]
exactly as one would expect from a derivative field redefinition, where we have here kept the potential \(V(\phi)\) general. We stress that there is no obstacle to considering derivative field redefinitions in the geometric picture, once we consider a field redefinition to be a change of section (along which the unchanged metric is pulled back to form the Lagrangian).
Knowing that our bundle geometry construction can only capture Lagrangians with up to 2-derivatives, it is instructive to also expand this only up to 2-derivative terms:
\[\mathcal{L}[\phi,g_{E}]\mapsto\mathcal{L}[\psi,g_{E}]=\frac{1}{2}(\partial\phi )^{2}\left(1-2\epsilon V^{\prime\prime}(\phi)\right)-V(\phi)+\mathcal{O}(4\partial)\]
\[=\frac{1}{2}(\partial\phi)^{2}\left(1-\epsilon y\phi^{2}\right)-\frac{y}{4!} \phi^{4}+\mathcal{O}(4\partial)\,, \tag{3.30}\]
where we have also used IBPs, and the relation
\[\partial_{\mu}F(\phi)=\partial_{\mu}\phi\frac{dF}{d\phi}\,. \tag{3.31}\]
The result, truncating to 2-derivative order, is a \(\phi\)-dependent shift of the kinetic term.
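As a quick consistency check on the manipulation leading to (3.30), one can verify the IBP identity \(\epsilon\,\Box\phi\,V^{\prime}(\phi)\sim-\epsilon\,(\partial\phi)^{2}V^{\prime\prime}(\phi)\) with computer algebra. The sketch below does so in a \(0+1\)-dimensional toy setting (a simplification made purely for brevity, where \(\Box\phi\to\ddot{\phi}\)), using sympy:

```python
import sympy as sp

t, eps, y = sp.symbols('t epsilon y')
phi = sp.Function('phi')(t)
V = y * phi**4 / 24                      # the phi^4 potential of Eq. (3.26)

# O(epsilon), 2-derivative piece of -V(phi - eps*Box(phi)): +eps * Box(phi) * V'(phi).
# In this 0+1d toy, Box(phi) is just the second time derivative.
term = eps * sp.diff(phi, t, 2) * sp.diff(V, phi)

# After one IBP it should become -eps * (phi')^2 * V''(phi),
# i.e. the phi-dependent shift of the kinetic term in (3.30) ...
ibp_form = -eps * sp.diff(phi, t)**2 * sp.diff(V, phi, 2)

# ... the two differing only by the total derivative  eps * d/dt( phi' * V'(phi) )
boundary = eps * sp.diff(sp.diff(phi, t) * sp.diff(V, phi), t)
assert sp.expand(term - ibp_form - boundary) == 0
```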
Truncated to this order, the effect of doing this derivative field redefinition can be realised instead via a diffeomorphism on the bundle \(E\). Because the field redefinition we seek to replicate is a perturbation around doing nothing, we assume the bundle morphism is also a perturbation around the trivial morphism, of the form (3.22). By Poincare invariance we also assume the bundle morphism has no explicit dependence on \(x^{\mu}\), so we take
\[(x^{\mu},u)\mapsto(x^{\mu},u+\epsilon\lambda(u))\,. \tag{3.32}\]
Subbing into (3.21), we get
\[\mathcal{L}[\phi,g_{E}]\mapsto\mathcal{L}[\phi,f_{E}^{*}g_{E}]=\frac{1}{2}( \partial\phi)^{2}\left(1+2\epsilon\lambda^{\prime}\right)-V-\epsilon\lambda V ^{\prime}+\mathcal{O}(\epsilon^{2})\,. \tag{3.33}\]
Thus, in this example, we can reproduce the effects (3.30) of the derivative field redefinition, truncated to operators with 2 derivatives and up to 4 field insertions, by choosing the function \(\lambda=-y\phi^{3}/6\).
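The matching can be confirmed with a few lines of computer algebra. The sympy sketch below (an illustrative check, not part of the formal construction) compares the kinetic-term coefficients in (3.30) and (3.33) for the choice \(\lambda=-y\phi^{3}/6\), and isolates the leftover potential shift, which carries six field insertions and so lies outside the stated truncation:

```python
import sympy as sp

phi, y, eps = sp.symbols('phi y epsilon', real=True)

V = y * phi**4 / sp.factorial(4)     # the phi^4 potential of Eq. (3.26)
lam = -y * phi**3 / 6                # candidate lambda(u) for the bundle morphism

# Kinetic-term coefficient from the derivative field redefinition, Eq. (3.30)
kin_redef = 1 - 2 * eps * sp.diff(V, phi, 2)
# Kinetic-term coefficient from the bundle morphism, Eq. (3.33)
kin_morph = 1 + 2 * eps * sp.diff(lam, phi)
assert sp.simplify(kin_redef - kin_morph) == 0      # both equal 1 - eps*y*phi**2

# The morphism also shifts the potential by -eps*lambda*V'; this is a 6-field,
# 0-derivative operator, invisible once we truncate to <= 4 field insertions
print(sp.expand(-eps * lam * sp.diff(V, phi)))       # eps*y**2*phi**6/36
```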
This is far from a generic example; for more general (2-derivative) Lagrangians, derivative field redefinitions cannot be described by base-point preserving bundle morphisms of the kind we have described - but they can always be described as changes of section of the bundle. A relevant example here was discussed in Appendix E of [33]; there, starting from a theory with zero Ricci scalar on the field space, a derivative field redefinition in the EFT was performed (motivated by a simple non-derivative rotation in the UV), that was found to introduce a singularity in the Ricci scalar; this could not occur if the metric is pushed forward under a diffeomorphism (since a diffeomorphism must act invertibly on tensors).
#### 3.3.5 A limiting lemma
It is likely not even possible to remedy this by allowing for _general_ bundle morphisms \((f,\bar{f})\), as described by (3.8), which have a non-trivial morphism on the base manifold \(\Sigma\). That is, allowing totally general bundle morphisms (even, in desperation, allowing for generic Poincare violation) is still not enough to fully capture any old 'derivative change of section'. A big clue to this is the following lemma, which is admittedly proven for the case of vector bundles ([78], Lemma 2.2.9):
**Lemma 3.1**: _If \((E,\pi,\Sigma)\) and \((H,\rho,\Sigma^{\prime})\) are vector bundles and \((f,\bar{f})\): \((E,\pi,\Sigma)\to(H,\rho,\Sigma^{\prime})\) is a vector bundle morphism such that \(\bar{f}\) is a diffeomorphism, then \(\tilde{f}:\Gamma((E,\pi,\Sigma))\to\Gamma((H,\rho,\Sigma^{\prime}))\) defined by \(\tilde{f}(\phi)=f\circ\phi\circ\bar{f}^{-1}\) is a module homomorphism over the ring isomorphism \(\bar{f}^{-1*}:C^{\infty}(\Sigma)\to C^{\infty}(\Sigma^{\prime})\)_
Crucially, a 'derivative change of section' such as \(\phi(x)\mapsto\psi=\partial_{\mu}\phi(x)\) (abusing notation in the now-familiar way), or indeed the specific change of section (3.27) considered in the previous example, is not a module homomorphism over the ring of smooth functions, because differentiation doesn't commute with multiplication. The lemma then implies it is not possible to find any bundle morphism \((f,\bar{f})\), where \(\bar{f}\) is a diffeomorphism, such that \(\phi(x)\mapsto\partial_{\mu}\phi(x)\), since that would require the map to be a homomorphism.
In this and the previous Section we have carefully reviewed and, we hope, refined the geometric picture of the 2-derivative term in scalar EFTs, and how one can think about field redefinitions geometrically as simply a change of section. In doing so we have laid the groundwork for extending this geometric approach to higher derivative terms. The key objects, mathematically, will be _jet bundles_ of the original field space bundle \(E\), and metrics that we will define thereon. Since jet bundles might be unfamiliar territory to most effective field theorists, in the next Section we review the concept and necessary definitions. Our main reference is the textbook of Saunders [78].
## 4 Jet Bundles
Starting from the 'field space bundle' \((E,\Sigma,\pi)\), one can construct a sequence of associated manifolds of higher dimension, all themselves fibre bundles, called jet bundles. These jet bundles are indexed by an integer; there is a '1-jet bundle' denoted \(J^{1}E\), a '2-jet bundle' \(J^{2}E\), and so on; the '0-jet bundle' is just \(E\) itself. For our humble physics purposes, \(r\)-jet bundles with higher \(r\) will be needed to describe EFTs with more and more derivatives; the 1-jet bundle shall suffice to describe all terms (at least in SMEFT and HEFT) with _up to four derivatives_.15
Footnote 15: The 1-jet bundle also has well-known mathematical physics applications in formal treatments of Lagrangian mechanics. See _e.g._[78; 79; 80].
Jets are built up from equivalence classes of sections, as follows. Starting from a fibre bundle \((E,\Sigma,\pi)\), with a local fibred coordinate system \((x^{\mu},u^{i})\) as introduced in SS3, we say that two local sections \(\phi,\psi\in\Gamma_{x}(\pi)\) are 1-equivalent at the base point \(x\) if both their values and the values of their first derivatives agree there - but \(\phi\) and \(\psi\) can disagree in their higher derivatives. Recalling our notation \(\phi^{i}(x):=(u^{i}\circ\phi)(x)\) for the 'components' of a section (given local fibre coordinates \(u^{i}\)), then \(\phi\sim\psi\) iff
\[\phi^{i}(x)=\psi^{i}(x)\,,\qquad\text{and}\qquad\frac{\partial\phi^{i}(x)}{ \partial x^{\mu}}=\frac{\partial\psi^{i}(x)}{\partial x^{\mu}}\,. \tag{4.1}\]
It is easy to show that this equivalence relation is independent of the choice of coordinate chart on our patch of \(E\). The 1-equivalence class containing a representative section \(\phi\) is called the _1-jet of \(\phi\) at \(x\)_, denoted \(j^{1}_{x}\phi\). In a similar way, two local sections are \(r\)-equivalent at \(x\in\Sigma\), for some \(r\in\mathbb{Z}_{\geq 0}\), if their first \(r\) derivatives agree at \(x\); the \(r\)-equivalence class containing \(\phi\) is the \(r\)-jet of \(\phi\) at \(x\), denoted \(j^{r}_{x}\phi\).
The 1-jet bundle \(J^{1}E\) is then defined to be the set of 1-equivalence classes of sections,
\[J^{1}E=\{j^{1}_{x}\phi\,|\,x\in\Sigma,\phi\in\Gamma_{x}(\pi)\}\,, \tag{4.2}\]
which moreover has a natural structure as a differentiable manifold. Continuing in a similar fashion, the \(r\)-jet bundle \(J^{r}E\) is the set of \(r\)-equivalence classes of sections - or, more prosaically, is the space of field configurations that agree in their first \(r\) derivatives. The 0-jet bundle is defined to be \(J^{0}E=E\) itself.
Focussing our attention on the 1-jet bundle \(J^{1}E\), from its definition one can define two (ordinary) fibre bundles with total space \(J^{1}E\). One is a bundle over the original base space \(\Sigma\), and the other is a bundle over the original bundle \(E\):16
Footnote 16: More generally, the \(J^{r}E\) bundle can always be modelled as a fibre bundle over base space \(J^{r-1}E\), or indeed as a fibre bundle over any \(J^{q<r}E\). (There is natural inductive limit of the sequence \(\{J^{r}E\}\) of jet bundles, that defines a space \(J^{\infty}E\)[80]. One can model \(J^{\infty}E\) by sections that agree as \(C^{\infty}\) maps – we will not discuss this infinite order jet bundle here.)
\[\pi_{1}:J^{1}E\to\Sigma:\;\;j^{1}_{x}\phi\mapsto x\,, \tag{4.3}\] \[\pi_{1,0}:J^{1}E\to E:\;\;j^{1}_{x}\phi\mapsto\phi(x)\,. \tag{4.4}\]
These projections are sometimes referred to as the 'source' and 'target' bundles respectively, for obvious reasons. They are compatible with the original bundle structure of \(E\), in that they clearly fit into a commutative diagram
involving the original projection \(\pi:E\to\Sigma\). Thus, \(\pi_{1}=\pi\circ\pi_{1,0}\).
#### Local coordinates on the 1-jet bundle
Recall that we start from local coordinates \((x^{\mu},u^{i})\) on the fibre bundle \(E\), or more precisely on an open set \(\pi^{-1}(\mathcal{U})\subset E\), and that a particular section \(\phi:\Sigma\to E\) maps a point \(x\in\Sigma\) to the point in \(E\) with coordinates \((x^{\mu},\phi^{i}(x^{\mu}))\). The 1-jet bundle \(J^{1}E\) then admits a set of induced local coordinates
\[(x^{\mu},u^{i},u^{i}_{\mu})\,. \tag{4.5}\]
The values of these coordinates at a point \(j^{1}_{x}\phi\in J^{1}E\) are
\[x^{\mu}\circ j^{1}_{x}\phi =x^{\mu}\,, \text{(spacetime points)} \tag{4.6}\] \[u^{i}\circ j^{1}_{x}\phi =\phi^{i}(x)\,, \text{(field values)}\] (4.7) \[u^{i}_{\mu}\circ j^{1}_{x}\phi =\left.\frac{\partial\phi^{i}}{\partial x^{\mu}}\right|_{x} \text{(first derivatives)}. \tag{4.8}\]
Thus, the extra fibre coordinates in the 1-jet bundle \((J^{1}E,\Sigma,\pi_{1})\) encode the first derivatives of sections passing through that base point \(x\) - which is intuitive given the definition of a point \(j^{1}_{x}\phi\in J^{1}E\) as the collection of all sections passing through \(x\) with the same value and first derivative there.
One can model the fibres of the bundle \((J^{1}E,E,\pi_{1,0})\), on which \(u^{i}_{\mu}\) provide fibre coordinates, as vector spaces [80]
\[\pi_{1,0}^{-1}(x\in\Sigma,m\in M_{x})\cong T_{x}^{*}(\Sigma)\otimes T_{m}(M)\,. \tag{114}\]
This structure is encoded in the placement of indices of the derivative coordinate \(u^{i}_{\mu}\); it transforms as a vector with respect to its \(i\) index, and a co-vector with respect to its \(\mu\) index. We will often call these extra coordinates \(u^{i}_{\mu}\) 'derivative coordinates'.
Summarising, the induced fibre coordinates on \((J^{1}E,\Sigma,\pi_{1})\), which evaluate on a section to \((\phi^{i},\partial_{\mu}\phi^{i})\), provide the familiar ingredients that we use to write down Lagrangians in field theory. The 1-jet bundle provides a coordinate-free description of this data. The geometry of \(\Sigma\), \(J^{0}E=E\) and \(J^{1}E\), all of which play a big role in the EFT formalism developed in this paper, is summarized schematically in Figure 1.
Lastly, it is easy to find the dimension of the 1-jet manifold. Starting with a general fibre bundle \(E\to\Sigma\), with \(\text{Dim}\Sigma=d\) and \(\text{Dim}E-\text{Dim}\Sigma=n\) (the \(\mathbb{R}\)-dimension of the fibre, equivalently the number of real scalar fields), the \(\{u^{i}_{\mu}\}\) provide \(dn\) extra coordinates. Then
\[\text{Dim}(J^{1}E)=\text{Dim}E+dn=d+n+dn\,. \tag{115}\]
Let's count the dimensions of \(J^{1}E\) for some examples:
* Real scalar field in \(d\) dimensions: \(\text{Dim}(J^{1}E)=2d+1\) (see SS6.2);
* HEFT or SMEFT: \(\text{Dim}(J^{1}E)=24\) (see SS7).
Going further, the dimensions of the jet manifolds \(J^{r}E\) quickly blow up when \(d=\text{Dim}\Sigma\) is large; in general, it is easy to see that
\[\text{Dim}(J^{r}E)=d+n\frac{(d+r)!}{d!r!}\,. \tag{116}\]
For SMEFT or HEFT, we have \(\text{Dim}(J^{2}E)=64\) for the 2-jet bundle.
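For convenience, the counting can be automated; the small Python helper below (purely illustrative, not part of the formalism) reproduces the dimensions quoted above:

```python
from math import comb

def dim_jet_bundle(d, n, r):
    """Dim(J^r E) = d + n * (d+r)! / (d! r!) for a bundle E -> Sigma with
    Dim(Sigma) = d and fibre dimension n (the number of real scalar fields)."""
    return d + n * comb(d + r, r)

assert dim_jet_bundle(d=4, n=1, r=1) == 9    # real scalar in d=4: 2d + 1
assert dim_jet_bundle(d=4, n=4, r=1) == 24   # HEFT or SMEFT, 1-jet bundle
assert dim_jet_bundle(d=4, n=4, r=2) == 64   # HEFT or SMEFT, 2-jet bundle
```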
## 5 Higher-derivative EFTs from Jet Bundle Geometry
The 1-jet bundle, introduced in SS4, defines a bundle over spacetime with local fibre coordinates that we identify with scalar fields and their first derivatives. While these degrees of freedom are treated independently on the jet bundle, we always eventually pull back objects (like metrics) along sections, whereupon the relation between the field and its derivative is enforced (_i.e._ upon pullback \(u^{i}_{\mu}\) will 'remember' that it is a derivative of \(u^{i}\)).
The key idea, from the EFT perspective, is that we can consider more general geometric structures involving higher-derivatives of fields in this larger space \(J^{1}E\). In particular, we will see how defining different metrics on \(J^{1}E\), equivalently'measures of distance', encodes different scalar EFT Lagrangian with up to 4 spacetime derivatives acting on the fields. In this Section we outline how this works, for a general scalar EFT. In Sections 6 and 7, we will apply this to specific examples including the Higgs EFTs, which is our primary interest.
### Defining geometry on the 1-jet bundle
The subject of _geometry_ on jet bundles is not so widely explored in the mathematical literature. There are nonetheless 'natural metrics' that are induced on jet bundles \(J^{r}E\), given a metric on the original bundle \(E\xrightarrow{\pi}\Sigma\) that we take to be, locally, a product metric \(\eta\oplus g\), with \(\eta\) flat (ignoring any potential contribution \(V(u)\) for now) and \(g(u)\) possibly curved, as described in SSSS2.1 and 3. To give the reader a flavour for geometry on jet bundles, we begin with a brief sketch of such a natural geometry on \(J^{1}E\),17 and see how it offers a starting point for constructing Lagrangians with 4-derivatives.
Footnote 17: _Contact forms_ provide another source of natural geometries on jet bundles, which can also be used to define a class of 4-derivative EFT Lagrangians using the ideas in this paper.
We want to define a Riemannian metric on \(J^{1}E\) considered as a manifold. We already saw that \(J^{1}E\) is itself a vector bundle \((J^{1}E,E,\pi_{1,0})\) with fibre \(T^{*}_{x}(\Sigma)\otimes T_{m}(M)\). Defining a metric on \(J^{1}E\) means equipping the tangent bundle \(T(J^{1}E)\) with an inner product
\[g^{(1)}:T(J^{1}E)\times_{J^{1}E}T(J^{1}E)\to\mathbb{R} \tag{108}\]
that is symmetric and non-degenerate, and that varies smoothly with the base coordinate \(j^{1}_{x}\phi\) (in other words, it is a tensor field on \(J^{1}E\)). We use a notation whereby \(g^{(r)}\) denotes a metric on the \(r\)-jet bundle \(J^{r}E\). Now, the tangent bundle \(T(J^{1}E)\) is isomorphic to the pullback bundle18
Footnote 18: To show (108), we need some theorems concerning the structure of the tangent bundle of such a vector bundle. There is a short exact sequence of smooth vector bundles on \(E\), namely \(0\to VE\to TE\to HE\to 0\), where the sub-bundle \(VE:=\ker\,d\pi\) is called the ‘vertical bundle’, and the quotient bundle \(HE=TE/VE\) is called the ‘horizontal bundle’. Both these bundles are isomorphic to pullback bundles along \(\pi^{*}\); we have \(VE\cong\pi^{*}E\), and \(HE\cong\pi^{*}TY\), which gives rise to the isomorphism (108).
\[T(J^{1}E)\cong\pi^{*}(J^{1}E\oplus TE)\,. \tag{109}\]
We can therefore build a (fibre-wise) inner product on \(T(J^{1}E)\) (_i.e._ a metric on \(J^{1}E\)) out of (fibre-wise) inner products on the bundles \(J^{1}E\) and \(TE\). The latter is just the metric on \(E\), which we already stipulated is (locally) the product metric \(\eta\oplus g\). The former is not much more complicated, given the fibre is \(T^{*}_{x}(\Sigma)\otimes T_{\phi}(M)\); we define the inner product via \(\eta^{-1}\) and \(g\). Note the _inverse_ spacetime metric appears here, because we are contracting co-vectors with respect to spacetime, but vectors with respect to the target space.
Putting things together, we can express this 'natural' choice of metric, which recall is the one built from the metrics \(\eta\) and \(g\) that we already have at our disposal, in our local coordinate system:
\[g^{(1)}=\Lambda^{4}\,\eta_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}+g_{ij}(u)\,du^{i }\otimes du^{j}+\frac{1}{\Lambda^{2}}\eta^{\mu\nu}g_{ij}(u)\,du^{i}_{\mu} \otimes du^{j}_{\nu}\,. \tag{110}\]
We have introduced appropriate factors of \(\Lambda\) to give everything consistent dimensions.
### Our first four-derivative Lagrangian
Now, starting from this metric on the jet bundle \(J^{1}E\), which we intrepret as a 'derivative-generalisation of field space', one can construct a Lagrangian using exactly the same recipe that we described for the 2-derivative Lagrangian in SS2.1. The only difference is that we
now need to pull the metric \(g^{(1)}\) all the way back to spacetime, and so we should use a section \(j^{1}\phi\) of the fibre bundle \((J^{1}E,\Sigma,\pi_{1})\) introduced above.
In particular, given an original section \(\phi\) of \(E\) (_i.e._ a scalar field configuration), let
\[j^{1}\phi\in\Gamma_{x}(\pi_{1}) \tag{108}\]
be its 'prolongation' to the 1-jet bundle, defined as the section of \(\pi_{1}\) that passes through the 1-jet \(j^{1}_{x}\phi\).19 Evaluating on the coordinate chart, its components are simply
Footnote 19: Note that not every section of the 1-jet bundle \(\pi_{1}:J^{1}E\to\Sigma\) is a prolongation of a section \(\phi\) of \(\pi:E\to\Sigma\), where the latter is for us identified with a scalar field configuration. We are only interested in sections obtained in this way.
\[x^{\mu}\circ j^{1}\phi=x^{\mu}\,,\qquad u^{i}\circ j^{1}\phi=\phi^{i}(x)\,, \qquad u^{i}_{\mu}\circ j^{1}\phi=\left.\frac{\partial\phi^{i}}{\partial x^{ \mu}}\right|_{x}\,, \tag{109}\]
from (105).
We define the Lagrangian
\[\mathcal{L}_{4\partial}[\phi,g^{(1)}]:=\frac{1}{2}\,\left\langle\eta^{-1},\, (j^{1}\phi)^{*}g^{(1)}\right\rangle\,, \tag{110}\]
which we regard as a functional of the original section \(\phi\). Assuming the system of local coordinates set out in Eqs. (107)-(108), this gives the Lagrangian
\[\mathcal{L}_{4\partial}[\phi,g^{(1)}]=\frac{1}{2}\eta^{\mu\nu}g_{ij}(\phi(x) )\partial_{\mu}\phi^{i}\partial_{\nu}\phi^{j}+\frac{1}{2\Lambda^{2}}\eta^{ \rho\sigma}\eta^{\mu\nu}g_{ij}(\phi(x))\partial_{\rho}\partial_{\mu}\phi^{i} \partial_{\sigma}\partial_{\nu}\phi^{j}\,, \tag{111}\]
up to a cosmological constant term coming from the first term in (107) that we can neglect when doing flat-space quantum field theory (although its natural appearance seems intriguing nonetheless).
Let us pause to make some further comments. First and foremost, we have shown that defining a geometry on the 1-jet bundle naturally 'refines' our original 2-derivative Lagrangian, as constructed from field space geometry in SS2.1, by a particular 4-derivative 'correction' term that is also geometric in origin. This is already, to our knowledge, a novel result in the context of EFT.
We can try to unpack the particular Lagrangian that we have obtained, which is clearly not yet general, in the case of the Higgs EFTs. Let us therefore take \(\Sigma\) to be flat 4d spacetime and the target space \(M\) is a real 4-manifold equipped with the \(SO(4)\) group action \(\sigma_{\mathrm{S}}\) for SMEFT, say. We can, for illustration, keep only terms in the Lagrangian (111) that have _two field_ insertions. This is equivalent to taking only the leading term in the expansion of the original \(\sigma_{\mathrm{S}}\)-invariant metric,
\[g_{ij}(u^{i})=\delta_{ij}+\ldots \tag{112}\]
In this limit, and contracting the various indices, the Lagrangian (111) becomes
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\cdot\partial^{\mu}\phi+\frac{1}{2 \Lambda^{2}}\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\partial^{\nu} \phi+\mathcal{O}(\phi^{4})\,. \tag{113}\]
Interestingly, we recover the most general SMEFT Lagrangian with _up to two fields and up to four derivatives_. The SM can, as for the purely 2-derivative case, be obtained by taking the cut-off scale \(\Lambda\to\infty\), which decouples all the non-renormalisable operators from the EFT.
### General geometries for general Lagrangians
In the previous two Subsections, we considered a metric on \(J^{1}E\) that is 'natural', mathematically, given the geometry we started with on the field space bundle \(E\). We found that this metric delivered a 4-derivative EFT Lagrangian, via the 'obvious map' to Lagrangians specified by Eq. (101). Unsurprisingly, if we start from such a naturally induced metric on \(J^{1}E\), this will only ever get us a 4-derivative term whose form is determined by the 2-derivative terms already present in the action. For a general EFT description, we of course want to consider 4-derivative terms that are independent of the 2-derivative terms. This suggests that our geometry on the 1-jet bundle \(J^{1}E\) should introduce _new_ structures not already present in the geometry of the field space bundle \(E\).
This poses no problem, and in a sense simplifies the picture. One can shortcut the mathematical niceties of SS5.1, and instead take a more pedestrian approach that we are used to as physicists; to wit, we should consider the _most general geometry on the 1-jet bundle that is consistent with symmetry_. This is, after all, the most natural approach to take in the spirit of EFT!
#### General Poincare invariant 1-jet metrics
We will, however, make rather stringent assumptions about the structure and symmetry of spacetime, to simplify our discussion (and specialise to those EFTs we are most interested in as high-energy physicists). As previously, we fix spacetime to be \(\Sigma=\mathbb{R}^{1,3}\), flat Minkowski in \(d\) spacetime dimensions with metric \(\eta=\eta_{\mu\nu}\,dx^{\mu}\otimes dx^{\nu}=dt^{2}-d\vec{x}\cdot d\vec{x}\).20 We moreover enforce Poincare symmetry on the jet bundle geometry. In practice, this means we must contract all \(\mu\) indices (that derive from both the base space coordinates \(x^{\mu}\) and the derivative coordinates \(u^{i}_{\mu}\)) to make Lorentz invariants, but we do not form contractions with \(x^{\mu}\) vectors since that would break translations (nor can any explicit \(x\) dependence appear in the metric components). We will discuss symmetries on the jet bundle again in SS5.5, using the language of bundle morphisms (building on SS3.3).
Footnote 20: Lest there be any confusion, the scalar field theory Lagrangian will be defined in terms of a geometry on the 1-jet bundle \(\pi_{1}:J^{1}E\to\Sigma\), and we equip \(J^{1}E\) with a geometry (_i.e._ a metric \(g^{(1)}\)) that is independent of the geometry on spacetime. In particular, the metric components in the ‘base space directions’ of \(J^{1}E\) need not agree with those components of \(\eta\), the metric on \(\Sigma\).
So, let us write down the most general such metric, in our local coordinates. We keep all terms, including 'cross-terms' that mix different types of jet bundle coordinates:
\[g^{(1)}= \left(dx^{\mu}\ du^{i}\ du^{i}_{\mu}\right)[g^{(1)}]\begin{pmatrix} dx^{\nu}\\ du^{j}\\ du^{j}_{\nu}\end{pmatrix},\quad[g^{(1)}]=\begin{pmatrix}\Lambda^{4}g_{\mu\nu}( \dots)&\Lambda^{2}g_{\mu j}(\dots)&g^{\nu}_{\mu j}(\dots)\\ \Lambda^{2}g_{\nu i}(\dots)&g_{ij}(\dots)&\frac{1}{\Lambda}g^{\nu}_{ij}( \dots)\\ g^{\mu}_{\nu i}(\dots)&\frac{1}{\Lambda}g^{\mu}_{ij}(\dots)&\frac{1}{\Lambda^ {2}}g^{\mu\nu}_{ij}(\dots)\end{pmatrix}\,. \tag{102}\]
Because the derivative coordinates themselves have _pairs_ of indices, one upstairs and one downstairs, it is convenient to use the same index notation for the components of the metric; thus, a component such as \(g^{\mu}_{ij}\) or \(g^{\mu\nu}_{ij}\) still has two downstairs jet bundle indices (_i.e._ all
metric components are of course components of a \((0,2)\) tensor, never a 3- or 4-tensor), but taken with respect to one (or two) derivative coordinates respectively. The \((\dots)\) record the fact that, in general and before imposing any symmetry, each metric component is a function of _all_ the jet bundle coordinates - including the 'derivative coordinates'. Symmetry will of course restrict the form of these components dramatically.
Upon pulling back a general metric of the form (5.10) along a section \(j^{1}\phi\) of the jet bundle \((J^{1}E,\Sigma,\pi_{1})\) and forming the Lagrangian (5.6), one obtains the following Lagrangian terms:
\[\mathcal{L}[\phi,g^{(1)}] = \eta^{\rho\sigma}\bigg{[}\frac{1}{2}\Lambda^{4}\delta^{\mu}_{\rho}\delta^{\nu}_{\sigma}\,g_{\mu\nu}(\phi,\partial\phi)+\Lambda^{2}\delta^{\mu}_{\rho}g_{\mu j}(\phi,\partial\phi)\,\partial_{\sigma}\phi^{j}+\Lambda\delta^{\mu}_{\rho}g^{\nu}_{\mu j}(\phi,\partial\phi)\,\partial_{\sigma}\partial_{\nu}\phi^{j} \tag{5.11}\] \[+ \frac{1}{2}g_{ij}(\phi,\partial\phi)\,\partial_{\rho}\phi^{i}\partial_{\sigma}\phi^{j}+\frac{1}{2\Lambda}g^{\mu}_{ij}(\phi,\partial\phi)\,\partial_{\rho}\partial_{\mu}\phi^{i}\partial_{\sigma}\phi^{j}+\frac{1}{2\Lambda}g^{\nu}_{ij}(\phi,\partial\phi)\,\partial_{\rho}\phi^{i}\partial_{\sigma}\partial_{\nu}\phi^{j}\] \[+ \frac{1}{2\Lambda^{2}}g^{\mu\nu}_{ij}(\phi,\partial\phi)\,\partial_{\rho}\partial_{\mu}\phi^{i}\partial_{\sigma}\partial_{\nu}\phi^{j}\,\bigg{]}\,.\]
Let us unpack this general form of the Lagrangian a little. First, consider the \(g_{ij}\,\partial_{\rho}\phi^{i}\partial_{\sigma}\phi^{j}\) term, which is the piece of this Lagrangian that is captured by the usual field space geometry picture, as in Eq. (2.8). In fact, this \(g_{ij}\) term is _already more general_ than the 2-derivative Lagrangian (2.8), because the component function \(g_{ij}(\phi,\partial\phi)\) is itself a function of first derivatives, and so this term already includes a subset of 4-derivative EFT operators (and indeed operators with \(>4\) derivatives), not just 2-derivative operators.
Another important remark to make is that we also capture terms with _fewer_ than 2-derivatives from this geometry, coming from the first two terms on the RHS of (5.11) - we already saw this feature in SS3 when describing the 2-derivative EFT using the field space bundle picture (which is equivalent to formulating the theory geometrically on the '0-jet bundle' \(J^{0}E=E\)). In particular, the first term
\[\mathcal{L}\supset\eta^{\mu\nu}\Lambda^{4}g_{\mu\nu}(\phi,\partial\phi) \tag{5.12}\]
can encode an arbitrary scalar potential contribution; taking \(g_{\mu\nu}=\frac{1}{d}\eta_{\mu\nu}V(\phi/\Lambda)+\dots\), and Taylor expanding, the preceding equation becomes
\[\mathcal{L}\supset\Lambda^{4}\sum_{n=0}^{\infty}V_{n}\left(\frac{\phi}{ \Lambda}\right)^{n}\,. \tag{5.13}\]
In the case of SMEFT or HEFT, when we impose \(O(4)\) symmetry and so forbid terms with odd powers of \(\phi\), this contains the (super-)renormalisable terms \(c+\mu^{2}\phi^{2}-\lambda\phi^{4}\) appearing in the SM, where \(c=\Lambda^{4}V_{0}\), \(\mu^{2}=\Lambda^{2}V_{2}\) and \(\lambda=-V_{4}\).21 We emphasize that, in the usual geometric approaches, the scalar potential does not have a geometric origin but rather is added as an extra ingredient in defining the EFT - see our discussion in SS2.2.
Footnote 21: Of course, tiny values are required for \(V_{0}\) and \(V_{2}\), given any viable EFT cutoff scale \(\Lambda\); this manifests the (tree-level) cosmological constant and electroweak hierarchy problems.
Let us return to non-renormalisable operators, with more derivatives. Because all of the metric components in (5.11) are arbitrary functions of \(\phi\) and \(\partial\phi\), the Lagrangian (5.11)
contains operators with arbitrary numbers of derivatives - but not all such terms - in a similar way to the usual geometric Lagrangian (8) containing operators with arbitrary numbers of fields. The more precise statement here is that the 1-jet bundle geometric Lagrangian (5.11) yields operators with
\[D\leq N+2 \tag{108}\]
where \(D\) is the number of derivatives in the operator, and \(N\) is the number of fields. The reason for the \(+2\) on the RHS of (108) is the same reason that the ordinary field space geometry (equivalently, the \(J^{0}E\) case) describes 2-derivative terms, namely because there are already two derivatives due to \(g\) being a \((0,2)\) tensor.
We do not claim, in this fully general EFT without any symmetries, that we capture _all_ operators satisfying (108); to answer this would require careful counting of non-redundant operators with unbounded mass dimension (since \(N\) can be taken arbitrarily large), which is clearly difficult. Adapting Hilbert series techniques (see _e.g._[81, 82]) to jet bundle 'field space' might provide helpful tools for tackling such completeness questions (should they be interesting). A more practical approach is to truncate the general Lagrangian (5.11) and count _e.g._ all operators with a maximum number of derivatives. For the 1-jet bundle, we are able to count operators in the examples considered in SS6 and SS7, including for HEFT and SMEFT, accounting for all integration by parts (IBP) redundancies; in all cases, we find the complete basis of operators with up to 4-derivatives is covered by the Lagrangian (5.11). In fact, we prove in general that by pulling back a 1-jet bundle metric one obtains a complete non-redundant basis of 0-, 2-, and 4-derivative operators for any scalar EFT. See Appendix B.
Passing to higher jet bundles, one captures more and more operators (with more and more derivatives, relative to the number of fields); specifically, for an \(r\)-jet bundle geometry, one can obtain operators with
\[D\leq rN+2 \tag{109}\]
number of derivatives; those operators with \((r-1)N<D-2<rN\) would _not_ have already been captured by the \((r-1)\)-jet bundle geometry. We thus see how going to higher jet bundles systematically captures more and more operators in the EFT, and so provides a natural organising principle for the EFT expansion. Indeed, one can prove that the \(r\)-jet bundle geometry can capture a complete basis of operators in a general scalar EFT with up to \(2(r+1)\) derivatives - again, see Appendix B.
We will use 1-jet bundles to study several examples of scalar field theories in SS6, culminating in a geometric formulation of the Higgs EFTs, including all operators containing up to 4 derivatives, in SS7. Before we get into the nitty gritty of these examples, in the remainder of this Section we want to dig a bit deeper into how field redefinitions might be described in this context, and how the various notions of 'bundle morphism' described in SS3.3 lift to the 1-jet bundle description. This will also allow a more general formulation of symmetries on the jet bundles.
### Field redefinitions are (prolonged) changes of section
To recap, in the 'bundle formulation' of a scalar field theory that we described in SS3, we identified scalar fields with local sections \(\phi\in\Gamma_{x}(\pi)\) of bundles \((E,\Sigma,\pi)\) over spacetime \(\Sigma\) (3.1). A field redefinition, in general, was identified as a change of section \(\phi\mapsto\psi\in\Gamma_{x}(\pi)\). Furthermore, for the special class of field redefinitions that are 'non-derivative', this is equivalent to doing a base-point preserving bundle morphism \((f,\bar{f})=(f_{E}:E\to E,\mathrm{id}_{\Sigma}:\Sigma\to\Sigma)\).
Upon passing to the 1-jet bundle \((J^{1}E,\Sigma,\pi_{1})\), which we suggest provides the right geometric ingredients for constructing EFT Lagrangians with more derivatives, the section \(\phi\) is promoted to its prolongation \(j^{1}\phi\in\Gamma_{x}(\pi_{1})\), a section of the 1-jet bundle. This is uniquely determined by the original section \(\phi\), simply by \(j^{1}\phi(x)=j^{1}_{x}\phi\), the 1-jet of \(\phi\) at \(x\). The natural notion of a 'field redefinition' in this picture is therefore sending
\[j^{1}\phi\mapsto j^{1}\psi\,, \tag{5.16}\]
the result of which is also a section of the 1-jet bundle \(\pi_{1}\). We can think of this as a 'prolongation' of the field redefinition \(\phi\to\psi\). Since the Lagrangian will be defined by pulling back objects (a metric) from \((J^{1}E,\Sigma,\pi_{1})\) along a section, this tells us how we can implement the field redefinition at the level of the Lagrangian - it is straightforward to check that the Lagrangian transforms in the 'right way' (see SS5.5.4 for some more-or-less trivial examples). We believe this can accommodate any (possibly derivatively-dependent, but sufficiently smooth) field redefinition.
### Morphisms on the jet bundle
The notion of bundle morphisms that we gave in SS3.3, and used to describe non-derivative field redefinitions and (internal and spacetime) symmetries, can be promoted to a notion of morphism on the 1-jet bundle (which extends suitably to higher \(r\)-jet bundles), which we describe in this Section.
#### 5.5.1 Prolongation of bundle morphisms
We start with the notion of bundle morphism \((f,\bar{f})\) introduced in Eq. (3.6), where we straightaway fix both bundles to have the same base and total space. Going to the 1-jet bundle \(J^{1}E\), the prolongated morphism \(j^{1}f\) is defined by its action on points in the jet bundle \(J^{1}E\), which recall are jets \(j^{1}_{x}\phi\), by [78]
\[j^{1}(f,\bar{f})\,(j^{1}_{x}\phi)=j^{1}_{\bar{f}(x)}(f\circ\phi\circ\bar{f}^{- 1}) \tag{5.17}\]
where \(\bar{f}^{-1}\) is the inverse of \(\bar{f}\), which we assume to be a (local) diffeomorphism. The commutative diagram (3.6) is extended to
(5.18)
Specialising to the case of bundle morphisms that act trivially on the base (3.8), _i.e._ taking \(\bar{f}=\mathrm{id}_{\Sigma}\), Eq. (5.17) simplifies to
\[j^{1}(f)\,(j^{1}_{x}\phi)=j^{1}_{x}(f\circ\phi)\,. \tag{5.19}\]
The commutative diagram (5.18) then simplifies to
(5.20)
Recall \(\pi_{1}=\pi\circ\pi_{1,0}\) and likewise \(\rho_{1}=\rho\circ\rho_{1,0}\). For two sections \(j^{1}\phi\in\Gamma_{x}(\pi_{1})\) and \(j^{1}\psi\in\Gamma_{x}(\rho_{1})\), the 'prolonged' commutative diagram implies the relation
\[j^{1}\psi=j^{1}f\circ j^{1}\phi \tag{5.21}\]
between sections, analogous to (3.11).
We can obtain the coordinate representation of the prolongated diffeomorphism \(j^{1}f\) by taking the composition with the various coordinate functions. For the original base and 'field space coordinates' \(x^{\mu}\) and \(u^{i}\), we get
\[x^{\mu}\circ j^{1}f =x^{\mu}\circ\rho_{1}\circ j^{1}f=x^{\mu}, \tag{5.22}\] \[u^{i}\circ j^{1}f =u^{i}\circ\rho_{1,0}\circ j^{1}f=u^{i}\circ f\,. \tag{5.23}\]
That is, the prolongated morphism \(j^{1}f\) acts exactly as \(f\) on the \(x^{\mu}\) and \(u^{i}\) components (remembering our restriction that the bundle morphism \(f\) acts trivially on the base space). Lastly, for the derivative coordinates \(u^{i}_{\mu}\), we can deduce their transformation under \(j^{1}f\) by considering the action on a particular jet \(j^{1}_{x}\phi\):22
Footnote 22: We reiterate that all formulae here are for the specific subset of bundle morphisms that act trivially on the base.
\[(u^{i}_{\mu}\circ j^{1}f)(j^{1}_{x}\phi) =\partial_{\mu}\left(u^{i}\circ(f\circ\phi)\right)\big{|}_{x} \tag{5.24}\] \[=\partial_{\mu}(u^{i}\circ f)\big{|}_{x}+\partial_{\mu}(u^{j} \circ\phi)\big{|}_{x}\frac{\partial(u^{i}\circ f)}{\partial u^{j}}\bigg{|}_{x}\] \[=\partial_{\mu}(u^{i}\circ f)\big{|}_{x}+u^{j}_{\mu}\left.\frac{ \partial(u^{i}\circ f)}{\partial u^{j}}\right|_{x}.\]
One can think of this as a 'total derivative' of the transformed coordinate functions \(u^{i}\circ f\); it follows essentially from applying the chain rule. To summarize, we can write the prolongated bundle morphism as:
\[j^{1}f:(x^{\mu},u^{i},u^{i}_{\mu})\mapsto(x^{\mu},f^{i},\partial_{\mu}f^{i}+u ^{j}_{\mu}\partial_{j}f^{i})\,, \tag{5.25}\]
in our local fibred coordinates, where \(f^{i}(x^{\mu},u^{j})=u^{i}\circ f\) as in SS3.3.
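To make the coordinate formula (5.25) concrete, here is a minimal sympy sketch, restricted (as a hypothetical simplification) to one base coordinate and one fibre coordinate, that prolongs a given component function \(f(x,u)\):

```python
import sympy as sp

x, u, u_x = sp.symbols('x u u_x')

def prolong(f):
    """Prolongation (5.25) of a base-point preserving morphism with component f(x, u):
    (x, u, u_x) -> (x, f, df/dx + u_x * df/du)."""
    return (x, f, sp.diff(f, x) + u_x * sp.diff(f, u))

print(prolong(u**2))    # (x, u**2, 2*u*u_x), cf. the quantum mechanics example of Sec. 5.5.4 below
print(prolong(x*u))     # (x, u*x, u + u_x*x), a case with explicit base dependence
```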
The construction we have described for prolonging a bundle morphism from \(\pi:E\to\Sigma\) to its 1-jet bundle \(\pi_{1}:J^{1}\to\Sigma\) can be extended straightforwardly to the higher-jet bundles. Since on the physics side we focus on \(J^{1}E\) in this paper, we omit details of this construction here.
#### 5.5.2 Prolonging internal and spacetime symmetries to the 1-jet bundle
In SS3.3.1 we defined notions of internal and spacetime symmetries as bundle morphisms satisfying particular conditions. Namely, all symmetries leave the Lagrangian invariant; internal symmetries are base-point preserving bundle morphisms, which in local coordinates can be expressed as \(f_{g}:(x^{\mu},u^{i})\mapsto(x^{\mu},f^{i}_{g}(u^{i},x^{\mu}))\); spacetime symmetries have a non-trivial action on the base space \(\Sigma\) of the bundle. We discussed the particular cases of the Higgs EFTs in SS3.3.1.
How do these symmetries act on the 1-jet bundle? Having phrased symmetries in the above language of bundle morphisms, there is an obvious way to 'lift' them to the 1- and higher-jet bundles using the notion of prolongation, that we have introduced in this Section.
#### Prolonging an internal symmetry
For an internal symmetry, we can directly apply the formula (5.25) for the prolongation of a base-point preserving bundle morphism, which we have for each group element \(g\in G\), our symmetry Lie group. The set of prolonged symmetry morphisms are23
Footnote 23: As usual, all morphisms and prolongations thereof are understood to be only _local_ here; the formulae we write are valid only in certain open submanifolds coordinatized by certain charts; in other words, all manifolds appearing in commutative diagrams such as (5.20) should be understood as open submanifolds of the objects written.
\[j^{1}f_{g}:(x^{\mu},u^{i},u^{i}_{\mu})\mapsto(x^{\mu},f^{i}_{g},\partial_{\mu}f ^{i}_{g}+u^{j}_{\mu}\partial_{j}f^{i}_{g})\,,\quad\forall g\in G \tag{5.26}\]
For the SMEFT/HEFT symmetries, we can, at least in a coordinate patch, write \(f^{i}_{O\in O(4)}=\sigma(O,u^{i})\), for \(\sigma=\sigma_{\rm S}\) or \(\sigma_{\rm H}\) in the case of SMEFT or HEFT respectively. Let us unpack this further in the case of SMEFT, for illustration, in which the group action in our coordinate chart of choice is linear:
\[f^{i}=O^{i}_{j}u^{j}\,, \tag{5.27}\]
where \(O^{i}_{j}\) is a constant \(O(4)\) matrix in the fundamental representation. Notice that, in addition to being an _internal_ symmetry, it is also a _global_ symmetry, in the usual physicist's sense, which is a restricted kind of base-point preserving morphism. (It is the same restriction we have often imposed when viewing field redefinitions as morphisms, where we appealed to Poincare invariance as our justification.) Thus, \(\partial_{\mu}f^{i}=0\) while \(\partial_{j}f^{i}=O^{i}_{j}\). The prolongation of the internal \(O(4)\) symmetries to the 1-jet bundle is thus given by the set of morphisms:
\[\{j^{1}f_{O}:(x^{\mu},u^{i},u^{i}_{\mu})\mapsto(x^{\mu},O^{i}_{j}u^{j},O^{i}_ {j}u^{j}_{\mu})\qquad\forall O\in O(4)\,\}\,. \tag{5.28}\]
exactly as one would naively expect. In practice, this means that one forms \(O(4)\)-invariants (say, when constructing invariant metrics on the 1-jet bundle, and thence invariant Lagrangians) by contracting the internal \(i\) indices wherever they appear, both in the \(u^{i}\) coordinates _and_ in the derivative coordinates \(u^{i}_{\mu}\). The simplicity of this group action, which could hardly have been otherwise, will be frequently invoked in SS7 when we finally turn to constructing the 4-derivative Higgs EFT Lagrangians.
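As a simple numerical illustration (again not part of the construction itself), one can check that contractions of the internal indices built with the flat leading piece \(g_{ij}=\delta_{ij}\) are preserved by the prolonged action (5.28):

```python
import numpy as np

rng = np.random.default_rng(0)
O, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # a (random) element of O(4)

u   = rng.normal(size=4)         # field coordinates u^i
du  = rng.normal(size=(4, 4))    # derivative coordinates u^i_mu, index order (i, mu)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Prolonged action (5.28): u -> O u and u_mu -> O u_mu, with the same O on the i index
u2, du2 = O @ u, O @ du

inv = lambda v, dv: (v @ v, np.einsum('im,jn,mn,ij->', dv, dv, eta, np.eye(4)))
assert np.allclose(inv(u, du), inv(u2, du2))   # delta_ij u^i u^j and eta^{mu nu} delta_ij u^i_mu u^j_nu
```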
#### Prolonging a spacetime symmetry
Starting from the field space bundle \((E,\Sigma,\pi)\), a spacetime symmetry is a collection of bundle morphisms \((f,\bar{f})\) for which \(\bar{f}\) is a non-trivial diffeomorphism on the base space. The Poincare group action for our Lorentz-invariant EFTs is still specified by Eq. (3.18). The prolonging of a spacetime symmetry requires the more general version shown in Eqs. (5.17, 5.18). This requires the generalisation of Eqs. (5.22-5.24) to the case where \(\bar{f}\) is non-trivial. (We refer the interested reader to Section 4.2 of the book [78] for these more general formulae, which we only use explicitly here). The prolongation of the Poincare symmetry morphisms (3.18) gives:
\[\{j^{1}f_{a,L}:(x^{\mu},u^{i},u^{i}_{\mu})\mapsto(L^{\mu}_{\nu}x^{\nu}+a^{\mu},u^{i},(L^{-1})^{\nu}_{\mu}u^{i}_{\nu})\qquad\forall\,a\in\mathbb{R}^{1,3},\;L\in O(1,3)\,\}\,. \tag{5.29}\]
Notice that while the Poincare action is non-linear on the \(x^{\mu}\) coordinates, including the effects of translations, this does not appear in the group action on the derivative coordinates upon prolonging the symmetry. Moreover, notice that it is the inverse matrix \(L^{-1}\) that appears multiplying the derivative coordinates. Again, as for the \(O(4)\) internal indices, the straightforward action of the prolonged spacetime symmetry on the derivative coordinates means that we can treat the \(\mu\) index exactly as we would any other Lorentz index when forming Lorentz-invariant contractions; namely, we should contract with the Minkowski metric \(\eta^{\mu\nu}\) on spacetime, as we do everywhere in the examples that follow in SS6 and SS7.
#### 5.5.3 Non-derivative field redefinitions (last time!)
Again, analogous to the situation we described in SS3.3, _not any_ change of prolonged section \(j^{1}\phi\to j^{1}\psi\) can be obtained by prolonging a bundle morphism, as in (5.20). Indeed, the same considerations of SS3.3, noting in particular the lemma discussed in SS3.3.5, suggest that a morphism \(j^{1}f\) on the 1-jet bundle cannot reproduce the effects of a derivative field redefinition (except under special circumstances, analogous to those considered in Example 3.3.4),24 but can reproduce the effects of non-derivative field redefinitions. It is important to check, as we do in this Subsection, that non-derivative field redefinitions are still implemented consistently by prolonging the bundle morphism \(f\) to \(j^{1}f\).
Footnote 24: Later in §6.2, we will perform an analogous exercise to that in §3.3.4 ; for a scalar field theory at the 4-derivative level, which is described by geometry on the 1-jet bundle, we ask when (if ever) a prolonged bundle morphism can reproduce the effects of a _derivative_ field redefinition.
Recall that a general Lagrangian (with up to 4 derivatives) is, in this formulation, obtained by pulling back a metric \(g^{(1)}\) from the 1-jet bundle to \(\Sigma\) along a section \(j^{1}\phi\in\Gamma_{x}(\pi_{1})\), given an initial section \(\phi\in\Gamma_{x}(\pi)\), before contracting with the inverse metric on
spacetime:
\[\mathcal{L}[\phi,g^{(1)}]=\frac{1}{2}\langle\eta^{-1},(j^{1}\phi)^{*}g^{(1)} \rangle\,.\]
Doing a prolongated bundle morphism \(j^{1}f:J^{1}E\to J^{1}E\) on the 1-jet bundle, obtained from a morphism \(f:E\to E\) via (5.20), which is itself a diffeomorphism on \(J^{1}E\) considered as a manifold, the metric changes via the usual transformation law
\[g^{(1)}=g^{(1)}_{IJ}(\mathbf{x})d\mathbf{x}^{I}\otimes d\mathbf{x}^{J}\mapsto( j^{1}f)^{*}g^{(1)}=g^{(1)}_{KL}(\mathbf{x}\circ j^{1}f)\frac{\partial\mathbf{j^{1} f}^{K}}{\partial\mathbf{x}^{I}}\frac{\partial\mathbf{j^{1}f}^{L}}{\partial \mathbf{x}^{J}}d\mathbf{x}^{I}\otimes d\mathbf{x}^{J}\,, \tag{5.30}\]
where we employ a compact notation in which
\[\{\mathbf{x}^{I}\}=\{x^{\mu},u^{i},u^{i}_{\mu}\} \tag{5.31}\]
runs over all the 1-jet bundle coordinates, and here \(\mathbf{j^{1}f}^{I}=\mathbf{x}^{I}\circ j^{1}f\). When we pull back the transformed metric along the _original_ section \(j^{1}\phi\), we get
\[(j^{1}\phi)^{*}(j^{1}f)^{*}(g^{(1)})=g^{(1)}_{KL}(\mathbf{x} \circ j^{1}f\circ j^{1}\phi)\frac{\partial(\mathbf{x}^{K}\circ j^{1}f\circ j^{ 1}\phi)}{\partial\mathbf{x}^{I}}\frac{\partial(\mathbf{x}^{L}\circ j^{1}f \circ j^{1}\phi)}{\partial\mathbf{x}^{J}}\] \[\times d(\mathbf{x}^{I}\circ j^{1}\phi)\otimes d(\mathbf{x}^{J} \circ j^{1}\phi)\,.\]
This can be expanded out in the various pieces, analogous to the expansion (5.11), using Eqs. (5.25) and (5.5) to evaluate objects like \(\mathbf{x}\circ j^{1}f\circ j^{1}\phi\), but the result would be very cumbersome and so we do not write it explicitly.
#### 5.5.4 Example: free particle quantum mechanics
To see a more explicit expression, it is helpful to lose this full generality and consider a specific example. We consider the quantum mechanics of a single real scalar field, described by a 1d sigma model on \(M=\mathbb{R}\) (we will treat this example in full generality in SS6.1), for which we view a field as a section \(\phi\) of a bundle (\(E\cong\mathbb{R}_{t}\times\mathbb{R}_{u},\mathbb{R}_{t},\pi:(t,u)\mapsto t\)). The 1-jet bundle is just 3-dimensional, on which we specify an initial fibred coordinate system \((t,u,u_{t})\). We start from the simplest viable Lagrangian, just \(\mathcal{L}=\frac{1}{2}\dot{\phi}^{2}\) that describes a free particle. As mentioned at the end of SS6.1, one valid choice of metric is25
Footnote 25: One cannot take \(g^{(1)}=dtdt+dudu\) on the 1-jet bundle, because that is not invertible and so does not define a metric.
\[g^{(1)}=u\,dt\,du_{t}+2\,du\,du\,. \tag{5.32}\]
To recap, given a section \(\phi:t\mapsto(t,\phi(t))\) which we prolong, this gives the Lagrangian
\[\mathcal{L}[\phi,g^{(1)}]=\frac{1}{2}\langle\eta^{-1},(j^{1}\phi)^{*}g^{(1)} \rangle=\frac{1}{2}\phi\ddot{\phi}+\dot{\phi}^{2}\sim\frac{1}{2}\dot{\phi}^{2} \tag{5.33}\]
after using IBPs.
Now, consider the base-point preserving bundle morphism
\[f:(t,u)\mapsto(t,u^{2})\,, \tag{5.34}\]
which, similar to the field theory example considered in SS2.3, implements a non-derivative field redefinition sending our initial section \(\phi\) to a section \(\psi\) whose components satisfy \(\psi=\phi^{2}\). Applying Eq. (102), the prolongation of this morphism to the 1-jet bundle is
\[j^{1}f:(t,u,u_{t})\mapsto(t,u^{2},2uu_{t})\,, \tag{103}\]
as expected from the chain rule. Now apply this morphism to the metric \(g^{(1)}\), using (101), to obtain
\[g^{(1)}\mapsto(j^{1}f)^{*}g^{(1)}=2u^{2}u_{t}dt\,du+2u^{3}dt\,du_{t}+8u^{2}du \,du\,. \tag{104}\]
Finally, pulling this back and forming the transformed Lagrangian gives
\[\mathcal{L}[\phi,(j^{1}f)^{*}g^{(1)}]=\frac{1}{2}\langle\eta^{-1},(j^{1}\phi)^ {*}(j^{1}f)^{*}g^{(1)}\rangle=\phi^{2}\dot{\phi}^{2}+\phi^{3}\ddot{\phi}+4\phi ^{2}\dot{\phi}^{2}\stackrel{{\rm IBPs}}{{\sim}}2\phi^{2}\dot{ \phi}^{2}\,, \tag{105}\]
after doing IBPs. We see that the formulae for prolonging the bundle morphism to the 1-jet, where the original morphism corresponds to a particular non-derivative field redefinition, gives the consistent transformation of the Lagrangian when computed using a 1-jet metric. We expect the same applies for any non-derivative field redefinition, which we would write as \(\phi\mapsto\psi=\phi^{2}\) at the level of coordinates implemented via the general formula (102), and for general Lagrangians (with up to 4-derivatives).
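This consistency is also easy to verify mechanically. The sympy sketch below checks that the Lagrangian (5.37), obtained from the prolonged morphism, agrees with the direct substitution \(\psi=\phi^{2}\) into \(\frac{1}{2}\dot{\psi}^{2}\), up to a total time derivative:

```python
import sympy as sp

t = sp.symbols('t')
phi = sp.Function('phi')(t)

# Lagrangian (5.37) from the morphism-transformed 1-jet metric, before IBPs
L_transformed = (phi**2 * sp.diff(phi, t)**2 + phi**3 * sp.diff(phi, t, 2)
                 + 4 * phi**2 * sp.diff(phi, t)**2)

# Direct substitution of the non-derivative redefinition psi = phi^2 into (1/2) psidot^2
L_direct = sp.Rational(1, 2) * sp.diff(phi**2, t)**2

# The two differ only by the total derivative d/dt( phi^3 * phidot )
boundary = sp.diff(phi**3 * sp.diff(phi, t), t)
assert sp.simplify(L_transformed - L_direct - boundary) == 0
```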
#### Derivative field redefinitions
We emphasize that, mirroring our discussion in SS3.3 of 2-derivative Lagrangians using the field space bundle picture, one can always implement more general derivative field redefinitions just by pulling back the metric \(g^{(1)}\) along a different section \(j^{1}\psi\), as described in SS5.4; but, in this more general situation, one loses the equivalent description in terms of a prolongated morphism on the jet bundle.
We can exhibit this by considering an explicit derivative field redefinition, still in the context of this QM example for simplicity. Consider a change of section \(\phi\to\psi\) which, in components, is defined by \(\phi\to\psi=\phi-a\ddot{\phi}\) for a small constant \(a\), analogous to the field theory example (3.28). Starting from the same metric (5.32) describing the free particle (when pulled back along \(j^{1}\phi\)), but pulling back along the different section \(\psi\), we get
\[\mathcal{L}[\phi,g^{(1)}]\mapsto\mathcal{L}[\psi,g^{(1)}] =\frac{1}{2}\langle\eta^{-1},(j^{1}\psi)^{*}g^{(1)}\rangle \tag{106}\] \[=\frac{1}{2}\langle\partial_{t}\otimes\partial_{t},dt\otimes dt\rangle\left((u\circ j^{1}\psi)\partial_{t}(u_{t}\circ j^{1}\psi)+2\partial_{t}(u\circ j^{1}\psi)\partial_{t}(u\circ j^{1}\psi)\right)\] \[=\frac{1}{2}(\phi-a\ddot{\phi})\partial_{t}(\dot{\phi}-a\dddot{\phi})+\partial_{t}(\phi-a\ddot{\phi})^{2}\] \[=\frac{1}{2}\left(\phi\ddot{\phi}-a\ddot{\phi}^{2}-a\phi\ddddot{\phi}+a^{2}\ddot{\phi}\ddddot{\phi}\right)+\left(\dot{\phi}-a\dddot{\phi}\right)^{2}\stackrel{\rm IBPs}{\sim}\frac{1}{2}\left(\dot{\phi}-a\dddot{\phi}\right)^{2}\,.\]
As for the example considered above in SS3.3.4, the formulae for implementing this field redefinition are essentially trivial. Changing section corresponds to 'subbing in' \(\mathbf{x}\circ\psi\) everywhere in place of \(\mathbf{x}\circ\phi\), which are precisely the manipulations one would do to 'change fields' at the level of the Lagrangian.
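Again, the claim is simple to verify symbolically; the following sympy sketch confirms that pulling the metric (5.32) back along \(j^{1}\psi\), with \(\psi=\phi-a\ddot{\phi}\), gives \(\frac{1}{2}\dot{\psi}^{2}\) up to a total derivative:

```python
import sympy as sp

t, a = sp.symbols('t a')
phi = sp.Function('phi')(t)
psi = phi - a * sp.diff(phi, t, 2)       # the derivative field redefinition, in components

# Pulling the metric (5.32) back along j^1 psi gives (1/2) psi*psi_ddot + psi_dot^2
L_psi = sp.Rational(1, 2) * psi * sp.diff(psi, t, 2) + sp.diff(psi, t)**2

# Up to the total derivative (1/2) d/dt(psi * psi_dot), this is (1/2) psi_dot^2
boundary = sp.Rational(1, 2) * sp.diff(psi * sp.diff(psi, t), t)
assert sp.simplify(L_psi - sp.Rational(1, 2) * sp.diff(psi, t)**2 - boundary) == 0
```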
## 6 Examples
In this Section we consider concrete examples of scalar field theories, to better acquaint the reader with the jet bundle formalism described in SS4, and the notion of geometry thereon (SS5). The examples start simple and increase in complexity.
### Quantum mechanics on the line
We begin with an example that could not be much simpler, namely the quantum mechanics of a point particle moving on the real line \(M\cong\mathbb{R}\), which is described by a \((0+1)\)-dimensional sigma model on \(\mathbb{R}\). The 1-jet bundle is here only 3-dimensional, meaning it is feasible to write down metrics explicitly and thereby get a feel for the general formalism.
It is worth remarking that, in \(0+1\) dimensions, the scalar field \(\phi\) has negative (classical) mass dimension, and so operators with more insertions of the field \(\phi\) become more and more relevant (in contrast to the story we are used to in \(3+1\) dimensions). Therefore, the 'EFT expansion' we consider in this Section should be regarded as a formal construction, in which we expand in both number of fields and number of derivatives, not to be interpreted as an expansion in increasingly irrelevant operators.
The sigma model field is simply a real function \(\phi:\mathbb{R}\to\mathbb{R}:\,t\mapsto\phi(t)\). For our symmetries, we take a pair of \(O(1)\cong\mathbb{Z}_{2}\) factors that act as inversions on both the source and target lines:
\[T:t\mapsto-t, \text{(time reversal)}\,, \tag{100}\] \[P:\phi\mapsto-\phi, \text{(parity)}\,. \tag{101}\]
This defines a simple toy quantum field theory to which we can apply the jet bundle formalism, which nonetheless is structurally similar to the Higgs EFTs; the \(O(1)_{t}\times O(1)_{u}\) symmetry, like the \(O(3,1)_{\Sigma}\times O(4)_{M}\) of the HEFT or SMEFT, allows only terms with even numbers of derivatives, and even numbers of fields.
The first step is to pass from the formulation in terms of maps to field space \(M\cong\mathbb{R}\), to the formulation in which the scalar field is a section of a bundle \(E\). For simplicity, let us take the field space bundle here to be (globally) a product manifold, \(E=\mathbb{R}_{t}\times\mathbb{R}_{u}\to\mathbb{R}_{t}\), on which our field
\[\phi:t\mapsto(t,\phi(t)) \tag{102}\]
is a section. The 1-jet bundle \(J^{1}E\) can then be modelled as the real 3-manifold
\[J^{1}E\cong T^{*}\mathbb{R}_{t}\times\mathbb{R}_{u}\,, \tag{103}\]
with local fibred coordinate system
\[y^{I}=\{t,u,u_{t}\}\,. \tag{104}\]
Along the section \(j^{1}\phi\) of \(J^{1}E\to\Sigma\), these coordinates evaluate to
\[t\circ j^{1}\phi=t, \tag{105}\]
\[u\circ j^{1}\phi =\phi(t), \tag{111}\] \[u_{t}\circ j^{1}\phi =\dot{\phi}(t)\,. \tag{112}\]
Following our general formalism, an invariant metric on this 1-jet bundle can be used to define a general 4-derivative Lagrangian for this system.
Before writing down the 1-jet metric, it is helpful to know where we want to end up: using the usual procedure, let us first write down the most general Lagrangian for this toy EFT with up to 4-derivatives in a non-redundant basis, consistent with our pair of \(O(1)\) symmetries. At the 2-derivative level, there are two independent operators \(\phi\ddot{\phi}\) and \(\dot{\phi}^{2}\), one of which can be removed (we remove \(\phi\ddot{\phi}\)) by an IBP relation. At the 4-derivative level, there are 5 operators and 3 IBP relations. We choose to remove the operators \(\ddddot{\phi}\) and \(\dddot{\phi}\dot{\phi}\), which cannot in fact be written using the 1-jet bundle (they contain too many derivatives acting on a single field), plus the operator \(\ddot{\phi}\dot{\phi}^{2}\), to leave a non-redundant basis spanned by the 4-derivative operators \(\dot{\phi}^{4}\) and \(\ddot{\phi}^{2}\). In general all these operators (including the 0-derivative constant piece) are multiplied by arbitrary even functions of \(\phi\).
Now let us see how to recover this general Lagrangian using the 1-jet bundle described above. We write down the most general metric
\[g=g_{IJ}dy^{I}\otimes dy^{J} \tag{113}\]
on \(J^{1}E\) that is consistent with our \(O(1)_{t}\times O(1)_{u}\) global symmetry. We truncate each metric component to include all contributions that will pull back to Lagrangian terms with up to and including four time derivatives. We have:
\[g_{u_{t}u_{t}} =A(u), \tag{114}\] \[g_{uu_{t}} =u_{t}uB(u),\] (115) \[g_{uu} =C(u)+u_{t}^{2}D(u),\] (116) \[g_{tu_{t}} =uE(u)+uu_{t}^{2}F(u),\] (117) \[g_{tu} =u_{t}G(u)+u_{t}^{3}H(u),\] (118) \[g_{tt} =-2V(u)+u_{t}^{2}J(u)+u_{t}^{4}K(u), \tag{119}\]
where \(A(u)\),... \(V(u)\) are even functions of \(u\), as dictated by our symmetry. Written as a symmetric matrix of components:
\[[g_{IJ}]=\begin{pmatrix}-2V+u_{t}^{2}J+u_{t}^{4}K&u_{t}G+u_{t}^{3}H&uE+uu_{t} ^{2}F\\.&C+u_{t}^{2}D&uu_{t}B\\.&.&A\end{pmatrix}\,, \tag{120}\]
in our local fibred coordinates (109).
To form a Lagrangian, we pull back this metric along the section \(j^{1}\phi\), and contract with the inverse metric on the 1d spacetime:
\[\mathcal{L}[\phi,g]=\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g \right\rangle\,. \tag{121}\]
Here, the inverse metric is just
\[\eta^{-1}=\partial_{t}\otimes\partial_{t}.\]
The resulting object is a scalar function on the worldline, _i.e._ a function of \(t\), that is our Lagrangian. We get
\[\mathcal{L}[\phi,g]=\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g\right\rangle= \frac{1}{2}g_{tt}+g_{tu}\dot{\phi}+g_{tu_{t}}\ddot{\phi}+\frac{1}{2}g_{uu}\dot{\phi}^{2}+g_{uu_{t}}\dot{\phi}\ddot{\phi}+\frac{1}{2}g_{u_{t}u_{t}}\ddot{\phi}^{2} \tag{111}\] \[= -V(\phi)+\frac{1}{2}\dot{\phi}^{2}(C+2G+J)+\ddot{\phi}\phi E+\frac{1}{2}\dot{\phi}^{4}(D+2H+K)+\ddot{\phi}\dot{\phi}^{2}\phi(B+F)+\frac{1}{2}\ddot{\phi}^{2}A\,, \tag{112}\]
where \(V,A,B\ldots K\) are implicitly understood to be even functions of the field: \(V(\phi),A(\phi)\), etc. As anticipated, we do not obtain the operators \(\ddddot{\phi}\) and \(\dddot{\phi}\dot{\phi}\) with more than two derivatives on a single field, but this is not an issue because those operators can be removed from the Lagrangian using IBPs.
We do, however, obtain 2 (3) operator structures with 2 (4) derivatives, only 1 (2) of which are independent, as discussed. The extra operators can be removed via IBP:
\[\ddot{\phi}\phi E =-\dot{\phi}^{2}E-\dot{\phi}\phi\,\partial_{t}E=-\dot{\phi}^{2}E- \dot{\phi}\phi\,2\phi\dot{\phi}\frac{dE}{d(\phi^{2})}=-\dot{\phi}^{2}(E+2\phi ^{2}E^{\prime})\,, \tag{113}\] \[\ddot{\phi}\dot{\phi}^{2}\phi(B+F) =-\frac{1}{3}\dot{\phi}^{4}\left(B+F+2\phi^{2}(B^{\prime}+F^{ \prime})\right)\,, \tag{114}\]
where \(E^{\prime}=dE/d(\phi^{2})\) and analogously for \(B^{\prime},F^{\prime}\). With this notation, \(B^{\prime},E^{\prime},F^{\prime}\) are even functions of \(\phi\). The Lagrangian (111) can now be written in a non-redundant basis:
\[\mathcal{L}[\phi,g]= \text{IBP}\circ\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{* }g\right\rangle \tag{115}\] \[= -V+\frac{1}{2}\dot{\phi}^{2}(C+2G+J-2E-4\phi^{2}E^{\prime})\] (116) \[+\dot{\phi}^{4}\left(\frac{1}{2}(D+K)+H-\frac{1}{3}(B+F)-\frac{2 }{3}\phi^{2}(B^{\prime}+F^{\prime})\right)+\frac{1}{2}\ddot{\phi}^{2}A\,.\]
In fact we know a little more; the 2-derivative term must be non-vanishing, and its leading order piece is fixed by canonical normalisation of \(\phi\). We require
\[C(0)+2G(0)+J(0)-2E(0)=1\,. \tag{117}\]
This defines a map from metric \(g\) to \(\mathcal{L}[\phi,g]\), the Lagrangian functional (of \(\phi\)), in a particular basis with no residual IBP redundancies. The map is, importantly, surjective, meaning that the most general Lagrangian with up to 4-derivatives can be captured by pulling back a metric on the 1-jet bundle.
The map \(g\mapsto\mathcal{L}[\phi,g]\) is not, however, injective. This is a general feature that will recur in every example: there are redundancies in the map from 1-jet metrics to Lagrangians, in the sense that many different metrics map to the same Lagrangian. A natural next step is to try to define equivalence classes of metrics that map to the same Lagrangian (up to
a total derivative), in order to fix up a map \(\mathcal{L}[\phi,g]\) that is both surjective and injective. Generally, two metrics \(g\) and \(\tilde{g}\) are equivalent iff
\[\left\langle\eta^{-1},\,(j^{1}\phi)^{*}(g-\tilde{g})\right\rangle=\partial_{\mu}K^{\mu}\,. \tag{108}\]
However, it is not so straightforward to make a consistent choice of representative for each such equivalence class, as is made apparent even in this toy example where we can explicitly 'read off' the redundancies. For example, one might try to define representative metrics by setting the functions \(G\), \(J\), \(E\) to zero (capturing the 2-derivative term via \(C\)), and setting \(H\), \(K\), \(B\), and \(F\) to zero also (capturing the \(\dot{\phi}^{4}\) term via \(D\)). But this can lead to pathologies when trying to describe certain (physically reasonable) Lagrangians, such as the free theory \(\mathcal{L}=\frac{1}{2}\dot{\phi}^{2}\); with the 'representative' above, the 'metric' would have components \([g_{IJ}]=\text{diag}(0,1,0)\), which is singular and so not a metric at all. A valid metric that does map to this Lagrangian, accounting for IBPs in the manner described above, would be \([g_{IJ}]=\text{diag}(u/2,2,u/2)\).
### Real scalar in 4d
We now consider the case of a single real scalar field in Minkowski spacetime, taking the scalar field \(\phi\) to be a section of the bundle \(\pi:E=\mathbb{R}^{1,3}\times\mathbb{R}_{u}\to\mathbb{R}^{1,3}\):
\[\phi:x^{\mu}\mapsto(x^{\mu},\phi(x^{\mu})). \tag{109}\]
We require Poincare invariance, but do not impose any internal symmetries on the field. The 1-jet bundle \(J^{1}E\) is a real 9-manifold
\[J^{1}E\cong T^{*}\mathbb{R}^{1,3}\times\mathbb{R}_{u}\,, \tag{110}\]
with local fibred coordinates
\[y^{I}=\{x^{\mu},u,u_{\mu}\}\,. \tag{111}\]
Along the section \(j^{1}\phi\) of \(J^{1}E\to\Sigma\), they evaluate to
\[x^{\mu}\circ j^{1}\phi =x^{\mu}, \tag{112}\] \[u\circ j^{1}\phi =\phi(x^{\mu}),\] (113) \[u_{\mu}\circ j^{1}\phi =\partial_{\mu}\phi(x^{\mu})\,. \tag{114}\]
Following the same steps as in the previous example, we first write down the Lagrangian we wish to obtain.
In this case, we are after a complete Green's basis for a scalar EFT, _i.e._ a set of operators that is non-redundant under IBP (but can in general contain redundancies under Equations-of-Motion), with up to 4 derivatives and arbitrarily large number of field insertions. We also require no more than 2 derivatives acting on each field and that boxes \(\Box=\partial_{\mu}\partial^{\mu}\) are absent. The former condition is necessary, because it is not possible to obtain such structures from a 1-jet bundle metric. The complete absence of operators with boxes is not strictly required by the metric structure, but turns out to be a very convenient choice: the Lagrangian obtained pulling back the most general 1-jet bundle metric
contains _all_ box-less operators with up to 4 derivatives, but only a subset of those with boxes, see §B. Restricting to box-less bases guarantees that the scalar Lagrangian will be manifestly matched by pulling back the 1-jet bundle metric, without further manipulations and without introducing cumbersome evaluations of which operators with boxes can or cannot be obtained. The fact that all operators with boxes can always be removed from a Green's basis is not obvious a priori, but can be proven explicitly for any scalar theory at any derivative order, see §B.
In the case of a single real scalar with interactions with up to 4 derivatives, the conditions listed above identify unambiguously:26
Footnote 26: With \(N\geq 2\) field insertions, there are 2 operator structures with 2 derivatives and 1 IBP. With \(N=2\) (3) fields, one can write 4 (6) structures with 4 derivatives, related by 3 (4) IBP. With \(N\geq 4\) fields, there are 7 structures with 4 derivatives and 4 IBP. The case \(N=1\) is never relevant as any such operator is automatically a total derivative. The number of independent operators for each field multiplicity is automatically reflected in the Lagrangian (114).
\[\mathcal{L}= \,\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi\mathcal{F}_{0}( \phi)-\mathcal{V}(\phi) \tag{115}\] \[+\frac{1}{\Lambda^{2}}(\partial_{\mu}\partial_{\nu}\phi\partial^ {\mu}\partial^{\nu}\phi)\,\mathcal{F}_{1}(\phi)+\frac{1}{\Lambda^{3}}( \partial_{\mu}\partial_{\nu}\phi\partial^{\mu}\phi\partial^{\nu}\phi)\, \mathcal{F}_{2}(\phi)+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\partial^{\mu} \phi)^{2}\,\mathcal{F}_{3}(\phi)\,.\]
where \(\mathcal{F}_{0}(0)=1\) to have a canonical kinetic term and in general \(\mathcal{F}_{i}(0)=c_{i}\) for \(i=1,2,3\), accounting for arbitrary Wilson coefficients.27 The operator bases in Eqs. (115) and (116) below were cross-checked with BasisGen[84].
Footnote 27: We will see in §8 that the Wilson coefficient \(\mathcal{F}_{1}(0)\) is _not_ completely arbitrary, but must satisfy a positivity bound (114). In particular, its sign is negative [83]. In the context of our jet bundle geometry, this translates to a condition on the signature of the metric.
The Lagrangian (115) can be recovered from a 1-jet bundle metric which is formally similar to the one in the 1d example. The metric components, truncated such that only terms with up to 4 derivatives are generated when pulling back to the Lagrangian, are
\[g^{\mu\nu}_{uu} =\eta^{\mu\nu}A(u)\,, \tag{116}\] \[g^{\mu}_{uu} =\frac{u^{\mu}}{\Lambda^{2}}B(u)\,,\] (117) \[g_{uu} =C(u)+\frac{u_{\rho}u^{\rho}}{\Lambda^{4}}D(u)\,,\] (118) \[g^{\nu}_{\mu u} =\delta^{\nu}_{\mu}E(u)+\frac{u^{\nu}u_{\mu}}{\Lambda^{4}}F_{1}(u )+\delta^{\nu}_{\mu}\frac{u_{\rho}u^{\rho}}{\Lambda^{4}}F_{2}(u)\,,\] (119) \[g_{\mu u} =\frac{u_{\mu}}{\Lambda^{2}}G(u)+\frac{u_{\mu}\,u_{\rho}u^{\rho}} {\Lambda^{6}}H(u)\,,\] (120) \[g_{\mu\nu} =-\frac{\eta_{\mu\nu}}{2}V(u)+\left(\frac{u_{\mu}u_{\nu}}{\Lambda ^{4}}+\frac{\eta_{\mu\nu}}{4}\frac{u_{\rho}u^{\rho}}{\Lambda^{4}}\right)\frac {J(u)}{2}+\left(\frac{u_{\mu}u_{\nu}}{\Lambda^{4}}+\frac{\eta_{\mu\nu}}{4} \frac{u_{\rho}u^{\rho}}{\Lambda^{4}}\right)\frac{u_{\sigma}u^{\sigma}}{\Lambda ^{4}}\frac{K(u)}{2}\,, \tag{121}\]
where the coordinates \(u^{\mu}\) are defined by raising indices with the spacetime metric \(u^{\mu}=\eta^{\mu\nu}u_{\nu}\) and we have restored the powers of \(\Lambda\) to make the dimensionalities manifest. The functions \(V,A\ldots K\) are defined to be dimensionless, and therefore functions of \((u/\Lambda)\). Moving from 1d to 4d Minkowski spacetime, we find a larger number of independent tensor structures: the terms proportional to \(F_{1}\) and \(F_{2}\) were degenerate in the QM example,
but become distinguishable in this case. In Eq. (111) the functions \(J\) and \(K\) multiply a combination of two Lorentz structures: they were grouped together because they yield the same result upon contracting with \(\eta^{\mu\nu}\):
\[\eta^{\mu\nu}u_{\mu}u_{\nu}=u_{\mu}u^{\mu}=\frac{\eta^{\mu\nu}\eta_{\mu\nu}}{4}u _{\rho}u^{\rho}\,. \tag{112}\]
Pulling back along the section \(j^{1}\phi\) and contracting with the inverse metric:
\[\mathcal{L}[\phi,g]=\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g\right\rangle \qquad\qquad\qquad\text{where}\quad\eta^{-1}=\eta^{\rho\sigma}\partial_{\rho} \otimes\partial_{\sigma}\,, \tag{113}\]
we get
\[\mathcal{L}[\phi,g]= \frac{\Lambda^{4}}{2}\eta^{\mu\nu}g_{\mu\nu}+\Lambda^{2}g_{\mu u}\partial^{\mu}\phi+\Lambda g_{\mu u}^{\nu}\partial^{\mu}\partial_{\nu}\phi+\frac{1}{2}g_{uu}\partial_{\mu}\phi\partial^{\mu}\phi \tag{114}\] \[+\frac{g_{uu}^{\mu}}{\Lambda}\partial^{\nu}\phi\partial_{\mu}\partial_{\nu}\phi+\frac{1}{2}\frac{g_{uu}^{\mu\nu}}{\Lambda^{2}}\partial_{\rho}\partial_{\mu}\phi\partial^{\rho}\partial_{\nu}\phi\] \[= -\Lambda^{4}V+\frac{1}{2}(\partial_{\mu}\phi\partial^{\mu}\phi)\left(C+2G+J\right)+\Lambda\square\phi E\] (115) \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\partial^{\mu}\partial^{\nu}\phi)}{\Lambda^{2}}\frac{A}{2}+\frac{\partial_{\mu}\partial_{\nu}\phi\partial^{\mu}\phi\partial^{\nu}\phi}{\Lambda^{3}}\left(B+F_{1}\right)+\frac{(\square\phi)(\partial_{\mu}\phi\partial^{\mu}\phi)}{\Lambda^{3}}F_{2}\] \[+\frac{(\partial_{\mu}\phi\partial^{\mu}\phi)^{2}}{\Lambda^{4}}\frac{D+2H+K}{2}\,,\]
where \(V,A\ldots K\) are understood to be functions of \(\phi/\Lambda\).
The terms proportional to \(E\) and \(F_{2}\) contain boxes. Both arise from pulling back the \(g_{\mu u}^{\nu}\) metric element, which, at the 1-jet level, is the only one that allows the contraction of two derivatives acting on the same field. As is evident in Eq. (114), box contractions cannot be implemented in any of the other terms. Following the reasoning above and the argument proved in §B, all operators with boxes obtained from the 1-jet bundle are redundant with the box-less ones, and can be removed using IBPs. Let us do this explicitly: if \(E\) is a constant, the corresponding term is a total derivative and therefore can be removed. If \(E\) is at least linear in \(\phi\), then it can be recast as:
\[\square\phi E=-\partial_{\mu}\phi\partial^{\mu}E=-\frac{\partial_{\mu}\phi \partial^{\mu}\phi}{\Lambda}\frac{dE}{d(\phi/\Lambda)}\,. \tag{116}\]
The term proportional to \(F_{2}\) can always be removed via IBP, even if \(F_{2}\) is a constant:
\[(\square\phi)\partial_{\mu}\phi\partial^{\mu}\phi F_{2}=-2\partial_{\mu} \partial_{\nu}\phi\partial^{\mu}\phi\partial^{\nu}\phi F_{2}-\frac{\partial_{ \mu}\phi\partial^{\mu}\phi\,\partial_{\nu}\phi\partial^{\nu}\phi}{\Lambda} \frac{dF_{2}}{d(\phi/\Lambda)} \tag{117}\]
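Identities of this kind are convenient to spot-check symbolically. The sketch below (Python with sympy, setting \(\Lambda=1\) and specialising the structure function to a power law \(F_{2}(\phi)=\phi^{k}\) with symbolic exponent \(k\), which is enough to exercise the chain rule) confirms that the two sides differ by the total derivative \(\partial_{\mu}[\partial^{\mu}\phi\,(\partial\phi)^{2}F_{2}]\):

```python
import sympy as sp

x = sp.symbols('x0:4')
k = sp.Symbol('k')
eta = [1, -1, -1, -1]                       # mostly-plus signs of the diagonal inverse metric
phi = sp.Function('phi')(*x)

d = lambda f, m: sp.diff(f, x[m])
F2 = phi**k                                 # structure function specialised to a power law
dF2 = sp.diff(F2, phi)                      # dF2/dphi = k*phi**(k-1)

dphi2 = sum(eta[m]*d(phi, m)**2 for m in range(4))            # d_mu phi d^mu phi
box = sum(eta[m]*d(d(phi, m), m) for m in range(4))           # box phi

lhs = (box*dphi2*F2
       + 2*sum(eta[m]*eta[n]*d(d(phi, m), n)*d(phi, m)*d(phi, n)
               for m in range(4) for n in range(4))*F2
       + dphi2**2*dF2)
total_der = sum(eta[m]*d(d(phi, m)*dphi2*F2, m) for m in range(4))

print(sp.simplify(sp.expand(lhs - total_der)))   # -> 0, i.e. equal up to a total derivative
```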
No further manipulation is required in order to obtain a non-redundant Lagrangian that manifestly matches Eq. (110):
\[\mathcal{L}[\phi,g]= \,\text{IBP}\circ\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^ {*}g\right\rangle \tag{118}\] \[= \,-\Lambda^{4}V+\frac{1}{2}(\partial_{\mu}\phi\partial^{\mu}\phi )\left(C+2G+J-2E^{\prime}\right)\] \[+ \frac{(\partial_{\mu}\partial_{\nu}\phi\partial^{\mu}\partial^{ \nu}\phi)}{\Lambda^{2}}\frac{A}{2}+\frac{\partial_{\mu}\partial_{\nu}\phi \partial^{\mu}\phi\partial^{\nu}\phi}{\Lambda^{3}}\left(B+F_{1}-2F_{2}\right)\]
\[+\frac{(\partial_{\mu}\phi\partial^{\mu}\phi)^{2}}{\Lambda^{4}} \frac{D+2H+K-2F_{2}^{\prime}}{2}\,,\]
where now \(E^{\prime}=dE/d(\phi/\Lambda)\) and analogously for \(F_{2}^{\prime}\). A canonical kinetic term requires
\[C(0)+2G(0)+J(0)-2E^{\prime}(0)=1\,. \tag{101}\]
The results obtained for a real scalar in Minkowski spacetime share several features with the case of quantum mechanics on the line: the map from metric to Lagrangian is surjective, and a degeneracy is observed between the same groups of functions, namely \((B,F_{i})\), \((C,G,J,E)\), \((D,H,K)\). The presence of \(E^{\prime}\) alone in this example, rather than both \(E\) and \(E^{\prime}\) as in the 1d case, is simply due to the different internal symmetry assumed.
Finally, we comment on the conditions for \(g^{(1)}\) with elements (100)-(101) to be invertible, which is required to have a valid metric. The necessary condition to have \(\det[g^{(1)}]\neq 0\) is that at least one of the following 9 polynomials is non-vanishing:28
Footnote 28: Note that, in general, this requirement does not preclude the possibility of coordinate singularities, i.e. conditions in which the metric is singular only at specific values of the coordinates \((u,u_{\mu})\), which should be examined case by case.
\[C(2E^{2}+AV)\,, \tag{102}\] \[D(8F_{2}^{2}-AK)\,,\] (103) \[B(4F_{1}G-BJ)-2CF_{1}^{2}\,,\] (104) \[B(4F_{1}H-BK)-2DF_{1}^{2}\,,\] (105) \[4D(2E^{2}+AV)+C(16EF_{2}-AJ)\,,\] (106) \[C(8F_{2}^{2}-AK)+D(16EF_{2}-AJ)\,,\] (107) \[4E(BG-CF_{1})+A(CJ-2G^{2})+B^{2}V\,,\] (108) \[16E(BH-DF_{1})+16F_{2}(BG-CF_{1})+4A(DJ+CK-4GH)-B^{2}J\,,\] (109) \[16F_{2}(BH-DF_{1})+4A(DK-2H^{2})-B^{2}K\,. \tag{110}\]
In the generic case where 4-derivative operators with non-zero Wilson coefficients are present in the Lagrangian, the conditions (102)-(109) leave large freedom to choose a possible metric that pulls back to Eq. (104). A minimal example is:
\[g^{\mu\nu}_{uu} =2\eta^{\mu\nu}\mathcal{F}_{1}(u)\,, g^{\mu}_{uu} =\frac{u^{\mu}}{\Lambda^{2}}\mathcal{F}_{2}(u)\,, g_{uu} =\mathcal{F}_{0}(u)+2\frac{u_{\mu}u^{\mu}}{\Lambda^{4}}\mathcal{F }_{3}(u)\,, \tag{111}\] \[g^{\nu}_{\mu u} =0\,, g_{\mu u} =0\,, g_{\mu\nu} =-\frac{1}{2}\eta_{\mu\nu}\frac{\mathcal{V}(u)}{\Lambda^{4}}\,.\]
If we restrict the Lagrangian to the 2- and 0-derivative terms, setting to zero all functions except \(V,C,E\), then the conditions for invertibility collapse into \(2CE^{2}\neq 0\), which implies that both \(C\) and \(E\) must be non-vanishing. This indicates that, even though it produces a physically redundant operator, the \(E\) term is necessary for the jet-bundle metric to be non-singular for any value of the 4-derivative Wilson coefficients, consistently with what was observed in §§5.5.4 and 6.1. Restricting further to 0-derivatives only is incompatible with both physics (as the kinetic term is removed) and jet-bundle geometry, as the metric would be inevitably singular.
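A direct way to see the role of \(E\) is to build the restricted \(9\times 9\) metric explicitly and compute its determinant. The sketch below (Python with sympy, with \(\Lambda=1\)) finds it proportional to \(C\,E^{8}\), so the metric is invertible only if both \(C\) and \(E\) are non-vanishing, consistent with the condition quoted above:

```python
import sympy as sp

V, C, E = sp.symbols('V C E')
eta = sp.diag(1, -1, -1, -1)
I4, Z4 = sp.eye(4), sp.zeros(4, 4)

# 9x9 metric on J^1E in coordinates (x^0..x^3, u, u_0..u_3),
# keeping only the functions V, C, E (all others set to zero), Lambda = 1.
g = sp.BlockMatrix([
    [-sp.Rational(1, 2)*V*eta, sp.zeros(4, 1),   E*I4],
    [sp.zeros(1, 4),           sp.Matrix([[C]]), sp.zeros(1, 4)],
    [E*I4,                     sp.zeros(4, 1),   Z4],
]).as_explicit()

print(sp.factor(g.det()))   # -> proportional to C*E**8: need C != 0 and E != 0
```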
#### Equation-of-Motion redundancies
The operator basis considered up to this point is non-redundant under IBP, i.e. it is a Green's basis for operators with up to 4 derivatives and arbitrary number of fields. It is interesting to note that the 1-jet bundle geometry automatically accounts for all IBP among operators with up to 4 derivatives, except the residual \(E\) and \(F_{2}\) function redundancies.29
Footnote 29: The fact that no redundant box-less operators are produced by the jet bundle metric is due to the fact that, in this particular case, there are exactly as many operators with boxes as independent IBP. Therefore the set of _all_ box-less operators with up to 4 derivatives forms a complete and non-redundant Green’s basis.
When constructing EFTs, we are typically interested in removing Equation-of-Motion (EOM) redundancies as well. EOM relations express invariance properties of the \(S\)-matrix. Because this information is not contained in the geometric formulation, they need to be imposed as additional conditions on top of the Lagrangian in Eq. (111).
Formally, the removal of EOM redundancies in an EFT is performed by applying _derivative_ field redefinitions on the Lagrangian. Therefore, based on the discussion in §5, two jet bundle metrics pulling back to EOM-equivalent Lagrangians are in general not related by a morphism. In practice, we do not expect to find an operation on the jet bundle metric that is mathematically equivalent to basis reduction via EOM. Nevertheless, it is instructive to consider the EOM-reduced Lagrangian for the toy example under study, and ask what classes of jet bundle metrics would pull back to it. We will come back to the inadequacy of jet bundle morphisms (even prolonged ones) at the end of this subsection.
To this end, we take a step back and review the construction of the operator basis. This step is necessary because, since the basis in (111) is box-less, the EOM redundancies are not manifest. At the same time, we hope to give a useful illustration of the application of derivative field redefinitions in an EFT.
At the 2-derivative level and with \(N\) fields, there are two allowed operator structures:
\[O_{1}^{(2)} =(\Box\phi)\phi^{N-1}\,, O_{2}^{(2)} =(\partial_{\mu}\phi\partial^{\mu}\phi)\phi^{N-2}\,, \tag{113}\]
which are related by the IBP
\[O_{1}^{(2)}+(N-1)O_{2}^{(2)} =0\,. \tag{114}\]
To obtain (111), we have used the IBP to remove \(O_{1}^{(2)}\) for each value of \(N\geq 2\). This is always the most convenient choice for \(N=2\), since in this case \(O_{2}^{(2)}\) is the kinetic term. However, for \(N\geq 3\), we can choose to remove \(O_{2}^{(2)}\) via IBP, and then use the EOM to trade \(O_{1}^{(2)}\) for 0-derivative interactions, effectively obtaining a basis without any 2-derivative operator, besides the kinetic term. In practice, the EOM removal implies that we start with a Lagrangian
\[\mathcal{L} =\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\mathcal{V}( \phi)+\sum_{N=3}^{\infty}\frac{c_{1,N}^{(2)}}{\Lambda^{N-2}}(\Box\phi)\phi^{ N-1}+\ldots\,, \tag{115}\]
where the scalar potential is as in Eq (110) and the dots stand for other operators that are present in general. Then we apply the derivative redefinition
\[\phi\mapsto\phi+\sum_{k=0}^{\infty}\frac{\epsilon_{k}}{\Lambda^{k+2}}(\Box\phi)\phi^{k}\,, \tag{116}\]
and truncate again at 2-derivatives:
\[\mathcal{L} \to\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\mathcal{V}( \phi)-\frac{d\mathcal{V}(\phi)}{d\phi}\sum_{k}\,\frac{\epsilon_{k}}{\Lambda^{k +2}}(\Box\phi)\phi^{k}+\sum_{N=3}^{\infty}\frac{c_{1,N}^{(2)}}{\Lambda^{N-2}}( \Box\phi)\phi^{N-1}+\ldots \tag{114}\] \[=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\mathcal{V}( \phi)+(\Box\phi)\left[-\sum_{n,k}\frac{\epsilon_{k}nV_{n}}{\Lambda^{n+k-2}} \phi^{n+k-1}+\sum_{N=3}^{\infty}\frac{c_{1,N}^{(2)}}{\Lambda^{N-2}}\phi^{N-1} \right]+\ldots\] (115) \[=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\mathcal{V}( \phi)+\sum_{N=3}^{\infty}\left[c_{1,N}^{(2)}-\sum_{n=1}^{N}\epsilon_{N-n}nV_{ n}\right]\frac{\phi^{N-1}}{\Lambda^{N-2}}(\Box\phi)+\ldots \tag{116}\]
Choosing the constants \(\epsilon_{k}\) such that \(\sum_{n=1}^{N}\epsilon_{N-n}nV_{n}=c_{1,N}^{(2)}\) removes the infinite tower of \(O_{1}^{(2)}\) operators with \(N\geq 3\).30
Footnote 30: This is an infinite system of equations, each labeled by \(N\) and depending on \(N\) free parameters \(\epsilon_{0}\ldots\epsilon_{N-1}\). It always admits a solution that can be found recursively.
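The recursive solution mentioned in the footnote is easy to make explicit. The sketch below (Python with sympy) assumes, for illustration, no tadpole (\(V_{1}=0\)), a non-vanishing mass term \(V_{2}\neq 0\), and fixes the leftover freedom by setting \(\epsilon_{0}=0\); it then solves the conditions order by order and verifies them:

```python
import sympy as sp

# Sketch: recursively solve  sum_{n=1}^{N} eps_{N-n} n V_n = c_N  for the constants eps_k,
# assuming V_1 = 0, V_2 != 0 and eps_0 = 0.  Here c_N stands for the coefficient c^(2)_{1,N}.
Nmax = 7
V = {n: sp.Symbol(f'V{n}') for n in range(2, Nmax + 1)}
c = {N: sp.Symbol(f'c{N}') for N in range(3, Nmax + 1)}

eps = {0: sp.Integer(0)}
for N in range(3, Nmax + 1):
    known = sum(eps[N - n]*n*V[n] for n in range(3, N + 1))
    eps[N - 2] = sp.simplify((c[N] - known) / (2*V[2]))     # solve the n = 2 term

# verify every condition (the n = 1 term is absent since V_1 = 0)
for N in range(3, Nmax + 1):
    lhs = sum(eps[N - n]*n*V[n] for n in range(2, N + 1))
    assert sp.simplify(lhs - c[N]) == 0

print(eps[1], eps[2])   # eps_1 = c3/(2 V2), eps_2 = (c4 - 3 eps_1 V3)/(2 V2), ...
```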
At the level of 4 derivatives and with \(N\) fields, we can write the 7 structures
\[O_{1}^{(4)} =(\Box\Box\phi)\phi^{N-1}\,, O_{5}^{(4)} =(\Box\phi)(\partial_{\mu}\phi\partial^{\mu}\phi)\phi^{N-3}\,, \tag{117}\] \[O_{2}^{(4)} =(\partial_{\mu}\Box\phi)(\partial^{\mu}\phi)\phi^{N-2}\,, O_{6}^{(4)} =(\partial_{\mu}\partial_{\nu}\phi\,\partial^{\mu}\phi\partial^{ \nu}\phi)\phi^{N-3}\,,\] (118) \[O_{3}^{(4)} =(\Box\phi)(\Box\phi)\phi^{N-2}\,, O_{7}^{(4)} =(\partial_{\mu}\phi\partial^{\mu}\phi)^{2}\phi^{N-4}\,,\] (119) \[O_{4}^{(4)} =(\partial_{\mu}\partial_{\nu}\phi)(\partial^{\mu}\partial^{\nu }\phi)\phi^{N-2}\,, \tag{120}\]
which are related by the 4 IBPs
\[O_{1}^{(4)}+(N-1)O_{2}^{(4)} =0\,, \tag{121}\] \[O_{2}^{(4)}+O_{3}^{(4)}+(N-2)O_{5}^{(4)} =0\,,\] (122) \[O_{2}^{(4)}+O_{4}^{(4)}+(N-2)O_{6}^{(4)} =0\,,\] (123) \[2O_{6}^{(4)}+O_{5}^{(4)}+(N-3)O_{7}^{(4)} =0\,. \tag{124}\]
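As a cross-check of the counting in footnote 26, one can encode these four relations as rows of a matrix acting on the vector of structures \((O_{1}^{(4)},\ldots,O_{7}^{(4)})\) and compute its rank. The sketch below (Python with sympy) confirms that for \(N\geq 4\) there are \(7-4=3\) independent 4-derivative structures, matching \(\mathcal{F}_{1},\mathcal{F}_{2},\mathcal{F}_{3}\):

```python
import sympy as sp

# Sketch: each row encodes one IBP relation as coefficients in the basis (O_1, ..., O_7);
# the number of independent 4-derivative operators is 7 - rank.
def ibp_matrix(N):
    return sp.Matrix([
        [1, N - 1, 0, 0, 0,     0,     0    ],
        [0, 1,     1, 0, N - 2, 0,     0    ],
        [0, 1,     0, 1, 0,     N - 2, 0    ],
        [0, 0,     0, 0, 1,     2,     N - 3],
    ])

for N in (4, 5, 6):
    print(N, 7 - ibp_matrix(N).rank())   # -> 3 independent structures for each N >= 4
```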
To obtain Eq. (114), we used the four IBP to remove the four operators with boxes. On the other hand, the more widely used procedure to construct minimal operator bases is to first use IBP to remove as many operators _without_ boxes as possible, and then employ EOMs to reduce box operators to lower derivative structures, by using field redefinitions similar to Eq. (111). With \(N=2,3\) fields, this procedure reveals that all 4-derivative operators are redundant with lower-derivative ones. However, for \(N\geq 4\), the operator \(O_{7}^{(4)}\) can be removed neither with IBP nor with EOMs.
In summary, an IBP+EOM-reduced Lagrangian with up to 4 derivatives is
\[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-\mathcal{V}(\phi) +\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\partial^{\mu}\phi)^{2}\mathcal{F}_ {3}(\phi)\,, \tag{125}\]
which is simply a subset of the Green's basis in Eq. (114), as expected. This indicates that the EOM-reduced Lagrangian can be obtained by pulling back a correspondingly reduced metric \(g\) on the 1-jet bundle. Specifically, Eq. (125) can be obtained by requiring
\[C(\phi)+2G(\phi)+J(\phi)-2E^{\prime}(\phi)=1\,, \tag{126}\]
\[V(\phi) ={\cal V}(\phi)/\Lambda^{4} \tag{108}\] \[A(\phi) =0\,,\] (109) \[B(\phi)+F_{1}(\phi)-2F_{2}(\phi) =0\,,\] (110) \[\frac{1}{2}\left(D(\phi)+K(\phi)\right)+H(\phi)-F_{2}^{\prime}(\phi) ={\cal F}_{3}(\phi)\,. \tag{111}\]
This still leaves much freedom to identify a suitable metric that also respects the conditions (108)-(110). A minimal example is
\[g_{uu}^{\mu\nu} =g_{uu}^{\mu}=g_{\mu u}=0\,, g_{uu} =2\frac{u_{\rho}u^{\rho}}{\Lambda^{4}}{\cal F}_{3}(u)\,, \tag{112}\] \[g_{\mu\nu} =-\frac{\eta_{\mu\nu}}{2}\frac{{\cal V}(u)}{\Lambda^{4}}\,, g_{\mu u}^{\nu} =-\frac{\delta_{\mu}^{\nu}}{2}\frac{u}{\Lambda}\,. \tag{113}\]
**Can prolonged bundle morphisms capture 'EOM field redefinitions'?**
Derivative field redefinitions of the form
\[\phi\mapsto\psi\quad\text{such that}\quad u\circ\psi=\psi\left(u\circ\phi, \partial_{\mu}\partial^{\mu}(u\circ\phi)\right)\,, \tag{114}\]
namely in which the new section \(\psi\) is a function of \(\phi\) and \(\Box\phi\), have played an important role in our discussion of this 4d real scalar example. Before our introduction of jet bundles, in §3.3 we discussed to what extent a morphism on the field space bundle \((E,\Sigma,\pi)\), which recall can capture any _non_-derivative field redefinition, can capture the effects of derivative field redefinitions such as this (which we can understand generically to be changes of section). There are indeed fundamental reasons why such morphisms cannot induce derivative maps on sections (see lemma 3).
Nonetheless, for the particular example of a 4d real scalar with canonical kinetic term and a \(\phi^{4}\) potential, we saw that a bundle morphism of the form \((x^{\mu},u)\mapsto(x^{\mu},u+\epsilon\lambda(u))\) can replicate the shift of the Lagrangian under a derivative change of section, \(\phi\mapsto\psi=\phi-\epsilon\Box\phi\), at least when the Lagrangian is truncated to remain at 2-derivative order, and keeping only operators with up to 4-fields. One might ask whether similar 'accidents' can happen when we pass to 4-derivative Lagrangians using our 1-jet bundle formalism. In other words, can a morphism on the 1-jet bundle replicate, at the Lagrangian level, the effects of derivative field redefinitions that depend on \(\Box\phi\)? We here suggest that the answer to this question is no. (More precisely, we expect that such an equivalence will only occur for very special examples, truncated in a particular way.)
To see this, let us consider the effects of a prolongated bundle morphism, as defined in §5.5, on the 1-jet bundle metrics constructed for the 4d real scalar in this Subsection. We specialise to the case of a base-point preserving 'perturbative morphism' on the bundle \(E\) that is consistent with Poincare symmetry, considered in §3.3.4, of the form
\[f_{E}:(x^{\mu},u)\mapsto(x^{\mu},f(u))=(x^{\mu},u+\epsilon\lambda(u))\,. \tag{115}\]
Using Eq. (107), the prolongation of this bundle morphism to the 1-jet bundle is
\[j^{1}f_{E}:(x^{\mu},u,u_{\mu})\mapsto\left(x^{\mu},f(u),u_{\mu}\frac{\partial f }{\partial u}\right)=\left(x^{\mu},u+\epsilon\lambda,u_{\mu}+\epsilon u_{\mu} \lambda^{\prime}\right)\,, \tag{116}\]
where \(\lambda^{\prime}:=d\lambda/du\)_etc_. The coordinate basis 1-forms transform as
\[j^{1}f_{E}:(dx^{\mu},du,du_{\mu})\mapsto\left(dx^{\mu},du(1+\epsilon\lambda^{ \prime}),du_{\mu}(1+\epsilon\lambda^{\prime})+\epsilon u_{\mu}\lambda^{\prime \prime}du\right)\,, \tag{111}\]
from which we can compute the transformation law for the 1-jet bundle metric.
Let us start with the 1-jet metric written in the form of Eq. (108)
\[\begin{split} g^{(1)}=&-\frac{\eta_{\mu\nu}}{2}\frac{\mathcal{V}(u)}{\Lambda^{4}}dx^{\mu}dx^{\nu}+\left(\mathcal{F}_{0}(u)+\frac{2u_{\mu}u^{\mu}}{\Lambda^{4}}\mathcal{F}_{3}(u)\right)dudu\\ &+\mathcal{F}_{2}(u)\frac{u^{\mu}}{\Lambda^{3}}dudu_{\mu}+\frac{2\eta^{\mu\nu}}{\Lambda^{2}}\mathcal{F}_{1}(u)du_{\mu}du_{\nu}\,,\end{split} \tag{112}\]
which pulls back to give the Lagrangian in Eq. (105). Doing our prolongated jet bundle morphism and keeping only terms that are linear in the perturbation \(\epsilon\), we compute the transformed metric to be
\[\begin{split} j^{1}f_{E}^{*}g^{(1)}&=-\frac{1}{2} \eta^{\mu\nu}\mathcal{V}(u+\epsilon\lambda)dx^{\mu}dx^{\nu}+\mathcal{F}_{0}( u+\epsilon\lambda)(1+2\epsilon\lambda^{\prime})dudu\\ &+\frac{u_{\mu}u^{\mu}}{\Lambda^{4}}\left[2\mathcal{F}_{3}(u+ \epsilon\lambda)(1+4\epsilon\lambda^{\prime})+\epsilon\mathcal{F}_{2}(u+ \epsilon\lambda)\lambda^{\prime\prime}\right]dudu\\ &+\frac{u^{\mu}}{\Lambda^{2}}\left[\mathcal{F}_{2}(u+\epsilon \lambda)(1+3\epsilon\lambda^{\prime})+4\epsilon\mathcal{F}_{1}(u+\epsilon \lambda)\lambda^{\prime\prime}\right]dudu_{\mu}\\ &+2\eta^{\mu\nu}\mathcal{F}_{1}(u+\epsilon\lambda)(1+2\epsilon \lambda^{\prime})du_{\mu}du_{\nu}+O(\epsilon^{2})\,.\end{split} \tag{113}\]
Expanding also each 'structure function' \(\mathcal{V}\), \(\mathcal{F}_{0,1,2,3}\) about \(u\), we can capture the effects of this transformation in terms of the following shifts at leading order in \(\epsilon\) (the terms neglected are \(O(\epsilon^{2})\)):
\[\mathcal{V} \mapsto\mathcal{V}+\epsilon\lambda\mathcal{V}^{\prime}\,, \tag{114}\] \[\mathcal{F}_{0} \mapsto\mathcal{F}_{0}+\epsilon(2\lambda^{\prime}\mathcal{F}_{0}+ \lambda\mathcal{F}_{0}^{\prime})\,,\] (115) \[\mathcal{F}_{1} \mapsto\mathcal{F}_{1}+\epsilon(2\lambda^{\prime}\mathcal{F}_{1} +\lambda\mathcal{F}_{1}^{\prime})\,,\] (116) \[\mathcal{F}_{2} \mapsto\mathcal{F}_{2}+\epsilon(3\lambda^{\prime}\mathcal{F}_{2} +4\lambda^{\prime\prime}\mathcal{F}_{1}+\lambda\mathcal{F}_{2}^{\prime})\,,\] (117) \[\mathcal{F}_{3} \mapsto\mathcal{F}_{3}+\epsilon(4\lambda^{\prime}\mathcal{F}_{3} +\frac{1}{2}\lambda^{\prime\prime}\mathcal{F}_{2}+\lambda\mathcal{F}_{3}^{ \prime})\,. \tag{118}\]
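These shifts can be verified symbolically by applying the prolonged morphism to the fibre part of the metric and truncating at \(O(\epsilon)\). The sketch below (Python with sympy, with \(\Lambda=1\), and treating the basis 1-forms \(du,du_{\mu}\) as commuting symbols, which suffices for reading off the symmetric components) checks Eqs. (114)-(118); the \(dx\,dx\) component only picks up \(\mathcal{V}(u+\epsilon\lambda)\) and is omitted:

```python
import sympy as sp

eps, u = sp.symbols('epsilon u')
eta = [1, -1, -1, -1]
um = sp.symbols('u0:4')                    # fibre coordinates u_mu
du = sp.Symbol('du')
dum = sp.symbols('du0:4')

F0, F1, F2, F3, lam = (sp.Function(n) for n in ('F0', 'F1', 'F2', 'F3', 'lam'))
lamp, lampp = lam(u).diff(u), lam(u).diff(u, 2)

uu = sum(eta[m]*um[m]**2 for m in range(4))            # u_mu u^mu
uraised = [eta[m]*um[m] for m in range(4)]             # u^mu

def quad(f0, f1, f2, f3):
    """Fibre part of the 1-jet metric as a quadratic form in (du, du_mu)."""
    return ((f0 + 2*uu*f3)*du**2
            + f2*sum(uraised[m]*du*dum[m] for m in range(4))
            + 2*f1*sum(eta[m]*dum[m]**2 for m in range(4)))

Q = quad(F0(u), F1(u), F2(u), F3(u))

# Prolonged morphism: u -> u + eps lam, u_mu -> u_mu (1 + eps lam'),
# du -> du (1 + eps lam'), du_mu -> du_mu (1 + eps lam') + eps u_mu lam'' du
rules = {u: u + eps*lam(u), du: du*(1 + eps*lamp)}
for m in range(4):
    rules[um[m]] = um[m]*(1 + eps*lamp)
    rules[dum[m]] = dum[m]*(1 + eps*lamp) + eps*um[m]*lampp*du
Qt_full = Q.subs(rules, simultaneous=True)
Qt = (Qt_full.subs(eps, 0) + eps*sp.diff(Qt_full, eps).subs(eps, 0)).doit()  # O(eps) truncation

# Expected: the same quadratic form with the shifted structure functions
shifted = lambda F, k, extra=0: F(u) + eps*(k*lamp*F(u) + lam(u)*F(u).diff(u) + extra)
Q_expected = quad(shifted(F0, 2), shifted(F1, 2),
                  shifted(F2, 3, 4*lampp*F1(u)),
                  shifted(F3, 4, sp.Rational(1, 2)*lampp*F2(u)))

print(sp.simplify(sp.expand(Qt - Q_expected)))   # -> 0
```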
If we then pullback this metric to \(\Sigma\) along the original (prolongated) section \(j^{1}\phi\), we obtain the transformed Lagrangian
\[\mathcal{L}[\phi,(j^{1}f_{E})^{*}g^{(1)}] =\frac{1}{2}\left(\mathcal{F}_{0}+\epsilon(2\lambda^{\prime} \mathcal{F}_{0}+\lambda\mathcal{F}_{0}^{\prime})\right)(\partial\phi)^{2}-( \mathcal{V}+\epsilon\lambda\mathcal{V}^{\prime}) \tag{119}\] \[\quad+\frac{1}{\Lambda^{2}}\left(\mathcal{F}_{1}+\epsilon(2 \lambda^{\prime}\mathcal{F}_{1}+\lambda\mathcal{F}_{1}^{\prime})\right) \partial_{\mu}\partial_{\nu}\phi\partial^{\mu}\partial^{\nu}\phi\] \[\quad+\frac{1}{\Lambda^{3}}\left(\mathcal{F}_{2}+\epsilon(3 \lambda^{\prime}\mathcal{F}_{2}+4\lambda^{\prime\prime}\mathcal{F}_{1}+ \lambda\mathcal{F}_{2}^{\prime})\right)\partial_{\mu}\partial_{\nu}\phi \partial^{\mu}\phi\partial^{\nu}\phi\] \[\quad+\frac{1}{\Lambda^{4}}\left(\mathcal{F}_{3}+\epsilon(4 \lambda^{\prime}\mathcal{F}_{3}+\frac{1}{2}\lambda^{\prime\prime}\mathcal{F}_{ 2}+\lambda\mathcal{F}_{3}^{\prime})\right)(\partial_{\mu}\phi\partial^{\mu} \phi)^{2}\,\,.\]
Thus, upon doing the prolongated bundle morphism to transform the metric, we see that the transformation of the Lagrangian has a particularly constrained structure; operators
with a certain number \(D\) of derivatives (here 0, 2, or 4 due to our symmetry assumptions) mix only into operators with the same value of \(D\).
In contrast, if we were to perform a derivative change of section, we would see operators of lower-derivative-order mix into operators of higher-derivative-order, as in the \(\phi^{4}\) example considered in §3.3.4: for example, it is straightforward to compute that \(\mathcal{V}\) 'mixes into' the 2-derivative structure function \(\mathcal{F}_{0}\) and all three 4-derivative structure functions \(\mathcal{F}_{1,2,3}\); the 2-derivative term \(\mathcal{F}_{0}\) likewise mixes into all of \(\mathcal{F}_{1,2,3}\). For the 2-derivative toy example considered in §3.3.4 we exploited the fact that there was only a single such operator mixing, namely of the potential into the 2-derivative term, that could be cancelled by a bundle morphism that shifts the kinetic term (explicitly, this fixed \(\lambda\) in terms of \(\partial_{u}V\)) _provided_ the potential were high enough order that we could neglect its own variation. This was a very special situation which enabled the derivative change of section to be matched by a bundle morphism. In this more general 4-derivative example, we must still fix \(\lambda\) by matching the shift of the 2-derivative term (even if that is zero, for example in the case of vanishing potential), but then we cannot hope to simultaneously match the shift in any 4-derivative term that is present. So one cannot even contrive a theory with 2- and 4-derivative terms for which the \(\phi\to\psi(\phi,\Box\phi)\) change of section can be described via an (appropriately prolongated) morphism of the jet bundle. Nonetheless, we reiterate that the derivative field redefinition can be implemented as a prolonged change of section, as discussed in §5.4.
### Two real scalars in 4d, without internal symmetries
We generalize the previous example by introducing two scalar fields \(\phi^{1},\phi^{2}\), without imposing any internal symmetries relating them. The main goals in considering this example are twofold: (i) to cover a case in which non-trivial field indices appear in the metric, and (ii) to check that the 1-jet bundle gives a complete Green's basis for a theory with multiple fields but without internal symmetries.
The formalism is identical to the previous case, except the 1-jet bundle is now a real 14-manifold with local fibred coordinates
\[y^{I}=\{x^{\mu},u^{i},u^{i}_{\mu}\},\quad i=1,2\,, \tag{111}\]
that, along the section \(j^{1}\phi\), evaluate to
\[x^{\mu}\circ j^{1}\phi =x^{\mu} \tag{112}\] \[u^{i}\circ j^{1}\phi =\phi^{i}(x^{\mu})\] (113) \[u^{i}_{\mu}\circ j^{1}\phi =\partial_{\mu}\phi^{i}(x^{\mu}) \tag{114}\]
As in the previous cases, we start by writing the EFT Lagrangian containing a complete and IBP-non-redundant basis with up to 4 derivatives, requiring that no more than 2 derivatives act on each field and that there are no boxes. This identifies the Green's basis:
\[\mathcal{L} =\frac{1}{2}(\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j}) \mathcal{F}^{0}_{ij}-\mathcal{V}(\phi_{1},\phi_{2}) \tag{115}\] \[+\frac{1}{\Lambda^{2}}(\partial_{\mu}\partial_{\nu}\phi^{i} \partial^{\mu}\partial^{\nu}\phi^{j})\mathcal{F}^{1}_{ij}+\frac{1}{\Lambda^{3 }}(\partial_{\mu}\partial_{\nu}\phi^{i}\partial^{\mu}\phi^{j}\partial^{\nu} \phi^{k})\mathcal{F}^{2}_{ijk}+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi^{i} \partial^{\mu}\phi^{j})(\partial_{\nu}\phi^{k}\partial^{\nu}\phi^{l}) \mathcal{F}^{3}_{ijkl}\,.\]
where, in general, the scalar potential \({\cal V}\) and the \({\cal F}\) functions have the form
\[{\cal F}=f_{0}+f_{1}\frac{\phi^{1}}{\Lambda}+f_{2}\frac{\phi^{2}}{\Lambda}+f_{11}\frac{(\phi^{1})^{2}}{\Lambda^{2}}+\cdots=\sum_{a,b=0}^{\infty}f_{\underbrace{1\ldots 1}_{a}\underbrace{2\ldots 2}_{b}}\frac{(\phi^{1})^{a}(\phi^{2})^{b}}{\Lambda^{a+b}} \tag{111}\]
Each such function contains \((n+1)\) independent index assignments for every number of fields \(n=a+b\). Therefore, accounting for the symmetry in \((ij)\), with \(N\) field insertions in the Lagrangian, the 2-derivative operator has \(3(n+1)=3(N-1)\) independent index assignments. For the 4-derivative operators there are a total of \(3(5N-11)\) independent assignments for \(N\geq 4\). This number is reduced to 12 for \(N=3\) and to 3 for \(N=2\), due to the fact that some operator structures are unavailable in these cases. These numbers, as well as the EOM-reduced counting below, have been cross-checked with BasisGen. Canonical kinetic terms require \(f_{11}^{0}=f_{22}^{0}=1,f_{12}^{0}=0\).
We now turn to the jet bundle metric from which we would like to recover (110). The generic metric components, truncated as in the previous examples, are
\[g_{ij}^{\mu\nu} =\eta^{\mu\nu}A_{ij}(u)\,, \tag{112}\] \[g_{ij}^{\mu} =\frac{u^{k\mu}}{\Lambda^{2}}B_{ijk}(u)\,,\] (113) \[g_{ij} =C_{ij}(u)+\frac{u_{\rho}^{k}u^{l\rho}}{\Lambda^{4}}D_{ijkl}(u)\,,\] (114) \[g_{\mu j}^{\nu} =\delta_{\mu}^{\nu}E_{j}(u)+\frac{u^{k\nu}u^{l}_{\mu}}{\Lambda^{4 }}F_{1,jkl}(u)+\delta_{\mu}^{\nu}\frac{u_{\rho}^{k}u^{l\rho}}{\Lambda^{4}}F_{2,jkl}(u)\,,\] (115) \[g_{\mu j} =\frac{u_{\mu}^{k}}{\Lambda^{2}}G_{jk}(u)+\frac{u_{\mu}^{k}\,u_{ \rho}^{l}u^{m\rho}}{\Lambda^{6}}H_{jklm}(u)\,,\] (116) \[g_{\mu\nu} =-\frac{\eta_{\mu\nu}}{2}V(u)+\left(\frac{u_{\mu}^{k}u_{\nu}^{l} }{\Lambda^{4}}+\frac{\eta_{\mu\nu}}{4}\frac{u_{\rho}^{k}u^{l\rho}}{\Lambda^{4 }}\right)\frac{J_{kl}(u)}{2}+\left(\frac{u_{\mu}^{k}u_{\nu}^{l}}{\Lambda^{4}} +\frac{\eta_{\mu\nu}}{4}\frac{u_{\rho}^{k}u^{l\rho}}{\Lambda^{4}}\right)\frac {u_{\rho}^{m}u^{n\rho}}{\Lambda^{4}}\frac{K_{klmn}(u)}{2}\,, \tag{117}\]
where \(u^{i\mu}=\eta^{\mu\nu}u^{i}_{\nu}\) and all the functions \(V,A\ldots K\) are dimensionless functions of \((u^{i}/\Lambda)\). We also assume that the metric is fully symmetric: \(g_{ij}^{\mu\nu},g^{\mu\nu},g_{\mu j}^{\nu}\) are symmetric in \(\mu\nu\), \(g_{ij}^{\mu\nu},g_{ij}\) are symmetric in \(ij\), and \(g_{\nu i}^{(\mu)}=g_{\mu j}^{(\nu)},g_{ij}^{\mu}=g_{ij}^{\nu}\) upon relabeling the free indices. All repeated indices are understood to be summed over. Because we are not assuming internal symmetries, they simply represent labels on the functions.
Pulling back along the section \(j^{1}\phi\) and contracting with the inverse metric
\[{\cal L}[\phi,g]=\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g\right\rangle \qquad\qquad\text{where}\quad\eta^{-1}=\eta^{\rho\sigma}\partial_{\rho}\otimes \partial_{\sigma}\,, \tag{118}\]
we get
\[{\cal L}[\phi,g] =\frac{\Lambda^{4}}{2}\eta^{\mu\nu}g_{\mu\nu}+\Lambda^{2}g_{\mu j}\partial^{\mu}\phi^{j}+\Lambda g_{\mu j}^{\nu}\partial^{\mu}\partial_{\nu}\phi^{j}+\frac{1}{2}g_{ij}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j} \tag{119}\] \[\quad+\frac{1}{2}\frac{g_{ij}^{\mu}}{\Lambda}\partial_{\mu}\partial_{\nu}\phi^{i}\partial^{\nu}\phi^{j}+\frac{1}{2}\frac{g_{ij}^{\nu}}{\Lambda}\partial^{\mu}\phi^{i}\partial_{\mu}\partial_{\nu}\phi^{j}+\frac{1}{2}\frac{g_{ij}^{\mu\nu}}{\Lambda^{2}}\partial_{\rho}\partial_{\mu}\phi^{i}\partial^{\rho}\partial_{\nu}\phi^{j}\]
\[= -\Lambda^{4}V+\frac{1}{2}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j} \left(C_{ij}+2G_{ij}+J_{ij}\right)+\Lambda\Box\phi^{j}E_{j}\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi^{i}\partial^{\mu}\partial^ {\nu}\phi^{j})}{\Lambda^{2}}\frac{A_{ij}}{2}+\frac{\partial_{\mu}\partial_{\nu }\phi^{i}\partial^{\mu}\phi^{j}\partial^{\nu}\phi^{k}}{\Lambda^{3}}\frac{B_{ ijk}+B_{jik}+2F_{1,ijk}}{2}\] \[+\frac{(\Box\phi^{i})\partial_{\mu}\phi^{j}\partial^{\mu}\phi^{k} }{\Lambda^{3}}F_{2,ijk}(u)+\frac{(\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j} )(\partial_{\nu}\phi^{k}\partial^{\nu}\phi^{l})}{\Lambda^{4}}\frac{D_{ijkl}+2 H_{ijkl}+K_{ijkl}}{2}\,.\]
This result is again formally equivalent to those obtained in §6.1 and §6.2, the only difference being the presence of explicit field indices. The \(E_{j}\) function is required to have a non-singular metric at the 2-derivative level, but it is redundant via IBP:31
Footnote 31: As in the previous example, we find that for any number of fields \(N\), there are always as many independent IBPs as operators with boxes. This implies that the set of all box-less operators forms a complete and non-redundant Green’s basis.
\[\Box\phi^{j}E_{j}=-\frac{\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j}}{\Lambda }\frac{dE_{j}}{d(\phi^{i}/\Lambda)}\,. \tag{107}\]
The term proportional to the function \(F_{2,ijk}\) is also redundant and can be recast as
\[(\Box\phi^{i})\partial_{\mu}\phi^{j}\partial^{\mu}\phi^{k}F_{2,ijk}=-(\partial _{\mu}\partial_{\nu}\phi^{j})\partial^{\mu}\phi^{k}\partial^{\nu}\phi^{i}(F_ {2,ijk}+F_{2,ikj})-\frac{(\partial_{\mu}\phi^{j}\partial^{\mu}\phi^{k})( \partial_{\nu}\phi^{i}\partial^{\nu}\phi^{l})}{\Lambda}\frac{dF_{2,ijk}}{d( \phi^{l}/\Lambda)} \tag{108}\]
such that
\[\mathcal{L}[\phi,g]= \,\text{IBP}\ \circ\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g\right\rangle \tag{109}\] \[= -\Lambda^{4}V+\frac{1}{2}\partial_{\mu}\phi^{i}\partial^{\mu} \phi^{j}\left(C_{ij}+2G_{ij}+J_{ij}-2\frac{dE_{j}}{d(\phi^{i}/\Lambda)}\right)\] (110) \[+\frac{(\partial_{\mu}\partial_{\nu}\phi^{i}\partial^{\mu} \partial^{\nu}\phi^{j})}{\Lambda^{2}}\frac{A_{ij}}{2}+\frac{\partial_{\mu} \partial_{\nu}\phi^{i}\partial^{\mu}\phi^{j}\partial^{\nu}\phi^{k}}{\Lambda^{ 3}}\left(\frac{B_{ijk}+B_{jik}}{2}+F_{1,ijk}-F_{2,kij}-F_{2,kji}\right)\] \[+\frac{(\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j})(\partial_{ \nu}\phi^{k}\partial^{\nu}\phi^{l})}{\Lambda^{4}}\left(\frac{D_{ijkl}+K_{ijkl }}{2}+H_{ijkl}-\frac{dF_{2,kij}}{d(\phi^{l}/\Lambda)}\right)\,.\]
This Lagrangian manifestly captures the complete Green's basis in Eq. (106).
#### Equation-of-Motion redundancies.
We can repeat the same exercise illustrated in §6.2 in order to identify a minimal operator basis with up to 4 derivatives, accounting for both IBP and EOM reduction. The procedure is formally equivalent to the one illustrated above, and leads to an analogous result, namely:
\[\mathcal{L} =\frac{1}{2}\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{i}-\mathcal{V}(\phi) \tag{111}\] \[+\frac{1}{2}\partial_{\mu}\phi^{2}\partial^{\mu}\phi^{2}\frac{\phi^{1}\phi^{1}}{\Lambda^{2}}\mathcal{F}^{0}_{2211}(\phi)+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi^{i}\partial^{\mu}\phi^{j})(\partial_{\nu}\phi^{k}\partial^{\nu}\phi^{l})\mathcal{F}^{3}_{ijkl}(\phi)\,.\]
For \(N=2,3\) field insertions it is possible to remove all terms with up to 4 derivatives (except the two kinetic ones) and trade them for non-derivative interactions in the potential. For \(N\geq 4\), the structure \((\partial\phi)^{4}\) is always independent. The main difference with the previous
examples is that, in addition, \((N-3)\) index assignments of the 2-derivative operator cannot be removed. Here we chose arbitrarily those starting with (2211).
The Lagrangian in Eq. (111) is a subset of (106), and therefore it can be obtained by pulling back a 1-jet bundle metric. Comparing to Eq. (110), we see that such a metric should satisfy:
\[A_{ij} =0\,, \tag{112}\] \[B_{ijk}+B_{jik}+2F_{1,ijk}-2F_{2,kij}-2F_{2,kji} =0\,,\] (113) \[V =\frac{\mathcal{V}(\phi)}{\Lambda^{4}}\] (114) \[C_{ij}+2G_{ij}+J_{ij}-2\frac{dE_{j}}{d(\phi^{i}/\Lambda)} =\delta_{ij}+\delta_{i2}\delta_{j2}\frac{(\phi^{1})^{2}}{\Lambda^{2}}\mathcal{F}_{2211}^{0}(\phi)\,,\] (115) \[\frac{1}{2}\left(D_{ijkl}+K_{ijkl}\right)+H_{ijkl}-\frac{dF_{2,kij}}{d(\phi^{l}/\Lambda)} =\mathcal{F}_{ijkl}^{3}(\phi)\,. \tag{116}\]
## 7 Four scalars in 4d, with \(O(4)\) internal symmetry
We now cover the case of a multiplet of 4 fields in Minkowski space, with an internal \(O(4)\) symmetry. This example is representative of SMEFT/HEFT with exact custodial symmetry and in the absence of gauge and fermion fields. The discussion in this section follows the same steps taken in the examples in §6.
### The EFT Lagrangian up to 4 derivatives
We start by constructing a complete, IBP-non-redundant set of \(O(4)\)-symmetric operators with up to 4 derivatives and arbitrary number of fields. We require that no more than 2 derivatives act on each field and the absence of boxes. As in the toy examples, this identifies a unique Lagrangian:
\[\mathcal{L} =\frac{1}{2}\partial_{\mu}\phi\cdot\partial^{\mu}\phi\mathcal{F}_{a}+(\partial_{\mu}\phi\cdot\phi)^{2}\mathcal{F}_{b}-\mathcal{V} \tag{117}\] \[\quad+\,\frac{1}{\Lambda^{2}}(\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\partial^{\nu}\phi)\mathcal{F}_{1}+\frac{1}{\Lambda^{4}}(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\partial^{\nu}\phi\cdot\phi)\mathcal{F}_{2}+\frac{1}{\Lambda^{4}}(\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\phi)(\partial^{\nu}\phi\cdot\phi)\mathcal{F}_{3}\] \[\quad+\,\frac{1}{\Lambda^{4}}(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\partial^{\nu}\phi)\mathcal{F}_{4}+\,\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)^{2}\mathcal{F}_{5}+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)^{2}\mathcal{F}_{6}\] \[\quad+\,\frac{1}{\Lambda^{6}}(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)\mathcal{F}_{7}+\frac{1}{\Lambda^{6}}(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\partial_{\nu}\phi\cdot\phi)^{2}\mathcal{F}_{8}\] \[\quad+\,\frac{1}{\Lambda^{6}}(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)\mathcal{F}_{9}+\frac{1}{\Lambda^{8}}(\partial_{\mu}\phi\cdot\phi)^{4}\mathcal{F}_{10}\,.\]
Here the dot indicates the contraction of two 4-vectors: \(\phi\cdot\phi=\phi^{i}\phi^{j}\delta_{ij}\) and \(\mathcal{V},\mathcal{F}_{k}\) are generic functions of \((\phi\cdot\phi)/\Lambda^{2}\). The requirement of a canonical kinetic term translates into \(\mathcal{F}_{a}(0)=1\). As for the examples in §6, the basis structure with and without EOM redundancies has been cross-checked with BasisGen. Upon imposing preservation of the custodial symmetry, the number of operators with 2 derivatives + 4 fields, and 4 derivatives
\(+\) 2 fields in the Green's basis of Eq. (110) matches the number in [85]. The numbers of operators with 2 derivatives \(+\) 6 fields and 4 derivatives \(+\) 4 fields match those in [86].32
Footnote 32: There are 3 custodial-preserving terms in [85]: \(O_{H\Box},O^{\prime}_{HD},O_{DH}\). There are 8 in [86]: \(O^{(1)}_{\phi^{6}}\), \(O^{(3)}_{\phi^{6}}\), \(O^{(3)}_{\phi^{4}}\), \(O^{(4)}_{\phi^{4}}\), \(O^{(10)}_{\phi^{4}}\), \((O^{(1)}_{\phi^{4}}+O^{(2)}_{\phi^{4}})\), \((O^{(8)}_{\phi^{4}}+2O^{(11)}_{\phi^{4}})\), \((O^{(6)}_{\phi^{4}}+O^{(12)}_{\phi^{4}})\).
While the connection to (jet) bundle geometry is best studied considering the 4 scalars as the components of a real vector, it can be useful to make contact with the more familiar form in terms of the \(SU(2)\) Higgs doublet, by exploiting the isomorphism
\[O(4)\cong\left(\frac{SU(2)_{L}\times SU(2)_{R}}{\mathbb{Z}_{2}}\right)\rtimes \mathbb{Z}_{2}\,, \tag{111}\]
where the quotient by \(\mathbb{Z}_{2}\) identifies the central elements of \(SU(2)_{L}\) and \(SU(2)_{R}\); the part in parentheses is \(SO(4)\), which recall is a normal subgroup of \(O(4)\).
Explicit expressions for the operators in (110) can be obtained straightforwardly from the following dictionary:
\[\partial_{\mu}\phi\cdot\phi =\partial_{\mu}(H^{\dagger}H)\,, \tag{112}\] \[\partial_{\mu}\phi\cdot\partial_{\nu}\phi =(\partial_{\mu}H^{\dagger})(\partial_{\nu}H)+(\partial_{\nu}H^{\dagger})(\partial_{\mu}H)\,,\] (113) \[\partial_{\mu}\partial_{\nu}\phi\cdot\phi =(\partial_{\mu}\partial_{\nu}H^{\dagger})H+H^{\dagger}(\partial_{\mu}\partial_{\nu}H)\,,\] (114) \[\partial_{\mu}\partial_{\nu}\phi\cdot\partial_{\rho}\partial_{\sigma}\phi =(\partial_{\mu}\partial_{\nu}H^{\dagger})(\partial_{\rho}\partial_{\sigma}H)+(\partial_{\rho}\partial_{\sigma}H^{\dagger})(\partial_{\mu}\partial_{\nu}H)\,,\] (115) \[\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\phi =(\partial_{\mu}\partial_{\nu}H^{\dagger})(\partial^{\mu}H)+(\partial^{\mu}H^{\dagger})(\partial_{\mu}\partial_{\nu}H)\,,\] (116) \[\mathcal{F}(\phi\cdot\phi/\Lambda^{2}) =\mathcal{F}(2H^{\dagger}H/\Lambda^{2})\,. \tag{117}\]
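This dictionary can be checked mechanically. The sketch below (Python with sympy) treats \(\phi^{i}\) and its first and second derivatives, for fixed spacetime indices, as independent real symbols, builds the corresponding doublets with the normalisation of Eq. (112) below, and verifies each entry (including \(\phi\cdot\phi=2H^{\dagger}H\)):

```python
import sympy as sp

# Sketch: check of the O(4)-vector <-> SU(2)-doublet dictionary at a point.
phi = sp.Matrix(sp.symbols('p1:5', real=True))       # phi^i
dmu = sp.Matrix(sp.symbols('dm1:5', real=True))      # d_mu phi^i
dnu = sp.Matrix(sp.symbols('dn1:5', real=True))      # d_nu phi^i
dmn = sp.Matrix(sp.symbols('dmn1:5', real=True))     # d_mu d_nu phi^i
drs = sp.Matrix(sp.symbols('drs1:5', real=True))     # d_rho d_sigma phi^i

def doublet(v):
    """H = (phi^2 + i phi^1, phi^4 - i phi^3)/sqrt(2), applied to any O(4) vector."""
    return sp.Matrix([v[1] + sp.I*v[0], v[3] - sp.I*v[2]]) / sp.sqrt(2)

H, dH, dHn, ddH, ddHrs = map(doublet, (phi, dmu, dnu, dmn, drs))
dag = lambda M: M.conjugate().T

checks = [
    (phi.dot(phi), 2*(dag(H)*H)[0]),                        # phi.phi = 2 H^dag H
    (dmu.dot(phi), (dag(dH)*H + dag(H)*dH)[0]),             # = d_mu (H^dag H)
    (dmu.dot(dnu), (dag(dH)*dHn + dag(dHn)*dH)[0]),
    (dmn.dot(phi), (dag(ddH)*H + dag(H)*ddH)[0]),
    (dmn.dot(drs), (dag(ddH)*ddHrs + dag(ddHrs)*ddH)[0]),
    (dmn.dot(dmu), (dag(ddH)*dH + dag(dH)*ddH)[0]),
]
for lhs, rhs in checks:
    assert sp.simplify(sp.expand(lhs - rhs)) == 0
print('dictionary verified')
```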
#### Equation-of-Motion redundancies
If EOM redundancies are accounted for, the Lagrangian in Eq. (110) can be further reduced. Applying the procedure described in §6.2, we find that a possible IBP+EOM non-redundant basis is:
\[\mathcal{L}= \frac{1}{2}\partial_{\mu}\phi\cdot\partial^{\mu}\phi+(\partial_{ \mu}\phi\cdot\phi)^{2}\mathcal{F}_{b}-\mathcal{V} \tag{118}\] \[+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\cdot\partial^{\mu}\phi )^{2}\mathcal{F}_{5}+\frac{1}{\Lambda^{4}}(\partial_{\mu}\phi\cdot\partial_{ \nu}\phi)^{2}\mathcal{F}_{6}\] \[+\frac{1}{\Lambda^{6}}(\partial_{\mu}\phi\cdot\partial^{\mu}\phi )(\partial_{\nu}\phi\cdot\phi)^{2}\mathcal{F}_{8}+\frac{1}{\Lambda^{6}}( \partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi\cdot\phi)( \partial^{\nu}\phi\cdot\phi)\mathcal{F}_{9}\] \[+\frac{1}{\Lambda^{8}}(\partial_{\mu}\phi\cdot\phi)^{4}\mathcal{ F}_{10}\,.\]
Notice that the function \(\mathcal{F}_{a}\) accompanying the kinetic term has now been removed. This is the case because, with \(N\geq 4\) fields it is always possible to map via IBP
\[(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\phi\cdot\phi)^{\frac{N-2}{2}}=(2-N)(\partial_{\mu}\phi\cdot\phi)^{2}(\phi\cdot\phi)^{\frac{N-4}{2}}-(\Box\phi\cdot\phi)(\phi\cdot\phi)^{\frac{N-2}{2}}\,, \tag{119}\]
and then remove the last term via EOM. Eq. (118) is consistent with the SMEFT operator bases in the literature: for instance, the Warsaw basis of dimension-6 operators [4] contains 1 custodial-preserving operator with 2 derivatives \(+\) 4 fields (\(O_{H\Box}\sim\mathcal{F}_{b}\)), and
none with 4 derivatives + 2 fields. The Murphy basis of dimension-8 operators [87] also contains 1 operator with 2 derivatives + 6 fields, and 3 with 4 derivatives + 4 fields. However, only 2 combinations of those preserve the custodial symmetry. The interactions in the last two lines of Eq. (7.9) correspond to dimension-10 and higher operators in SMEFT and were cross-checked against the results of Ref. [88].
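The IBP relation used above to remove the 2-derivative structures for \(N\geq 4\) can be spot-checked symbolically. The sketch below (Python with sympy, for the illustrative choice \(N=6\) and with \(\Lambda=1\)) confirms that the two sides differ by a total derivative:

```python
import sympy as sp

# Sketch: spot-check of the IBP relation for N = 6 fields, Lambda = 1.
x = sp.symbols('x0:4')
eta = [1, -1, -1, -1]
phi = [sp.Function(f'phi{i}')(*x) for i in range(4)]

d = lambda f, m: sp.diff(f, x[m])
dot = lambda a, b: sum(ai*bi for ai, bi in zip(a, b))

N = 6
pp = dot(phi, phi)
dphi = [[d(p, m) for p in phi] for m in range(4)]             # d_m phi^i
dpdp = sum(eta[m]*dot(dphi[m], dphi[m]) for m in range(4))     # d_mu phi . d^mu phi
dpp  = [dot(dphi[m], phi) for m in range(4)]                   # d_m phi . phi
boxp = [sum(eta[m]*d(d(p, m), m) for m in range(4)) for p in phi]

lhs = dpdp*pp**((N - 2)//2)
rhs = ((2 - N)*sum(eta[m]*dpp[m]**2 for m in range(4))*pp**((N - 4)//2)
       - dot(boxp, phi)*pp**((N - 2)//2))
current = [eta[m]*dpp[m]*pp**((N - 2)//2) for m in range(4)]   # the IBP 'current'
total_der = sum(d(current[m], m) for m in range(4))

print(sp.simplify(sp.expand(lhs - rhs - total_der)))           # -> 0
```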
### The EFT Lagrangian from 1-jet bundle geometry
The full 4-derivative Lagrangian in Eq. (7.1) can be captured by pulling back to spacetime a metric from the 1-jet bundle. This is a real 24-manifold, with local fibred coordinates
\[y^{I}=\{x^{\mu},u^{i},u^{i}_{\mu}\}\,,\quad i=1,\ldots,4\,. \tag{7.11}\]
The metric components must be representations of \(O(4)\), symmetric in their spacetime indices and in their internal indices where appropriate. The generic form is:

[Eqs. (7.12)-(7.19): the \(O(4)\)-symmetric 1-jet metric components, followed by the pullback along \(j^{1}\phi\) and the first terms of the resulting Lagrangian (7.20), whose remaining terms are]
\[+\frac{(\Box\phi\cdot\phi)(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)}{\Lambda^{4}}F_{20}+\frac{(\Box\phi\cdot\partial_{\mu}\phi)(\partial^{\mu}\phi\cdot\phi)}{\Lambda^{4}}F_{21}\] \[+\frac{(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)^{2}}{\Lambda^{4}}\frac{D_{0}+2H_{0}+K_{0}}{2}+\frac{(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)^{2}}{\Lambda^{4}}\frac{D_{1}+2H_{1}+K_{1}}{2}\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{6}}(B_{3}+F_{12})+\frac{(\Box\phi\cdot\phi)(\partial_{\mu}\phi\cdot\phi)^{2}}{\Lambda^{6}}F_{22}\] \[+\frac{(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\partial_{\nu}\phi\cdot\phi)^{2}}{\Lambda^{6}}\frac{D_{2}+2H_{2}+K_{2}}{2}\] \[+\frac{(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{6}}\frac{D_{3}+2H_{3}+K_{3}}{2}\] \[+\frac{(\partial_{\mu}\phi\cdot\phi)^{4}}{\Lambda^{8}}\frac{D_{4}+2H_{4}+K_{4}}{2}\,. \tag{7.20}\]
The Lagrangian obtained from the 1-jet metric manifestly matches the complete Green's basis of operators with up to 4 derivatives and arbitrary number of fields, presented in Eq. (7.1). The only leftover IBP redundancies are in the \(E\), \(F_{20},F_{21}\) and \(F_{22}\) terms, that
can be recast as33
Footnote 33: As in all the previous examples, we find as many independent IBP as operators with boxes, at each fields and derivatives multiplicity (up to 4). Therefore the set of all box-less operators forms a Green’s basis.
\[(\square\phi\cdot\phi)E= -(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)E-2\frac{(\partial_{ \mu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\phi)}{\Lambda^{2}}\frac{dE}{d( \phi\cdot\phi/\Lambda^{2})}\,, \tag{7.21}\] \[(\square\phi\cdot\phi)(\partial_{\mu}\phi\cdot\partial^{\mu}\phi) F_{20}= -(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)^{2}F_{20}-2(\partial_{\nu} \partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\partial^{\nu}\phi\cdot\phi)F_{20}\] \[-\frac{2(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\partial_{ \nu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{2}}\frac{dF_{20}}{d( \phi\cdot\phi/\Lambda^{2})}\] (7.22) \[(\square\phi\cdot\partial_{\mu}\phi)(\partial^{\mu}\phi\cdot\phi) F_{21}= -(\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\nu}\phi)( \partial^{\mu}\phi\cdot\phi)F_{21}-(\partial_{\mu}\partial_{\nu}\phi\cdot\phi )(\partial^{\mu}\phi\cdot\partial^{\nu}\phi)F_{21}\] \[-(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)^{2}F_{21}-\frac{2( \partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi\cdot\phi)( \partial^{\nu}\phi\cdot\phi)}{\Lambda^{2}}\frac{dF_{21}}{d(\phi\cdot\phi/ \Lambda^{2})}\] (7.23) \[(\square\phi\cdot\phi)(\partial_{\mu}\phi\cdot\phi)^{2}F_{22}= -(\partial_{\nu}\phi\cdot\partial^{\nu}\phi)(\partial_{\mu} \phi\cdot\phi)^{2}F_{22}-2(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)( \partial^{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\phi)F_{22}\] \[-2(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi \cdot\phi)(\partial^{\nu}\phi\cdot\phi)F_{22}\] \[-\frac{2(\partial_{\mu}\phi\cdot\phi)^{2}(\partial_{\nu}\phi \cdot\phi)^{2}}{\Lambda^{2}}\frac{dF_{22}}{d(\phi\cdot\phi/\Lambda^{2})} \tag{7.24}\]
such that
\[\mathcal{L}[\phi,g]= \,\text{IBP}\ \circ\frac{1}{2}\left\langle\eta^{-1},\,(j^{1}\phi)^{*}g\right\rangle \tag{7.25}\] \[= \,(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)\frac{C_{0}+2G_{0}+J_{0}-2E}{2}+\frac{(\partial_{\mu}\phi\cdot\phi)^{2}}{\Lambda^{2}}\frac{C_{1}+2G_{1}+J_{1}-4E^{\prime}}{2}-\Lambda^{4}V\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\partial^{\nu}\phi)}{\Lambda^{2}}\frac{A_{0}}{2}+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\partial^{\nu}\phi\cdot\phi)}{\Lambda^{4}}\frac{A_{1}}{2}\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{4}}\frac{B_{0}+B_{1}+2B_{2}+2F_{11}-4F_{20}-2F_{21}}{2}\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\partial^{\nu}\phi)}{\Lambda^{4}}\frac{B_{0}+B_{1}+2F_{10}-2F_{21}}{2}\] \[+\frac{(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)^{2}}{\Lambda^{4}}\frac{D_{0}+2H_{0}+K_{0}-2F_{20}}{2}+\frac{(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)^{2}}{\Lambda^{4}}\frac{D_{1}+2H_{1}+K_{1}-2F_{21}}{2}\] \[+\frac{(\partial_{\mu}\partial_{\nu}\phi\cdot\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{6}}(B_{3}+F_{12}-2F_{22})\] \[+\frac{(\partial_{\mu}\phi\cdot\partial^{\mu}\phi)(\partial_{\nu}\phi\cdot\phi)^{2}}{\Lambda^{6}}\frac{D_{2}+2H_{2}+K_{2}-4F_{20}^{\prime}-2F_{22}}{2}\] \[+\frac{(\partial_{\mu}\phi\cdot\partial_{\nu}\phi)(\partial^{\mu}\phi\cdot\phi)(\partial^{\nu}\phi\cdot\phi)}{\Lambda^{6}}\frac{D_{3}+2H_{3}+K_{3}-4F_{21}^{\prime}-4F_{22}}{2}\] \[+\frac{(\partial_{\mu}\phi\cdot\phi)^{4}}{\Lambda^{8}}\frac{D_{4}+2H_{4}+K_{4}-4F_{22}^{\prime}}{2}\,, \tag{7.26}\]
where \(E^{\prime}=dE/d(\phi\cdot\phi/\Lambda^{2})\) and analogously for the other functions.
### Beyond the custodial limit
In actual SMEFT/HEFT, the symmetry \(O(4)\) is not exact: the electroweak vacuum preserves only an \(O(3)\sim SU(2)_{L+R}/\mathbb{Z}_{2}\) subgroup, namely the custodial symmetry. This is
however broken explicitly by the gauging of the hypercharge \(U(1)\), which (in the absence of fermions) coincides with the group spanned by the 3rd generator of \(SU(2)_{R}\). While invariance under the gauged \(SU(2)_{L}\times U(1)\) subgroup must clearly be respected, it is generally possible to include operators that are not invariant under the groups spanned by the first two generators of \(SU(2)_{R}\), and therefore break the custodial \(O(3)\).
We can ask whether custodial-violating effects can be described in the jet bundle formalism. Indeed, custodial-violating terms can be added to the 1-jet metric without posing any problem. More generally, any internal symmetry structure of the Lagrangian can always be accommodated in the 1-jet bundle by choosing suitable metric entries. Even situations with no symmetry at all can be reproduced, as illustrated in the two scalars example in §6.3.
In the specific case of SMEFT/HEFT custodial violation, such terms can be introduced by allowing contractions with the 3rd generator of \(SU(2)_{R}\): given two \(O(4)\) vectors \(u^{i},v^{j}\),
\[u\cdot v=u^{i}v^{j}\delta_{ij} \text{is custodial preserving}, \tag{111}\] \[u\,t^{3R}v=u^{i}v^{j}t^{3R}_{ij} \text{is custodial violating}.\]
If \(O(4)\) vectors \(\phi^{i},i=1\ldots 4\) are defined such that the mapping to \(SU(2)\) doublets is
\[H=\frac{1}{\sqrt{2}}\binom{\phi^{2}+i\phi^{1}}{\phi^{4}-i\phi^{3}}\,, \tag{112}\]
then an explicit form for \(t^{3R}_{ij}\) is
\[t^{3R}=\frac{1}{2}\begin{pmatrix}0&1&0&0\\ -1&0&0&0\\ 0&0&0&-1\\ 0&0&1&0\end{pmatrix}\,. \tag{113}\]
While in this work we do not aim at constructing a complete and non-redundant basis including this class of operators, a generalization would be straightforward. As a practical example, the metric terms
\[g_{ij} =t^{3R}_{ik}t^{3R}_{jl}\,\frac{u^{k}u^{l}}{\Lambda^{2}}\,, g_{\mu\nu} =t^{3R}_{ik}t^{3R}_{jl}\,\frac{u^{i}_{\mu}u^{j}_{\nu}u^{k}_{\rho}u^{l\rho}}{\Lambda^{8}}\,, \tag{114}\]
pull back respectively to the custodial-violating dimension-6 and dimension-8 operators
\[(\partial_{\mu}\phi\,t^{3R}\,\phi)^{2} =(\partial_{\mu}H^{\dagger}H)(H^{\dagger}\partial^{\mu}H)\,, \tag{115}\] \[(\partial_{\mu}\phi\cdot t^{3R}\cdot\partial_{\nu}\phi)^{2} =(\partial_{\mu}H^{\dagger}\partial_{\nu}H-\partial_{\nu}H^{ \dagger}\partial_{\mu}H)^{2}\,, \tag{116}\]
after using our usual prescription for constructing the Lagrangian.
### Topological terms
In §2.2 we mentioned the possibility of topological terms in scalar EFTs. The Higgs EFTs we consider in this Section admit such topological terms (with a similar story playing out
for an \(O(N)\)-invariant theory of \(N\) real scalars in \(d=N\) spacetime dimensions). We briefly digress in this Subsection to sketch the construction of these topological terms, for the 4d \(O(4)\)-invariant theory of 4 real scalars considered in this Section. Given such terms are not the focus of our geometric jet bundle formalism - the topological terms are, after all, those that explicitly do _not_ require a geometry - our discussion here is neither rigorous nor general. But we believe it is sufficient for the particular example we consider, up to the omission of possible torsion effects that cannot naively be captured by differential forms (but can be captured, for example, by differential cohomology [67]).
To understand these topological terms, it is enough to stick with the field space bundle \((E,\Sigma,\pi)\) introduced in SS3, _i.e._ we do not need to pass to the 1-jet bundle \(J^{1}E\). Locally, we adopt our usual fibred coordinate system \((x^{\mu},u^{i})\) and choose a section \(\phi\in\Gamma_{x}(\pi)\) with which to pull back objects. The topological term is proportional to the following operator in the Lagrangian:34
Footnote 34: Note that \(\delta_{ij}\) and \(\epsilon_{ijkl}\) are the only \(O(4)\) invariant tensors we can use to contract fundamental \(O(N)\) indices to form singlets. While the Kronecker-delta is used ubiquitously in our constructions of invariant metric terms and EFT operators, the epsilon tensor appears only in the topological terms discussed in this Subsection, and so it does not appear when formulating our EFTs perturbatively.
\[\mathcal{O}_{\text{top}}\propto\frac{1}{\Lambda^{4}}\epsilon_{\mu\nu\rho \sigma}\epsilon_{ijkl}\partial^{\mu}\phi^{i}\partial^{\nu}\phi^{j}\partial^{ \rho}\phi^{k}\partial^{\sigma}\phi^{l}\,. \tag{113}\]
The corresponding term in the action is 'topological' because it is in fact the integral of an \(O(4)\)-invariant differential form. This means it is independent of any metric - even that on spacetime which appears in all other Lagrangian terms captured by our construction. Here the differential form in question is a 4-form \(A\in\Omega^{4}(E)\) that is closed (\(dA=0\)) but need not be exact (its integral over a closed submanifold can be non-zero), which can be written in our local coordinate chart as
\[A|_{\mathcal{U}_{\Sigma}\times\mathcal{F}_{M}}=du^{1}\wedge du^{2}\wedge du^{3 }\wedge du^{4}\,. \tag{114}\]
The corresponding action is obtained by pulling back along the section \(\phi\) and integrating,
\[S_{\text{top}}=\frac{\theta}{2\pi V}\int_{\Sigma}\phi^{*}A,\qquad\theta\in \mathbb{R}/2\pi\mathbb{Z}\,, \tag{115}\]
which will give the operator structure \(\mathcal{O}_{\text{top}}\). Here \(V\propto\Lambda^{4}\) is an appropriate normalisation factor whose precise value is subtle, but which can be roughly thought of as the (dimensionful) 'volume' of a typical fibre \(M\), such that the action is dimensionless and the coupling constant \(\theta\) is \(2\pi\) periodic, analogous to a 'theta-angle' in gauge theories such as QCD.
Because the 4-form \(A\) that we pull back and integrate is closed, it is locally (but not necessarily globally) exact by the Poincare lemma. This means that, on our local coordinate patch, the Lagrangian term \(\mathcal{O}_{\text{top}}\) is a total derivative. This term in the action therefore gives no contribution to the classical equations of motion, nor does it contribute to any Feynman diagrams. We therefore choose to neglect such terms in our main discussion - but we caution that this does not preclude their having physical effects _non_-perturbatively.35
The same is true for a (seemingly) more general class of terms that we can try to build using the \(\epsilon\) tensor. For example, consider a more general operator structure of the form
\[\mathcal{O}^{(1)}_{\rm top}=\frac{1}{\Lambda^{4}}\,f\left(\frac{\phi\cdot\phi}{ \Lambda^{2}}\right)\,\epsilon_{\mu\nu\rho\sigma}\epsilon_{ijkl}\,\partial^{\mu} \phi^{i}\partial^{\nu}\phi^{j}\partial^{\rho}\phi^{k}\partial^{\sigma}\phi^{l}\,, \tag{111}\]
for any suitably smooth function with compact support \(f(x)\). This is again topological, obtained by pulling back a differential form \(A^{(1)}\) along the section \(\phi\), where \(A^{(1)}\) has the (local) form
\[A^{(1)}|_{\mathcal{U}_{\rm Z}\times\mathcal{F}_{M}}=f\left(\frac{u\cdot u}{ \Lambda^{2}}\right)\,du^{1}\wedge du^{2}\wedge du^{3}\wedge du^{4}\,. \tag{112}\]
Again, it is easy to verify that the differential form \(A^{(1)}\) is closed, _viz._\(dA^{(1)}\propto f^{\prime}u^{i}du^{i}\wedge A=0\), where \(f^{\prime}=df/dx\), and where the last equality holds because \(du^{i}\wedge du^{i}\) (no sum) vanishes for any \(i\in\{1,...,4\}\). Therefore, again by the Poincare lemma, any operator like \(\mathcal{O}^{(1)}_{\rm top}\) is also a total derivative.36
Footnote 36: One might ask if there are any other contractions we can play with using the \(\epsilon\) tensor that give terms which are _not_ total derivatives, and the answer is no. For example, one might consider an operator structure \(\mathcal{O}^{(2)}_{\rm top}=\epsilon_{\mu\nu\rho\sigma}\epsilon_{ijkl}\,\phi^{i}\,\partial^{\nu}\phi^{j}\partial^{\rho}\phi^{k}\partial^{\sigma}\phi^{l}\,\partial^{\mu}\phi^{m}\,\phi^{m}\). But \(\mathcal{O}^{(2)}_{\rm top}d^{4}x\propto\phi^{*}(\epsilon_{ijkl}u^{i}du^{j}\wedge du^{k}\wedge du^{l})\wedge du^{m}u^{m}=(u^{1}du^{2}\wedge du^{3}\wedge du^{4})\wedge du^{1}u^{1}+{\rm perms}\propto(u\cdot u)\,du^{1}\wedge du^{2}\wedge du^{3}\wedge du^{4}\), and so this is the same form as \(A^{(1)}\). Finally, we remark that a differential form like \(A^{(1)}\), for non-constant function \(f\), will likely be not just closed but also exact, in which case it has no effects even in the presence of instanton configurations, since it evaluates identically to zero on any closed manifold.
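The closedness of \(A^{(1)}\) can also be checked mechanically. The following sketch (ours, assuming Python with sympy; the coordinate and function names are placeholders) computes all components of \(dA^{(1)}\) in the fibred coordinates \((x^{\mu},u^{i})\) and confirms that they vanish.

```python
import sympy as sp
from itertools import combinations

# Fibred coordinates on a patch of E: four spacetime x^mu and four fibre u^i
xs = list(sp.symbols('x0:4'))
us = list(sp.symbols('u0:4'))
coords = xs + us
Lam = sp.Symbol('Lambda', positive=True)
f = sp.Function('f')

def A1(i, j, k, l):
    """Components of A^(1) = f(u.u/Lambda^2) du^1 ^ du^2 ^ du^3 ^ du^4 in these coordinates."""
    if min(i, j, k, l) < 4:           # any spacetime leg -> zero
        return sp.Integer(0)
    return f(sum(u**2 for u in us) / Lam**2) * sp.LeviCivita(i - 4, j - 4, k - 4, l - 4)

# exterior derivative: (dA)_{m0...m4} = sum_a (-1)^a d/dx^{m_a} A_{m0...(m_a omitted)...m4}
is_closed = True
for idx in combinations(range(8), 5):
    dA = sum((-1)**a * sp.diff(A1(*(idx[:a] + idx[a + 1:])), coords[idx[a]]) for a in range(5))
    if sp.simplify(dA) != 0:
        is_closed = False
print("dA^(1) = 0:", is_closed)       # True, as argued in the text
```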
## 8 From Jets to Amplitudes
To conclude the paper, in this Section we seek to connect the higher-derivative EFT Lagrangians that we have constructed using geometry on 1-jet bundles with physical observables - namely, some basic amplitudes in these scalar theories. Our study here is far from comprehensive, but we nonetheless already see some advantages of going to the jet bundle geometry.
### Normal coordinates on the jet bundle (and why they fail)
In Section 2.4 we reviewed the use of normal coordinates in the geometric formulation of EFTs, showcasing their ability to relate amplitudes to geometric invariants by offering an efficient coordinate system in which to expand the metric around a particular point.
In this paper, we have re-formulated scalar EFTs first using bundles \((E,\Sigma,\pi)\), which (besides other general advantages) allows one to incorporate the 0-derivative potential terms into the metric, alongside the 2-derivative terms. Additionally, the bundle construction provides a starting point for passing to higher jet bundles which allow one to further incorporate higher-derivative terms into geometry. In particular, we have seen in detail that geometry on the 1-jet bundle \(J^{1}E\) allows one to capture the full 4-derivative EFT. A natural question to ask is whether normal coordinates can be used to expand our jet
bundle metric, in a similar fashion to the 2-derivative case considered in SS2.4, in an effort to obtain generalised formulae relating \(n\)-point correlators to components of the jet bundle Riemann tensor.
It turns out that such an attempt would be doomed to fail, in general. This is because, even though normal coordinates still exist (since the total spaces of the various bundles we consider are themselves pseudo-Riemannian manifolds), the map to normal coordinates is not guaranteed to respect the bundle structure of \((E,\Sigma,\pi)\),37 likewise of the 1-jet bundle. The additional structure of a bundle \((E,\Sigma,\pi)\) is crucial to our formulation of the EFT, and this bundle structure imposes some restrictions. In particular, the fields are defined to be sections of the bundle, \(\phi\in\Gamma(\pi)\). _Any_ physically meaningful transformation that we can do must therefore map the bundle to another bundle; if not, then we can no longer even define fields (because we will have lost the notion of sections). For example, we have already discussed how field redefinitions, including derivative field redefinitions, may be defined as suitable changes of section, \(\phi\to\psi\) with \(\phi,\psi\in\Gamma(\pi)\); the non-derivative field redefinitions can moreover be identified with bundle morphisms \((E,\Sigma,\pi)\to(F,\Omega,\rho)\), which induce maps \(\Gamma(\pi)\to\Gamma(\rho)\) at the level of sections. In either case, it is straightforward to see that the condition \(x^{\mu}\circ\phi=x^{\mu}\) on our local fibred coordinate system \((x^{\mu},u^{i})\), and the simple requirement that any section is locally an inverse of the projection \(\pi\), restricts us to transformations that act on coordinates as
Footnote 37: A simple but technical explanation lies in the fact that the map to normal coordinates around a point \(m\in E\) is defined by the exponential from some neighbourhood \(V\subset T_{m}E\) to \(U\subset E\), as described above in footnote 9. But for the transformation to preserve the fibres, we require \(V\subset\pi^{*}(T_{\pi(m)}\Sigma)\), which is not necessarily true.
\[\begin{split} x^{\mu}\mapsto x^{\prime\mu}(x^{\nu})\,,\\ u^{i}\mapsto u^{\prime i}(x^{\nu},u^{j})\,.\end{split} \tag{108}\]
This is manifestly true in the case of non-derivative redefinitions, as can be seen from the commutative diagram in (105), but should hold more generally for any physical 'transformation' of the bundle \((E,\Sigma,\pi)\) that we use to define our EFT.
The map to normal coordinates can violate the basic requirement (108) for preserving the (essential) bundle structure. To illustrate this, we consider two counter-examples in which one cannot satisfy the conditions for normal coordinates while preserving the bundle structure, both in the case of a single real scalar in 4d (the subject of SS6.2).
Counter-example: condition 1 for normal coordinates.Recall from SS2.4 that the first condition (34) for normal coordinates is that the metric is diagonal (with entries further normalised to \(\pm 1\), determining the signature). Given an initial set of coordinates \(x^{I}=(x^{\mu},u,u_{\mu})\) on the 1-jet bundle, there are choices of invariant metrics written in those coordinates that are not diagonal (with off-diagonal elements indeed being necessary to match onto certain EFTs via our construction of the Lagrangian). This means that to pass to coordinates satisfying condition 1, we need to first do a rotation, and this rotation can mix \(x^{\mu}\) with the other coordinates, violating (108).
For an explicit example, using the general expressions (104-105) for the invariant metric in the case of one real scalar in 4d, and evaluating at a point \(p\in J^{1}E\) such that
\(u(p)=0=u_{\mu}(p)\), we see that the metric tensor \(g_{IJ}(p)\) takes the following block form
\[g_{IJ}=\begin{pmatrix}\eta_{\mu\nu}V(0)&0&\delta^{\mu}_{\nu}E(0)\\ 0&C(0)&0\\ \delta^{\mu}_{\nu}E(0)&0&\eta^{\mu\nu}A(0)\end{pmatrix} \tag{104}\]
Assuming that none of the component functions vanish, it is clear that diagonalizing the metric requires a rotation of the form \((x^{\mu},u_{\mu})\mapsto O(x^{\mu},u_{\mu})\) which mixes \(x^{\mu}\) with \(u_{\mu}\), thus violating the bundle condition (103).
Now, the attentive reader might retort that, for the EFT described by this metric (with all functions non-zero), the function \(E(0)\) appearing on the off-diagonal is actually redundant, in that it pulls back to the operator that is equivalent to the kinetic term after using IBPs; so if we take \(E(0)\) to zero, the metric is already (block) diagonal without the need to mix \(x^{\mu}\) with other coordinates. However, there are EFTs where we _need_ to keep the \(E(0)\) function; in particular, if \(A(0)=0\), in which case the EFT is purely 2-derivative and the \(E(0)\) term is required for invertibility of the metric. In that case, diagonalising the metric clearly requires a non-trivial rotation \((x^{\mu},u_{\mu})\mapsto O(x^{\mu},u_{\mu})\), violating the bundle condition.
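To make the obstruction concrete, here is a minimal numerical illustration (ours, assuming numpy; the numbers are arbitrary stand-ins for \(V(0)\), \(C(0)\), \(E(0)\), \(A(0)\), and we use a Euclidean toy signature with a single spacetime direction): diagonalising the block metric above forces a rotation whose eigenvectors mix the \(x\) and \(u_{\mu}\) directions.

```python
import numpy as np

# Toy version of the block metric at u = u_mu = 0, with one x, one u and one u_x direction
V0, C0, E0, A0 = 1.0, 1.0, 0.7, 0.0     # A(0)=0: the E(0) entry cannot be dropped
g = np.array([[V0, 0.0, E0],
              [0.0, C0, 0.0],
              [E0, 0.0, A0]])

eigvals, eigvecs = np.linalg.eigh(g)    # orthogonal diagonalisation of the toy metric
print(eigvals)
print(eigvecs)   # eigenvectors have components along both the x and u_x directions,
                 # i.e. the diagonalising rotation mixes x^mu with u_mu, violating (108)
```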
Counter-example: condition 3 for normal coordinates.We now suppose that one has a metric for which condition 1 is already satisfied, for example by taking the metric (104) and setting \(E(0)=0\) and \(A(0)\neq 0\), and then try to find coordinates that further satisfy condition 3 (36). The explicit coordinate transformation that takes us to normal coordinates on the total space can be found order-by-order via an iterative procedure, but for our demonstration it will suffice to stop at the first order, at which point the obstruction already arises.
We start with a local fibred coordinate system \(x^{I}=(x^{\mu},u,u_{\mu})\) on a local patch of the 1-jet manifold. To try to construct normal coordinates in a perturbative fashion, consider a new coordinate system \(y^{I}=(y^{\mu},v,v_{\mu})\), defined by \(x^{I}=y^{I}+a^{I}_{JK}y^{J}y^{K}\), where \(a^{I}_{JK}\) is a tensor of constants satisfying \(a^{I}_{JK}=a^{I}_{KJ}\). When evaluating the metric components \(\bar{g}_{IJ}\) in the \(y_{I}\) coordinate system, at the origin in these coordinates, we have
\[\begin{split}\frac{\partial}{\partial y^{K}}\bar{g}_{IJ}(0)& =\left.\frac{\partial}{\partial y^{K}}\bigg{(}\frac{\partial x^{ R}}{\partial y^{I}}\frac{\partial x^{S}}{\partial y^{J}}g_{RS}\bigg{)}\right|_{y=0} \\ &=a^{R}_{IK}g_{RJ}(0)+a^{S}_{JK}g_{IS}(0)+\frac{\partial}{ \partial x^{K}}g_{IJ}(0)\end{split} \tag{105}\]
We see that the number of indices and the symmetries of \(a^{I}_{JK}\) and \(\frac{\partial}{\partial x^{K}}g_{IJ}(0)\) match. This implies it is possible to choose the components \(a^{I}_{JK}\) such that \(\frac{\partial}{\partial y^{K}}\bar{g}_{IJ}(0)=0\), thus satisfying a necessary condition for normal coordinates (see 103), by requiring
\[\frac{\partial}{\partial x^{K}}g_{IJ}(0)=-\bigg{(}a^{R}_{IK}g_{RJ}(0)+a^{R}_{ JK}g_{RI}(0)\bigg{)}\,. \tag{106}\]
From (106) we see that \(a^{I}_{JK}=-\Gamma^{I}_{JK}(0)\), where the symmetries dictate that there are 18 different structures. The Christoffel symbols (with respect to the original \(x^{I}\) coordinates)
are as usual given by
\[\Gamma^{I}_{JK}=\frac{1}{2}g^{IM}(g_{MJ,K}+g_{MK,J}-g_{JK,M})\,. \tag{110}\]
For our particular example, using the general expressions in Eqs. (104-106) for the metric components and lowering all indices of the Christoffel symbols, we find
\[\Gamma_{\rho\mu\nu}(0)=0,\quad\Gamma_{\rho\mu u}(0)=-\frac{1}{2}\partial_{u}V(0),\quad\Gamma_{\rho uu}(0)=0,\] (111) \[\Gamma_{\rho\mu u}^{\ \ \ \ \nu}(0)=0,\quad\Gamma_{\rho uu}^{\ \ \ \ \mu}(0)=\frac{1}{2}\delta^{\mu}_{\rho}\bigg{(}\frac{1}{\Lambda^{2}}G(0)+\partial_{u}E(0)\bigg{)},\quad\Gamma_{\rho uu}^{\ \ \ \mu\nu}(0)=0,\] (112) \[\Gamma_{uuu}(0)=\frac{1}{2}\partial_{\mu}C(0),\quad\Gamma_{uu\mu}(0)=0,\quad\Gamma_{u\mu\nu}=\frac{1}{2}\partial_{u}V(0),\] (113) \[\Gamma_{uuu}^{\ \ \ \ \mu}(0)=0,\quad\Gamma_{u\mu u}^{\ \ \ \ \nu}(0)=\frac{1}{2}\delta^{\nu}_{\mu}\bigg{(}\frac{1}{\Lambda^{2}}G(0)-\partial_{u}E(0)\bigg{)},\quad\Gamma_{uuu}^{\ \ \ \mu\nu}(0)=\eta^{\mu\nu}\bigg{(}\frac{1}{\Lambda^{2}}B(0)-\frac{1}{2}\partial_{u}A(0)\bigg{)},\] (114) \[\Gamma_{uuu}^{\rho\mu\nu}(0)=0,\quad\Gamma_{uuu}^{\mu\nu}=\frac{1}{2}\eta^{\mu\nu}\partial_{u}A(0),\quad\ldots\]
These Christoffel symbols do not all vanish for generic metric functions, so the shift \(a^{I}_{JK}=-\Gamma^{I}_{JK}(0)\) necessarily mixes \(x^{\mu}\) with the fibre coordinates \(u\) and \(u_{\mu}\), once again violating the bundle condition (8.1). There are of course special example geometries for which going to normal coordinates does happen to be consistent with the bundle structure; from Eq. 8.8 it is easy to see that if the Christoffel symbols or metric components appearing were set to zero, then the map to normal coordinates would indeed satisfy (8.1). But this situation is far from generic, meaning that normal coordinates are not especially useful in our formalism.
Another important reason why normal coordinates are not generally useful in our formalism is that passing to normal coordinates would be inconsistent with the inclusion of any non-constant potential \(V\). This can be seen by supposing that we have gone to normal coordinates, and expanding the metric up to second order, giving
\[g=g_{IJ}(u)dx^{I}\otimes dx^{J}=\bigg{(}\bar{g}_{IJ}+\frac{1}{3}R_{IJKL}(m)x^{K }x^{L}+\dots\bigg{)}dx^{I}\otimes dx^{J}\,. \tag{8.9}\]
If we wish to impose Poincare invariance, then a term such as \(R_{i\mu\nu j}x^{\mu}x^{\nu}du^{i}\otimes du^{j}\) must be set to zero. However, the symmetries of the Riemann tensor state that \(R_{i\mu\nu j}=R_{\mu ij\nu}\) and \(R_{\mu ij\nu}u^{i}u^{j}dx^{\mu}\otimes dx^{\nu}\) corresponds to the mass term in the Lagrangian. Therefore, the map to normal coordinates in conjunction with Poincare invariance requires the mass term to vanish and it is easy to see that all terms in the potential are forced to vanish beyond the leading order in the expansion.
Before moving on, we wish to make one last comment about normal coordinates in relation to our jet bundle formalism, which is that the _motivation_ for passing to normal coordinates is also weaker than it was in the geometric approach that we reviewed in SS2. An important motivation for passing to normal coordinates comes from the inclusion of the potential \(V\) contributions to the geometric formulae for amplitudes: the problem is that the second derivative of the potential, which enters for example in four point amplitudes, does not in general transform like a tensor on field space manifold \(M\) due to the appearance of Christoffel symbols in the expression. Normal coordinates circumvent this non-tensorial behaviour because the Christoffel symbols vanish. In contrast, in our bundle-based reformulation of EFTs the potential is itself coming from components of the metric tensor, rather than being added in by hand, so we never face this issue.
### Expansion of the jet bundle metric
We now show how the metric on the jet bundle may be expanded, explicitly _not_ in normal coordinates, but in a way that _is_ consistent with the iterated fibre bundle structure. As usual, we work with the local fibred coordinate system \(\{x^{\mu},u^{i},u^{i}_{\mu}\}\) on a general jet manifold \(J^{1}E\).39 The choice of origin point in the 1-jet manifold around which we expand the metric will be determined by the section \(j^{1}\phi\) according to
Footnote 39: To avoid any confusion in this Section, we remind the reader that the coordinates \(\{x^{\mu},u^{i},u^{i}_{\mu}\}\) are all independent on the jet manifold; in particular \(\frac{\partial}{\partial x^{\mu}}u^{i}=0\) (with no relation to \(u^{i}_{\mu}\)).
\[\begin{split} u^{i}(j^{1}\phi(p))&=0\,,\\ u^{i}_{\mu}(j^{1}\phi(p))&=0\,,\end{split} \tag{8.10}\]
_i.e._\(p\in\Sigma\) is a point in spacetime where the scalar field values \(\phi^{i}\), evaluated in our choice of local fibred coordinates, vanish. We can consider a completely general scalar theory, with arbitrary spacetime dimensions \(d\) and an arbitrary number of fields. While we do not impose any internal symmetries, we will impose Poincare invariance, which simplifies the expansion substantially since:
* None of the metric tensor components may depend explicitly on \(x^{\mu}\), _i.e._\(\frac{\partial}{\partial x^{\mu}}g_{IJ}=0\)\(\forall I,J\).
* All spacetime indices must be contracted into singlets, including those labelling derivative coordinates thanks to (111), thus any terms in the expansion with an odd number of spacetime indices must vanish.
We adopt the following notational conventions for the expansion
\[\bar{g}_{IJ}:=g_{IJ}(p)\quad\text{and}\quad g_{IJ,K}:=\frac{\partial}{ \partial x^{K}}g_{IJ}\,. \tag{113}\]
By Taylor expanding the metric and keeping all contributions to operators with up to four derivatives, we find
\[g_{IJ}(x^{K})\,dx^{I}\otimes dx^{J}=\sum_{n=0}^{\infty}\frac{1}{n!}\bigg{(}\bar{g}_{\mu\nu,k_{1}\dots k_{n}}u^{k_{1}}\dots u^{k_{n}}+\bar{g}_{\mu\nu,qrk_{1}\dots k_{n-2}}^{\phantom{\mu\nu,qr}\rho\sigma}u^{q}_{\rho}u^{r}_{\sigma}u^{k_{1}}\dots u^{k_{n-2}}+\dots\bigg{)}dx^{\mu}\otimes dx^{\nu}+\dots \tag{114}\]
### Geometrizing amplitudes on the jet bundle
We are now almost ready to extract the lowest degree \(n\)-point amplitudes from the expansion (108) by recalling our definition of the Lagrangian
\[\mathcal{L}=\frac{1}{2}\langle\eta^{-1},(j^{1}\phi)^{*}g\rangle \tag{109}\]
where composing the section \(j^{1}\phi\) with the coordinate charts yields
\[x^{\mu}\circ j^{1}\phi=x^{\mu}\,,\] \[u^{i}\circ j^{1}\phi=\phi^{i}\,, \tag{110}\] \[u^{i}_{\mu}\circ j^{1}\phi=\partial_{\mu}\phi^{i}\,.\]
For the propagator we find
\[\frac{i}{\bar{g}_{ij}\,p^{2}+\tfrac{\eta^{\mu\nu}}{2}\bar{g}_{\mu\nu,ij}+\tfrac{\eta^{\mu\nu}}{2}\bar{g}_{\mu\nu,ij}^{\phantom{\mu\nu,ij}\rho\sigma}p_{\rho}p_{\sigma}+2\left(\bar{g}_{i\mu,j}^{\phantom{i\mu,j}\nu}-\bar{g}^{\nu}_{i\mu,j}\right)p_{\nu}p^{\mu}+\bar{g}^{\mu\nu}_{ij}\,p^{2}p_{\mu}p_{\nu}} \tag{111}\]
Requiring a canonical 2-derivative term enforces the relation
\[\bar{g}_{ij}+\frac{1}{2d}\eta^{\mu\nu}\eta_{\rho\sigma}\bar{g}_{\mu\nu,ij}^{ \phantom{\mu\nu,ij}\rho\sigma}+\frac{2}{d}\delta^{\mu}_{\nu}\left(\bar{g}_{i \mu,j}^{\phantom{\mu,j}\nu}-\bar{g}^{\nu}_{i\mu,j}\right)=\delta_{ij}\,, \tag{112}\]
reproducing _e.g._ (106), which can be thought of as a physics constraint on the jet bundle geometry. More precisely, for a metric that describes a physical EFT, we must be able to find a section \(\phi\) such that the kinetic term is canonically normalised in this way, and it is implicit that we have already performed this change of section.
For \(\bar{g}^{\mu\nu}_{ij}\neq 0\), avoiding tachyonic instabilities in the propagator [83] additionally requires that
\[\eta_{\mu\nu}\bar{g}^{\mu\nu}_{ij}=\begin{pmatrix}\lambda_{1}&0&\ldots&0\\ 0&\lambda_{2}&\ldots&0\\ \ldots&\ldots&\ldots&\ldots\\ 0&0&\ldots&\lambda_{n}\end{pmatrix},\quad\lambda_{i}<0\;\;\forall i\in\{1, \ldots,n=\dim(M)\}\,, \tag{113}\]
Again, it is implicit that we have done a derivative field redefinition, or change of section, to further bring \(\eta_{\mu\nu}\bar{g}^{\mu\nu}_{ij}\) into diagonal form, which is always possible up to order \(p^{6}\) corrections. This inequality partly fixes the _signature_ of the jet bundle metric (in the derivative-coordinate directions), and is the manifestation of _positivity_ in our geometric formulation. For example, in the case of a single real scalar in 4d (SS6.2), and using the parametrization (100), the above conditions become simply
\[\mathcal{F}_{0}(0)=1 (\text{2-derivative term}), \tag{114}\] \[\mathcal{F}_{1}(0)<0 (\text{4-derivative term})\,, \tag{115}\]
with the latter being a known positivity bound [83].
Continuing, the 3-point amplitude is40
Footnote 40: In these expressions for the amplitudes, a ‘dot product’ denotes a contraction in spacetime indices, _i.e._\(p_{q}\cdot p_{r}:=\eta^{\mu\nu}(p_{q})_{\mu}(p_{r})_{\nu}\). Furthermore, Lorentz symmetry, together with the symmetry properties of the metric, fixes the space-time structure of metric components. For example, \(\bar{g}_{\mu\nu,ijk}\eta^{\mu\nu}\sim\eta^{\alpha\beta}\) and \(\bar{g}_{\mu,jk}\sim\delta^{\alpha}_{\mu}\).
The Riemann tensor, the Ricci tensor, the Ricci scalar, and the metric itself are all objects we can compute on the jet manifold. All of them can only be related to physical fields after being pulled back along a section \(j^{1}\phi\) of the jet bundle, to form a Lagrangian functional of \(\phi\).
Since a smooth map moves points around on the manifold, it should be clear that none of these objects are invariant under such a map. We have seen many examples of this throughout the paper. For the Riemann and Ricci tensors, the story is the same one we saw back in SS2, whereby a smooth map changes the metric tensor \(g\) (see _e.g._ Eq. (111)). The story for the Ricci scalar (which has no 'components') is perhaps more subtle and thus deserving of a closer look. The isometry invariance of the Ricci scalar (as is relevant, for example, in showing that it is invariant under coordinate transformations) follows simply from the manipulations
\[f^{*}R((f^{*})^{-1}g)=R(f^{*}(f^{*})^{-1}g)=R(g)\,. \tag{114}\]
In contrast, under a smooth map \(f\) (that actually moves points, rather than just relabelling them) we do not have the compensating factor of \((f^{*})^{-1}\), thus giving
\[f^{*}R(g)=R(f^{*}g)\neq R(g)\,, \tag{115}\]
unless, of course, the smooth map \(f\) is itself an isometry with respect to the metric \(g\) (in this case it would be a 'non-trivial isometry', like the internal \(O(4)\) symmetry we have discussed in the context of the Higgs EFTs). If we further take the smooth map \(f\) to be a diffeomorphism, which we have seen in preceding Sections can replicate a _non-derivative field redefinition_, then \(f\) will generally change the values of the curvature invariants - but the requirement that \(f\) and \(f^{-1}\) be well defined and differentiable ensures that a singularity cannot be introduced by \(f\). This helps to make sense of the criterion suggested in Ref. [33] for determining whether SMEFT is enough.
More generally, a field redefinition can be viewed as a change of section, which is more general than a diffeomorphism (indeed, it cannot in general be implemented as a bundle morphism, as discussed quite generally in SS3.3.5). Such a derivative change of section _can_ introduce a singularity into any of the curvature invariants, which is precisely the case in the example discussed in Appendix E of [33]. Finally, we observe that a change of basis at the level of the Lagrangian is typically done through a field redefinition involving derivatives (such as the '\(\Box\phi\)'-dependent redefinitions we frequently encountered in this paper), which can completely change the values of curvature invariants, and potentially even introduce singularities. This makes the use of curvature invariants a somewhat unreliable (_i.e._ fundamentally basis-dependent) way of comparing theories at the level of the Lagrangian.41
Footnote 41: We nonetheless offer some related comments concerning curvature invariants, and their role in detecting a certain subset of redundancies in our metric to Lagrangian maps, in Appendix A.
## 9 Conclusion
In this paper we used geometry on jet bundles, which are a sequence of manifolds that one can construct given a set of scalar fields \(\phi^{i}\) for which spacetime derivatives \(\partial_{\mu_{1}\ldots\mu_{r}}\phi^{i}\) are treated as independent coordinates, as a source of Lagrangians defining scalar field
theories. In particular, we showed that Lagrangians containing operators with up to 4 spacetime derivatives can be obtained by pulling back a metric on the appropriate 1-jet bundle given the field content of the theory. The jet bundle geometry is taken to be the most general geometry that is invariant under the symmetries of the EFT, which we saw are naturally extended (or 'prolongated') to the jet bundle.
In all the examples we have explored (SS6 and SS7), the Lagrangian obtained by pulling back the 1-jet bundle metric contains a complete basis of operators up to 4 derivatives, where all IBP redundancies have been removed. The IBP redundancies are accounted for automatically in the formalism, meaning that the jet bundle metric naturally pulls back to a Lagrangian without \(\Box\phi\) terms - one could say that the jet bundle metric naturally takes us to the most general possible EFT Lagrangian (with up to 4-derivatives) expressed in a Green's basis. On the other hand, equation-of-motion redundancies, which stem from invariance of the \(S\)-matrix under certain derivative field redefinitions, can be identified and implemented, but in a more _ad hoc_ fashion. An important thing to stress, in our view, is that the map from 1-jet metrics to 4-derivative EFT Lagrangians surjects, in the sense that one can find metrics that map to every possible EFT. We suspect that this statement can be generalized to invariant Lagrangians with up to \(2(r+1)\) derivatives, in correspondence with invariant metrics on the \(r\)-jet bundle - but we leave a proof to future work.
Along the way, we hope to have clarified various issues related to field redefinitions, which we understand to be smooth changes of (possibly prolongated) section of the (jet) bundle. This applies to both non-derivative and derivative field redefinitions. In the former case, one is afforded an equivalent description in terms of bundle morphisms, which means the field redefinition can be implemented by doing a diffeomorphism on the jet bundle metric. We find no such correspondence between derivative changes of section and bundle morphisms.
Finally, we expand our metric in the vicinity of a point in the 1-jet bundle. Translating this to the Lagrangian, we organise an expansion of \(\mathcal{L}\) in terms of higher- and higher-point interactions, inspired by the ideas of [34], keeping all interactions with up to 4 derivatives. This facilitates a connection between components of the 1-jet bundle metric (and its derivatives) and propagators/amplitudes of the EFT. We do not elucidate this correspondence comprehensively in this paper, but we already highlight some nice features - for example, we obtain the lowest-point correlation functions explicitly, and infer _e.g._ how positivity (unitarity) constrains the signature of the jet bundle metric.
In the future, we aim to investigate more systematically the structure of amplitudes in this jet-bundle formalism, especially in the important case of the Higgs EFTs. Other elements that we plan to incorporate into our jet bundle formalism, to describe higher-derivative EFTs geometrically, include the incorporation of fermions, and the (partial) gauging of symmetries - both of which are necessary to connect with the rich phenomenology of the EFTs that describe elementary particles at the energy frontier.
###### Acknowledgements.
JD is grateful to Ben Gripaios and Joseph Tooby-Smith for passing on their knowledge of jet bundles a number of years ago. JD further thanks J. Tooby-Smith for comments on the first version of this manuscript, and Nakarin Lohitsiri for discussions. IB thanks J.C. Criado, R.V. Harlander and M. Schaaf for their support with the validation of the dimension-10 and 12 SMEFT bases. We thank Javier Lizana, Ken Mimasu, Dave Sutherland, Tevong You, and other participants of the HEFT 2023 workshop, for helpful discussions. We are also grateful to Nathaniel Craig and Yu-Tse Lee for their patience in waiting for us to complete a draft of our manuscript, and to coordinate with their release of [46]. The work of JD is funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement 833280 (FLAY), and by the Swiss National Science Foundation (SNF) under contract 200020-204428. IB and MA acknowledge funding from SNF through the PRIMA grant no. 201508.
## Appendix A Diagnosing redundancies in the metric using invariants
Although the geometric invariants are unlikely to be useful when it comes to distinguishing Lagrangians, they do have an application when it comes to redundancies at the level of the metric.
Our expression for the Lagrangian is coordinate free, which has the benefit of allowing us to express the metric in our preferred coordinate system, but introduces ambiguities as it is difficult at a first glance to determine whether two metrics \(g\) and \(g^{\prime}\) will yield the same Lagrangian. We saw earlier that a coordinate transformation leaves the Lagrangian unchanged, thus if we can show that \(g\) and \(g^{\prime}\) are related by a coordinate transformation, then we know that they will generate the same Lagrangian. Writing down the most general coordinate transformation is quite challenging and quickly becomes unreasonable as the dimension of our jet manifold increases. Comparing scalar geometric invariants gives an approach to see if two metrics are dissimilar [90].
Scalar geometric invariants are coordinate independent, but they are typically expressed in coordinates making them difficult to compare directly. However, an alternative approach exists. One can start by calculating several scalar invariants, such as:
* Ricci scalar: \(\mathcal{R}_{1}=g^{\mu\nu}g^{\rho\sigma}R_{\mu\nu\rho\sigma}\)
* Kretschmann scalar: \(\mathcal{R}_{2}=R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}\)
* Cubic curvature invariant: \(\mathcal{R}_{3}=R_{\alpha\beta}^{\mu\nu}R_{\rho\sigma}^{\alpha\beta}R_{\mu \nu}^{\rho\sigma}\)
Then one can establish functional relations among these invariants. If at any point the functional relations of the two metrics disagree, then we conclude that they are not related by a coordinate transformation. However, agreement in all computed functional relations is not proof that the metrics are related by a transformation, since there is an infinite number of scalar invariants.
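As an illustration of this procedure (not one of the jet bundle metrics considered in the text), the following sympy sketch computes \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) for the round 2-sphere and exhibits the functional relation \(\mathcal{R}_{2}=\mathcal{R}_{1}^{2}\) satisfied by that geometry; a candidate metric failing such a relation could not be related to it by a coordinate transformation.

```python
import sympy as sp

th, ph, a = sp.symbols('theta phi a', positive=True)
coords = [th, ph]
n = len(coords)
g = sp.Matrix([[a**2, 0], [0, a**2 * sp.sin(th)**2]])   # round 2-sphere of radius a
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk}
Gam = [[[sp.simplify(sum(ginv[i, m] * (sp.diff(g[m, j], coords[k]) + sp.diff(g[m, k], coords[j])
                                       - sp.diff(g[j, k], coords[m])) / 2 for m in range(n)))
         for k in range(n)] for j in range(n)] for i in range(n)]

# Riemann tensor R^i_{jkl}
def riem(i, j, k, l):
    expr = sp.diff(Gam[i][j][l], coords[k]) - sp.diff(Gam[i][j][k], coords[l])
    expr += sum(Gam[i][k][m] * Gam[m][j][l] - Gam[i][l][m] * Gam[m][j][k] for m in range(n))
    return sp.simplify(expr)

R = [[[[riem(i, j, k, l) for l in range(n)] for k in range(n)] for j in range(n)] for i in range(n)]
Rdown = [[[[sp.simplify(sum(g[i, m] * R[m][j][k][l] for m in range(n)))
            for l in range(n)] for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci scalar R1 and Kretschmann scalar R2
Ricci = [[sum(R[i][j][i][l] for i in range(n)) for l in range(n)] for j in range(n)]
R1 = sp.simplify(sum(ginv[j, l] * Ricci[j][l] for j in range(n) for l in range(n)))
R2 = sp.simplify(sum(ginv[i, p] * ginv[j, q] * ginv[k, s] * ginv[l, t]
                     * Rdown[i][j][k][l] * Rdown[p][q][s][t]
                     for i in range(n) for j in range(n) for k in range(n) for l in range(n)
                     for p in range(n) for q in range(n) for s in range(n) for t in range(n)))
print(R1, R2, sp.simplify(R2 - R1**2))   # 2/a**2, 4/a**4, 0: here R2 = R1^2 identically
```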
In Eq. (6.16) we saw that several components of the metric pull back to the same term, resulting in redundancies in the final expression. To identify the terms that could be removed by a coordinate transformation it is not enough to compare scalar invariants; rather, we need a way to prove that two metrics generate the same geometry, which can be done using the Cartan-Karlhede algorithm [90; 91; 92; 93; 94].
The first step in applying the formalism is to choose a symmetric metric with constant signature (_e.g._\(\delta_{ij},\eta_{ij},\dots\)) and introduce a frame of vector fields \(\{e_{a}=e_{a}^{i}\partial_{i}\}\) such that
\[g_{ij}e_{a}^{i}e_{b}^{j}=\delta_{ab}\] (A.1)
The choice of frame is not unique, since
\[e_{i}^{a}\to e_{i}^{\prime a}=\Lambda_{b}^{a}e_{i}^{b}\] (A.2)
with
\[\delta_{ab}\Lambda_{c}^{a}\Lambda_{d}^{b}=\delta_{cd}\] (A.3)
is also a valid frame of vector fields.
The matrices \(\Lambda\) form a Lie group with dimension \(\frac{n(n-1)}{2}\). Now we compute the Riemann tensor and project it along these frames looking for the subgroup \(H_{0}\) of matrices that preserve the form of the Riemann tensor
\[R_{abcd}=R^{\prime}_{abcd}\] (A.4)
and determine the number \(\tau_{0}\) of independent parameters upon which the components of the Riemann tensor depend.
Afterwards we repeat the procedure for the covariant derivatives of the Riemann tensor and determine subgroups \(H_{1},H_{2},\dots\) as well as numbers of independent variables \(\tau_{1},\tau_{2},\dots\), stopping once we have reached an order \(q\) at which \(H_{q}=H_{q-1}\) and \(\tau_{q}=\tau_{q-1}\). If a frame exists at this order such that all the derivatives of the Riemann tensor for each of the two metrics \(g\) and \(g^{\prime}\) agree, then we conclude that they are equivalent and produce the same geometry and thus yield the same Lagrangian upon pullback.
The redundancies captured by this algorithm are only the ones that could be removed by a coordinate transformation. Redundancies that need to be removed via integration by parts relations and field redefinitions are only seen after pullback and thus are not related to coordinate transformations.
## Appendix B Higher jet bundles for higher derivatives: a completeness proof
One of the main points of this paper is that scalar EFTs with up to 4 derivatives admit a geometric interpretation on a 1-jet bundle. More precisely, we showed in a number of explicit examples that the Lagrangian obtained pulling back to spacetime a 1-jet bundle metric always contains a redundant set of effective operators with 0, 2 and 4 derivatives, that can be reduced down to a minimal basis by applying a small number of IBP (and EOM, if desired) relations.
In this Appendix we provide a general proof that the Lagrangian obtained by pulling back to spacetime the most general \(r\)-jet bundle metric always contains a redundant set of effective operators with up to \(2(r+1)\) derivatives and arbitrary number of field insertions. Moreover, within this set, it is always possible to identify a complete and non-redundant Green's basis that does not contain operators with boxes.
This result proves that the empirical observations in SSSS6, 7 generalize to any scalar theory, and that operators with 6 or more derivatives also admit a geometric interpretation in terms of higher-jet bundle metrics. The proof formally holds for a Lorentz-invariant scalar EFT in any number of spacetime dimensions (with Lorentz indices being everywhere contracted via the Minkowski metric, as justified generally in SS5.5.2), although the interpretation of the EFT expansion is of course different in different dimensions.42
Footnote 42: For example, in the special case of 2d EFTs, the scalar field is dimensionless and so the counting of derivatives coincides precisely with the mass dimension counting of EFT operators.
Our argument consists of two points, that we prove individually below:
1. Consider pure scalar interactions among \(N\) fields and with a total of \(2(r+1)\) derivatives. It is always possible to construct a complete and IBP-non-redundant basis of operators at this order, that does not contain operators with boxes, nor operators with more than \((r+1)\) derivatives acting on a single field.
2. Any operator containing an arbitrary number of scalar fields \(N\) and up to \(2(r+1)\) derivatives, such that no more than \((r+1)\) derivatives act on one field and that boxes are absent, can be obtained by pulling back to spacetime the most general metric of a \(r\)-jet bundle.
Put together, these statements imply that a non-redundant Green's basis can always be found within the set of box-less operators with \(N\) fields and up to \(2(r+1)\) derivatives produced by the \(r\)-jet bundle metric. Vice versa, any arbitrary scalar EFT Lagrangian always admits a geometric interpretation on the appropriate jet bundle.
In practice, the metric will always produce a redundant Lagrangian, that contains all possible box-less structures with up to \(2(r+1)\) derivatives, plus some (but not all!) of those with boxes. In general, there will be several options to reduce the resulting operator set down to a minimal basis. Here we choose, as an algorithmic rule, to remove all operators with boxes. This choice is convenient for a number of reasons: (i) statement 1. above guarantees that boxes can always be safely removed in any scalar theory at any order; (ii) statement 2. above ensures that a box-less basis will be obtained directly from the jet bundle metric, without requiring manipulations at the Lagrangian level; (iii) empirically we find that this choice removes most ambiguities in the basis construction. In fact, in all the examples considered, there turned out to be a unique basis satisfying the box-less requirement; (iv) any alternative choice aimed at retaining operators with boxes would need to distinguish between box operators that can and cannot be obtained from the metric, making the construction more cumbersome.
In the statements above we are also requesting the absence of operators with more than half of the derivatives \((r+1)\) acting on a single field. This condition is strictly necessary
in order to obtain the Lagrangian from a \(r\)-jet bundle. In principle one could have evaded this rule by choosing a higher-jet bundle, which however would have also produced terms with more than \(2(r+1)\) derivatives. It is interesting that the minimal (and more natural) choice of a \(r\)-jet bundle suffices to produce a complete basis.
In the next two subsections we provide proofs of points 1. and 2. above. The results in this Appendix hold independently of the number of scalar flavours present and on whether an internal symmetry is imposed or not, as they are insensitive to the presence of flavour indices on the scalar fields. Arbitrary contractions of internal indices can always be inserted appropriately in the metric functions.
### Proof of point 1.
Operators with more than \((r+1)\) derivatives on a single field can always be removed by trivially moving derivatives across the operator via IBP, until there is a maximum of \((r+1)\) derivatives on each field.43 Therefore they are always redundant.
Footnote 43: Of course, this requires at least 2 fields to be inserted in one operator. On the other hand, a term with 1 field only would automatically be a total derivative.
Operators with boxes acting on one or more fields can also be removed similarly, by using IBP to move one of the two derivatives contracted in a box to other fields. This operation requires to apply, in sequence, an IBP for each box, _e.g._
\[(\partial_{\mu}\Box\phi^{i_{1}})(\Box\phi^{i_{2}})(\partial^{\mu }\phi^{i_{3}})A(\phi) =-(\partial_{\mu}\partial_{\rho}\phi^{i_{1}})(\Box\phi^{i_{2}})( \partial^{\mu}\phi^{i_{3}})(\partial^{\rho}A(\phi))+\ldots \tag{114}\] \[=(\partial_{\sigma}\partial_{\mu}\partial_{\rho}\phi^{i_{1}})( \partial^{\sigma}\phi^{i_{2}})(\partial^{\mu}\phi^{i_{3}})(\partial^{\rho}A( \phi))+\ldots\]
where \(A\) is some analytic function of the fields, \(i_{k}\) is a potential flavour index carried by the \(k\)-th field, and the dots stand for other terms stemming from the IBPs. A key point is that this operation can never reintroduce boxes because, at each step, it moves a derivative with a different Lorentz index. Therefore operators with boxes can always be fully removed from an operator basis.
To conclude our proof, we need to check that the two redundancy classes can be treated independently, _i.e._ that the IBPs used to remove boxes do not re-introduce operators with more than \((r+1)\) derivatives on a single field. Let us consider an operator with \(N\) fields and \(2(r+1)\) derivatives, such that \(d_{k}\) derivatives act on the \(k\)-th field, and they are contracted to form \(b_{k}\) boxes, with \(2b_{k}\leq d_{k}\).
The boxes can be removed with a sequence of IBPs as shown above. They send
\[d_{k}\to d_{k}-b_{k}+s\,,\] with \[0\leq s\leq\sum_{j\neq k}b_{j}\,. \tag{115}\]
The term \(-b_{k}\) accounts for the fact that, for every box on the \(k\)-th field, one derivative is removed and placed elsewhere. In addition, the \(k\)-th field receives \(s\) derivatives coming from other boxes in the product. IBPs replace the initial operator with a sum over several terms, in which \(s\) takes all possible values in a range from 0 to the total number of boxes originally acting on fields other than the \(k\)-th. The latter number is bounded by
\[\sum_{j\neq k}b_{j}\leq\frac{2(r+1)-d_{k}-(d_{k}-2b_{k})}{2}=r-d_{k}+b_{k}+1\,. \tag{116}\]
The numerator on the righthand side counts how many Lorentz indices on \(\phi_{j\neq k}\) can be contracted into boxes: there are at most \(2(r+1)-d_{k}\), from which we need to subtract \((d_{k}-2b_{k})\) that must be contracted with open derivatives on \(\phi_{k}\). Putting together (114) and (115) we see that
\[d_{k}-b_{k}+s\leq d_{k}-b_{k}+\sum_{j\neq k}b_{j}\leq r+1\,. \tag{116}\]
This proves that the IBP required to remove boxes can never reintroduce terms with more than \(r+1\) derivatives on one field. Therefore both classes of operators can always be fully removed using IBPs, _i.e._ it is always possible to construct a complete and non-redundant Green's basis of operators that does not contain either.
### Proof of point 2.
To prove point 2, we write down the most general \(r\)-jet bundle metric, extending Eq. (112):
\[g^{(r)}=\left(dx^{\mu}\ du^{i}\ du^{i}_{\mu_{1}}\ du^{i}_{\mu_{1}\mu_{2}}\ \cdots\ du^{i}_{\mu_{1}\ldots\mu_{r}}\right)[g^{(r)}]\left(\begin{array}{c}dx^{\nu}\\ du^{j}\\ du^{j}_{\nu_{1}}\\ du^{j}_{\nu_{1}\nu_{2}}\\ \vdots\\ du^{j}_{\nu_{1}\ldots\nu_{r}}\end{array}\right)\,, \tag{117}\] \[[g^{(r)}]=\left(\begin{array}{cccccc}g_{\mu\nu}&g_{\mu j}&g^{\nu_{1}}_{\mu j}&g^{\nu_{1}\nu_{2}}_{\mu j}&\cdots&g^{\nu_{1}\ldots\nu_{r}}_{\mu j}\\ g_{\nu i}&g_{ij}&g^{\nu_{1}}_{ij}&g^{\nu_{1}\nu_{2}}_{ij}&\cdots&g^{\nu_{1}\ldots\nu_{r}}_{ij}\\ g^{\mu_{1}}_{\nu i}&g^{\mu_{1}}_{ij}&g^{\mu_{1}\nu_{1}}_{ij}&g^{\mu_{1}\nu_{1}\nu_{2}}_{ij}&\cdots&g^{\mu_{1}\nu_{1}\ldots\nu_{r}}_{ij}\\ g^{\mu_{1}\mu_{2}}_{\nu i}&g^{\mu_{1}\mu_{2}}_{ij}&g^{\mu_{1}\mu_{2}\nu_{1}}_{ij}&g^{\mu_{1}\mu_{2}\nu_{1}\nu_{2}}_{ij}&\cdots&g^{\mu_{1}\mu_{2}\nu_{1}\ldots\nu_{r}}_{ij}\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ g^{\mu_{1}\ldots\mu_{r}}_{\nu i}&g^{\mu_{1}\ldots\mu_{r}}_{ij}&g^{\mu_{1}\ldots\mu_{r}\nu_{1}}_{ij}&g^{\mu_{1}\ldots\mu_{r}\nu_{1}\nu_{2}}_{ij}&\cdots&g^{\mu_{1}\ldots\mu_{r}\nu_{1}\ldots\nu_{r}}_{ij}\end{array}\right)\,, \tag{118}\]
where implicitly all the metric entries are functions of \(u^{i},u^{i}_{\mu_{1}},\ldots u^{i}_{\mu_{1}\ldots\mu_{r}}\) and we can assume
\[g_{\nu i}=g_{\mu j}\,, g^{\nu_{1}\ldots\nu_{s}}_{\mu j}=g^{\mu_{1}\ldots\mu_{s}}_{\nu i }\,, \tag{119}\]
upon relabeling indices, by symmetry of the metric tensor.
Pulling this metric back to spacetime along \(j^{r}\phi\) and contracting with \(\eta^{-1}\) gives
\[\mathcal{L}[\phi,g^{(r)}] =\frac{1}{2}\eta^{\mu\nu}g_{\mu\nu}+\frac{1}{2}g_{ij}(\partial_{ \mu}\phi^{i})(\partial^{\mu}\phi^{j})+g_{\mu j}(\partial^{\mu}\phi^{j}) \tag{120}\] \[+\sum_{s=1}^{r}g^{\nu_{1}\ldots\nu_{s}}_{\mu j}(\partial^{\mu} \partial_{\nu_{1}}\cdots\partial_{\nu_{s}}\phi^{j})+\frac{1}{2}g^{\mu_{1} \ldots\mu_{s}}_{ij}(\partial_{\rho}\partial_{\mu_{1}}\cdots\partial_{\mu_{s}} \phi^{i})(\partial^{\rho}\phi^{j})+\frac{1}{2}g^{\nu_{1}\ldots\nu_{s}}_{ij}( \partial_{\rho}\phi^{i})(\partial^{\rho}\partial_{\nu_{1}}\cdots\partial_{\nu _{s}}\phi^{j})\] \[+\frac{1}{2}\sum_{m=1}^{r}\sum_{s=1}^{r}g^{\mu_{1}\ldots\mu_{m} \nu_{1}\ldots\nu_{s}}_{ij}(\partial_{\rho}\partial_{\mu_{1}}\cdots\partial_{ \mu_{m}}\phi^{i})(\partial^{\rho}\partial_{\nu_{1}}\cdots\partial_{\nu_{s}} \phi^{j})\,.\]
At this point it is easy to show that any effective operator with exactly \(2(r+1)\) derivatives and an arbitrary number of fields \(N\) can be matched to at least one of the terms in Eq. (114).
To do so, it is convenient to classify the operators according to the number of derivatives acting on each field, namely a string
\[d_{1}d_{2}\dots d_{N}\,,\] such that \[\sum_{k}d_{k}=2(r+1)\,, \tag{115}\]
that we take to be sorted from largest to smallest \(d_{k}\). In practice, the operator classes are labeled by the integer partitions of \(2(r+1)\) into at most \(N\) terms that do not contain numbers higher than \((r+1)\). For instance, with 8 derivatives and 4 fields, there are 8 classes:
\[4400\,, 4310\,, 4220\,, 4211\,, \tag{116}\] \[3320\,, 3311\,, 3221\,, 2222\,. \tag{117}\]
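These classes are straightforward to enumerate by computer; the following short Python sketch (ours, purely illustrative) lists the admissible integer partitions and reproduces the 8 classes above.

```python
def classes(n_derivs, n_fields, max_per_field):
    """Integer partitions of n_derivs into at most n_fields parts, each <= max_per_field,
    padded with zeros: the strings d_1 d_2 ... d_N labelling the operator classes."""
    out = []
    def rec(remaining, cap, partial):
        if remaining == 0:
            out.append(partial + [0] * (n_fields - len(partial)))
            return
        if len(partial) == n_fields:
            return
        for part in range(min(remaining, cap), 0, -1):
            rec(remaining - part, part, partial + [part])
    rec(n_derivs, max_per_field, [])
    return out

# 2(r+1) = 8 derivatives distributed over N = 4 fields, at most r+1 = 4 per field:
print(["".join(map(str, c)) for c in classes(8, 4, 4)])
# ['4400', '4310', '4220', '4211', '3320', '3311', '3221', '2222']
```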
Each class can be easily put in correspondence with a metric element. For instance, operators of class \((r+1)(r+1)0\dots\) can be obtained by pulling back \(g^{\mu_{1}\dots\mu_{r}\nu_{1}\dots\nu_{r}}_{ij}\). In general, operators of each class can be obtained by pulling back multiple metric elements, that give identical expressions. To prove our hypothesis, it is enough to find one example for each, which is particularly simple in this notation:
\[\begin{split} g^{\mu_{1}\dots\mu_{m-1}\nu_{1}\dots\nu_{s-1}}_{ij}&\to m\,s\dots\qquad 2\leq m,s\leq r+1\,,\\ g^{\mu_{1}\dots\mu_{s-1}}_{ij}&\to s\,1\dots\qquad\ \,2\leq s\leq r+1\,,\\ g_{ij}&\to 1\,1\,1\dots\end{split}\]
where the dots stand for factors in the integer decomposition of \(2(r+1)-m-s\). For instance, considering the example above with \(r=3,N=4\):
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}\nu_{2}\nu_{3}}_{ij} \to 4400\,, g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}\nu_{2}}_{ij} \to 4310\,, g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}}_{ij} \to 4220,4211\,,\] \[g^{\mu_{1}\mu_{2}\nu_{1}\nu_{2}}_{ij} \to 3320,3311\,, g^{\mu_{1}\mu_{2}\nu_{1}}_{ij} \to 3221\,, g^{\mu_{1}\nu_{1}}_{ij} \to 2222\,. \tag{118}\]
This correspondence assumes that the metric entries are general functions of \(u^{i},u^{i}_{\mu},\dots u^{i}_{\mu_{1}\dots\mu_{r}}\), such that insertions of fields and derivatives in the \((N-2)\) rightmost terms in the string can be obtained by extracting the appropriate dependence of the metric on these variables. Let us translate this into concrete examples for the cases in (118). For simplicity, we omit the dependence on \(\Lambda\) and take the case of a single real scalar:
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}\nu_{2}\nu_{3}}_{uu}=\eta^{\mu_{1}\nu_{1}}\eta^{\mu_{2}\nu_{2}}\eta^{\mu_{3}\nu_{3}}u^{2}+\dots \to (\partial_{\mu}\partial_{\nu}\partial_{\rho}\partial_{\sigma}\phi)(\partial^{\mu}\partial^{\nu}\partial^{\rho}\partial^{\sigma}\phi)\phi^{2}\,, \tag{119}\]
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}\nu_{2}}_{uu}=\eta^{\mu_{1}\nu_{1}}\eta^{\mu_{2}\nu_{2}}u^{\mu_{3}}u+\dots \to (\partial_{\mu}\partial_{\nu}\partial_{\rho}\partial_{\sigma}\phi)(\partial^{\mu}\partial^{\nu}\partial^{\rho}\phi)(\partial^{\sigma}\phi)\phi\,, \tag{120}\]
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}}_{uu}=\eta^{\mu_{1}\nu_{1}}u^{\mu_{2}\mu_{3}}u+\dots \to (\partial_{\mu}\partial_{\nu}\partial_{\rho}\partial_{\sigma}\phi)(\partial^{\mu}\partial^{\nu}\phi)(\partial^{\rho}\partial^{\sigma}\phi)\phi\,, \tag{121}\]
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}}_{uu}=\eta^{\mu_{1}\nu_{1}}u^{\mu_{2}}u^{\mu_{3}}+\dots \to (\partial_{\mu}\partial_{\nu}\partial_{\rho}\partial_{\sigma}\phi)(\partial^{\mu}\partial^{\nu}\phi)(\partial^{\rho}\phi)(\partial^{\sigma}\phi)\,, \tag{122}\]
\[g^{\mu_{1}\mu_{2}\nu_{1}}_{uu}=\eta^{\mu_{1}\nu_{1}}u^{\mu_{2}\sigma}u_{\sigma}+\dots \to (\partial_{\mu}\partial_{\nu}\partial_{\rho}\phi)(\partial^{\mu}\partial^{\nu}\phi)(\partial^{\rho}\partial^{\sigma}\phi)(\partial_{\sigma}\phi)\,, \tag{123}\]
\[g^{\mu_{1}\nu_{1}}_{uu}=u^{\mu_{1}\sigma}u^{\nu_{1}}_{\sigma}+\dots \to (\partial_{\mu}\partial_{\nu}\phi)(\partial^{\mu}\partial^{\rho}\phi)(\partial_{\rho}\partial_{\sigma}\phi)(\partial^{\nu}\partial^{\sigma}\phi)\,. \tag{124}\]
With a larger number of field insertions, more classes are present and metric terms with fewer Lorentz indices become relevant as well, _e.g._:
\[g^{\mu_{1}\mu_{2}\mu_{3}}_{uu} =u^{\mu_{1}}u^{\mu_{2}}u^{\mu_{3}}+\ldots \rightarrow (\partial_{\mu}\partial_{\nu}\partial_{\rho}\partial_{\sigma} \phi)(\partial^{\mu}\phi)(\partial^{\nu}\phi)(\partial^{\rho}\phi)(\partial^{ \sigma}\phi)\,, \tag{111}\] \[g_{uu} =(u_{\rho})^{6}+\ldots \rightarrow (\partial_{\mu}\phi)^{8}\,. \tag{112}\]
Within each operator class, it is always possible to choose an explicit form of the metric function that gives any arbitrary Lorentz structure in the Lagrangian, provided that at least one pair of derivatives acting on two different fields are contracted with each other (the \(\rho\) indices in (110)). If we stick to box-less operators, then this condition is always verified. Therefore _any_ box-less operator with \(2(r+1)\) derivatives and maximum \((r+1)\) derivatives on each field can be obtained from a \(r\)-jet bundle metric. It is not hard to see that several operators _with_ boxes, that satisfy that minimal condition, can be generated as well. For instance, we could modify (109) into
\[g^{\mu_{1}\mu_{2}\mu_{3}\nu_{1}\nu_{2}}_{uu}=\eta^{\mu_{1}\mu_{2}}\eta^{\nu_{ 1}\nu_{2}}u^{\mu_{3}}+\ldots\qquad\quad\rightarrow\qquad(\partial_{\mu} \partial_{\sigma}\Box\phi)(\partial^{\mu}\Box\phi)(\partial^{\sigma}\phi)\,. \tag{113}\]
However, it is not possible to obtain structures such as \((\Box\phi)^{4}\).
In the presence of multiple scalar flavours, arbitrary index assignments or contractions can be achieved. For instance, taking \(r=2\), we can have \(O(n)\) invariant contractions:
\[g^{\mu_{1}\mu_{2}\nu_{1}\nu_{2}}_{ij} =\delta_{ij}\eta^{\mu_{1}\nu_{1}}\eta^{\mu_{2}\nu_{2}}+\ldots \rightarrow (\partial_{\mu}\partial_{\nu}\partial_{\rho}\phi\cdot\partial^{\mu}\partial^{\nu}\partial^{\rho}\phi)\,, \tag{114}\]
\[g^{\mu_{1}\nu_{1}}_{ij} =\delta_{ij}\eta^{\mu_{1}\nu_{1}}(u\cdot u_{\sigma})^{2}+\ldots \rightarrow (\partial_{\mu}\partial_{\nu}\phi\cdot\partial^{\mu}\partial^{\nu}\phi)(\phi\cdot\partial_{\sigma}\phi)^{2}\,, \tag{115}\]
\[g_{ij} =u_{i}u_{j}(u_{\rho}\cdot u^{\rho})(u_{\sigma}\cdot u^{\sigma})+\ldots \rightarrow (\phi\cdot\partial_{\mu}\phi)^{2}(\partial_{\rho}\phi\cdot\partial^{\rho}\phi)^{2}\,, \tag{116}\]
or arbitrary flavour assignments \(i_{1}i_{2}\ldots i_{N}\) without any symmetry imposed (in this case the indices are just labels, they do not need to be contracted)
\[g^{\mu_{1}\mu_{2}\nu_{1}\nu_{2}}_{ij} =\delta_{ii_{1}}\delta_{ji_{2}}\eta^{\mu_{1}\nu_{1}}\eta^{\mu_{2}\nu_{2}}+\ldots \rightarrow (\partial_{\mu}\partial_{\nu}\partial_{\rho}\phi^{i_{1}})(\partial^{\mu}\partial^{\nu}\partial^{\rho}\phi^{i_{2}})\,, \tag{117}\]
\[g^{\mu_{1}\nu_{1}}_{ij} =\delta_{ii_{1}}\delta_{ji_{2}}\eta^{\mu_{1}\nu_{1}}u^{i_{3}}_{\sigma}u^{i_{4}\sigma}+\ldots \rightarrow (\partial_{\mu}\partial_{\nu}\phi^{i_{1}})(\partial^{\mu}\partial^{\nu}\phi^{i_{2}})(\partial_{\sigma}\phi^{i_{3}})(\partial^{\sigma}\phi^{i_{4}})\,, \tag{118}\]
\[g_{ij} =\delta_{ii_{1}}\delta_{ji_{2}}u^{i_{3}}_{\rho}u^{i_{4}\rho}u^{i_{5}}_{\sigma}u^{i_{6}\sigma}+\ldots \rightarrow (\partial_{\mu}\phi^{i_{1}})(\partial^{\mu}\phi^{i_{2}})(\partial_{\rho}\phi^{i_{3}})(\partial^{\rho}\phi^{i_{4}})(\partial_{\sigma}\phi^{i_{5}})(\partial^{\sigma}\phi^{i_{6}})\,. \tag{119}\]
This concludes the proof for scalar operators with exactly \(2(r+1)\) derivatives. The last step is extending it to lower derivatives. This is trivial: by the argument just concluded, operators with \(2n<2(r+1)\) derivatives can be obtained from a \((n-1)\)-jet bundle metric. Since \((n-1)<r\), the latter is contained as a sub-block in the \(r\)-jet bundle metric (see Eq. (102)). Finally, operators without derivatives can always be obtained from \(g_{\mu\nu}\), in any \(r\)-jet bundle metric.
|
2306.17548 | Calorons, monopoles and stable, charged solitons | We discuss the similarity of the constituent monopoles of calorons and stable
topological solitons with long range Coulombic interaction, classical solutions
of the model of topological particles. In the interpretation as electric
charges they can be compared to electrons and positrons with spin up and down,
with quantised charge and finite mass. | Manfried Faber | 2023-06-30T11:04:48Z | http://arxiv.org/abs/2306.17548v1 | # Calorons, monopoles and stable, charged solitons
###### Abstract
We discuss the similarity of the constituent monopoles of calorons and stable topological solitons with long range Coulombic interaction, classical solutions of the model of topological particles. In the interpretation as electric charges they can be compared to electrons and positrons with spin up and down, with quantised charge and finite mass.
## 1 Caloron monopoles
An important property of SU(2) Yang-Mills theory is the existence of topologically different vacua, characterised by a winding number. There are solutions to the classical equations of motion of this theory in Euclidean space-time, instantons, describing transitions between neighbouring vacua. Instantons have minimal action with the action density concentrated around an event in 4D space-time. They are characterised by a topological quantum number, the topological charge [1]. Periodic boundary conditions in Euclidean time model field theories at finite temperatures, where the temperature \(T\) is proportional to the inverse time extent. The solutions of the Yang-Mills equations are modified by finite temperature \(T\). As shown by Kraan and van Baal [2] and Lee and Lu [3] finite \(T\) deforms instantons to periodic solutions, calorons. With increasing \(T\) calorons separate into constituents, monopoles (dyons), as can be nicely observed in the action density, see e.g. Fig.1 of Ref. [2]. These interesting solutions fire our imagination due to the similarity to localised quantised charges in electrodynamics, atomic and nuclear physics.
On the 4D Euclidean lattice with the gauge field \(U_{\mu}(x)\in SU(2)\) defined on links, calorons are characterised by the Polyakov loops in terms of matrices
\[Q(\vec{x}):=\Pi_{i=1}^{N_{t}}U_{4}(\vec{x},t_{i})=q_{0}(\vec{x})-{\rm i}\vec{ \sigma}\vec{q}(\vec{x}),\quad q_{0}^{2}+\vec{q}^{\,2}=1, \tag{1}\]
by the distribution of these "unit quaternions" in 3D space. The constituent monopoles get equal size for \(q_{0}(\infty)=0\). Then their imaginary part \(\vec{q}\) has hedgehog form, as depicted in Table 1. There are four topologically different constituents, dubbed \(M,\bar{M}\) and \(L,\bar{L}\)[4] according to their electric \(e\) and magnetic \(m\) charges in SU(2) Yang-Mills theory.
In the rest of this article we show that with an appropriate Lagrangian the configurations depicted in Table 1 can be compared to quantised electric charges, to electrons and positrons with spin up and down without any divergencies, which QED is plagued with.
## 2 Formulation of the model of topological particles (MTP)
As described in Ref. [5] MTP uses the SO(3) degrees of freedom (dofs) of spatial Dreibeins to describe electromagnetic phenomena. The calculations are simplified by using SU(2) matrices,
\[Q(x)=\mathrm{e}^{-\mathrm{i}\alpha(x)\vec{\sigma}\vec{n}(x)}=\underbrace{\cos \alpha(x)}_{q_{0}(x)}-\mathrm{i}\vec{\sigma}\underbrace{\vec{n}\sin\alpha(x)}_ {\vec{q}(x)}\in SU(2)\cong\mathbb{S}^{3} \tag{2}\]
in Minkowski space-time as field variables, where arrows indicate vectors in the 3D algebra of su(2) with the basis vectors \(\sigma_{i}\), the Pauli matrices. The Lagrangian of MTP reads,
\[\mathcal{L}_{\mathrm{MTP}}(x):=-\frac{\alpha_{f}\hbar c}{4\pi}\left(\frac{1}{ 4}\,\vec{R}_{\mu\nu}(x)\vec{R}^{\mu\nu}(x)+\Lambda(x)\right)\quad\mathrm{with }\quad\Lambda(x):=\frac{q_{0}^{6}(x)}{r_{0}^{4}}, \tag{3}\]
with the curvature tensor \(\vec{R}_{\mu\nu}\) and the affine connection \(\vec{\Gamma}_{\mu}\)
\[\vec{R}_{\mu\nu}:=\vec{\Gamma}_{\mu}\times\vec{\Gamma}_{\nu}\quad\mathrm{and }\quad\left(\partial_{\mu}Q\right)Q^{\dagger}=:-\mathrm{i}\vec{\sigma}\vec{ \Gamma}_{\mu}. \tag{4}\]
We get contact to nature by defining the electromagnetic field strength tensor by
\[\vec{F}_{\mu\nu}:=-\frac{e_{0}}{4\pi\epsilon_{0}c_{0}}\,\vec{R}_{\mu\nu}. \tag{5}\]
MTP has four different classes of topologically stable single-soliton configurations [5], represented by
\[n_{i}(x)=\pm\frac{x^{i}}{r},\quad\alpha(x)=\frac{\pi}{2}\mp\arctan\frac{r_{0}}{r }=\begin{cases}\arctan\frac{r}{r_{0}}\\ \pi-\arctan\frac{r}{r_{0}}\end{cases} \tag{6}\]
The diagrams for the imaginary part of the Q-fields (2) agree with the diagrams for the caloron constituents in Table 1, but with another interpretation. The four classes differ in two quantum numbers related to charge and spin, as explained in Sect. 3. The configurations within each of the four classes may differ by Poincaré transformations. The rest energy of solitons,
\[E_{0}=\frac{\alpha_{f}\hbar c}{r_{0}}\frac{\pi}{4}, \tag{7}\]
can be adjusted to the electron rest energy \(m_{e}c_{0}^{2}=0.511\) MeV by choosing,
\[r_{0}=2.213\text{ fm}, \tag{8}\]
a scale which is close to the classical electron radius \(r_{\text{cl}}=2.818\) fm. The four parameters, \(r_{0},c_{0},E_{0}\) and \(e_{0}\), correspond to the natural scales of the four quantities, length, time, mass and charge, of the Système international d'unités involved in this model. Eq. (7) can therefore be interpreted as a relation between the two fundamental physical quantities \(\alpha_{f}=1/137.036\) and \(\hbar\).
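As a quick numerical check of Eqs. (7) and (8), the rest energy can be evaluated from standard constants; the snippet below only performs this arithmetic and contains no model assumptions beyond the quoted value of \(r_{0}\).

```python
# E0 = alpha_f * hbar * c * pi / (4 * r0), Eq. (7), evaluated for r0 = 2.213 fm
import math

alpha_f = 1 / 137.036      # Sommerfeld's fine structure constant
hbar_c = 197.327           # hbar*c in MeV*fm
r0 = 2.213                 # fm, Eq. (8)

E0 = alpha_f * hbar_c * math.pi / (4 * r0)
print(f"E0 = {E0:.3f} MeV")   # approximately 0.511 MeV, the electron rest energy
```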
How to relate the four classes of solitons to the Dirac equation is discussed in Ref. [6].
## 3 Charge and spin
In the vacuum, at the minimum of the potential, \(Q\) is purely imaginary. Therefore, at distances of a few soliton radii \(r_{0}\) the \(Q\)-field approaches unit vectors \(\vec{n}\in\mathbb{S}^{2}\) and the non-abelian field strength tensor \(\vec{F}_{\mu\nu}\) aligns with the direction of \(\vec{n}\), i.e., it becomes abelian,
\[\vec{F}_{\mu\nu}\overset{r\gg r_{0}}{\longrightarrow}(\vec{F}_{\mu\nu}\vec{n})\vec{n}. \tag{9}\]
This allows us to define the electric current
\[j_{q}^{\kappa}:=-\partial_{\lambda}\underbrace{\frac{e_{0}c}{4\pi}\frac{1}{2 }\epsilon^{\kappa\lambda\mu\nu}\,\vec{n}[\partial_{\mu}\vec{n}\times\partial_ {\nu}\vec{n}]}_{f^{\kappa\lambda}}=\frac{1}{\mu_{0}}\partial_{\lambda}f^{ \lambda\kappa}. \tag{10}\]
The sign of the \(\vec{n}\)-field (6) determines the charge quantum number \(Z\), defined by the map \(\Pi_{2}(\mathbb{S}^{2})\) from closed surfaces around soliton centers to \(\mathbb{S}^{2}\). For soliton pairs, a positive product of their charge numbers leads to repulsion and a negative product to attraction.
Field configurations are further characterised by the number \(\mathcal{Q}\) of coverings of \(\mathbb{S}^{3}\), by the map \(\Pi_{3}(\mathbb{S}^{3})\),
\[\mathcal{Q}:=\frac{1}{V(\mathbb{S}^{3})}\int_{0}^{\infty}\mathrm{d}r\int_{0}^{ \pi}\mathrm{d}\vartheta\int_{0}^{2\pi}\mathrm{d}\varphi\,\vec{\Gamma}_{r}( \vec{\Gamma}_{\vartheta}\times\vec{\Gamma}_{\varphi}), \tag{11}\]
with values \(\pm 1/2\) for the single-soliton configurations (6), as listed in Table 1. The sign of \(\mathcal{Q}\) defines an internal chirality \(\chi:=\operatorname{sign}\mathcal{Q}\), the chirality of rotations of Dreibeins along the coordinates \(x_{i}\) through the centers of solitons. We interpret the absolute value of \({\cal Q}\) as the spin quantum number \(s\),
\[{\cal Q}=\chi\cdot s\quad\mbox{with}\quad s:=|{\cal Q}|, \tag{12}\]
with the usual addition rules for spins. The spin quantum number of two-soliton configurations indicates that \(\chi\) can be related to the sign of the magnetic spin quantum number.
Since the solitons of MTP are connected to their surroundings by the lines of constant \(\vec{n}\)-field, the interpretation of solitons as spin-1/2 objects is in direct relation to an important property of our 3-dimensional space. It is well known in mathematics that objects wired to their neighbourhood can disentangle these wires by an appropriate movement during \(4\pi\)-rotations, as shown in Fig. 1. This property is reflected in the \(4\pi\)-periodicity of the quantum phase of spin-1/2 particles, of fermions.
To get a feeling of the geometric realisations of charge and spin, we show in Fig. 2 the schematic diagrams of the \(\vec{q}\)-fields of a dipole in the spin-0 and the spin-1 configurations. Due to the structure of the vector field the \(S\)=0-dipole in the left diagram would fuse in a dynamical calculation since both solitons belong to the \(q_{0}\geq 0\) hemisphere of \(\mathbb{S}^{3}\). The size dependence of the energy of static dipoles of this type and the deviations from the Coulomb law of point-like charges was determined numerically in Ref. [7]. The spin-1 configuration in the right diagram of Fig. 2 covers upper and lower hemisphere of \(\mathbb{S}^{3}\). It is attractive only up to a minimal distance, when repulsion starts due to a drastic increase of the curvature energy.
According to the definition of the potential term \(\Lambda\) in the Lagrangian (3), MTP has a doubly degenerate vacuum with two types of Goldstone bosons. The degenerate vacuum has a broken symmetry. In the diagrams in Fig. 2 this breaking of symmetry is reflected in the direction of the unit vectors at large distances from the soliton centers: they asymptotically approach the vertical direction, \(Q(\infty)=-{\rm i}\sigma_{3}\).

Figure 1: The small yellow sphere is connected to its surroundings by wires. If this sphere is rotated, as indicated by the red dot, around the axis passing through the black dot, the wires get increasingly entangled up to rotations by \(2\pi\) and disentangled again when approaching \(4\pi\).

Figure 2: Schematic diagrams depicting the imaginary part \(\vec{q}=\vec{n}\sin\alpha\) of the \(Q\)-field (2) of two opposite unit charges by arrows. The lines represent some electric flux lines. We observe that they coincide with the lines of constant \(\vec{n}\)-field. The configurations are rotationally symmetric around the axis through the two charge centres. The red/green arrows also encode the positive/negative values of \(q_{0}=\cos\alpha\); for \(q_{0}\to 0\) the arrows become darker or black. The left configuration belongs to the topological quantum numbers \({\cal Q}=S=0\) and the right one to \({\cal Q}=S=1\), where \(S\) is the total spin quantum number of this dipole configuration.

Figure 3: Schematic diagrams analogous to the right diagram of Fig. 2 after a \(45^{\circ}\) and a \(90^{\circ}\) rotation.
Spin in MTP is a topological quantum number. Furthermore, it has to contribute to the total angular momentum. We observe that the static soliton configurations depicted in Table 1 have no internal rotation and therefore do not contribute to the angular momentum. However, an orbital angular momentum forces charges to perform an internal rotation, because the broken vacuum fixes the field at infinity during the rotation. This can be seen by comparing the right diagram of Fig. 2 with the left diagram of Fig. 3, representing a rotation by \(45^{\circ}\), and with the right diagram of Fig. 3 for a \(90^{\circ}\) rotation. The internal rotation of the solitons can be observed especially inside the marked circles around the soliton centers. This is a nice picture for a possible realisation of spin in nature.
## 4 Comparison to other models
MTP with the Lagrangian (3) is a generalisation of the Sine-Gordon model [8], from 1+1D with one degree of freedom to 3+1D Minkowski space with three SO(3) degrees of freedom (dofs).
With its SU(2)-valued field variables, the Skyrme model also uses the rotational dofs, but with a different Lagrangian. In the Skyrme model [9; 10; 11] solitons are compressed by a term quadratic in the field derivatives, the Dirichlet term, which forbids long-range forces. The potential term of MTP allows for Coulombic forces. Both Lagrangians agree in the Skyrme term, which tends to smooth solitons. An essential difference between both models is their vacuum structure. There is only one vacuum in the Skyrme model, whereas there is a two-dimensional manifold of vacua in MTP.
Whether solitons are electric or magnetic is a question of interpretation. The magnetic monopoles which Dirac [12; 13] introduced are closely related to solitons. Both have quantised charges. But Dirac monopoles have two types of singularities, the Dirac string and the singularity in the center. Dirac strings were removed in two different ways by Wu and Yang. Firstly, by a fiber bundle description [14] with at least two types of vector fields. In the second method [15; 16] they formulated the monopoles with a non-Abelian gauge field. Wu-Yang monopoles still suffer from the singularity in the center. The solitons of MTP are soft-core Dirac monopoles, where all singularities are removed.
Polyakov [17] and 't Hooft [18] identified monopoles in the Georgi-Glashow model. This model has 15 dofs, a triplet of gauge fields and a triplet of scalar (Higgs) fields. The mass of these monopoles is of the order of the mass of the W-boson multiplied by the inverse of Sommerfeld's fine structure constant [17]. The interaction between the monopoles is a function of the charge and of the properties of the Higgs field.
The basic fields of MTP are the rotational degrees of freedom of spatial Dreibeins. Therefore, both long-range forces, gravitation and electromagnetism, are formulated with the properties of space-time only.
The immense success of Maxwell's electrodynamics requires a comparison with MTP. Due to the lack of space we can only enumerate agreements and differences. More details are exposed in Ref. [5].
#### In agreement with Maxwell's electrodynamics
* The Lagrangian is Lorentz covariant, thus the laws of special relativity are respected.
* Charges have Coulombic fields fulfilling Gauss' law.
* Charges interact via \(O(\frac{1}{r^{2}})\) electric fields; they react to Coulomb and Lorentz forces.
* A local U(1) gauge invariance is respected.
* There are two dofs of massless excitations for photons.
#### Differences to Maxwell's electrodynamics
* Field energy is the only source of mass.
* Charges and their fields are described by the same dofs. Therefore charges cannot be separated from their fields.
* Charges are quantised in analogy to Dirac monopoles.
* The self-energy of elementary charges is finite and does not need regularisation and renormalisation.
* The finite size of solitons leads to a running of the charge.
* Field dofs can be interpreted as orientations of spatial Dreibeins.
* The mirror properties of particles and antiparticles [19] are explained by their topological construction.
* Particles are characterised by topological quantum numbers.
* Solitons and antisolitons have opposite internal parity.
* Spin has usual quantisation properties and combination rules.
* Spin contributes to angular momentum due to internal rotations.
* Solitons are characterised by a chirality quantum number which can be related to the sign of the magnetic quantum number.
* The quantum numbers of the four classes of soliton configurations agree with the quantum numbers of fields in Dirac spinors.
* The canonical energy-momentum tensor is automatically symmetric.
* Static charges are described by the spatial components of vector fields. Moving charges need time-dependent fields.
* Local U(1) gauge invariance emerges from choice of bases on \(\mathbb{S}^{2}\).
* Photon number is given by the Gaussian linking number of fibres on \(\mathbb{S}^{2}\).
* Photon number changes by interaction with charges.
* Spin and magnetic moment are dynamical properties only.
* Electric and magnetic field vectors are perpendicular to each other.
* Existence of unquantised magnetic currents is allowed.
* \(\alpha\)-waves in \(q_{0}=\cos\alpha\) could contribute to the (dark) matter density.
* \(\alpha\)-waves lead to additional forces acting on particles and are a possible origin of quantum fluctuations.
* Potential term suggests a mechanism of cosmic inflation.
* Potential term contributes to the dark energy.
## 5 Conclusion
A fascination of calorons and their constituent monopoles is due to their similarity to the properties which we expect to hold for elementary charges, aspects which are not reflected in our present picture of elementary particle physics. These features originate in the SU(2) manifold and the topological nature of calorons.
Calorons have these interesting properties in common with the model of topological particles (MTP), similarly formulated with SO(3) or its double covering, SU(2). This article gives a short introduction to MTP and enumerates its properties. Some of these properties we expect to hold for elementary charges, some seem to disagree at first glance, and some are rather speculative. Being optimistic, one can hope that MTP is able to improve our understanding of the basic properties of nature through its geometric character and to fill some gaps left in our present, rather algebraic, description of nature.
Possibly Galileo is more right than we thought, when he wrote in Il Saggiatore: "Philosophy is written in this grand book, the universe... It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures..." [20].
|
2309.05911 | Quality-Agnostic Deepfake Detection with Intra-model Collaborative
Learning | Deepfake has recently raised a plethora of societal concerns over its
possible security threats and dissemination of fake information. Much research
on deepfake detection has been undertaken. However, detecting low quality as
well as simultaneously detecting different qualities of deepfakes still remains
a grave challenge. Most SOTA approaches are limited by using a single specific
model for detecting certain deepfake video quality type. When constructing
multiple models with prior information about video quality, this kind of
strategy incurs significant computational cost, as well as model and training
data overhead. Further, it cannot be scalable and practical to deploy in
real-world settings. In this work, we propose a universal intra-model
collaborative learning framework to enable the effective and simultaneous
detection of different quality of deepfakes. That is, our approach is the
quality-agnostic deepfake detection method, dubbed QAD . In particular, by
observing the upper bound of general error expectation, we maximize the
dependency between intermediate representations of images from different
quality levels via Hilbert-Schmidt Independence Criterion. In addition, an
Adversarial Weight Perturbation module is carefully devised to enable the model
to be more robust against image corruption while boosting the overall model's
performance. Extensive experiments over seven popular deepfake datasets
demonstrate the superiority of our QAD model over prior SOTA benchmarks. | Binh M. Le, Simon S. Woo | 2023-09-12T02:01:31Z | http://arxiv.org/abs/2309.05911v1 | # Quality-Agnostic Deepfake Detection with Intra-model Collaborative Learning
###### Abstract
Deepfake has recently raised a plethora of societal concerns over its possible security threats and dissemination of fake information. Much research on deepfake detection has been undertaken. However, detecting low quality as well as simultaneously detecting different qualities of deepfakes still remains a grave challenge. Most SOTA approaches are limited by using a single specific model for detecting certain deepfake video quality type. When constructing multiple models with prior information about video quality, this kind of strategy incurs significant computational cost, as well as model and training data overhead. Further, it cannot be scalable and practical to deploy in real-world settings. In this work, we propose a universal intra-model collaborative learning framework to enable the effective and simultaneous detection of different quality of deepfakes. That is, our approach is the quality-agnostic deepfake detection method, dubbed QAD. In particular, by observing the upper bound of general error expectation, we maximize the dependency between intermediate representations of images from different quality levels via Hilbert-Schmidt Independence Criterion. In addition, an Adversarial Weight Perturbation module is carefully devised to enable the model to be more robust against image corruption while boosting the overall model's performance. Extensive experiments over seven popular deepfake datasets demonstrate the superiority of our QAD model over prior SOTA benchmarks.
## 1 Introduction
Deep learning approaches for facial manipulation, such as deepfakes, have recently received considerable attention [62, 35, 69, 29, 23, 26], because they can be abused for malicious purposes such as fake news, pornography, etc. Due to the advancements made in Generative Adversarial Networks and other deep learning-based computer vision algorithms, deepfakes have also become more realistic and natural, making it harder not only for humans, but also for classifiers to tell them apart. Moreover, it has become simpler than ever before to create convincing deepfakes using simple programs and apps without requiring advanced machine learning knowledge. Such easy-to-create and realistic fake images and videos can be maliciously exploited, raising significant security, privacy, and societal concerns such as fake news propagation [46] and stealing personal information via phishing and scams [12].
To mitigate such problems caused by deepfakes, there has been a tremendous research effort put into constructing reliable detectors [36, 29, 10, 45, 69, 51]. Although they have achieved outstanding performance with high-quality deepfakes, most of them have failed to detect low-quality deepfakes effectively [10, 50]. While video compression does not significantly impact visual quality, it drastically drops deepfake detectors' performance on low-quality deepfakes (c40). A handful of studies have focused on detecting low-quality deepfakes, such as ADD [32] and BZNet [33]. However, their methods can only detect
Figure 1: **A summary of our goal.** Our approach stands out from previous works that detect deepfakes using separate models for different qualities (_e.g._ Baseline 2 [32]) or a single model without considering the interaction between qualities (_e.g._ Baseline 1 [50]). Instead, our method employs all quality levels and improves the performance of the model on each quality level, leading to overall enhanced performance.
low-quality compressed deepfakes. Moreover, those prior approaches expose a critical problem when deployed in practice, since the video quality of the input is not known in advance. In addition, developing different models for each input quality induces significant computational overhead. Other works, such as LipForensics [19], also attempt to make their detectors robust against various corruptions and compression. Nevertheless, they are unable to detect image-based deepfakes with random lossy compression such as JPEG.
In this research, we propose a novel deepfake detection method, QAD, which can simultaneously detect both high and low-quality (quality-agnostic) deepfakes in a single model, as illustrated in Fig. 1. In particular, we propose a universal intra-model collaborative learning framework to provide effective detection of deepfakes of different qualities. We adapt the conventional model-based collaborative learning [55] into an instance-based intra-model collaborative learning framework in our training. During the training phase, our single model simultaneously learns the representations of one image, but with different qualities. By utilizing the collaborative learning framework, our QAD can align the distributions of high and low-quality image representations to be geometrically similar. Hence, it can avoid the overfitting caused by compressed images and the overconfidence caused by raw images, while boosting its overall performance.
In particular, we perform a rigorous theoretical analysis, and show that the low-quality deepfake classification error can be bounded by two terms: classification loss and the distance between the representations of high and low-quality images. Instead of using a direct pairwise regularization to minimize the gaps between the high and low-quality image representations, we propose to apply _Hilbert-Schmidt Independence Criterion_ (HSIC) to maximize the dependence between a mini-batch of high and low-quality images, thus maximizing the mutual information between them, and supporting the high-level representations and effective output predictions. Meanwhile, to enhance the model's robustness under heavy input compression, we propose _Adversarial Weight Perturbation_ (AWP) [64, 4], which can further flatten the weight loss landscape of the model, bridging the gap in multiple quality learning for deepfake detection.
Finally, we conduct extensive experiments to show the effectiveness of our QAD with seven different popular benchmark datasets. We first show that our method can outperform previous baselines when training with data from various video and image compression qualities. Furthermore, we show that our QAD exceeds the performance of the SOTA quality-aware models such as BZNet [33] by a significant margin, while requiring remarkably fewer computational parameters and no prior knowledge of the inputs. Our contributions are summarized as follows:
**1)** We theoretically analyze and prove that the classification error of low-quality deepfakes can be bounded by its classification loss and the representation distance with its corresponding high-quality images.
**2)** We propose a unified quality-agnostic deepfake detection framework (QAD), utilizing instance-based intra-model collaborative learning. We use the _Hilbert-Schmidt Independence Criterion_ (HSIC) to maximize the geometrical similarity between intermediate representations of high and low-quality deepfakes, and _Adversarial Weight Perturbation_ (AWP) to make our model robust under varying input compression.
**3)** We demonstrate that our approach outperforms well-known baselines, including the total of _eight_ quality-agnostic and quality-aware SOTA methods with _seven_ popular benchmark datasets.
## 2 Related works
### Deepfake detection
Recently, deepfakes have become of the utmost concern because they can cause serious security and privacy threats. Therefore, a large number of detection methods have been proposed to effectively identify such deepfakes [50, 35, 34, 25, 48, 62, 36, 29, 10, 69]. However, the majority of the aforementioned works focus on mining visual artifacts of deepfakes, such as the blending boundaries of generated faces [35], the irregularity of pupil shapes [17], the spatiotemporal inconsistency [7, 51], or exploring deep learning-based attention methods [69] to identify such artifacts. Meanwhile, several approaches also showed that exposing deepfakes in the frequency domain is effective. Such methods include analyzing the discrepancies of the frequency spectrum [10, 31, 29], employing the checkerboard artifacts caused by the transposed convolutional operator [68, 13], or mining statistical frequency features with dual deep learning models [45]. Nevertheless, such models' performance substantially decreases when encountering low-quality compressed images. To remedy the above shortcoming, recent studies proposed methods to detect deepfakes in highly compressed low-quality versions, such as [32], which utilized knowledge distillation. Also, [33] presented a super-resolution-based network for enhancing the performance of low-quality deepfake detection. However, all of the aforementioned approaches are limited to developing a single model for each quality of deepfakes, which is impractical to deploy in real-world scenarios due to the requirement of prior knowledge of the input quality.
### Collaborative learning
Collaborative learning proposed by [55] is designed to achieve a global minimum of a deep neural network, while maintaining the same computational complexity at inference time as at training time. Collaborative learning inherits
the advantages of auxiliary training [56], multi-task learning [66], and knowledge distillation [21]. Its applications cover supporting weakly-supervised learning [27], or integrating with online knowledge distillation [65, 18]. And, its training graph is divided into two or more sub-networks to ensure global minimum achievement [55]. Besides, [11] proposed an intra-model collaborative learning framework that shares a similar characteristic with self-knowledge distillation. However, all of the approaches are model-based collaborative learning, in which a single input generates multiple outputs (or _views_) through multiple classifier heads of one target network in both training and inference phase.
In this work, we distinguish ourselves by deploying the collaborative learning framework for simultaneously training on deepfakes of various qualities with a single, undivided model, namely _instance-based collaborative learning_. Different from conventional mini-batch stochastic optimization, which independently samples random images from different qualities and optimizes the detector, our collaborative learning approach allows us to utilize the common features in the same image but from different qualities simultaneously. Thus, our deepfake detector circumvents the overfitting caused by compressed images and the overconfidence from raw images, enhancing its overall performance.
### Hilbert-Schmidt Independence Criterion
The Hilbert-Schmidt Independence Criterion (HSIC) [16] measures the statistical dependency between probability distributions. HSIC differs from the covariance: \(Cov(X,Y)=0\) does not imply that two random variables \(X\) and \(Y\) are independent [49], whereas HSIC offers tractable computation and characterizes independence [16]. Moreover, HSIC is easy to estimate statistically and algorithmically. Applications based on HSIC are found in a variety of practical domains, including maximizing dependencies for self-supervised learning [37] and classification learning [38, 15], or defending against model inversion attacks [43]. In this paper, we utilize HSIC to maximize the dependency between distributions of deepfake images of different qualities at intermediate layers. Therefore, we aim to constrain the low-level representations of images not to be exactly the same, but to share a geometrical similarity of learned features that can support high-level output predictions.
## 3 Methods
In this section, we first theoretically examine the upper bound for our optimization problem by considering a DNN classifier of \(K\) classes, and two modalities of input quality: raw (high-quality) and compressed (low-quality). Then, we discuss how to more efficiently collaborate on the representations of deepfake images of differing quality.
### Preliminary & our inspiration
Given a sample \(x_{r}\) from a space \(\mathcal{X}\) and its compressed version at quantile \(c\), \(x_{c}\) can be expressed as \(x_{c}=x_{r}-\delta_{c}\), and we define the corresponding label \(y\in\{0,1\}\) (real and fake). Next, a family \(\mathcal{F}\) of learning functions \(f:\mathcal{X}\rightarrow\mathbb{R}^{2}\) returns a \(2\)-tuple \(f(x)=[f(x,j)]_{j=1}^{2}\), whose \(f(x,j)\) is proportional to the probability to assign \(x\) to the \(j\)-th class, and \(f\) is defined by learning parameters \(\theta\). Given a training data \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{n}\subset\mathcal{X}\times\mathcal{Y}\), our goal is to minimize the expectation of a loss function \(\mathcal{L}:\mathbb{R}^{2}\times\mathcal{Y}\rightarrow\mathbb{R}\). Here, we consider \(\mathcal{L}(f(x),y)=1-\sigma_{T}(f(x,y))\), where \(\sigma_{T}\) is the softmax function with temperature \(T>0\):
\[\sigma_{T}(f(x),y)=\frac{\text{exp}(f(x,y)/T)}{\sum_{k=1}^{2}\text{exp}(f(x,k) /T)}. \tag{1}\]
**Theorem 1**.: _(Proof can be done similarly as [41] and [22] in Supp. Material) For any \(f\in\mathcal{F}\), and with probability \(1-\delta\) over the draw of \(\mathcal{D}\),_
\[\mathbb{E}[\mathbb{I}\{\hat{y}(x_{c})\neq y\}] \leq 2\mathbb{E}_{\mathcal{D}}\mathcal{L}(f(x_{c}),y)\] \[+\frac{8}{T}\mathbb{E}_{\mathcal{D}}\mathcal{L}_{i-col}(f(x_{r}),f(x_{c}))+4\mathfrak{R}_{\mathcal{D}}(\Phi_{\mathcal{W}})\] \[+\frac{16}{n}+\mathcal{O}\left(\sqrt{\frac{\text{log}(2/\delta) }{2n}}\right), \tag{2}\]
_where \(\mathfrak{R}_{\mathcal{D}}\) is the Rademacher complexity, \(\Phi_{\mathcal{W}}=\{\mathcal{L}(f(x_{r}),y),f\ \in\mathcal{F}\}\), and_
\[\mathcal{L}_{i-col}(f(x_{r}),f(x_{c}))=\parallel f(x_{r})-f(x_{c})\parallel. \tag{3}\]
**Insight of Theorem 1.** On the right-hand side of Eq. 2, our classifier \(f\) depends on two terms, where the first term is the classification loss \(\mathcal{L}(f(x_{c}),y)\) applied to the prediction of the compressed image \(x_{c}\). And, the second term is the instance-based collaborative loss \(\mathcal{L}_{i-col}(f(x_{r}),f(x_{c}))\) that measures the pairwise difference between predictions of the raw image and its compressed version. Therefore, minimizing the expectation over training data \(\mathcal{D}\) of \(2\mathcal{L}(f(x_{c}),y)+8\mathcal{L}_{i-col}(f(x_{r}),f(x_{c}))/T\), can decrease the true error. Note that Eq. 2 is also general so that it can be applicable for raw images. Hence, in practice, the first term can be generalized to both raw and compressed images.
In order to minimize the expectation of errors in both raw and compressed deepfake image predictions, our theoretical analysis shows that we can minimize both classification loss and collaborative loss at the output. However, as observed by [11] and in our experiments (see Tab. 4), this instance-based collaborative learning loss fails to achieve the best performance. Additionally, training solely with highly-compressed images makes the detector prone to overfitting, yielding a considerable gap between training and test performance [32]. As a result, the major research
challenge is how we can further lower the sensitivity of \(f_{\theta}\) to \(x\) at various compression settings and push their representations close to each other.
**Classification loss.** We can construct a robust model under input corruption by flattening the weight loss landscape. In particular, we apply the _Adversarial Weight Perturbation_[64] to search for the worst-case perturbations \(\phi^{*}\) of the model weights at every training step. Thereafter, optimizing the perturbed model via \(\mathcal{L}(f_{\theta+\phi^{*}},y)\) can enable it to be more robust under varying input image corruptions/distortions, which represent the varying qualities of deepfake inputs (See Fig. 2--Bottom-right panel). Furthermore, the classification loss in Eq. 2 is upper bounded by this new loss function due to the worst-case perturbations.
**Collaborative loss.** With respect to the collaborative learning loss, Eq. 3 shares similar characteristics with recent knowledge distillation research [21, 60]. However, our goal is to develop and train a single universal model, not having a teacher-student relationship. Nevertheless, we argue that the gap between raw and compressed image representations can be minimized more efficiently by regularizing their discrepancy at the low-level representations. Moreover, enforcing the similarity of the pairwise representations at the intermediate layers with an input difference of \(|\delta_{c}|\) can collapse layers' weights to zero or can lead a deep model to remember training data instead of learning discriminative features. Therefore, we relax this constraint by maximizing the kernel dependency in a mini-batch of data between the raw and compressed image representations by the _Hilbert-Schmidt Independence Criterion_ (HSIC). Using HSIC, we can instead enforce the geometrical structures of mini-batch of raw data and the compressed data to be similar, so that we can still effectively detect different-quality deepfakes in a single model. From a mutual information perspective, maximizing the kernel dependency can enforce the mutual information of the learned representations of different compression ratios, thus regularizing the detector to be more generalized (See Fig. 2--Top-right panel).
### Details of our methods
#### 3.2.1 Weight loss landscape flattening
Recent studies [24, 63] suggest that searching for flatter minima can improve the generalization ability of the model. To achieve this, we propose using Adversarial Weight Perturbation [64] to identify flat local minima of the empirical risk. The worst-case perturbation \(\phi^{*}\) of the model weights, which increases the loss dramatically, is formulated as:
\[\phi^{*}=\operatorname*{arg\,max}_{\phi\in\mathcal{B}_{p}(\theta,\gamma)} \mathcal{L}(f_{\theta+\phi}(x),y), \tag{4}\]
where \(\mathcal{B}_{p}(\theta,\gamma)=\{v\in\Theta:\parallel\theta-v\parallel_{p} \leq\gamma\}\) is the feasible region of any perturbation \(\phi\). The AWP adds the worst-case perturbation to the model weight, so that \(\mathcal{L}(f_{\theta+\phi^{*}}(x),y)\)
Figure 2: **Overview of our QAD framework. A mini-batch of images from different quality modalities (_e.g._, two in this diagram) is forwarded through a single universal model. Although it is one model, we pictorially split it into two branches for the reader’s understanding. After training, we obtain one universal model that regulates different qualities. Top-right: The HSIC geometrically maximizes the dependency between images from various quality modalities at different resolutions, supporting high-level output predictions. Bottom-right: Through searching for the worst-case parameters’ corruption and compensating for input corruption (compression), the AWP flattens the model’s weight loss landscape, making the model robust under varying input compression.**
becomes the supremum value in \(\mathcal{B}_{p}(\theta,\gamma)\). Therefore, optimizing \(\mathcal{L}(f_{\theta+\phi^{*}}(x),y)\) pushes \(\theta\) to adjust its values such that the loss landscape becomes flatter within the same region \(\mathcal{B}_{p}(\theta,\gamma)\). As a result, \(f\) becomes more stable under changes of the input image.
Similar to adversarial example perturbation [39], \(\phi^{*}\) is generated by projected gradient method as follows:
\[\phi^{*}\leftarrow\Pi_{\theta}^{\gamma}\left(\phi+\eta\frac{\nabla\mathcal{L} (f_{\theta+\phi}(x),y)}{\parallel\nabla\mathcal{L}(f_{\theta+\phi}(x),y) \parallel}\parallel\theta\parallel\right), \tag{5}\]
where \(\Pi_{\theta}^{\gamma}\) is an operator that projects its input into the feasible region \(B_{p}(\theta,\gamma)\), and \(\eta\in\mathbb{R}\) is the step size. In fact, we empirically find that using a **one-step** projection for \(\phi^{*}\) is sufficient for the model's robustness under the image corruptions formed by compression. By adding \(\phi^{*}\) in Eq. 5 to \(\theta\), it is straightforward to bound \(\mathcal{L}(f(x_{c}),y)\) in Eq. 2.
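A minimal PyTorch-style sketch of this one-step perturbation is given below. It reflects our schematic reading of Eq. 5 rather than the released AWP code; the layer-wise relative projection onto \(\mathcal{B}_{p}(\theta,\gamma)\) and the order of the perturb/restore steps in the usage comments are simplifying assumptions.

```python
import torch

def awp_step(model, loss_fn, x, y, eta=0.01, gamma=0.002):
    """One-step adversarial weight perturbation, a schematic reading of Eq. (5).

    Returns {parameter name: perturbation phi*}, so phi* can be added before the
    main update and removed afterwards.
    """
    params = [(n, p) for n, p in model.named_parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for _, p in params], allow_unused=True)

    phi = {}
    for (name, p), g in zip(params, grads):
        if g is None:
            continue
        step = eta * g / (g.norm() + 1e-12) * p.norm()   # normalised ascent direction
        max_norm = gamma * p.norm()                      # assumed layer-wise feasible region
        if step.norm() > max_norm:
            step = step * (max_norm / (step.norm() + 1e-12))
        phi[name] = step.detach()
    return phi

# Schematic use inside a training step:
#   phi = awp_step(model, criterion, x_batch, y_batch)
#   with torch.no_grad():                              # theta <- theta + phi*
#       for n, p in model.named_parameters():
#           if n in phi: p.add_(phi[n])
#   criterion(model(x_batch), y_batch).backward()      # grad of L(f_{theta+phi*}(x), y)
#   with torch.no_grad():                              # restore theta before the update
#       for n, p in model.named_parameters():
#           if n in phi: p.sub_(phi[n])
#   optimizer.step(); optimizer.zero_grad()
```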
#### 3.2.2 Intra-model collaborative learning
**Hilbert-Schmidt Independence Criterion (HSIC).** Let \(\mathcal{T}\) and \(\mathcal{G}\) be two separable Reproducing Kernel Hilbert Spaces (RKHS) on metric spaces \(\mathcal{U}\) and \(\mathcal{V}\), respectively. HSIC measures the dependency between two random variables \(U\) and \(V\) from a joint distribution on \(\mathcal{U}\) and \(\mathcal{V}\), by evaluating the cross-covariance of the nonlinear transformations of the two random variables:
\[HSIC(U,V)=\parallel\mathbb{E}[\zeta(U)\psi(V)^{T}]-\mathbb{E}\zeta(U)\mathbb{ E}\psi(V)^{T}\parallel_{HS}^{2}, \tag{6}\]
where \(\parallel\cdot\parallel_{HS}\) is the Hilbert-Schmidt norm, which becomes the Frobenius norm in finite dimensions, and \(\zeta:\mathcal{U}\rightarrow\mathcal{T}\) and \(\psi:\mathcal{V}\rightarrow\mathcal{G}\) are nonlinear mapping functions. With appropriate transformations \(\zeta\) and \(\psi\), HSIC is a dependence test, which can identify the _nonlinear dependencies_ between \(U\) and \(V\) as follows: \(HSIC(U,V)=0\Leftrightarrow U\perp V\).
Also, inner products in \(\mathcal{T}\) and \(\mathcal{G}\) are formed by positive definite kernel functions: \(k(u,u^{\prime})=\langle\zeta(u),\zeta(u^{\prime})\rangle_{\mathcal{T}}\) and \(l(v,v^{\prime})=\langle\psi(v),\psi(v^{\prime})\rangle_{\mathcal{G}}\). And, let \((U^{\prime},V^{\prime})\) and \((U^{\prime\prime},V^{\prime\prime})\) be independent copies of \((U,V)\), then Eq. 6 can be expressed as follows:
\[HSIC(U,V)=\mathbb{E}[k(U,U^{\prime})l(V,V^{\prime})]\] \[-2\mathbb{E}[k(U,U^{\prime})]\mathbb{E}[l(V,V^{\prime\prime})]+ \mathbb{E}[k(U,U^{\prime})]\mathbb{E}[l(V,V^{\prime})]. \tag{7}\]
**Estimation of HSIC.** The empirical estimate of HSIC, with a bias of \(\mathcal{O}(\frac{1}{n})\), using \(n\) samples \(\{(u_{i},v_{i})\}_{i=1}^{n}\) drawn _i.i.d._ from the joint distribution \((U,V)\), is given as follows [16]:
\[\widehat{HSIC}(U,V)=\frac{1}{(n-1)^{2}}\textbf{tr}(KHLH), \tag{8}\]
where \(K_{i,j}=k(u_{i},u_{j})\) and \(L_{i,j}=l(v_{i},v_{j})\) are kernel matrices for the kernels \(k\) and \(l\), respectively, and \(H_{i,j}=\delta_{i,j}-\frac{1}{n}\) is a centering matrix. Regarding the kernel functions, Theorem 4 in [16] suggests that a universal kernel, such as the Laplace or Gaussian RBF kernel, guarantees that HSIC can detect any dependency between \(U\) and \(V\).
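Eq. (8) is simple to implement for a mini-batch of paired features. The NumPy sketch below is our own illustration; the RBF bandwidth, the toy feature dimensions, and the synthetic data are arbitrary assumptions, and in practice the spatial feature maps would first be flattened.

```python
import numpy as np

def rbf_kernel(X, sigma=6.0):
    """Gaussian RBF kernel matrix for row-wise samples X of shape (n, d)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(U, V, sigma=6.0):
    """Biased empirical HSIC of Eq. (8): tr(K H L H) / (n - 1)^2."""
    n = U.shape[0]
    K, L = rbf_kernel(U, sigma), rbf_kernel(V, sigma)
    H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# toy paired features of the same images at two qualities (flattened feature maps)
rng = np.random.default_rng(0)
Z_raw = rng.normal(size=(64, 128))
Z_c40 = Z_raw + 0.1 * rng.normal(size=(64, 128))      # correlated "compressed" view
print(hsic(Z_raw, Z_c40), hsic(Z_raw, rng.normal(size=(64, 128))))  # first value is larger
```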
**HSIC for maximizing the geometrical similarity**. Let \(\tau\) and \(\rho\) be two different qualities of deepfakes (e.g., raw vs. compressed). Considering the \(l\)-th layer of the learning network \(f\), we denote the features of a mini-batch of \(B\) images from \(\tau\) and \(\rho\) as \(Z_{l}^{\tau}=\{u_{i}\}_{B}\) and \(Z_{l}^{\rho}=\{v_{i}\}_{B}\), respectively, where \(u_{i},v_{i}\in\mathbb{R}^{H\times W\times C}\) and \(H\), \(W\), and \(C\) are the height, width, and number of channels. Our regularization aims to maximize the dependency between \(Z_{l}^{\tau}\) and \(Z_{l}^{\rho}\) via a mini-batch of representations. In other words, we try to minimize the following loss:
\[\mathcal{L}_{col}(\tau,\rho)=-\sum_{l\in L}\widehat{HSIC}(Z_{l}^{\tau},Z_{l}^ {\rho}), \tag{9}\]
where \(L\) is a predetermined collection of layers to apply the collaborative loss. And, the computational complexity for calculating Eq. 9 is \(\mathcal{O}(B^{2}L)\), which can be reduced to \(\mathcal{O}(BL)\) when applying random Fourier features [47].
### End-to-end training loss
Given a training mini-batch \(B\) that includes all \(M\) quality modalities \(\mathcal{T}=\{r,c_{1},...,c_{M-1}\}\), the overall collaborative
learning loss in our QAD framework is formulated as:
\[\mathcal{L}_{QAD}= \frac{1}{MB}\sum_{\tau\in\mathcal{T},i\in B}\mathcal{L}_{\phi^{*} }(x_{\tau,i},y_{i})+\alpha\sum_{\tau,\rho\in\mathcal{T}}^{\tau\neq\rho}\mathcal{ L}_{col}(\tau,\rho), \tag{10}\]
where \(\alpha\) is a hyper-parameter to balance the contribution of each loss. It is worth noting that our QAD training loss is parameter-free and is not affected by the order of the modalities. Further, unlike other model-based collaborative learning [55], our QAD does not derive any sub-models. In other words, it can be integrated with any backbone, _e.g._, ResNet-50, and introduces no extra computation at inference time. Note that Theorem 1 still holds when replacing the classification loss \(\mathcal{L}(f(x),y)\) with any cross-entropy based loss, since \(\mathcal{L}(f(x),y)\) is bounded by the cross-entropy loss. Finally, we present our end-to-end algorithm for optimizing Eq. 10 in Algorithm 1, and its pictorial illustration in Fig. 2.
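For concreteness, one training step combining Eqs. (9) and (10) could look like the following PyTorch-style sketch. This is a hedged illustration under our own assumptions (a backbone that also returns the intermediate feature maps of the layers in \(L\), a differentiable HSIC estimator, and a plain cross-entropy in place of the AWP-perturbed loss; the AWP sketch above would wrap the classification pass), and it is not the authors' Algorithm 1.

```python
import torch

def hsic_torch(U, V, sigma=6.0):
    """Differentiable biased HSIC estimator (torch version of Eq. 8), inputs of shape (B, d)."""
    n = U.shape[0]
    def rbf(X):
        return torch.exp(-torch.cdist(X, X) ** 2 / (2 * sigma ** 2))
    H = torch.eye(n, device=U.device) - torch.ones(n, n, device=U.device) / n
    return torch.trace(rbf(U) @ H @ rbf(V) @ H) / (n - 1) ** 2

def qad_training_step(model, optimizer, criterion, batches, alpha=0.004):
    """One optimization step of Eq. (10).

    `batches` maps a quality tag (e.g. 'raw', 'c23', 'c40') to (x, y) tensors of the
    same underlying images; `model(x, return_features=True)` is assumed to return
    (logits, list of feature maps at the chosen layers).
    """
    optimizer.zero_grad()
    feats, cls_loss = {}, 0.0
    for tau, (x, y) in batches.items():
        logits, z = model(x, return_features=True)
        feats[tau] = z
        cls_loss = cls_loss + criterion(logits, y)      # classification loss per modality
    cls_loss = cls_loss / len(batches)

    col_loss = 0.0
    for tau in feats:                                   # sum over ordered pairs tau != rho
        for rho in feats:
            if tau == rho:
                continue
            for z_t, z_r in zip(feats[tau], feats[rho]):
                col_loss = col_loss - hsic_torch(z_t.flatten(1), z_r.flatten(1))  # Eq. (9)

    loss = cls_loss + alpha * col_loss                  # Eq. (10)
    loss.backward()
    optimizer.step()
    return loss.item()
```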
## 4 Experimental Results
### Dataset and pre-processing
For evaluating our proposed method, we experiment with _seven_ different popular deepfake benchmark datasets: NeuralTextures (NT) [58], Deepfakes (DF) [5], Face2Face (F2F) [59], FaceSwap (FS) [6], FaceShifter (FSH) [34], CelebDFV2 (CDFv2) [67], and Face Forensics in the Wild (FFIW10K) [70]. Besides the raw version, these videos are also compressed into two types: medium (c23) and high (c40), utilizing the H.264 codec and constant rate quantization parameters of 23 and 40, respectively. These effectively
\begin{table}
\begin{tabular}{l|c c c c c c c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{8}{c}{Test Set AUC (\%)} \\ \cline{2-9} & NT & DF & F2F & FS & FSH & CDFv2 & FFIW10K & _Avg_ \\ \hline \hline \multicolumn{9}{c}{_Video Compression (raw + c23 + c40 of test set)_} \\ \hline MesoNet [1]\({}^{\Diamond}\) & 70.24 & 93.72 & 94.15 & 85.17 & 96.00 & 80.52 & 94.56 & _87.77_ \\ Rössler _et al._[50]\({}^{\Diamond}\) & 89.64 & 99.05 & 97.89 & 98.83 & 98.50 & 97.49 & **99.17** & _97.22_ \\ \(F^{3}\)Net [45]\({}^{\Diamond}\) & 86.79 & 98.73 & 96.32 & 97.82 & 97.45 & 95.06 & 97.94 & _95.73_ \\ MAT [69]\({}^{\Diamond}\) & 86.79 & 98.73 & 96.32 & 97.82 & 97.45 & 95.06 & 97.94 & _95.73_ \\ Fang \& Lin [11] & 89.30 & 98.98 & 97.33 & 98.43 & 98.66 & 96.58 & 98.94 & _96.89_ \\ \hline SBIs [53]\({}^{\dagger}\) & 78.33 & 95.19 & 79.74 & 80.37 & 80.48 & - & - & _82.82_ \\ \hline BZNet [33]\({}^{\dagger}\) & 80.12 & 98.81 & 94.10 & 97.71 & - & - & - & 91.01 \\ ADD [32]\({}^{\dagger}\) & 86.26 & 96.23 & 90.62 & 95.57 & 95.94 & - & - & _92.92_ \\ \hline QAD-R (_ours_) & 91.25 & **99.54** & 98.34 & 99.01 & 99.12 & 98.36 & 99.10 & _97.82_ \\ QAD-E (_ours_) & **94.92** & 99.53 & **98.94** & **99.27** & **99.12** & **98.38** & 99.16 & _98.47_ \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Classification performance in the quality-agnostic setting with video compression of test set**. The methods are trained using one of three approaches: simultaneously with three modalities (raw + c23 + c40), individually with each of the three modalities, or with a mid-level of compression (c23) to prevent performance degradation resulting from lossy compression. In the inference phase, _video compression_ is applied to the input. The best results are highlighted in **bold**. \(\dagger\) and \(\Diamond\) indicate results were obtained from methods’ pre-trained weights and published code, respectively.
\begin{table}
\begin{tabular}{l|c c c c c c c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{8}{c}{Test Set AUC (\%)} \\ \cline{2-9} & NT & DF & F2F & FS & FSH & CDFv2 & FFIW10K & _Avg_ \\ \hline \hline \multicolumn{9}{c}{_Random Image Compression (JPEG on raw of test set)_} \\ \hline MesoNet [1]\({}^{\Diamond}\) & 70.23 & 92.02 & 88.32 & 82.60 & 91.84 & 81.12 & 91.87 & _85.43_ \\ Rössler _et al._[50]\({}^{\Diamond}\) & 69.89 & 98.62 & 94.97 & 96.66 & 96.76 & 96.98 & 98.81 & _93.24_ \\ \(F^{3}\)Net [45]\({}^{\Diamond}\) & 70.95 & 97.89 & 92.83 & 96.34 & 94.72 & 95.44 & 97.19 & _92.19_ \\ MAT [69]\({}^{\Diamond}\) & 69.53 & 98.96 & **95.53** & 97.99 & 96.97 & 98.21 & 98.91 & _93.73_ \\ Fang \& Lin [11] & 75.49 & 98.32 & 94.63 & 97.64 & 97.28 & 96.67 & 98.39 & _94.06_ \\ \hline SBIs [53]\({}^{\dagger}\) & 77.75 & 97.83 & 82.05 & 86.10 & 85.42 & - & - & _85.83_ \\ \hline BZNet [33]\({}^{\dagger}\) & **79.00** & 98.77 & 95.23 & 97.92 & - & - & - & 92.73 \\ ADD [32]\({}^{\dagger}\) & 75.84 & 96.83 & 92.23 & 95.24 & 96.00 & - & - & _91.23_ \\ \hline \hline QAD-R (_ours_) & 75.18 & 98.86 & 93.72 & 98.52 & 98.18 & 98.51 & **98.96** & _94.56_ \\ QAD-E (_ours_) & 76.27 & **99.20** & 94.44 & **98.69** & **98.60** & **98.52** & 98.86 & _94.94_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Classification performance in the quality-agnostic setting with image compression of test set**. The training approach resembles that of Table 1’s setting, however, in the inference phase, _random image compression_ is applied to the input.
result in different qualities of deepfakes, and details of these datasets are provided in our Supp. Material.
### Experimental Settings
The models are trained with the Adam optimizer [30] with a learning rate of 2e-3, scheduled by a one-cycle strategy [54] over 32 epochs. We use a mini-batch size of 64. In every training epoch, the model is evaluated _ten_ times, and we save the best one based on the validation accuracy. Regarding the backbone network, we use ResNet-50 [20] (QAD-R) and EfficientNet-B1 [57] (QAD-E) with their default input sizes of \(224\times 224\) and \(240\times 240\), respectively. The backbone models utilize pre-trained weights from the ImageNet dataset [8]. Our hyper-parameter settings \(\{\sigma=6,\alpha=0.004,\gamma=0.002\}\) are obtained by tuning ResNet-50 on the NeuralTextures dataset and are kept the same across all datasets, whereas those of EfficientNet-B1 are \(\{\sigma=6,\alpha=0.002,\gamma=0.006\}\).
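The optimisation setup above can be written down schematically as follows; the backbone construction, the two-class head, and `steps_per_epoch` are placeholders we assume for illustration.

```python
import torch
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V1")         # QAD-R backbone, ImageNet pre-trained
model.fc = torch.nn.Linear(model.fc.in_features, 2)      # real / fake head

optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=2e-3, epochs=32,
    steps_per_epoch=1000)                                 # placeholder: batches per epoch at batch size 64
```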
### Results
This section reports the results of our QAD and other baselines under two scenarios: 1) the _quality-agnostic_ setting, in which no model has prior knowledge of the input images' quality, and 2) the _quality-aware_ setting, in which baselines are required to know the inputs' quality information.
#### 4.3.1 Quality-agnostic models
We use the popular deepfake detection benchmark methods on our datasets: 1) **MesoNet** with Inception layer by [1], 2) **Xception** model proposed by [50], 3) \(F^{3}\)**Net**[45], 4) **MAT**[69] - a multi-attention deepfake detector, 5) an adaptation of the method proposed by [11] to _instance-based_ **collaborative learning**, 6) **SBIs** [53] - a self-blended method using real images only during training, 7) **ADD**[32] - a knowledge distillation-based approach for detecting low-quality deepfakes, and 8) **BZNet**[33] - a super-resolution approach for improving detection of low-quality deepfakes. Each method has a different training approach to defend against performance degradation caused by lossy compression. The first five methods are trained with a mixture of the three data quality types (_raw+c23+c40_). SBIs is trained with the mid-level of video compression (_c23_), which is commonly adopted in many works. Meanwhile, ADD and BZNet models are trained on _raw_, _c23_, and _c40_, respectively; however, in the inference phase, they are blindly tested over the entire dataset without prior knowledge of the quality types, and we report their average performance. In the test set, we include both video compression and random JPEG image compression [3].
The results for the video compression are presented in Table 1, where our QAD outperforms other SOTA baselines across multiple benchmark datasets. Notably, we achieve a significant improvement in AUC score of up to 5.28% for heavily compressed datasets, such as NeuralTextures (89.64% vs 94.92%). We also surpassed previous works on various deepfake datasets by \(0.44\%\) to \(1.05\%\) points, with the exception of Deepfakes and FFIW10K datasets, which are easy to detect even when compressed. Compared to the collaborative learning baseline by [11], which is a comparative benchmark, our QAD still gains a decent improvement on average, up to \(0.93\%\) and \(1.54\%\) points with QAD-R and QAD-E, respectively. Finally, our QAD-E models achieved the highest score on average, reaching \(98.47\%\).
Regarding the random image compression experiment, the results are provided in Table 2. Although BZNet marginally outperforms our model on face-reenactment deepfakes (NT and F2F), our method still achieves the best performance, with the highest scores on _five_ out of _seven_ datasets. On average, our method shows decent improvements, with margins of 0.5% and 0.88% for QAD-R and QAD-E, respectively, compared to the second-best competitor (Fang & Lin).
\begin{table}
\begin{tabular}{l|c|c|c c c c c c c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{w/ prior infor.} & \multirow{2}{*}{\#params} & \multicolumn{8}{c}{Test Set AUC (\%)} \\ \cline{3-11} & & & & NT & DF & F2F & FS & FSH & CDFv2 & FFIW10K & _Avg_ \\ \hline \hline BZNet [33]\({}^{\dagger}\) [\(\times\)3] & ✘ & 22M \(\times\) 3 & 91.01 & 99.30 & 96.90 & 98.82 & - & - & - & 96.51 \\ ADD [32]\({}^{\dagger}\) [\(\times\)3] & ✘ & 23.5M \(\times\) 3 & 89.08 & 99.25 & 96.53 & 98.21 & 98.25 & - & - & 96.26 \\ \hline ResNet50 [\(\times\)3] & ✘ & 23.5M \(\times\) 3 & 88.96 & 99.26 & 97.04 & 98.63 & 98.71 & 97.09 & 98.58 & _96.90_ \\ \hline QAD-R (_ours_) & ✘ & 23.5M \(\times\) 1 & 88.85 & 99.42 & 97.77 & 98.83 & 98.93 & 97.56 & 98.93 & _97.18_ \\ \hline EfficientNet-B1[\(\times\)3] & ✘ & 6.5M \(\times\) 3 & 87.63 & 99.05 & 96.72 & 98.16 & 97.95 & 96.70 & 98.54 & _96.39_ \\ \hline QAD-E (_ours_) & ✘ & **6.5M \(\times\) 1** & **92.25** & **99.46** & **98.30** & **99.08** & **98.90** & **97.50** & **99.01** & _97.79_ \\ \hline \end{tabular}
\end{table}
Table 3: **Classification performance in the quality-aware setting with video compression of test set**. Except for our model, each model is trained with three modalities: raw, c23, and c40, respectively (denoted [\(\times\)3]). In the inference phase, while our QAD uses _one single_ pre-trained model, other methods use their _corresponding_ pre-trained model (_e.g._, pre-trained ResNet-50 on raw) to detect a given testing input (_e.g._, raw). Reported performances are averaged score of the three modalities.
#### 4.3.2 Quality-aware models
In this experiment, we compare our models with quality-aware benchmark baselines. In particular, besides ResNet-50 and EfficientNet-B1, for each of the _raw_, _c23_, and _c40_ datasets, we implement ADD [32] and BZNet [33] models. ADD and BZNet are the best-performing methods, utilizing knowledge distillation and super-resolution approaches, respectively, for detecting deepfakes of different qualities; hence, we only include them in this experiment. In the inference phase, the performance of these models is validated with prior knowledge of the input image's quality, _i.e_., _c40_ images are evaluated by the corresponding _c40_ pre-trained models. Meanwhile, our universal QAD is **blindly** evaluated without such prior knowledge. We integrate our QAD on ResNet-50 and EfficientNet-B1 and present their performance in Table 3. As we can observe, our QAD-R model performs slightly better than or on par with the ResNet-50, BZNet, and ADD models, despite having only **one-third of the number of parameters** and **no prior knowledge of input image quality**. Moreover, when integrated with EfficientNet-B1, QAD-E achieves a new SOTA performance with an improvement of up to \(0.89\%\) points (97.79% vs. 96.90%), with a modest number of parameters (6.5M).
### Ablation studies
#### 4.4.1 \(\alpha\) and \(\gamma\) of our loss
We investigate the sensitivity of our QAD with respect to \(\alpha\) and \(\gamma\), and summarize the results of our analysis in Fig. 3. In this study, we experiment with ResNet-50 on the NeuralTextures dataset, which is the hardest dataset to detect when compressed. We vary the values of the hyperparameters \(\alpha\in\{0.002,0.004,0.008\}\) and \(\gamma\in\{0.001,0.002,0.003,0.004\}\), where the value at \((\alpha,\gamma)=(0.0,0.0)\) indicates the baseline. The results suggest that for any \(\alpha\) greater than \(0.002\), the performance of our model is high and stable, surpassing the current SOTA. Additionally, increasing \(\gamma\) generally improves performance. Note that, as we did not tune the hyper-parameters to optimize the test accuracy, Section 4.3's hyper-parameters are not the best possible, despite outperforming all current methods on the datasets.
#### 4.4.2 Selection of losses
We study different alternatives for the _collaborative learning loss_ and the _adversarial weight perturbation_ approach. In particular, for the collaborative learning loss, we apply the loss function that was introduced by [55], in which they aggregate the logits of different views, combining them with their true labels to generate the soft labels. Besides, we replace our HSIC regularization with intermediate pairwise loss (Eq. 3) and center loss. Regarding the adversarial weight perturbation, we further apply the KL divergence between the representations of raw and compressed images to perturb the model's weights.
We report the results in Table 4, where we observe that the soft label loss fails to improve the baselines due to a
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline \multicolumn{1}{c|}{\multirow{2}{*}{Model / loss}} & \multicolumn{3}{c}{ResNet-50} \\ \cline{3-4} & & ACC (\%) & AUC (\%) \\ \hline \hline \multicolumn{4}{c}{Baseline} & 78.8 & 88.2 \\ \hline \multirow{4}{*}{_Coll. loss_} & Soft-label & 77.0 & 84.0 \\ & Pairwise loss & 79.7 & 89.1 \\ & Center loss & 79.8 & 88.9 \\ & HSIC & **80.3** & **90.1** \\ \hline \multirow{2}{*}{_Adv. loss_} & AWP-KL & 80.9 & 89.4 \\ & AWP-XE & **81.7** & **90.7** \\ \hline \multicolumn{4}{c}{QAD _(ours)_} & **82.2** & **91.3** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance (ACC & AUC) of ResNet50 integrated with different losses.
Figure 3: Model’s performance versus \(\alpha\) and \(\gamma\) on the NeuralTextures.
lack of low-level representation agreement between different image qualities. While both the pairwise and center losses slightly improve the baselines and are unstable across different model architectures, our HSIC consistently achieves the best performance by relaxing the instance-based constraint. Meanwhile, we can observe that the model's performance drops in terms of both ACC and AUC when replacing the cross-entropy loss in our AWP with the KL divergence. Generally, this experiment shows that using pairwise differences of various-quality image representations at the output, such as the soft label, pairwise constraint, or AWP-KL, for optimizing the model can hinder its convergence to the optimal parameters.
#### 4.4.3 Experiment with different backbones
Table 5 shows the compatibility of our QAD with _three_ different backbone networks: ResNet-18, ResNet-34, and EfficientNet-B0. The hyperparameter settings are kept the same as for ResNet-50 and EfficientNet-B1. As shown in Table 5, our QAD consistently improves over the baselines, by 0.86% to 1.3% points on average over the seven deepfake datasets.
#### 4.4.4 Performance at different input scales
Unlike in other classification tasks, a notable factor that substantially affects deepfake detection performance, which is omitted by most previous works [50, 45, 69], is the input size of faces. We resize the input images from \(56\) to \(336\) and demonstrate how this impacts our QAD in comparison with the ResNet-50 baseline. The experiment is performed with the NeuralTextures dataset, and its results are reported in Fig. 4. We note that our proposed QAD and the baselines consistently improve their performance as the input size increases. Besides, our method also keeps a steady improvement over the ResNet-50 baseline across the input resolutions.
#### 4.4.5 Feature distribution visualization
To verify that the learned representations are invariant to the input quality, we visualize the feature distributions of EfficientNet-B1 and our QAD-EfficientNet-B1 pre-trained on NeuralTextures (with raw, c23, and c40 datasets) with t-SNE [61]. The results are shown in Fig. 5. As observed, our QAD model's representations are less dispersed, both intra-class and across qualities. This experiment demonstrates that a model trained with the traditional cross-entropy loss on multiple input qualities becomes confused due to the low-level constraints, while our QAD enables the model to generalize better regardless of input quality.
## 5 Conclusion
Most deep learning-based deepfake detectors use a single model for each video quality, leaving an unsolved practical issue of their generalizability for detecting deepfakes of different qualities. In this work, we propose a universal deepfake detection framework (QAD). Using intra-model collaborative learning, we minimize the geometrical differences between images of various qualities at different intermediate layers via the HSIC module. Moreover, our adversarial weight perturbation (AWP) module is directly applied to the model's parameters to provide robustness against input image compression. Extensive experiments show that our QAD achieves competitive detection accuracy and marks new SOTA results on various deepfake datasets without prior knowledge of input image quality.
**Acknowledgements.** This work was partly supported by Institute for Information & communication Technology Planning & evaluation (IITP) grants funded by the Korean government MSIT: (No. 2022-0-01199, Graduate School of Convergence Security at Sungkyunkwan University), (No. 2022-0-01045, Self-directed Multi-Modal Intelligence for solving unknown, open domain problems), (No. 2022-0-00688, AI Platform to Fully Adapt and Reflect Privacy-Policy Changes), (No. 2021-0-02068, Artificial Intelligence Innovation Hub), (No. 2019-0-00421, AI Graduate School Support Program at Sungkyunkwan University), and (No. RS-2023-00230337, Advanced and Proactive AI Platform Research and Development Against Malicious deepfakes).
Figure 4: Performance (AUC) of our proposed method at different input resolutions with NeuralTextures dataset.
Figure 5: t-SNE visualisation of baseline and our QAD. |
2305.19753 | The Tunnel Effect: Building Data Representations in Deep Neural Networks | Deep neural networks are widely known for their remarkable effectiveness
across various tasks, with the consensus that deeper networks implicitly learn
more complex data representations. This paper shows that sufficiently deep
networks trained for supervised image classification split into two distinct
parts that contribute to the resulting data representations differently. The
initial layers create linearly-separable representations, while the subsequent
layers, which we refer to as \textit{the tunnel}, compress these
representations and have a minimal impact on the overall performance. We
explore the tunnel's behavior through comprehensive empirical studies,
highlighting that it emerges early in the training process. Its depth depends
on the relation between the network's capacity and task complexity.
Furthermore, we show that the tunnel degrades out-of-distribution
generalization and discuss its implications for continual learning. | Wojciech Masarczyk, Mateusz Ostaszewski, Ehsan Imani, Razvan Pascanu, Piotr Miłoś, Tomasz Trzciński | 2023-05-31T11:38:24Z | http://arxiv.org/abs/2305.19753v2 | # The Tunnel Effect: Building Data Representations
###### Abstract
Deep neural networks are widely known for their remarkable effectiveness across various tasks, with the consensus that deeper networks implicitly learn more complex data representations. This paper shows that sufficiently deep networks trained for supervised image classification split into two distinct parts that contribute to the resulting data representations differently. The initial layers create linearly-separable representations, while the subsequent layers, which we refer to as _the tunnel_, compress these representations and have a minimal impact on the overall performance. We explore the tunnel's behavior through comprehensive empirical studies, highlighting that it emerges early in the training process. Its depth depends on the relation between the network's capacity and task complexity. Furthermore, we show that the tunnel degrades out-of-distribution generalization and discuss its implications for continual learning.
## 1 Introduction
Neural networks have been the powerhouse of machine learning in the last decade. A significant effort has been put into understanding the mechanisms underlying their effectiveness. One example is the analysis of building representations in neural networks applied to image processing [19]. The consensus is that networks learn to use layers in the hierarchy by extracting more complex features than the layers before [22; 41], meaning that each layer contributes to the final network performance.
Extensive research has shown that increasing network depth exponentially enhances capacity, measured as the number of linear regions [35; 46; 50]. However, practical scenarios reveal that deep and overparameterized neural networks tend to simplify representations with increasing
Figure 1: **The tunnel effect** for VGG19 trained on CIFAR-10. In the tunnel (shaded area), the performance of linear probes attached to each layer saturates (blue line), and the representations rank is steeply reduced (red dashed line).
depth [13; 53]. This paradox arises because, despite their large capacity, these networks strive to reduce dimensionality and focus on discriminative patterns during supervised training [13; 15; 42; 53]. Motivated by these contradictory findings, we aim to investigate this phenomenon further and formulate the following research questions:
_How do representations depend on the depth of a layer?_
Our investigation focuses on severely overparameterized neural networks through the prism of their representations as the core components for studying neural network behavior [20; 38].
We challenge the commonly held intuition that deeper layers are responsible for capturing more complex and task-specific features [41; 57].
Specifically, we demonstrate that deep neural networks split into two parts exhibiting distinct behavior. The first part, which we call the extractor, builds representations, while the other, dubbed _the tunnel_, propagates the representations further to the model's output, compressing them significantly. To investigate the tunnel effect, we conduct multiple experiments that support our findings and shed some light on the potential source of this behavior. Our findings can be summarized as follows:
* We discover and extensively examine the tunnel effect, namely, deep networks naturally split into _the extractor_ responsible for building representations and _the compressing tunnel_, which minimally contributes to the final performance. The extractor-tunnel split emerges early in training and persists later on.
* We show that the tunnel deteriorates the generalization ability on out-of-distribution data.
* We show that the tunnel exhibits task-agnostic behavior in a continual learning scenario. Simultaneously it leads to higher catastrophic forgetting of the model.
## 2 The tunnel effect
The paper introduces and studies a phenomenon in the dynamics of representation building in overparameterized deep neural networks, which we call _the tunnel effect_. The following section validates the tunnel effect hypothesis in a number of settings. Through an in-depth examination in Section 3.1, we reveal that the tunnel effect is present from the initial stages and persists throughout the training process. Section 3.2 focuses on the out-of-distribution generalization and representations compression. Section 3.3 hints at important factors that impact the depth of the tunnel. Finally, in Section 4, we confront an auxiliary question: How does the tunnel's existence impact a model's adaptability to changing tasks and its vulnerability to catastrophic forgetting? To answer these questions we formulate our main claim as:
_The tunnel effect hypothesis: Sufficiently large * neural networks develop a configuration in which network layers split into two distinct groups. The first one which we call the extractor, builds linearly-separable representations. The second one, the tunnel, compresses these representations, hindering the model's out-of-distribution generalization._
Footnote *: We note that ‘sufficiently large’ covers most modern neural architectures, which tend to be heavily overparameterized.
### Experimental setup
To examine the phenomenon, we designed the setup to include the most common architectures and datasets, and use several different metrics to validate the observations.
**Architectures** We use three different families of architectures: MLP, VGGs, and ResNets. We vary the number of layers and width of networks to test the generalizability of results. See details in Appendix A.1.
**Tasks** We use three image classification tasks to study the tunnel effect: CIFAR-10, CIFAR-100, and CINIC-10. The datasets vary in the number of classes: \(10\) for CIFAR-10 and CINIC-10 and \(100\) for CIFAR-100, and the number of samples: \(50000\) for CIFAR-10 and CIFAR-100, and \(250000\) for CINIC-10. See details in Appendix A.2.
We probe the effects using: _the average accuracy of linear probing, spectral analysis of representations, and the CKA similarity between representations_. Unless stated otherwise, we report the average of \(3\) runs.
**Accuracy of linear probing:** a linear classification layer is attached to a given layer \(\ell\) of the neural network. We train this layer on the classification task and report the average accuracy. This metric measures to what extent \(\ell\)'s representations are linearly separable.
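A minimal sketch of this metric is given below (our own illustration, not the released code): it trains a single linear layer, full batch for simplicity, on precomputed frozen features of a given layer \(\ell\); the feature tensors and the number of classes are assumed inputs.

```python
import torch
import torch.nn as nn

def linear_probe_accuracy(train_feats, train_labels, test_feats, test_labels,
                          num_classes, epochs=100, lr=1e-3):
    """Train a linear classifier on frozen layer-l features and return test accuracy."""
    probe = nn.Linear(train_feats.shape[1], num_classes)
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(probe(train_feats), train_labels).backward()
        opt.step()
    with torch.no_grad():
        preds = probe(test_feats).argmax(dim=1)
        return (preds == test_labels).float().mean().item()
```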
**Numerical rank of representations:** we compute singular values of the sample covariance matrix for a given layer \(\ell\) of the neural network. Using the spectrum, we estimate the numerical rank of the given representations matrix as the number of singular values above a certain threshold \(\sigma\). The numerical rank of the representations matrix can be interpreted as the measure of the degeneracy of the matrix.
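The estimate can be sketched as follows (our illustration; the exact threshold \(\sigma\) is not specified in this excerpt, so a relative cutoff on the spectrum of the sample covariance is used as an assumption).

```python
import torch

def numerical_rank(features, rel_threshold=1e-3):
    """Count singular values of the sample covariance above a relative threshold.
    features: (num_samples, num_dims) matrix of layer-l representations."""
    x = features - features.mean(dim=0, keepdim=True)
    cov = x.T @ x / (x.shape[0] - 1)
    svals = torch.linalg.svdvals(cov)
    return int((svals > rel_threshold * svals.max()).sum().item())
```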
**CKA similarity:** is a metric computing similarity between two representations matrices. Using this normalized index, we can identify the blocks of similar representations within the network. The definition and more details can be found in Appendix E.
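Since the precise definition is deferred to Appendix E (not reproduced here), the sketch below uses the standard linear CKA as an illustrative stand-in.

```python
import torch

def linear_cka(X, Y):
    """Linear CKA between two (num_samples, num_dims) representation matrices."""
    X = X - X.mean(dim=0, keepdim=True)
    Y = Y - Y.mean(dim=0, keepdim=True)
    cross = (Y.T @ X).norm() ** 2   # ||Y^T X||_F^2
    norm_x = (X.T @ X).norm()       # ||X^T X||_F
    norm_y = (Y.T @ Y).norm()       # ||Y^T Y||_F
    return (cross / (norm_x * norm_y)).item()
```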
**Inter and Intra class variance:** inter-class variance refers to the measure of dispersion or dissimilarity between different classes or groups in a dataset, indicating how distinct they are from each other. Intra-class variance, on the other hand, measures the variability within a single class or group, reflecting the homogeneity or similarity of data points within that class. The exact formula for computing these values can be found in Appendix F
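The exact formulas are given in Appendix F (not included here); the sketch below implements one standard version of the two quantities as an illustration.

```python
import torch

def intra_inter_class_variance(features, labels):
    """Intra: mean squared distance of samples to their class mean.
    Inter: mean squared distance of class means to the global mean."""
    global_mean = features.mean(dim=0)
    intra, centers = [], []
    for c in labels.unique():
        class_feats = features[labels == c]
        mu = class_feats.mean(dim=0)
        centers.append(mu)
        intra.append(((class_feats - mu) ** 2).sum(dim=1).mean())
    centers = torch.stack(centers)
    inter = ((centers - global_mean) ** 2).sum(dim=1).mean()
    return torch.stack(intra).mean().item(), inter.item()
```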
### The main result
Table 1 presents our main result. Namely, we report the network layer at which the tunnel begins which we define as the point at which the network reaches \(95\%\) (or \(98\%\)) of its final accuracy. We found that all tested architectures exhibit the extractor-tunnel structure across all datasets used in the evaluation, but the relative length of the tunnel varies between architectures.
| Architecture | # layers | Dataset | > 0.95 | > 0.98 |
|---|---|---|---|---|
| MLP | 13 | CIFAR-10 | 4 (31%) | 5 (38%) |
| VGG | 19 | CIFAR-10 | 7 (36%) | 7 (36%) |
| VGG | 19 | CIFAR-100 | 8 (42%) | 8 (42%) |
| VGG | 19 | CINIC-10 | 7 (36%) | 7 (36%) |
| ResNet | 34 | CIFAR-10 | 20 (58%) | 29 (85%) |
| ResNet | 34 | CIFAR-100 | 29 (85%) | 30 (88%) |

Table 1: The tunnel of various lengths is present in all tested configurations. For each architecture and dataset, we report the layer for which the _average linear probing accuracy is above \(0.95\) and \(0.98\) of the final performance_. The values in brackets describe the part of the network utilized for building representations with the extractor.
Figure 2: The tunnel effect for networks trained on CIFAR-10. The blue line depicts the linear probing accuracy, and the shaded area depicts the tunnel. The red dashed line is the numerical rank of representations. The spike in the ResNet-34 representations rank coincides with the end of the penultimate residual stage.
We now discuss the tunnel effect using MLP-12, VGG-19, and ResNet-34 on CIFAR-10 as an example. The remaining experiments (for other architectures, datasets combinations) are available in Appendix B. As shown in Figure 1 and Figure 2, the early layers of the networks, around five for MLP and eight for VGG, are responsible for building linearly-separable representations. Linear probes attached to these layers achieve most of the network's final performance. These layers mark the transition between the extractor and the tunnel part (shaded area). In the case of ResNets, the transition takes place in deeper stages of the network at the \(19^{th}\) layer.
While the linear probe performance nearly saturates in the tunnel part, the representations are further refined. Figure 2 shows that the numerical rank of the representations (red dashed line) is reduced to approximately the number of CIFAR-10 classes, which is similar to the neural collapse phenomenon observed in [42]. For ResNets, the numerical rank is more dynamic, exhibiting a spike at \(29^{th}\) layer, which coincides with the end of the penultimate residual block. Additionally, the rank is higher than in the case of MLPs and VGGs.
Figure 3 reveals that for VGG-19 the intra-class variation of the representations decreases throughout the tunnel, meaning that the representation clusters contract towards their centers. At the same time, the average distance between the centers of the clusters (the inter-class variance) grows. This view aligns with the observation from Figure 2, where the rank of the representations drops to values close to the number of classes. Figure 3 (right) presents an intuitive explanation of this behavior with UMAP [31] plots of the representations before and after the tunnel.
To complement this analysis, we studied the similarity of MLP representations using the CKA index and the L1 norm of the differences between representations of consecutive layers. Figure 4 shows that the representations change significantly in the early layers and remain similar within the tunnel when measured with the CKA index (left). The L1 norm of the differences between representations of consecutive layers is shown on the right side of Figure 4.
## 3 Tunnel effect analysis
This section provides empirical evidence contributing to our understanding of the tunnel effect. We hope that these observations will eventually lead to explanations of this phenomenon. In particular, we show that a) the tunnel develops early during training time, b) it compresses the representations and hinders OOD generalization, and c) its size is correlated with network capacity and dataset complexity.
### Tunnel development
**Motivation** In this section, we investigate tunnel development during training. Specifically, we try to understand whether the tunnel is a phenomenon exclusively related to the representations, and which part of the training is crucial for tunnel formation.

**Experiments** We train a VGG-19 on CIFAR-10 and save intermediate checkpoints every \(10\) epochs of training. We use these checkpoints to compute the layer-wise weight change during training (Figure 5) and the evolution of the numerical rank throughout training (Figure 6).
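A minimal sketch of the layer-wise weight-change computation between saved checkpoints is shown below. It is our own illustration: the exact distance measure behind Figure 5 is not given in this excerpt, so a relative L2 norm is used, the checkpoint file names are placeholders, and the checkpoints are assumed to store raw `state_dict`s.

```python
import torch

def layerwise_weight_change(ckpt_path_a, ckpt_path_b):
    """Relative L2 change of every floating-point parameter between two checkpoints."""
    state_a = torch.load(ckpt_path_a, map_location="cpu")
    state_b = torch.load(ckpt_path_b, map_location="cpu")
    changes = {}
    for name, w0 in state_a.items():
        if name in state_b and torch.is_floating_point(w0):
            w1 = state_b[name]
            changes[name] = ((w1 - w0).norm() / (w0.norm() + 1e-12)).item()
    return changes

# e.g. layerwise_weight_change("vgg19_epoch_000.pt", "vgg19_epoch_010.pt")
```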
**Results** Figure 5 shows that the split between the extractor and the tunnel is also visible in parameter space. It can already be observed at the early stages of training, and after that its length stays roughly constant. Tunnel layers change significantly less than layers from the extractor. This result raises the question of whether the weight change affects the network's final output. Inspired by [59], we reset the weights of these layers to their state before optimization. However, the performance of the model deteriorated significantly. This suggests that although the change within the tunnel's parameters is relatively small, it plays an important role in the model's performance. Figure 6 shows that this apparent paradox can be better understood by looking at the evolution of the representations' numerical rank during the very first gradient updates of the model. Throughout these steps, the rank collapses to values near the number of classes. It stays in this regime until the end of training, meaning that the representations of the model evolve within a low-dimensional subspace. It remains to be understood whether (and why) low-rank representations and changing weights coincide with forming linearly-separable representations.
**Takeaway** Tunnel formation is observable both in representation space and in parameter space. It emerges early in training and persists throughout the whole optimization. The collapse in the numerical rank of deeper layers suggests that they preserve only the information necessary for the task.
Figure 6: The representations rank for deeper layers collapse early in training. The curves present the evolution of representations’ numerical rank over the first \(75\) training steps for all layers of the VGG-19 trained on CIFAR-10. We present a more detailed tunnel development analysis in Appendix G.
### Compression and out-of-distribution generalization
**Motivation** Practitioners observe that intermediate layers perform better than the penultimate one for transfer learning [5; 23; 48]. However, the reason behind their effectiveness remains unclear [9]. In this section, we investigate whether the tunnel, and specifically the collapse of the numerical rank within the tunnel, impacts the performance on out-of-distribution (OOD) data.
**Experiments** We train neural networks (MLPs, VGG-19, ResNet-34) on a source task (CIFAR-10) and evaluate it with linear probes on the OOD task, in this case, a subset of 10 classes from CIFAR-100. We report the accuracy of linear probing and the numerical rank of the representations.
**Results** Our results presented in Figure 7 reveal that _the tunnel is responsible for the degradation of out-of-distribution performance_. In most of our experiments, the last layer before the tunnel is the optimal choice for training a linear classifier on external data. Interestingly, we find that the OOD performance is tightly coupled with the numerical rank of the representations, which significantly decreases throughout the tunnel.
To assess the generality of our findings, we extend the proposed experimental setup to an additional dataset. To that end, we train a model on different subsets of CIFAR-100 while evaluating it with linear probes on CIFAR-10. The results presented in Figure 8 are consistent with our initial findings. We include a detailed analysis with the reverse experiment (CIFAR-10 \(\rightarrow\) CIFAR-100), additional architectures, and datasets in Appendix C.
In all tested scenarios, we observe a consistent relationship between the start of the tunnel and the drop in OOD performance. An increasing number of classes in the source task results in a shorter tunnel and a later drop in OOD performance. In the fixed source task experiment (Appendix C), the drop in performance occurs around the \(7^{th}\) layer of the network for all tested target tasks, which matches the start of the tunnel. This observation aligns with our earlier findings suggesting that the tunnel is a prevalent characteristic of the model rather than an artifact of a particular training or dataset setup.
Moreover, we connect the coupling between the numerical rank of the representations and OOD performance to a potential tension between the objective of supervised learning and generalization in the OOD setup. An analogous tension was observed in [52], where adversarial robustness is at odds with the model's accuracy. The results in Figure 7 align with the findings presented in Figure 3, demonstrating how the tunnel compresses clusters of class-wise representations. In [54], the authors show that reducing the variation within each class leads to lower model transferability. Our experiments support this observation and identify the tunnel as the primary contributor to this effect.
**Takeaway** Compression of representations happening in the tunnel severely degrades the OOD performance of the model which is tightly coupled with the drop of representations rank.
Figure 8: Fewer classes in the source task create a longer tunnel, resulting in worse OOD performance. The network is trained on subsets of CIFAR-100 with different numbers of classes, and linear probes are trained on CIFAR-10. Shaded areas depict the respective tunnels.
Figure 7: The tunnel degrades the out-of-distribution performance correlated with the representations’ numerical rank. The accuracy of linear probes (blue) was trained on the out-of-distribution data subset of 10 classes from CIFAR-100. The backbone was trained on CIFAR-10. The shaded area depicts the tunnel, and the red dashed line depicts the numerical rank of representations.
### Network capacity and dataset complexity
**Motivation** In this section, we explore what factors contribute to the tunnel's emergence. Based on the results from the previous section we explore the impact of dataset complexity, network's depth, and width on tunnel emergence.
**Experiments** First, we examine the impact of network depth and width on the tunnel using MLPs (Figure 9), VGGs, and ResNets (Table 2) trained on CIFAR-10. Next, we train VGG-19 and ResNet34 on the CIFAR-{10,100} and CINIC-10 datasets, investigating the role of dataset complexity in the tunnel's emergence.
**Results** Figure 9 shows that the depth of the MLP network has no impact on the length of the extractor part. Therefore increasing the network's depth contributes only to the tunnel's length. Both extractor section and numerical rank remain relatively consistent regardless of the network's depth, starting the tunnel at the same layer. This finding suggests that overparameterized neural networks allocate a fixed capacity for a given task independent of the overall capacity of the model.
Results in Table 2 indicate that the tunnel length increases as the width of the network grows, implying that representations are formed using fewer layers. However, this trend does not hold for ResNet34, as the longest tunnel is observed with the base width of the network. In the case of VGGs, the number of layers in the network does not affect the number of layers required to form representations. This aligns with the results in Figure 9.
The results presented above were obtained from a dataset with a consistent level of complexity. The data in Table 3 demonstrates that the number of classes in the dataset directly affects the length of the tunnel. Specifically, even though the CINIC-10 training dataset is three times larger than CIFAR-10, the tunnel length remains the same for both datasets. This suggests that the number of samples in the dataset does not impact the length of the tunnel. In contrast, when examining CIFAR-100 subsets, the tunnel length for both VGGs and ResNets increases. This indicates a clear relationship between the dataset's number of classes and the tunnel's length.
**Takeaway** Deeper or wider networks result in longer tunnels. Networks trained on datasets with fewer classes have longer tunnels.
| Model | ×1/4 | ×1 | ×2 |
|---|---|---|---|
| VGG-16 | 8 (50%) | 7 (44%) | 7 (44%) |
| VGG-19 | 8 (42%) | 7 (37%) | 7 (37%) |
| ResNet18 | 15 (83%) | 13 (72%) | 13 (72%) |
| ResNet34 | 24 (68%) | 20 (59%) | 24 (68%) |

Table 2: Widening network layers results in a longer tunnel and a shorter extractor. Column headings describe the factor by which we scale each model's base number of channels. The models were trained on CIFAR-10 to full convergence. We use the \(95\%\) threshold of probing accuracy to estimate the tunnel beginning.
| Model | Dataset | 30% | 50% | 100% |
|---|---|---|---|---|
| VGG-19 | CIFAR-10 | 6 (32%) | 7 (37%) | 7 (37%) |
| VGG-19 | CIFAR-100 | 8 (42%) | 8 (42%) | 9 (47%) |
| VGG-19 | CINIC-10 | 6 (32%) | 7 (37%) | 7 (37%) |
| ResNet34 | CIFAR-10 | 19 (56%) | 19 (56%) | 21 (61%) |
| ResNet34 | CIFAR-100 | 30 (88%) | 30 (88%) | 31 (91%) |

Table 3: Networks trained on tasks with fewer classes utilize fewer resources for building representations and exhibit longer tunnels. Column headings describe the size of the class subset used in training. Within the (architecture, dataset) pair, the number of gradient steps during training in all cases was the same. We use the \(95\%\) threshold of probing accuracy to estimate the tunnel beginning.
Figure 9: Networks allocate a fixed capacity for the task, leading to longer tunnels in deeper networks. The extractor is consistent across all scenarios, with the tunnel commencing at the 4th layer.
## 4 The tunnel effect under data distribution shift
Based on the findings from the previous section and the tunnel's negative impact on transfer learning, we investigate the dynamics of the tunnel in continual learning scenarios, where large models are often used on smaller tasks typically containing only a few classes. We focus on understanding the impact of the tunnel effect on transfer learning and catastrophic forgetting [11]. Specifically, we examine how the tunnel and extractor are altered after training on a new task.
### Exploring the effects of task incremental learning on extractor and tunnel
**Motivation** In this section, we aim to understand the tunnel and extractor dynamics in continual learning. Specifically, we examine whether the extractor and the tunnel are equally prone to catastrophic forgetting.
**Experiments** We train a VGG-19 on two tasks from CIFAR-10. Each task consists of 5 classes from the dataset. We subsequently train on the first and second tasks and save the corresponding extractors \(E_{t}\) and tunnels \(T_{t}\), where \(t\in\{1,2\}\) is the task number. We also save a separate classifying head trained on each task, which we use during evaluation.
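The extractor/tunnel recombination of Table 4 can be expressed as in the sketch below, which is our own simplification: the cut index, the layer lists, and the saved heads are assumptions standing in for the actual VGG-19 splits used in the experiment.

```python
import torch.nn as nn

def split_backbone(layers, cut):
    """Split an ordered list of layers into an extractor (before `cut`) and a tunnel (rest)."""
    return nn.Sequential(*layers[:cut]), nn.Sequential(*layers[cut:])

def mixed_model(extractor, tunnel, head):
    """Compose an extractor from one task with a tunnel from another and a task head."""
    return nn.Sequential(extractor, tunnel, nn.Flatten(), head)

# Hypothetical usage with two trained backbones and a cut at the extractor/tunnel boundary:
# E1, T1 = split_backbone(layers_task1, cut=20)
# E2, T2 = split_backbone(layers_task2, cut=20)
# model_E2_T1 = mixed_model(E2, T1, head_task1)  # the E2 + T1 row of Table 4
```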
**Results** As presented in Table 4, in any combination, changing \(T_{1}\) to \(T_{2}\) or vice versa has a marginal impact on the performance. This is quite remarkable, and suggests that the tunnel is not specific to the training task. It seems that it _compresses the representations in a task-agnostic way_. The extractor part, on the other hand, is _task-specific_ and prone to forgetting, as visible in the first four rows of Table 4. In the last two rows, we present two experiments that investigate how the existence of a tunnel affects the possibility of recovering from this catastrophic forgetting. In the first one, referred to as (\(E_{2}+T_{1}(FT)\)), we use original data from Task 1 to retrain a classifying head attached on top of extractor \(E_{2}\) and the tunnel \(T_{1}\). As shown, this has minimal effect on the accuracy of the first task. In the second experiment, we attach a linear probe directly to the extractor representations (\(E_{2}(FT)\)). This difference hints at a detrimental effect of the tunnel on the representations' usability in continual learning.
In Appendix D.1 we study this effect further by training tunnels on two tasks with different numbers of classes \(n_{1}>n_{2}\). In this scenario, we observe that the tunnel trained with more classes (\(T_{1}\)) maintains the performance on both tasks, contrary to the tunnel (\(T_{2}\)), which performs poorly on Task 1. This is in line with our previous observations in Section 2.2, that the tunnel compresses to the effective number of classes.
These results present a novel perspective in the ongoing debate regarding the layers responsible for causing forgetting. However, they do not align with the observations made in the previous study [47]. In Appendix D, we delve into the origin of this discrepancy and provide a comprehensive analysis of the changes in representations with a setup introduced with this experiment and the CKA similarity.
**Takeaway** The tunnel's task-agnostic compression of representations provides immunity against catastrophic forgetting when the number of classes is equal. These findings offer fresh perspectives on studying catastrophic forgetting at specific layers, broadening the current understanding in the literature.
| Model | First Task | Second Task |
|---|---|---|
| \(E_{1}+T_{1}\) | 92.04% | 56.8% |
| \(E_{1}+T_{2}\) | 92.5% | 58.04% |
| \(E_{2}+T_{2}\) | 50.84% | 93.94% |
| \(E_{2}+T_{1}\) | 50.66% | 93.72% |
| \(E_{2}+T_{1}\) (FT) | 56.1% | – |
| \(E_{2}\) (FT) | 74.4% | – |

Table 4: The tunnel part is task-agnostic and can be freely mixed with different extractors retaining the original performance. We test the model's performance on the first or second task using a combination of extractor \(E_{t}\) and tunnel \(T_{t}\) from tasks \(t\in\{1,2\}\). The last two rows \((FT)\) show how much performance can be recovered by retraining the linear probe attached to the penultimate layer (\(E_{2}+T_{1}\)) or the last layer of \(E_{2}\).
### Reducing catastrophic forgetting by adjusting network depth
**Motivation** Experiments from this section verify whether it is possible to retain the performance of the original model by training a shorter version of the network. A shallower model should also exhibit less forgetting in sequential training.
**Experiments** We train VGG-19 networks with different numbers of convolutional layers. Each network is trained on two tasks from CIFAR-10. Each task consists of 5 classes from the dataset.
**Results:** The results shown in Figure 10 indicate that training shorter networks yields similar performance compared to the original model. However, performance differences become apparent when the network becomes shorter than the extractor part in the original model. This observation aligns with previous findings suggesting that the model requires a certain capacity to perform the task effectively. Additionally, the shorter models exhibit significantly less forgetting, which corroborates the conclusions drawn in previous works [32; 34] on the importance of network depth and architecture in relation to forgetting.
**Takeaway** It is possible to train shallower networks that retain the performance of the original networks and experience significantly less forgetting. However, the shorter networks need to have at least the same capacity as the extractor part of the original network.
## 5 Limitations and future work
This paper empirically investigates the tunnel effect, opening the door for future theoretical research on tunnel dynamics. Further exploration could involve mitigating the tunnel effect through techniques like adjusting learning rates for specific layers. One limitation of our work is its validation within a specific scenario (image classification), while further studies on unsupervised or self-supervised methods with other modalities would shed more light and verify the pertinence of the tunnel elsewhere.
In the experiments, we observed that ResNet-based networks exhibited shorter tunnels than plain MLPs or VGGs. This finding raises the question of whether the presence of skip connections plays a role in tunnel formation. In Appendix H, we take the first step toward a deeper understanding of this relationship by examining the emergence of tunnels in ResNets without skip connections.
## 6 Related work
The analysis of representations in neural network training is an established field [28; 56; 58]. Previous studies have explored training dynamics and the impact of model width [18; 26; 30; 45; 51; 55], but there is still a gap in understanding training dynamics [4; 37; 47; 58]. Works have investigated different architectures' impact on continual learning [33; 34] and linear models' behavior [10; 24; 25; 29]. Our work builds upon studies examining specific layers' role in model performance [4; 9; 38; 39; 45; 59] and sheds light on the origins of observed behaviors [12; 16; 42; 62].
Previous works have explored the role of specific layers in model performance [4; 9; 38; 39; 45; 59]. While some studies have observed a block structure in neural network representations, their analysis was limited to ResNet architectures and did not consider continual learning scenarios. In our work, we investigate a similar phenomenon, expanding the range of experiments and gaining deeper insights into its origins.
In [59], authors distinguish between critical and robust layers, highlighting the importance of the former for model performance, while individual layers from the latter can be reset without impacting the final performance. Our analysis builds upon this finding and further categorizes these layers into the extractor and tunnel, providing insights into their origins and their effects on model performance and generalization ability.
Figure 10: Training shorter networks from scratch gives a similar performance to the longer counterparts (top) and results in significantly lower forgetting (bottom). The horizontal lines denote original model’s performance.
Our findings are related to the Neural Collapse phenomenon [42], which has gained recent attention [12; 16; 62]. In our experiments, we also analyze the rank of the representation matrix and observe that the examined tunnel is characterized by a low representation rank.
## 7 Conclusions
This work presents new insights into the behavior of deep neural networks during training. We discover the tunnel effect, an intriguing phenomenon in modern deep networks where they split into two distinct parts - the extractor and the tunnel. The extractor part builds representations, and the tunnel part compresses these representations to a minimum rank without contributing to the model's performance. This behavior is prevalent across multiple architectures and is positively correlated with overparameterization, i.e., it can be induced by increasing the model's size or decreasing the complexity of the task.
We discuss potential sources of the tunnel and highlight the unintuitive behavior of neural networks during the initial training phase. This novel finding has significant implications for improving the performance and robustness of deep neural networks. Moreover, we demonstrate that the tunnel hinders out-of-distribution generalization and can be detrimental in continual learning settings.
Overall, our work offers new insights into the mechanisms underlying deep neural networks and can potentially improve the performance and robustness of these powerful models. |
2309.07860 | Identifying the Group-Theoretic Structure of Machine-Learned Symmetries | Deep learning was recently successfully used in deriving symmetry
transformations that preserve important physics quantities. Being completely
agnostic, these techniques postpone the identification of the discovered
symmetries to a later stage. In this letter we propose methods for examining
and identifying the group-theoretic structure of such machine-learned
symmetries. We design loss functions which probe the subalgebra structure
either during the deep learning stage of symmetry discovery or in a subsequent
post-processing stage. We illustrate the new methods with examples from the
U(n) Lie group family, obtaining the respective subalgebra decompositions. As
an application to particle physics, we demonstrate the identification of the
residual symmetries after the spontaneous breaking of non-Abelian gauge
symmetries like SU(3) and SU(5) which are commonly used in model building. | Roy T. Forestano, Konstantin T. Matchev, Katia Matcheva, Alexander Roman, Eyup B. Unlu, Sarunas Verner | 2023-09-14T17:03:50Z | http://arxiv.org/abs/2309.07860v1 | # Identifying the Group-Theoretic Structure of Machine-Learned Symmetries
###### Abstract
Deep learning was recently successfully used in deriving symmetry transformations that preserve important physics quantities. Being completely agnostic, these techniques postpone the identification of the discovered symmetries to a later stage. In this letter we propose methods for examining and identifying the group-theoretic structure of such machine-learned symmetries. We design loss functions which probe the subalgebra structure either during the deep learning stage of symmetry discovery or in a subsequent post-processing stage. We illustrate the new methods with examples from the U(n) Lie group family, obtaining the respective subalgebra decompositions. As an application to particle physics, we demonstrate the identification of the residual symmetries after the spontaneous breaking of non-Abelian gauge symmetries like SU(3) and SU(5) which are commonly used in model building.
+
Footnote †: journal: Physics Letters B
## 1 Introduction
Investigations of fundamental symmetries and the possible mechanisms for their violations in Nature are at the forefront of modern theoretical physics research [1]. The ideas of supersymmetry [2; 3; 4; 5] and a grand unified theory (GUT) [6; 7] represent attractive possibilities for physics beyond the standard model (SM), and have stimulated significant model-building and phenomenology efforts in the past [8; 9; 10; 11]. The use of artificial intelligence for studying such symmetry paradigms is a tantalizing possibility which recently has been attracting a lot of interest. The initial focus was on symmetry discovery in data collected in specific physical systems, e.g., planetary systems, electrodynamics, etc. [12; 13; 14; 15]. Subsequent studies shifted to the discovery of symmetries in purely theoretical constructs as well [16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. In either case, the natural language for discussing such sets of symmetries is group theory. It was shown that through a suitable choice of a loss function, it is possible to find a closed orthonormal set of symmetry generators that form a Lie algebra [22; 23; 24; 25]. The natural follow-up questions to ask are: What kind of Lie algebra has been found? What is its rank? Is it semi-simple? Can it be decomposed into a direct sum of sub-algebras and how?
The issue of subgroups and their respective subalgebras is a central one in the discussion of spontaneous symmetry breaking, whereby the symmetry of the full Lie group is reduced to that of one of its subgroups. Textbook examples from particle physics include the breaking of the electroweak \(SU(2)\times U(1)\) gauge symmetry in the Standard Model (SM) to the \(U(1)\) of electromagnetism, as well as various GUT breaking scenarios to the SM itself.
The main focus of this letter is on investigating the group-theoretic structure of a machine-learned set of symmetries. The object of interest will be the set \(\mathfrak{s}\) of \(N_{\mathfrak{s}}\) symmetry generators \(\mathbb{J}_{\alpha}\), \(\alpha=1,2,\ldots,N_{\mathfrak{s}}\), which can be found numerically following the recent methods in [21; 22; 23; 24; 25]. The specific system exhibiting these symmetries is of no particular significance -- it could be a numerical dataset, or a theory model. Our goal will be to identify the subalgebra structure of the set \(\mathfrak{s}\) by addressing the following questions.
* **Allowed subalgebras.** Does the set \(\mathfrak{s}\) contain valid subalgebras \(\mathfrak{h}\subset\mathfrak{s}\), with \(N_{\mathfrak{h}}<N_{\mathfrak{s}}\) generators? If so, what are all the possible integer values of \(N_{\mathfrak{h}}\)?
* **Cartan subalgebra.** What is the rank of \(\mathfrak{s}\), i.e., what is the dimension of the _maximal_ abelian subalgebra \(\mathfrak{h}_{c}\)?
* **Composition series.** Can the full symmetry algebra \(\mathfrak{s}\) be represented as a direct sum of \(h\) simple algebras as \(\mathfrak{h}_{1}\oplus\mathfrak{h}_{2}\oplus\ldots\oplus\mathfrak{h}_{h}\), for some value of \(h\)?
The paper is organized as follows. In Section 2 we develop the general formalism for addressing those
questions. In Section 3 the technique is illustrated with the example of the \(u(4)\) algebra. Particle physics applications are considered in Section 4, where we apply the method to gauge models exhibiting spontaneous symmetry breaking and identify the residual symmetries. Section 5 contains our conclusions. Appendix A provides useful background on the \(SO(5)\) subgroup of \(U(4)\).
## 2 Identifying subalgebra structures
A symmetry transformation generally acts on an arbitrary \(n\)-dimensional vector \(\mathbf{x}\equiv\{x^{(1)},x^{(2)},\ldots,x^{(n)}\}\), where \(\mathbf{x}\in\mathbb{R}^{n}\) for \(O(n)\) groups or \(\mathbf{x}\in\mathbb{C}^{n}\) for \(U(n)\). In our procedure, a group transformation on a real space \(\mathbb{R}^{n}\) or a complex space \(\mathbb{C}^{n}\) is represented by a matrix operation acting on a set of \(m\) points \(\{\mathbf{x}\}\equiv\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{m}\}\) sampled from a finite domain. The choice of sampling distribution, domain size, and location is inconsequential. For definiteness, we use a standard normal distribution with a sample size of \(m=300\).
A symmetry implies a conservation law, i.e., the invariance
\[\varphi(\mathbf{x}_{i}^{\prime})=\varphi(\mathbf{x}_{i}),\quad\forall i\,=\,1,2,\ldots,m\,, \tag{1}\]
of an oracle function \(\varphi(\mathbf{x})\) with respect to an infinitesimal transformation \(\delta\mathbf{f}\)
\[\mathbf{x}\ \xrightarrow{\ \delta\mathbf{f}\ }\ \mathbf{x}^{\prime}=\mathbf{x}+\delta\mathbf{f}(\mathbf{x})\,. \tag{2}\]
terms in the loss function. Furthermore, the invariance loss term is also not needed at this stage, since \(\{\mathbb{J}\}\) are symmetry generators and any linear combination of them represents a symmetry as well. Therefore, the training for \(\mathbb{O}\) is done with a loss function which only enforces closure (and optionally sparsity).
### Loss Function Modifications
#### 2.3.1 Finding the full symmetry algebra
The first order of business is to find the full symmetry algebra \(\mathfrak{s}\), i.e., the largest possible closed set of generators \(\{\mathbb{J}\}\). As discussed in [22; 25], this is a relatively straightforward exercise. In the one-shot approach of Section 2.1, the number of generators \(N_{\mathfrak{g}}\) in the closure loss (5) is treated as a hyperparameter which is continually being incremented. Some values of \(N_{\mathfrak{g}}\) result in successful training, others do not. Whenever a valid closed set of symmetry generators is found during this process, this guarantees the existence of a subalgebra \(\mathfrak{h}\) with \(N_{\mathfrak{h}}=N_{\mathfrak{g}}\) generators. The maximum obtained value, \(N_{\mathfrak{s}}\equiv\max\{N_{\mathfrak{h}}\}\), of \(N_{\mathfrak{h}}\), is the dimension of the full symmetry algebra \(\mathfrak{s}\). In the sequential approach of Section 2.2 the idea is very similar -- one keeps trying to learn a new non-trivial symmetry generator which is orthogonal to the set found so far. When no such new generator can be found, the number of existing generators found by then is precisely \(N_{\mathfrak{s}}\).
Note that both approaches (the one-shot learning from Section 2.1 and the sequential learning from Section 2.2) result in the same outcome. The only difference is that in the former case, the loss functions are written in terms of the generator matrices \(\mathbb{G}_{\alpha}\), whose components are the learnable parameters, while in the latter case, the loss functions are written in terms of the rotated generators \(\tilde{\mathbb{J}}_{\alpha}(\mathbb{O})\), and the learnable parameters are the components of the rotation matrix \(\mathbb{O}\). In what follows we shall use the notation of Section 2.1 and write our loss functions in terms of \(\mathbb{G}_{\alpha}\). It should be understood that in the case of the sequential approach of Section 2.2, the same loss functions can be used, but with the replacement \(\mathbb{G}_{\alpha}\rightarrow\tilde{\mathbb{J}}_{\alpha}(\mathbb{O})\).
#### 2.3.2 Identifying the rank
Next, we would like to find the rank of the thus found algebra \(\mathfrak{s}\). The rank of a Lie algebra is the dimension of its Cartan subalgebra (the maximal _Abelian_ subalgebra). In other words, we are looking for the largest subalgebra \(\mathfrak{h}_{c}\) whose elements all commute with each other. In order to find \(\mathfrak{h}_{c}\), we can repeat the previous exercise from Section 2.3.1, only this time we set all structure constants \(a_{\alpha\beta}^{\gamma}\) in (6) to zero, resulting in
\[\mathbb{C}_{\alpha\beta}(\{\mathbb{G}\})=[\mathbb{G}_{\alpha},\mathbb{G}_{ \beta}]. \tag{8}\]
Under those conditions, the maximal allowed value for the hyperparameter \(N_{\mathfrak{g}}\) will give the dimension \(N_{\mathfrak{h}_{c}}\) of the Cartan subalgebra.
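A minimal sketch of the resulting Abelian closure loss is given below; it is our own illustration, showing only the commutator penalty of Eq. (8) for a stack of candidate generator matrices, while the orthogonality, normalization, and invariance terms of the full pipeline are omitted.

```python
import torch

def abelian_closure_loss(generators):
    """Sum of Tr(C C^dagger) over all pairwise commutators C = [G_a, G_b].
    generators: complex tensor of shape (N_g, n, n) holding candidate generators."""
    loss = 0.0
    n_g = generators.shape[0]
    for a in range(n_g):
        for b in range(a + 1, n_g):
            comm = generators[a] @ generators[b] - generators[b] @ generators[a]
            loss = loss + (comm * comm.conj()).real.sum()
    return loss
```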
#### 2.3.3 Testing the subalgebra structure
The procedure outlined in Section 2.3.1 already singles out the allowed values for the number of generators \(N_{\mathfrak{h}}\) in the allowed subalgebras of \(\mathfrak{s}\). As depicted in Table 1, we can now further probe the structure of these subalgebras \(\mathfrak{h}\), by looking for factor decompositions. Specifically, we conjecture a partition of any subalgebra \(\mathfrak{h}\) into a direct sum of \(h\) subalgebras \(\mathfrak{h}_{1},\mathfrak{h}_{2},\ldots,\mathfrak{h}_{h}\):
\[\mathfrak{h}\ =\ \mathfrak{h}_{1}\oplus\mathfrak{h}_{2}\oplus\cdots\oplus \mathfrak{h}_{h}\,. \tag{9}\]
We shall label the number of generators in each subalgebra \(\mathfrak{h}_{i}\) with \(N_{\mathfrak{h}_{i}}\) (in Table 1 this value is listed as a parentheses-enclosed superscript). Therefore, the decomposition (9) implies
\[N_{\mathfrak{h}}=N_{\mathfrak{h}_{1}}+N_{\mathfrak{h}_{2}}+\ldots+N_{ \mathfrak{h}_{h}}. \tag{10}\]
In the special case when \(\mathfrak{h}\) represents the full algebra, this procedure will give its decomposition, \(\mathfrak{s}=\mathfrak{h}_{1}\oplus\mathfrak{h}_{2}\oplus\cdots\oplus \mathfrak{h}_{h}\). Note that often there are several inequivalent ways to partition \(N_{\mathfrak{h}}\) generators into \(h\) groups. Table 1 shows one such example already at \(N_{\mathfrak{h}}=4\) and \(h=2\): we can split the 4 generators into groups of \(3+1\) or \(2+2\). In our numerical experiments in the next two sections, we consider all possible such partitions.
The decomposition (9) implies that i) generators belonging to two different subalgebras \(\mathfrak{h}_{i}\) and \(\mathfrak{h}_{j}\) with
\begin{table}
\begin{tabular}{||c||c|c|c|c||} \hline \multicolumn{2}{||c||}{} & \multicolumn{4}{c||}{Number of subalgebra factors \(h\)} \\ \hline \(N_{\mathfrak{h}}\) & 1 & 2 & 3 & 4 \\ \hline \hline
1 & \(\mathfrak{h}_{1}^{(1)}\) & — & — & — \\ \hline
2 & \(\mathfrak{h}_{1}^{(2)}\) & \(\mathfrak{h}_{1}^{(1)}\oplus\mathfrak{h}_{2}^{(1)}\) & — & — \\ \hline
3 & \(\mathfrak{h}_{1}^{(3)}\) & \(\mathfrak{h}_{1}^{(2)}\oplus\mathfrak{h}_{2}^{(1)}\) & \(\mathfrak{h}_{1}^{(1)}\oplus\mathfrak{h}_{2}^{(1)}\oplus\mathfrak{h}_{3}^{(1)}\) & — \\ \hline
4 & \(\mathfrak{h}_{1}^{(4)}\) & \(\mathfrak{h}_{1}^{(3)}\oplus\mathfrak{h}_{2}^{(1)}\) & \(\mathfrak{h}_{1}^{(2)}\oplus\mathfrak{h}_{2}^{(1)}\oplus\mathfrak{h}_{3}^{(1)}\) & \(\mathfrak{h}_{1}^{(1)}\oplus\mathfrak{h}_{2}^{(1)}\oplus\mathfrak{h}_{3}^{(1)} \oplus\mathfrak{h}_{4}^{(1)}\) \\ \hline \(\vdots\) & \multicolumn{4}{c||}{\(\vdots\)} \\ \hline \hline \end{tabular}
\end{table}
Table 1: The setup for the subalgebra search discussed in Section 2.3.3. For each possible integer value \(N_{\mathfrak{h}}\) of the total number of generators in a subalgebra \(\mathfrak{h}\), we consider all possible partitions into \(h\lesssim N_{\mathfrak{h}}\) distinct subgroups, each subgroup being a closed subalgebra \(\mathfrak{h}_{i}^{(N_{\mathfrak{h}_{i}})}\) with \(N_{\mathfrak{h}_{i}}\) generators.
\(i\neq j\) necessarily commute, and ii) that the Lie bracket of two generators belonging to the same subalgebra \(\mathfrak{h}_{j}\) must close on the generators from that subalgebra. We can combine these two requirements together by modifying the closure loss (5) as follows
\[L_{\text{closure}}([\mathbb{G}],a_{\alpha\beta}^{\gamma})=\sum_{i=1}^{h}\sum_{j=1}^{h}\sum_{\alpha=1}^{N_{\mathfrak{h}_{i}}}\sum_{\beta=1}^{N_{\mathfrak{h}_{j}}}\text{Tr}\left(\mathbb{C}_{\alpha\beta}^{(ij)}\cdot\left(\mathbb{C}_{\alpha\beta}^{(ij)}\right)^{\dagger}\right), \tag{11}\]
where
\[\mathbb{C}_{\alpha\beta}^{(ij)}\equiv[\mathbb{G}_{\alpha}^{(i)},\mathbb{G}_{ \beta}^{(j)}]-\delta_{ij}\sum_{\gamma=1}^{N_{\mathfrak{h}_{j}}}\left(a_{i} \right)_{\alpha\beta}^{\gamma}\mathbb{G}_{\gamma}^{(i)}. \tag{12}\]
Here \(a_{i}\) denotes the tensor of structure constants of the subalgebra \(\mathfrak{h}_{i}\). The upper indices \((i)\) and \((j)\), while redundant, serve as useful reminders that the index \(\alpha\) runs over the generators in \(\mathfrak{h}_{i}\), while the index \(\beta\) runs over the generators in \(\mathfrak{h}_{j}\).
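For illustration, the loss of Eqs. (11)-(12) can be sketched as below. This is our own minimal version: the candidate generators and the structure constants \((a_{i})_{\alpha\beta}^{\gamma}\) would in practice be learnable parameters optimized jointly, and the remaining loss terms of the full framework are not shown.

```python
import torch

def partition_closure_loss(gen_blocks, structure_consts):
    """Closure loss of Eqs. (11)-(12).
    gen_blocks: list of complex tensors; block i has shape (N_i, n, n).
    structure_consts: list of real tensors; a_i has shape (N_i, N_i, N_i).
    Generators from different blocks must commute; commutators within block i
    must close on that block with structure constants a_i."""
    loss = 0.0
    for i, G_i in enumerate(gen_blocks):
        for j, G_j in enumerate(gen_blocks):
            for a in range(G_i.shape[0]):
                for b in range(G_j.shape[0]):
                    C = G_i[a] @ G_j[b] - G_j[b] @ G_i[a]
                    if i == j:
                        a_ab = structure_consts[i][a, b].to(G_i.dtype)  # shape (N_i,)
                        C = C - torch.einsum("g,gnm->nm", a_ab, G_i)
                    loss = loss + (C * C.conj()).real.sum()
    return loss
```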
## 3 \(U(4)\) subalgebra structure
We now illustrate the techniques described in the previous section with a specific example, for which we chose the \(U(4)\) unitary group (although not shown here, we also worked out the cases of \(U(2)\), \(U(3)\) and \(U(5)\)). While perhaps not as popular as its "neighbors" \(U(3)\) and \(U(5)\), \(U(4)\) has found applications in various areas of physics [26; 27; 28; 29] and a complete account of its subalgebras is available [30]. For our purposes, \(U(4)\) strikes a nice balance between the relative simplicity of the widely used \(U(2)\) and \(U(3)\) groups, and the complexity of \(U(5)\) and higher groups used in GUT model building. Furthermore, \(U(4)\) already contains an interesting non-trivial subgroup, namely \(SO(5)\sim Sp(4)\), which is described in detail in A.
The \(U(4)\) symmetry results from the following oracle defined on \(\mathbf{x}\in\mathbb{C}^{4}\)
\[\varphi_{U}(\mathbf{x})\ \equiv\ |\mathbf{x}|^{2}\ =\ \sum_{j=1}^{4}(x^{(j)})^{*}x^{(j)},\quad x ^{(j)}\in\mathbb{C}. \tag{13}\]
Using the procedure from Section 2.3.1, we vary the hyperparameter \(N_{g}\) to find the allowed values for the number of generators \(N_{\mathfrak{h}}\) in a subalgebra \(\mathfrak{h}\). The result is shown in Figure 1. We see that the largest possible number of generators in this case is \(N_{\mathfrak{s}}=16\), which corresponds to the full algebra \(\mathfrak{s}=u(4)\). Based on the low values of the loss, we conclude that there exist subalgebras with any \(N_{\mathfrak{h}}\) from \(1\) to \(10\), and also \(N_{\mathfrak{h}}=15\). Our main task now will be to decipher exactly what type of subalgebras are those.
Next we use the method of Section 2.3.2 to determine the rank of the so found algebra \(\mathfrak{s}\). Figure 2 depicts the evolution of the value of the total loss function \(\mathcal{L}_{total}\) as a function of the training step, with the Abelian closure condition \([\mathbb{G}_{\alpha},\mathbb{G}_{\beta}]=0\) imposed. In order to find the maximal Abelian algebra, we increment the value of the number of candidate generators \(N_{g}\), as listed in the legend. We see that the training is successful and the loss is driven to zero for \(N_{g}=2\), \(3\) and \(4\). However,
Figure 1: The final value of the loss function as a function of the requested number of generators \(N_{g}\) for the \(U(4)\) example of Section 3. The colored symbols identify the dominant contribution to the total loss: magenta diamonds for closure and red crosses for orthogonality. For the green circles the total loss is zero (to within machine precision). The learning rate was 0.001 and the training was done for 7,000 epochs.
Figure 2: Finding the Cartan subalgebra of the \(u(4)\) algebra. The evolution of the value of the total loss function with the Abelian closure condition (8) is plotted for different number of generators \(N_{g}\) as shown in the legend.
as soon as \(N_{\rm g}\) hits 5 or higher, the loss remains large, indicating that there is no valid abelian subalgebra of that size. We therefore conclude that the rank of the 16-dimensional full algebra discovered in the previous step is 4, which is precisely the result expected from Lie group theory.
Finally, we apply the technique of Section 2.3.3 to obtain the decompositions (9) of the valid subalgebras found in Fig. 1. The results are summarized in Table 2, and depend on the value of the number of generators \(N_{\rm b}\) in the subalgebra, and the number of subalgebra groups \(h\). For compactness, in Table 2 we use Slansky notation [31], where subscripts denote the dimensionality \(n\), i.e., \(u_{4}\equiv u(4)\), \(su_{4}\equiv su(4)\), \(sp_{4}\equiv sp(4)\), etc. Green (yellow) boxes in the table indicate the existence (the absence) of a valid decomposition. For example, we confirm the result from Fig. 1 that there are no closed subalgebras with \(N_{\rm b}=11\), \(12\), \(13\) or \(14\) generators. In the remaining cases, we do find valid subalgebras, which, as a rule, can themselves be further decomposed into factors (the only exceptions being the cases of \(N_{\rm b}=1\) and \(N_{\rm b}=15\)).
Note that sometimes there are two different non-isomorphic subalgebras for the same values of \(N_{\rm b}\) and \(h\). Such cases appear as separate entries on different rows in the corresponding box in the table. For example, consider \(N_{\rm b}=6\) and \(h=2\). There are three viable subalgebra decompositions into two groups: i) \(3+3\), which is the case of \(su_{2}\oplus su_{2}\); ii) \(4+2\), which is the case of \(u_{2}\oplus(u_{1}\oplus u_{1})\); and iii) \(5+1\), which is isomorphic to the previous case and is given by \((u_{2}\oplus u_{1})\oplus u_{1}\).
A particularly interesting case occurs for \(N_{\rm b}=10\) and \(h=1\). We obtain two different subalgebras, the rank 4 subalgebra \(su_{3}\oplus u_{1}\oplus u_{1}\), which can be decomposed further into three factors, and the rank 2 subalgebra \(sp_{4}\sim so_{5}\), which is simple and cannot be decomposed further. Figures 3 and 4 show the learned sparse generators in those two cases, respectively. In Fig. 3, the \(su_{3}\) factor consists of \(\mathbb{J}_{1}\), \(\mathbb{J}_{2}\), \(\mathbb{J}_{3}\), \(\mathbb{J}_{4}\), \(\mathbb{J}_{8}\), \(\mathbb{J}_{9}\), and two traceless linear combinations of \(\mathbb{J}_{6}\), \(\mathbb{J}_{7}\), and \(\mathbb{J}_{10}\). In order to demonstrate that the learned generators of Fig. 4 form an \(so_{5}\) algebra, we can map them explicitly to the ten \(so_{5}\) generators (A.2) discussed in Appendix A as follows
\[\mathbb{J}_{1} =-\frac{1}{\sqrt{2}}\left(L_{34}-L_{15}\right), \tag{14a}\] \[\mathbb{J}_{2} =-\frac{1}{\sqrt{2}}\left(L_{34}+L_{15}\right),\] (14b) \[\mathbb{J}_{3} =-\frac{1}{\sqrt{2}}\left(L_{45}+L_{13}\right),\] (14c) \[\mathbb{J}_{4} =-\frac{1}{\sqrt{2}}\left(L_{14}-L_{35}\right),\] (14d) \[\mathbb{J}_{5} =-\frac{1}{\sqrt{2}}\left(L_{45}-L_{13}\right),\] (14e) \[\mathbb{J}_{6} =-\frac{1}{\sqrt{2}}\left(L_{23}-L_{24}\right),\] (14f) \[\mathbb{J}_{7} =-\frac{1}{\sqrt{2}}\left(L_{14}+L_{35}\right),\] (14g) \[\mathbb{J}_{8} =-\frac{1}{\sqrt{2}}\left(L_{23}+L_{24}\right),\] (14h) \[\mathbb{J}_{9} =-\frac{1}{\sqrt{2}}\left(L_{25}+L_{12}\right),\] (14i) \[\mathbb{J}_{10} =-\frac{1}{\sqrt{2}}\left(L_{25}-L_{12}\right), \tag{14j}\]
where \(L_{ij}\) are given in the representation (A.6).
\begin{table}
\begin{tabular}{||c||c|c|c|c||} \cline{2-5} \multicolumn{1}{c||}{} & \multicolumn{4}{c||}{Number of subalgebra factors \(h\)} \\ \hline \(N_{\rm b}\) & 1 & 2 & 3 & 4 \\ \hline \hline
1 & \(u_{1}\) & — & — & — \\ \hline
2 & \(u_{1}^{2}\) & \(u_{1}\oplus u_{1}\) & — & — \\ \hline
3 & \(u_{1}^{3}\) & \(u_{1}^{2}\oplus u_{1}\) & \(u_{1}\oplus u_{1}\oplus u_{1}\) & — \\ & \(su_{2}\) & & & — \\ \hline
4 & \(u_{1}^{4}\) & \(u_{1}^{3}\oplus u_{1}\) & \(u_{1}^{2}\oplus u_{1}\oplus u_{1}\) & \(u_{1}\oplus u_{1}\oplus u_{1}\) \\ & \(u_{2}\) & \(su_{2}\oplus u_{1}\) & & \\ \hline
5 & \(\rightarrow\) & \(u_{2}\oplus u_{1}\) & \(su_{2}\oplus u_{1}\oplus u_{1}\) & \\ \hline
6 & \(\rightarrow\) & \(\rightarrow\) & \(u_{2}\oplus u_{1}\oplus u_{1}\) & \(su_{2}\oplus u_{1}\oplus u_{1}\) \\ & \(so_{4}\) & \(su_{2}\oplus su_{2}\) & & \\ \hline
7 & \(\rightarrow\) & \(u_{2}\oplus u_{2}\) & \(su_{2}\oplus su_{2}\oplus u_{1}\) & \(su_{2}\oplus su_{2}\oplus u_{1}\oplus u_{1}\) \\ \hline
8 & \(\rightarrow\) & \(u_{2}\oplus u_{2}\) & \(u_{2}\oplus su_{2}\oplus u_{1}\) & \(su_{2}\oplus su_{2}\oplus u_{1}\oplus u_{1}\) \\ & \(su_{3}\) & & & \\ \hline
9 & \(u_{3}\) & \(su_{3}\oplus u_{1}\) & & \\ \hline
10 & \(\rightarrow\) & \(u_{3}\oplus u_{1}\) & \(su_{3}\oplus u_{1}\oplus u_{1}\) & \\ & \(sp_{4}\) & & & \\ \hline
11 & & & & \\ \hline
12 & & & & \\ \hline
13 & & & & \\ \hline
14 & & & & \\ \hline
15 & \(su_{4}\) & & & \\ \hline
16 & \(u_{4}\) & \(su_{4}\oplus u_{1}\) & & \\ \hline \end{tabular}
\end{table}
Table 2: The subalgebra decomposition results for the case of \(u(4)\) presented in analogy to Table 1. The viable partitions are only up to \(h=4\), which is the rank of the full algebra. Green (yellow) boxes indicate the existence (the absence) of a valid decomposition. Results appearing on the same row are isomorphic to each other, while results with the same \(N_{\rm b}\), but on different rows represent different (non-isomorphic) subalgebras.
## 4 Spontaneous symmetry breaking of non-abelian gauge symmetries
In this section we demonstrate the application of the symmetry finding and identification procedures from the previous sections to particle theory model building, using a couple of examples from the classic textbook [32].
### \(SU(3)\) Model
Following Chapter 20 in [32], consider an \(SU(3)\) gauge theory with an adjoint scalar
\[\Phi=\sum_{a=1}^{8}\phi_{a}\,t^{a}, \tag{15}\]
where \(t^{a}\) are the \(3\times 3\) Hermitian matrices representing the generators in the adjoint representation of \(SU(3)\). A gauge transformation \(U\) acts on the Higgs field \(\Phi\) as
\[\Phi\longrightarrow U\,\Phi\,U^{\dagger}. \tag{16}\]
The physics is contained in the potential \(V\) of the theory, which in our case will play the role of the oracle \(\varphi(\mathbf{x})\). In turn, the role of the features \(\mathbf{x}\) will be taken by the field components \(\phi_{a}\). In order to generate spontaneous symmetry breaking, we can choose a potential
\[V(\phi)=\left[\operatorname{Tr}\left(\Phi^{\dagger}\Phi\right)-\frac{v^{2}}{2 }\right]^{2} \tag{17}\]
with a non-vanishing parameter \(v\). At the minimum of this potential, \(\Phi\) has a non-zero vacuum expectation value (vev), \(\Phi_{0}\equiv\langle\Phi\rangle\), and breaks the \(SU(3)\) symmetry spontaneously. Expanding around the vev as
\[\Phi\equiv\Phi_{0}+\eta, \tag{18}\]
the potential can be rewritten in terms of the physical degrees of freedom \(\eta\) as
\[V(\eta)=\left[\operatorname{Tr}\left(\eta^{\dagger}\eta\right)+\operatorname{ Tr}\left(\Phi_{0}\eta\right)+\operatorname{Tr}\left(\eta^{\dagger}\Phi_{0} \right)\right]^{2}. \tag{19}\]
In order to derive the symmetry of this model, we create a dataset by sampling the 8-dimensional complex vector \((\phi_{1},\phi_{2},\ldots,\phi_{8})\), then forming \(\eta\) and looking for linearized transformations (16) of the form
\[\eta\longrightarrow\eta+i\,\varepsilon\,\mathbb{G}\,\eta-i\,\varepsilon\, \eta\,\mathbb{G}^{\dagger} \tag{20}\]
where the generator \(\mathbb{G}\) is a \(3\times 3\) complex Hermitian matrix.
Depending on the orientation of the vacuum-expectation value of the Higgs field, different symmetry
Figure 4: The same as Fig. 3, but for the \(sp_{4}\sim so_{5}\) case of \(N_{\text{b}}=10\) in Table 2.
Figure 3: The learned generators for the \(su_{3}\oplus u_{1}\oplus u_{1}\) case of \(N_{\text{b}}=10\) in Table 2. In this and all subsequent such figures, each learned generator matrix \(\mathbb{J}_{a}\) is represented by a pair of panels (one for the real and one for the imaginary parts). The values of the individual elements of the matrix are color-coded and can be read off the color bar.
breaking patterns may emerge. For example, if
\[\Phi_{0}\ =\ \frac{v}{2}\ \mathrm{diag}\left(1,-1,0\right), \tag{21}\]
the \(SU(3)\) gauge symmetry is broken down to \(U(1)\times U(1)\). This situation is depicted in Fig. 5, which shows the learned symmetry generators in that case. The three diagonal generators shown in the figure can be used to form two traceless linear combinations which correspond to the two independent \(U(1)\)'s.
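To make the setup concrete, the following NumPy sketch (our own illustration, not the paper's code) builds the oracle (19) for the vev (21), taking \(t^{a}=\lambda^{a}/2\) in terms of the Gell-Mann matrices, and checks numerically that a generator commuting with \(\Phi_{0}\) (e.g. \(t^{3}\)) leaves the potential invariant under the linearized transformation (20), while a broken generator (e.g. \(t^{1}\)) does not. The value of \(v\), the sampling distribution, and the size of \(\varepsilon\) are arbitrary choices made only for the example.

```python
# Illustrative sketch: oracle (19) for the vev (21) and a numerical check of
# which generators survive the breaking.
import numpy as np

l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l3 = np.diag([1.0, -1.0, 0.0]).astype(complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]], dtype=complex)
l8 = np.diag([1.0, 1.0, -2.0]).astype(complex) / np.sqrt(3)
t = [m / 2 for m in (l1, l2, l3, l4, l5, l6, l7, l8)]   # t^a = lambda^a / 2

v = 1.0
Phi0 = (v / 2) * np.diag([1.0, -1.0, 0.0]).astype(complex)   # the vev (21)

def V(eta):   # the oracle (19); the bracket is real by construction
    x = np.trace(eta.conj().T @ eta) + np.trace(Phi0 @ eta) + np.trace(eta.conj().T @ Phi0)
    return np.real(x) ** 2

rng = np.random.default_rng(0)
phi = rng.normal(size=8) + 1j * rng.normal(size=8)           # sampled features
eta = sum(c * ta for c, ta in zip(phi, t))

eps = 1e-4
for name, G in [("t3 (commutes with Phi0)", t[2]), ("t1 (does not commute)", t[0])]:
    eta_p = eta + 1j * eps * (G @ eta - eta @ G)             # transformation (20), G Hermitian
    print(name, "|dV| =", abs(V(eta_p) - V(eta)))
```

For the unbroken generator the change is \(\mathcal{O}(\varepsilon^{2})\), whereas for the broken one it is \(\mathcal{O}(\varepsilon)\): the residual generators are exactly those for which the variation of the oracle vanishes at first order in \(\varepsilon\).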
Another possible choice for a symmetry breaking vacuum is
\[\Phi_{0}\ =\ \frac{v}{2\sqrt{3}}\ \mathrm{diag}\left(1,1,-2\right). \tag{22}\]
As shown in Figure 6, this results in the residual symmetry pattern \(SU(2)\times U(1)\): \(\mathbb{J}_{1}\), \(\mathbb{J}_{3}\) and the antisymmetric combination of \(\mathbb{J}_{4}\) and \(\mathbb{J}_{5}\) combine to form the \(SU(2)\) factor, while \(2\mathbb{J}_{2}+\mathbb{J}_{4}+\mathbb{J}_{5}\) is the remaining \(U(1)\) factor.
### \(SU(5)\) GUT Model
The previous example can be generalized to larger groups, and in particular \(SU(5)\) grand unification (see Problem 20.1 in [32]). The analysis from Section 4.1 goes through largely intact, the only difference being that the scalar field \(\Phi\) is now expanded in terms of the 24 generators \(T^{a}\) of the adjoint representation of \(SU(5)\)[33]
\[\Phi=\sum_{a=1}^{24}\phi_{a}\,T^{a}. \tag{23}\]
Using the same potential (17) and expanding as in (18), we again obtain the oracle (19), which now represents a map \(\mathbb{C}^{24}\to\mathbb{R}\).
The \(SU(5)\) symmetry is spontaneously broken by a non-vanishing vev for \(\Phi\). If the vev happens to be along the diagonal \(T^{24}\) generator, i.e.,
\[\langle\Phi\rangle=v\,T^{24}\equiv v\,\sqrt{\frac{3}{5}}\ \mathrm{diag}\left(- \frac{1}{3},-\frac{1}{3},-\frac{1}{3},\frac{1}{2},\frac{1}{2}\right), \tag{24}\]
then the remaining symmetry is precisely that of the Standard Model, \(SU(3)\times SU(2)\times U(1)\). In order to derive the residual symmetry in this case, we sample the 24-dimensional complex vector \((\phi_{1},\phi_{2},\ldots,\phi_{24})\), then form \(\eta_{a}\) and look for transformations of the type (20) with \(5\times 5\) complex Hermitian matrices \(\mathbb{G}\). The result is shown in Fig. 7 and indeed corresponds to the SM gauge symmetry. For example, the \(SU(3)\) of color is generated by \(\mathbb{J}_{1}\), \(\mathbb{J}_{3}\), \(\mathbb{J}_{4}\), \(\mathbb{J}_{6}\), \(\mathbb{J}_{8}\), \(\mathbb{J}_{10}\), \(\mathbb{J}_{12}+\mathbb{J}_{13}\), and \(2\mathbb{J}_{5}-\mathbb{J}_{12}+\mathbb{J}_{13}\); the weak \(SU(2)\) is generated by \(\mathbb{J}_{2}\), \(\mathbb{J}_{9}\) and \(\mathbb{J}_{7}-\mathbb{J}_{11}\), and the traceless \(U(1)\) factor is \(2\mathbb{J}_{5}-3\mathbb{J}_{7}-3\mathbb{J}_{11}+2\mathbb{J}_{12}-2\mathbb{J}_{13}\).
## 5 Conclusions
The research presented in this letter is the natural extension of the program started in [22; 23; 24; 25] of using machine learning techniques to find symmetries in data or theory. We showed how the group-theoretic structure of the learned symmetry generators can be identified either during the learning process, or as a post-processing procedure. The approach was outlined in Section 2 and subsequently illustrated with examples from group theory (in Section 3) and from particle physics (in Section 4). The obtained insights into the learned symmetries offer clarity and explainability to the machine learning methodology.
**Acknowledgements.** We thank S. Gleyzer, R. Houtz, K. Kong, S. Mrenna, H. Prosper and P. Shyamsundar for useful discussions. We thank P. Ramond for group theory insights and inspiration. This work is supported
Figure 5: The symmetry generators after breaking \(SU(3)\) with the adjoint vev (21).
Figure 6: The symmetry generators after breaking \(SU(3)\) with the adjoint vev (22).
Figure 7: The symmetry generators after breaking \(SU(5)\) with the adjoint vev (24).
in part by the U.S. Department of Energy award number DE-SC0022148.
## Appendix A The \(SO(5)\) subgroup of \(SU(4)\)
The \(so(5)\) algebra is given by
\[[L_{mn},L_{pq}]=i\left(\delta_{mp}L_{nq}+\delta_{nq}L_{mp}-\delta_{mq}L_{np}- \delta_{np}L_{mq}\right). \tag{10}\]
Here each of the ten elements of the algebra, \(L_{mn}\), is labelled by an antisymmetric pair of indices \(mn\), where \(m,n\in\{1,2,3,4,5\}\) and \(m\neq n\). Since \(L_{mn}=-L_{nm}\), for concreteness and without loss of generality we can take the defining set of 10 independent generators of \(SO(5)\) to be those with \(m<n\), i.e.
\[\{L_{12},\,L_{13},\,L_{14},\,L_{15},\,L_{23},\,L_{24},\,L_{25},\,L_{34},\,L_{ 35},\,L_{45}\}. \tag{11}\]
Interestingly, the \(so(5)\) algebra (10) can be neatly summarized with a Desargues configuration as illustrated in Figure 8 [34; 35]. The 10 generators (11) are used as the 10 points of the configuration and the directed lines encode _all of_ the commutation relations (10) -- the commutator of any two generators on the same line equals \(i\) times the third generator on the line, with the sign being \(+\) or \(-\), depending on whether we are following or going against the arrow. Any two generators which are not connected by a line in the diagram, commute.
\(SO(5)\) is the group of rotations in \(\mathbb{R}^{5}\). Therefore, it has a representation in terms of 5\(\times\)5 orthogonal matrices whose generators are given by
\[(L_{mn})_{ij}=-i\left(\delta_{mi}\delta_{nj}-\delta_{mj}\delta_{ni}\right), \tag{12}\]
where \(i\) and \(j\) are the matrix indices (\(i,j\in\{1,2,3,4,5\}\)) indicating the plane of rotation in \(\mathbb{R}^{5}\). Explicitly,
\[L_{12}=\begin{pmatrix}0&-i&0&0&0\\ i&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\end{pmatrix}, \tag{13}\]
and so on for the remaining generators in (11).
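As a quick cross-check (our own snippet, not part of the paper), the matrices defined by (12) can be generated programmatically and tested against the commutation relations (10); the 1-based indexing convention is the only assumption.

```python
import numpy as np
from itertools import combinations

def L(m, n, dim=5):
    """Defining-representation generator L_{mn} (1-based indices)."""
    M = np.zeros((dim, dim), dtype=complex)
    if m != n:
        M[m - 1, n - 1] = -1j
        M[n - 1, m - 1] = 1j
    return M

d = lambda a, b: 1.0 if a == b else 0.0
pairs = list(combinations(range(1, 6), 2))        # the 10 generators with m < n

for (m, n), (p, q) in combinations(pairs, 2):
    lhs = L(m, n) @ L(p, q) - L(p, q) @ L(m, n)
    rhs = 1j * (d(m, p) * L(n, q) + d(n, q) * L(m, p)
                - d(m, q) * L(n, p) - d(n, p) * L(m, q))
    assert np.allclose(lhs, rhs)
print("all so(5) commutation relations are satisfied")
```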
According to Table 2, \(so(5)\) is a subalgebra of \(u(4)\). Therefore, in addition to the 5 \(\times\) 5 representation (13), the \(so(5)\) algebra must also have a representation in terms of \(4\times 4\) complex unitary matrices. Indeed, one such representation is given by [34; 35]
\[L_{13} =\frac{1}{\sqrt{2}}\sigma_{3}\otimes\sigma_{1},\hskip 14.226378ptL_{34 }=\frac{1}{\sqrt{2}}I_{2}\otimes\sigma_{3}, \tag{14a}\] \[L_{14} =\frac{1}{\sqrt{2}}\sigma_{3}\otimes\sigma_{2},\hskip 14.226378ptL_{35 }=-\frac{1}{\sqrt{2}}I_{2}\otimes\sigma_{2},\] (14b) \[L_{15} =\frac{1}{\sqrt{2}}\sigma_{3}\otimes\sigma_{3},\hskip 14.226378ptL_{4 5}=\frac{1}{\sqrt{2}}I_{2}\otimes\sigma_{1},\] (14c) \[L_{23} =\frac{1}{\sqrt{2}}\sigma_{1}\otimes\sigma_{1},\hskip 14.226378ptL_{12 }=\frac{1}{2}\sigma_{2}\otimes I_{2},\] (14d) \[L_{24} =\frac{1}{\sqrt{2}}\sigma_{1}\otimes\sigma_{2},\] (14e) \[L_{25} =\frac{1}{\sqrt{2}}\sigma_{1}\otimes\sigma_{3}, \tag{14f}\]
Here \(I_{2}\) is the \(2\times 2\) unit matrix, \(\sigma_{i}\) are the Pauli matrices
\[\sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\hskip 14.226378pt\sigma_{2}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\hskip 14.226378pt\sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\]
and \(\otimes\) stands for tensor product, e.g.,
\[L_{12}=\frac{1}{2}\sigma_{2}\otimes I_{2}=\frac{1}{2}\begin{pmatrix}0&0&-i&0 \\ 0&0&0&-i\\ i&0&0&0\\ 0&i&0&0\end{pmatrix}\]
The Desargues diagram of Figure 8 helps understand the \(3+3+3+1\) pattern of the representation (14). First, one chooses a Pauli matrix, in this case \(\sigma_{2}\), and associates it with the center of perspectivity \(L_{12}\) of the configuration via the product \(\sigma_{2}\otimes I_{2}\). The other two Pauli matrices, \(\sigma_{3}\) and \(\sigma_{1}\), are respectively used to form
Figure 8: A pictorial representation of the \(so(5)\) algebra (10) in terms of a Desargues configuration.
the two triangles in central perspective via the products \(\sigma_{3}\otimes\sigma_{i}\) and \(\sigma_{1}\otimes\sigma_{i}\), \(i=1,2,3\). Finally, the axis of perspectivity is formed by the Pauli matrices themselves (more precisely, by the products \(I_{2}\otimes\sigma_{i}\), \(i=1,2,3\)).
However, there are other equivalent \(4\times 4\) representations of the \(so(5)\) algebra which can be built out of \(I_{2}\) and the Pauli matrices following the same pattern. For example, the particular representation which was obtained in Fig. 4, corresponds to associating the sum \(\sigma_{1}+\sigma_{2}\) with the center of perspectivity, and proceeding to build the rest of the representation as described above. Concretely,
\[L_{12}=\frac{1}{2}I_{2}\otimes(\sigma_{1}+\sigma_{2}), \tag{10a}\] \[L_{13}=\frac{1}{\sqrt{2}}\sigma_{1}\otimes\sigma_{3},\] (10b) \[L_{14}=\frac{1}{\sqrt{2}}\sigma_{2}\otimes\sigma_{3},\] (10c) \[L_{15}=\frac{1}{\sqrt{2}}\sigma_{3}\otimes\sigma_{3},\] (10d) \[L_{23}=\frac{1}{2}\sigma_{1}\otimes(\sigma_{1}-\sigma_{2}),\] (10e) \[L_{24}=\frac{1}{2}\sigma_{2}\otimes(\sigma_{1}-\sigma_{2}),\] (10f) \[L_{25}=\frac{1}{2}\sigma_{3}\otimes(\sigma_{1}-\sigma_{2}),\] (10g) \[L_{34}=\frac{1}{\sqrt{2}}\sigma_{3}\otimes I_{2},\] (10h) \[L_{35}=-\frac{1}{\sqrt{2}}\sigma_{2}\otimes I_{2},\] (10i) \[L_{45}=\frac{1}{\sqrt{2}}\sigma_{1}\otimes I_{2}. \tag{10j}\]
The fact that either of the two sets of ten matrices above forms an \(so(5)\) algebra may not be immediately obvious, but it can be easily verified by substituting them into the \(so(5)\) commutation relations at the beginning of this appendix and checking that all 90 of them are identically satisfied (with a structure constant \(\sqrt{2}\) instead of 1).
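A quick numerical check of this statement (again our own sketch, not the paper's code) for the second representation above: the ten \(4\times 4\) matrices are built from Pauli tensor products and all commutators are confirmed to reproduce the \(so(5)\) relations with an overall factor of \(\sqrt{2}\).

```python
import numpy as np
from itertools import combinations

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
kron, r2 = np.kron, np.sqrt(2)

L = {  # the representation built around the center of perspectivity sigma1 + sigma2
    (1, 2): kron(I2, s1 + s2) / 2,
    (1, 3): kron(s1, s3) / r2,  (1, 4): kron(s2, s3) / r2,  (1, 5): kron(s3, s3) / r2,
    (2, 3): kron(s1, s1 - s2) / 2, (2, 4): kron(s2, s1 - s2) / 2, (2, 5): kron(s3, s1 - s2) / 2,
    (3, 4): kron(s3, I2) / r2,  (3, 5): -kron(s2, I2) / r2, (4, 5): kron(s1, I2) / r2,
}

def gen(m, n):   # L_{mn} for any index order; L_{nm} = -L_{mn}, L_{mm} = 0
    if m == n:
        return np.zeros((4, 4), dtype=complex)
    return L[(m, n)] if m < n else -L[(n, m)]

d = lambda a, b: 1.0 if a == b else 0.0
for (m, n), (p, q) in combinations(sorted(L), 2):
    lhs = gen(m, n) @ gen(p, q) - gen(p, q) @ gen(m, n)
    rhs = 1j * r2 * (d(m, p) * gen(n, q) + d(n, q) * gen(m, p)
                     - d(m, q) * gen(n, p) - d(n, p) * gen(m, q))
    assert np.allclose(lhs, rhs)
print("the 4x4 matrices close into so(5) with structure constant sqrt(2)")
```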
|
2305.19747 | Analyzing Text Representations by Measuring Task Alignment | Textual representations based on pre-trained language models are key,
especially in few-shot learning scenarios. What makes a representation good for
text classification? Is it due to the geometric properties of the space or
because it is well aligned with the task? We hypothesize the second claim. To
test it, we develop a task alignment score based on hierarchical clustering
that measures alignment at different levels of granularity. Our experiments on
text classification validate our hypothesis by showing that task alignment can
explain the classification performance of a given representation. | Cesar Gonzalez-Gutierrez, Audi Primadhanty, Francesco Cazzaro, Ariadna Quattoni | 2023-05-31T11:20:48Z | http://arxiv.org/abs/2305.19747v1 | # Analyzing Text Representations by Measuring Task Alignment
###### Abstract
Textual representations based on pre-trained language models are key, especially in few-shot learning scenarios. What makes a representation good for text classification? Is it due to the geometric properties of the space or because it is well aligned with the task? We hypothesize the second claim. To test it, we develop a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity. Our experiments on text classification validate our hypothesis by showing that task alignment can explain the classification performance of a given representation.
## 1 Introduction
Recent advances in text classification have shown that representations based on pre-trained language models are key, especially in few-shot learning scenarios (Ein-Dor et al., 2020; Lu et al., 2019). It is natural to ask: What makes a representation good for text classification in this setting? Is the representation good due to intrinsic geometric properties of the space or because it is well _aligned_ with the classification task? The goal of this paper is to answer this question to better understand the reason behind the performance gains obtained with pre-trained representations.
Our hypothesis is that representations better aligned with class labels will yield improved performance in few-shot learning scenarios. The intuition is simple: in this setting, the limited number of labeled samples will only provide a sparse coverage of the input domain. However, if the representation space is properly aligned with the class structure, even a small sample can be representative. To illustrate this, take any classification task. Suppose we perform clustering on a given representation space that results in a few pure clusters (with all samples belonging to the same class). Then, any training set that 'hits' all the clusters can be representative. Notice that there is a trade-off between the number of clusters and their purity. A well-aligned representation is one for which we can obtain a clustering with a small number of highly pure clusters. Based on this, we propose a task alignment score based on hierarchical clustering that measures alignment at different levels of granularity: Task Hierarchical Alignment Score (Thas).
To test our hypothesis that task alignment is key we conduct experiments on several text classification datasets comparing different representations. Our results show that there is a clear correlation between the Thas of a representation and its classification performance under the few-shot learning scenario, validating our hypothesis and showing that task alignment can explain performance. In contrast, our empirical study shows that intrinsic geometric properties measured by classical clustering quality metrics fail to explain representation performance in the few-shot learning scenario.
Our study suggests an answer to our main question: A good efficient representation (i.e. one that enables few-shot learning) is a representation that induces a good alignment between latent input structure and class structure. Our main contributions are: 1) We develop a score based on hierarchical clustering (§2) that measures the extent to which a representation space is aligned with a given class structure and 2) We conduct an empirical study using several textual classification datasets
Figure 1: Three-step process for computing Thas.
(§3) that validates the hypothesis that the best representations are those with a latent input structure that is well aligned with the class structure.
## 2 Task Hierarchical Alignment Score
We now present the Task Hierarchical Alignment Score (Thas) designed to measure the alignment between a textual representation and the class label for a given task. The idea is quite simple, in a good representation space, points that are close to each other should have a higher probability of belonging to the same class. Therefore, we could perform clustering of the points and obtain _high purity_ clusters, where most points belong to the same class. We assume that we are given: a dataset \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) of \(n\) labeled data points where \(\mathbf{x}\in\mathcal{X}\) is a text fragment and \(y\in\mathcal{Y}\) its corresponding class label (e.g., a sentiment classification label) and a representation function \(r:\mathcal{X}\rightarrow\mathbb{R}^{d}\) mapping points in \(\mathcal{X}\) to a \(d\)-dimensional representation space \(\mathbb{R}^{d}\) (e.g., a sparse bag-of-words).
Our goal is to compute a metric \(\tau(S,r)\) that takes some labeled domain data and a representation function and computes a real value score. Fig. 1 illustrates the steps involved in computing Thas. There are three main steps: 1) hierarchical clustering, 2) computing clustering partition alignments, and 3) computing the aggregate metric. In the first step, we compute the representation of each point and build a data dendrogram using hierarchical clustering. The data dendrogram is built by merging clusters, progressively unfolding the latent structure of the input space. Traversing the tree, for each level we get a partition of the training points into \(k\) clusters. In step 2, for each partition, we measure its alignment with the class label distribution producing an alignment curve as a function of \(k\). Finally, we report the area under this curve. Algorithm 1 summarizes the whole procedure. Implementation details and performance information can be found in A.1.
```
Input: Dataset \(S=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\), representation function \(r\)
Output: \(\tau(S,r)\)
1 Get representation: \(\mathbf{R}=\{r(\mathbf{x}_{i})\mid\mathbf{x}_{i}\in\mathbf{X}\}\)
2 Run Hierarchical Clustering: \(\mathcal{D}=\text{HC}(\mathbf{R})=\{\mathcal{P}_{k}\}_{k=1}^{n}\)
3 Traverse the dendrogram: foreach partition \(\mathcal{P}_{k}\subset\mathcal{D}\) do
4   Predict scores for all points: foreach point \(\mathbf{x}_{i}\in\mathbf{X}\) in \(i=1,\ldots,n\) where \(r(\mathbf{x}_{i})\in C\subset\mathcal{P}_{k}\) do
5     Label prediction scores: foreach \(y^{\prime}_{j}\in\mathcal{Y}\) in \(j=1,\ldots,|\mathcal{Y}|\) do \(\hat{\mathbf{Y}}_{k,i,j}=s(\mathbf{x}_{i},y^{\prime}_{j})\)
6   Partition alignment score: \(a(\mathcal{P}_{k})=\text{AUC}_{y^{+}}(\hat{\mathbf{Y}}_{k},\mathbf{Y})\)
7 Final aggregate metric: \(\tau(S,r)=\frac{1}{n}\sum_{k=1}^{n}a(\mathcal{P}_{k})\)
```
**Algorithm 1** Thas
### Hierarchical Clustering
In the first step, we will consider the input points \(\mathbf{X}=\{\mathbf{x}_{i}\mid(\mathbf{x}_{i},y_{i})\in S\}\) and the representation function \(r\) to obtain a representation of all points \(\mathbf{R}=\{r(\mathbf{x}_{i})\mid\mathbf{x}_{i}\in\mathbf{X}\}\).
We then apply Hierarchical Clustering (HC) to the points in \(\mathbf{R}\) obtaining a dendrogram \(\mathcal{D}=\text{HC}(\mathbf{R})=\{\mathcal{P}_{k}\}_{k=1}^{n}\) that defines a set of \(n\) cluster partitions. Fig. 1 (left) shows a diagram of a dendrogram. The root of this tree is the whole set and, at the leaves, each point corresponds to a singleton. At intermediate levels, top-down branching represents set splitting.
For each level \(k=1,\ldots,n\) of the dendrogram there is an associated clustering partition of the input points into \(k\) clusters \(\mathcal{P}_{k}=\{C_{j}\}_{j=1}^{k}\). That is, for any particular level we have a family of \(k\) non-empty disjoint clusters that cover the representation \(\mathbf{R}=\bigcup_{j=1}^{k}C_{j}\), where each representation point \(r(\mathbf{x})\in\mathbf{R}\) is assigned to one of the \(k\) clusters.
### Partition Alignment Score
We use the gold labels \(\mathbf{Y}=\{y_{i}\mid(\mathbf{x}_{i},y_{i})\in S\}\) to compute an alignment score \(a(\mathcal{P}_{k})\) for each partition \(\mathcal{P}_{k}\subset\mathcal{D}\). We compute it in two parts.
First, for every point \(\mathbf{x}\in\mathbf{X}\) and label \(y^{\prime}\in\mathcal{Y}\) we compute a label probability score by looking at the gold label distribution of the cluster \(C\) to which the point belongs in the clustering partition:
\[s(\mathbf{x},y^{\prime})=\frac{1}{|C|}\#[y^{\prime}\in C] \tag{1}\]
where \(\#[y^{\prime}\in C]\) is the number of samples in cluster \(C\) with gold label \(y^{\prime}\). Intuitively, this assigns to a point \(\mathbf{x}\) a label probability that is proportional to the distribution of that label in the cluster \(C\).
Second, we use the label probability scores of all points \(\hat{\mathbf{Y}}_{k}=\{s(\mathbf{x}_{i},y^{\prime}_{j})\mid\mathbf{x}_{i}\in\mathbf{X},y^{ \prime}_{j}\in\mathcal{Y}\}\) and the
dataset gold labels \(\mathbf{Y}\) to compute a partition alignment score. We choose as a single metric the area under the precision-recall curve (AUC) because it has the nice property that it applies to tasks with both balanced and unbalanced class distributions.1 More specifically, we compute the AUC of the target (positive) class \(y^{+}\in\mathcal{Y}\) of the dataset (more details in the experimental part in §3):
Footnote 1: F1 could be a valid alternative, but this metric requires the validation of decision thresholds.
\[a(\mathcal{P}_{k})=\text{AUC}_{y^{+}}(\hat{\mathbf{Y}}_{k},\mathbf{Y}) \tag{2}\]
### Final Aggregate Metric: Thas
Once we have an alignment score for every level of the hierarchical dendrogram, we are ready to define our final Task Hierarchical Alignment Score (Thas). Consider the alignment scoring function \(a\) applied to the partition corresponding to the lowest level of the dendrogram. The alignment score will be \(a(\mathcal{P}_{n})=1\) because every cluster in this partition is a singleton and therefore \(\#[y^{\prime}\in C]\) will be \(1\) for the gold label and \(0\) for any other label. At the other end, for the partition corresponding to the root of the dendrogram (where all points belong to a single cluster), the alignment score \(a(\mathcal{P}_{1})\) is the AUC corresponding to assigning to every point \(\mathbf{x}\in\mathbf{X}\) a prediction score for each label \(y^{\prime}\in\mathcal{Y}\) equal to the relative frequency of \(y^{\prime}\) in \(\mathbf{Y}\).
Consider now the alignment score as a function of the size of the partition. As we increase \(k\) we will get higher scores. A good representation is one that can get a high score while using as few clusters as possible. Instead of choosing a predefined level of granularity, we propose to leverage the alignment information across all levels. To achieve this, we consider the alignment score as a function of the number of clusters and measure the area under \(a(\mathcal{P}_{k})\).2 We are ready to define our final metric:
Footnote 2: We could consider weighting methods that neutralize uninformative areas in the curve. In particular, we could subtract the scores originating from a random clustering. However, this contribution is solely determined by the sample size and the prior distribution. As a result, it would not have any impact when comparing representations.
\[\tau(S,r)=\frac{1}{n}\sum_{k=1}^{n}a(\mathcal{P}_{k}) \tag{3}\]
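A compact implementation sketch of the three steps above (our own illustration; the linkage criterion, the distance metric, and the use of average precision as the AUC-PR estimator are implementation choices not specified here, and the function names are ours):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import average_precision_score

def thas(X, y, positive_label, method="ward"):
    """Task Hierarchical Alignment Score for an (n x d) representation matrix X."""
    n = len(y)
    y_pos = (np.asarray(y) == positive_label)
    Z = linkage(X, method=method)                    # step 1: build the dendrogram
    level_scores = []
    for k in range(1, n + 1):                        # one partition per dendrogram level
        labels = fcluster(Z, t=k, criterion="maxclust")
        s = np.empty(n)                              # eq. (1): in-cluster label frequencies
        for c in np.unique(labels):
            members = labels == c
            s[members] = y_pos[members].mean()
        # eq. (2): area under the precision-recall curve of the positive class
        level_scores.append(average_precision_score(y_pos, s))
    return float(np.mean(level_scores))              # eq. (3)
```

With this sketch the leaf-level partition scores \(1\) and the root-level partition reduces to the positive-class prevalence, matching the behaviour described above.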
## 3 Experimental Setup
In this section we empirically study the correlation of few-shot learning performance with 1) Thas and 2) an unsupervised clustering quality metric.
We use four text classification datasets with both balanced and imbalanced label distributions: IMDB (IM; Maas et al., 2011), WikiToxic (WT; Wulczyn et al., 2017), Sentiment140 (S1; Maas et al., 2011) and CivilComments (CC; Borkan et al., 2019).
We will compare the following representations: a sparse bags-of-words (BoW); BERT embeddings (Devlin et al., 2019) using two token average pooling strategies (BERT\({}_{\text{all}}\) and BERT\({}_{\text{cls}}\)); GloVe (Pennington et al., 2014); and fastText (Bojanowski et al., 2017; Joulin et al., 2016).
For further details, please refer to A.2.
### Few-Shot Performance vs. Thas
Since the focus of these experiments is comparing representations, we follow previous work on probing representations and use a simple model (Tenney et al., 2019; Lu et al., 2019). More precisely, we use a linear max-entropy classifier trained with \(l2\) regularization.
To simulate a few-shot learning scenario, we create small training sets by selecting \(N\) random samples, from \(100\) to \(1000\) in increments of \(100\). For each point \(N\) in the learning curve we create an
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline
\multirow{2}{*}{**Repr.**} & \multicolumn{5}{c}{**ALC**} & \multicolumn{5}{c}{**Thas**} & \multicolumn{5}{c}{**ADBI**} \\ \cline{2-16}
 & IM & WT & CC & S1 & \(\mu\) & IM & WT & CC & S1 & \(\mu\) & IM & WT & CC & S1 & \(\mu\) \\ \hline
BERT\({}_{\text{all}}\) & _.84_ & _.50_ & _.32_ & _.79_ & **.61** & _.84_ & _.67_ & _.27_ & _.75_ & **.63** & 2.87 & 3.03 & 3.31 & 3.25 & 3.11 \\
GloVe & .80 & .48 & .26 & .74 & .57 & .80 & .63 & .26 & .73 & .60 & _2.62_ & _2.12_ & 2.01 & _2.47_ & **2.31** \\
BERT\({}_{\text{cls}}\) & .80 & .48 & .23 & .74 & .56 & .80 & .56 & .22 & .74 & .58 & 2.81 & 2.97 & 3.15 & 2.92 & 2.96 \\
fastText & .75 & .41 & .18 & .66 & .50 & .77 & .57 & .21 & .71 & .56 & 2.78 & 2.13 & _1.93_ & _2.47_ & 2.33 \\
BoW & .76 & .32 & .11 & .59 & .45 & .71 & .50 & .20 & .68 & .52 & 3.14 & 3.83 & 4.23 & 3.86 & 3.76 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Learning curve performance (ALC), task alignment (Thas), and unsupervised clustering quality (ADBI) for different representations and datasets. (Rows are sorted by average ALC.)
80%/20% 5-fold cross-validation split to find the optimal hyper-parameters. We then train a model using the full \(N\) training samples and measure its performance on the test set. We repeat the experiment with 5 random seeds and report the mean results. As the evaluation metric, we use accuracy for the balanced datasets (IMDB and Sentiment140) and F1 for the imbalanced datasets (WikiToxic and CivilComments).
We generate learning curves for each dataset and representation (A.3). To study the correlation between task alignment and few-shot learning performance, it is useful to have a single score that summarizes the learning curve: We use the area under the learning curve (ALC). Representations with a larger ALC perform better in the few-shot learning scenario.3 We observe that BERT\({}_{\text{all}}\) is consistently the best representation followed by BERT\({}_{\text{cls}}\) and GloVe performing similarly. Representations based on word embeddings are better than the sparse baseline for all datasets, except for fastText which does not exhibit a consistent improvement.
Footnote 3: Alternatively, we could have picked a single point but we believe that ALC provides a more robust measure of few-shot learning performance and allows for a more concise analysis.
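For concreteness, one such learning curve and its aggregate ALC can be computed along the following lines (a sketch under our own assumptions: `LogisticRegression` as the \(l2\)-regularized max-entropy model, an illustrative grid for the regularization strength, and inputs given as NumPy arrays):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

def learning_curve_alc(X_tr, y_tr, X_te, y_te, metric=accuracy_score, seed=0):
    rng = np.random.default_rng(seed)
    scores = []
    for n in range(100, 1001, 100):                  # N = 100, ..., 1000
        idx = rng.choice(len(y_tr), size=n, replace=False)
        # 5-fold cross-validation to pick the l2 strength, then refit on all N samples
        clf = GridSearchCV(LogisticRegression(penalty="l2", max_iter=1000),
                           {"C": [0.01, 0.1, 1.0, 10.0, 100.0]}, cv=5)
        clf.fit(X_tr[idx], y_tr[idx])
        scores.append(metric(y_te, clf.predict(X_te)))
    return float(np.mean(scores))                    # (normalized) area under the curve
```

For the imbalanced datasets the same sketch would be run with `f1_score` passed as the metric instead of accuracy.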
To test for correlation, we also computed Thas for each representation and dataset. (The corresponding curves can be found in A.3.) Since this metric is a measure of the alignment between a label distribution and an input representation, there is a Thas score per label.4 In the classification tasks that we consider there is always a single target class (e.g., toxicity for WikiToxic). We measure the alignment score with respect to this class.
Footnote 4: We could also aggregate the scores of different classes, for example taking the average of the scores over all labels.
Table 1 summarizes the results showing ALC (left) and corresponding Thas (center) for all representations and datasets. Overall, BERT\({}_{\text{all}}\) is the best representation for few-shot learning followed by GloVe and BERT\({}_{\text{cls}}\). All the representations based on pre-trained word embeddings significantly outperform the baseline sparse BoW representation. Thas predicts accurately the relative ranking between representations and the larger gap between BERT\({}_{\text{all}}\) and the rest. Fig. 2 shows a scatter plot of Thas as a function of ALC (blue dots; each point corresponds to a dataset and representation). We compute the correlation coefficients, which are displayed in Table 2. We observe a clear positive correlation between the two metrics, providing supporting evidence for our main hypothesis that a good representation under few-shot learning is a representation that is well aligned with the classification task.
### Unsupervised Clustering Quality
We now look at standard metrics of cluster quality and test if they can explain few-shot learning performance. We use the Davies and Bouldin (1979) index (DBI) to measure the quality of the cluster partitions at every level of the dendrogram. This metric measures the compactness of each cluster and their separation, with better cluster partitions scoring lower. Similar to the computation of Thas described in §2, we compute DBI as a function of the number of clusters \(k\) corresponding to each level of the dendrogram. As an aggregate metric, we calculate the area under these curves to obtain a single ADBI score. (The curves are shown in A.3.)
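The corresponding computation can be sketched as follows (our own snippet; levels where the index is undefined, i.e. a single cluster or one point per cluster, are simply skipped):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import davies_bouldin_score

def adbi(X, method="ward"):
    Z = linkage(X, method=method)
    curve = []
    for k in range(2, len(X)):                       # DBI needs 1 < #clusters < n
        labels = fcluster(Z, t=k, criterion="maxclust")
        if 1 < len(np.unique(labels)) < len(X):
            curve.append(davies_bouldin_score(X, labels))
    return float(np.mean(curve))                     # area under the DBI-vs-k curve
```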
The right side of Table 1 shows the results for the same datasets and representations used for Thas. GloVe induces the best clusters according to the ADBI metric. BERT\({}_{\text{all}}\) does not produce particularly good clusters despite being the strongest few
\begin{table}
\begin{tabular}{l l l} \hline
(\(\mu\))ALC vs & \(r_{p}\) (p-value) & \(r_{s}\) (p-value) \\ \hline
Thas & \(0.98\) (\(<10^{-12}\)) & \(0.99\) (\(<10^{-17}\)) \\
ADBI & \(0.11\) (\(0.62\)) & \(0.07\) (\(0.76\)) \\
\(\mu\)Thas & \(0.98\) (\(0.002\)) & \(1.0\) (\(0.017\)) \\
\(\mu\)ADBI & \(-0.41\) (\(0.48\)) & \(-0.3\) (\(0.68\)) \\ \hline
\end{tabular}
\end{table}
Table 2: Pearson correlation coefficient (\(r_{p}\)) and Spearman’s correlation coefficient (\(r_{s}\)) with the corresponding p-values for ALC vs. Thas and ALC vs. ADBI, and similar analysis for mean scores across all datasets.
Figure 2: Few-shot performance (ALC) vs. task alignment (Thas) and clustering quality (ADBI).
shot representation. Fig. 2 (red crosses) and Table 2 show that there is a low correlation between the two metrics. This suggests that the geometric properties of the clusters alone can not explain few-shot performance.
## 4 Related Work
Representation choice has recently gained significant attention from the active learning (AL) community (Schroder and Niekler, 2020; Shnarch et al., 2022; Zhang et al., 2017). Some work has attempted to quantify what representation is best when training the initial model for AL, which is usually referred to as the cold start problem (Lu et al., 2019). The importance of word embeddings has been also studied in the context of highly imbalanced data scenarios (Sahan et al., 2021; Naseem et al., 2021; Hashimoto et al., 2016; Kholghi et al., 2016). Most research conducted by the AL community on textual representations has focused on determining _which_ representations lead to higher performance for a given task. However, our paper aims to investigate _why_ a certain representation performs better in the few-shot scenario.
Our work, focused on examining properties of various textual representations, is closely related to recent research on evaluating the general capabilities of word embeddings. Many studies are interested in testing the behavior of such models using probing tasks that signal different linguistic skills (Conneau et al., 2018; Conneau and Kiela, 2018; Marvin and Linzen, 2018; Tenney et al., 2019; Miaschi and Dell'Orletta, 2020). Others have targeted the capacity of word embeddings to transfer linguistic content (Ravishankar et al., 2019; Conneau et al., 2020).
Looking at approaches that analyze the properties of representations directly, without intermediate probes, Saphra and Lopez (2019) developed a correlation method to compare representations during consecutive pre-training stages. Analyzing the geometric properties of contextual embeddings is also an active line of work (Reif et al., 2019; Ethayarajh, 2019; Hewitt and Manning, 2019). While these previous works focus on analyzing representation properties independently, without considering a specific task, our study investigates the relationship between representations and task labels. We conduct a comparison between this relationship and the unsupervised analysis of representation properties.
Our work falls in line with broader research on the relationship between task and representation. Yauney and Mimno (2021) proposed a method to measure the alignment between documents and labels in a given representation space using a data complexity measure developed in the learning-theory community. In the computer vision area, Frosst et al. (2019) introduced a loss metric and investigated the entanglement of classes in the representation space during the learning process. Zhou and Srikumar (2021) proposed a heuristic to approximate the version space of classifiers using hierarchical clustering, highlighting how representations induce the separability of class labels, thereby simplifying the classification task. In contrast, our work specifically examines the few-shot performance and emphasizes the importance of unbalanced scenarios. We find that in these more realistic situations, the choice of representation plays a critical role, paving the way for advanced strategies in active learning.
## 5 Conclusion
In this paper, we asked the question: What underlying property characterizes a good representation in a few-shot learning setting? We hypothesized that good representations are those in which the structure of the input space is well aligned with the label distribution. We proposed a metric to measure such alignment: Thas. To test our hypothesis, we conducted experiments on several textual classification datasets, covering different classification tasks and label distributions (i.e. both balanced and unbalanced). We compared a range of word embedding representations as well as a baseline sparse representation.
Our results showed that when labeled data is scarce the best-performing representations are those where the input space is well aligned with the labels. Furthermore, we showed that the performance of a representation can not be explained by looking at classical measures of clustering quality.
The main insight provided in this work could be leveraged to design new strategies in active learning. The fact that good representations induce clusters of high purity at different granularities creates opportunities for wiser exploration of the representation space in an active manner. Similar to the work of Dasgupta and Hsu (2008), we could employ the data dendrogram to guide this exploration.
### Limitations
In this paper, we focused on analyzing the properties of textual representations in the few-shot learning scenario. Its applicability to broader annotation scenarios could be presumed but is not supported by our empirical results.
Our experimental setup is based on binary classification tasks using English datasets. While our approach is general and could be easily extended to multi-class scenarios, more work would be required to extend it to other more complex structured prediction settings such as sequence tagging.
We see several ways in which this work could be extended. The most obvious extension consists of trying to generalize the notion of alignment to other tasks beyond sequence classification, such as sequence tagging. In this paper, we have used Thas to understand the quality of a given textual representation. However, since Thas is a function of a labeling and a representation, it could also be used to measure the quality of a labeling Yan and Huang (2018), given a fixed representation. For example, this might be used in the context of hierarchical labeling, to measure which level of label granularity is better aligned with some input representation.
The goal of this paper was to provide an explanation for the success of pre-trained word embeddings for text classification in the few-shot learning scenario. We believe that with our proposed methodology we have successfully achieved this goal. However, it should be clear to the reader that we do not provide a method for picking the best representation, i.e. for model selection. This is because our analysis requires access to labeled data and if labeled data is available the best way to select a model will be via cross-validation.
## Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 853459. The authors gratefully acknowledge the computer resources at ARTEMISA, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV). This research is supported by a recognition 2021SGR-Cat (01266 LQMC) from AGAUR (Generalitat de Catalunya).
|
2309.16979 | MEMQSim: Highly Memory-Efficient and Modularized Quantum State-Vector
Simulation | In this extended abstract, we have introduced a highly memory-efficient state
vector simulation of quantum circuits premised on data compression, harnessing
the capabilities of both CPUs and GPUs. We have elucidated the inherent
challenges in architecting this system, while concurrently proposing our
tailored solutions. Moreover, we have delineated our preliminary implementation
and deliberated upon the potential for integration with other GPU-oriented
simulators. In forthcoming research, we aim to present a more comprehensive set
of results, bolstering the assertion of the efficacy and performance of our
approach. | Boyuan Zhang, Bo Fang, Qiang Guan, Ang Li, Dingwen Tao | 2023-09-29T04:55:17Z | http://arxiv.org/abs/2309.16979v1 | # MEMQSim: Highly Memory-Efficient and Modularized Quantum State-Vector Simulation
###### Abstract.
The field of quantum computing has seen a marked advancement (Boyuan et al., 2023). However, despite the substantial potential, present-day quantum computing mechanisms are challenged by considerable environmental noise, and the efficacy of quantum error correction in the Noise-Intermediate-Scale-Quantum (NISQ) era remains limited (Boyuan et al., 2023). Quantum circuit simulation serves as an essential tool for researchers from a variety of disciplines, offering invaluable benefits to validate the accuracy of quantum algorithms (Boyuan et al., 2023).
Nonetheless, the simulation of quantum circuits poses significant challenges, primarily because memory utilization escalates exponentially with the increment of qubit quantity. For instance, the Frontier system, with a memory capacity of 47.3 PB, is only equipped to simulate 51 qubits, while the Summit, with a memory capacity of 2.8 PB, can merely simulate 47 qubits (Boyuan et al., 2023). In light of the reality that vast-memory systems such as high-performance computing (HPC) or cloud infrastructures are not readily available to the majority of practitioners in the field of quantum computing, the capability to simulate the execution of quantum circuits is considerably restricted by devices' memory capacities.
Cutting-edge quantum state-vector simulators (Boyuan et al., 2023; Boyuan et al., 2023; Boyuan et al., 2023; Boyuan et al., 2023) have, thus far, not placed an emphasis on minimizing the memory footprint during the simulation process. A prior research endeavor (Boyuan et al., 2023) did incorporate compression into state vector simulation with the aim of expanding the number of qubits accommodated within a restricted memory space. This approach, while promising, still presents unresolved complications that necessitate substantial research attention: (1) In this study, compression and decompression
_Design challenges_: (1) The intensive data exchange between the CPU and GPU requires careful scheduling. Since GPU memory capacity is typically much smaller, the GPU must retrieve data from the CPU and operate on one partial piece of the state vector at a time. (2) The frequency and granularity of compression and decompression significantly influence the simulation speed. Excessive compression/decompression adds substantial overhead to the end-to-end time, a granularity that is too coarse inflates the memory footprint, and one that is too fine lowers the compression ratio. (3) Different quantum algorithms exhibit different access patterns on the state vector.
In light of these identified challenges, we propose our design, MEMQSim. An overview of our approach is illustrated in Figure 2. We explain the overall simulation process below:
_Offline stage:_ MEMQSim partitions the input circuit and the corresponding state vector; each data chunk of the state vector is compressed independently and stored in CPU memory in this compressed format.
_Online stage:_ As shown in Figure 1, MEMQSim pipelines the decompression, the CPU-GPU buffer transfers, and the GPU computation. In particular, MEMQSim (1) decompresses a selection of data chunks into the CPU buffers and (2) transfers the corresponding state vector amplitudes to the GPU memory. This process is repeated across the state vector until the GPU memory is fully occupied with ordered state vector amplitudes. (3) MEMQSim launches the GPU kernel asynchronously to update the state vector amplitudes during the CPU-GPU data transfer and (4) returns the updated values to the CPU buffers. (5) The CPU then leverages idle cores to decompress further data chunks and update state vector amplitudes on the CPU side. (6) Finally, each data block is re-compressed and stored back into main memory. Once the GPU completes a single iteration, this procedure is repeated to update all amplitudes; the simulation then advances to the next stage and continues until all stages have been processed. We have developed a prototype of MEMQSim, plugged into the SV-SIM (Deng et al., 2019) framework. Going forward, our design has the potential to serve as a plugin for a range of GPU simulators, while also accommodating various compression algorithms.
For step (2), we have devised two strategies. The first approach transfers the corresponding state vector elements to the GPU memory one at a time, using CUDA asynchronous copies. The alternative strategy allocates a buffer on the GPU side and moves the data chunk from the CPU buffer to the GPU buffer; GPU threads are then employed to map all these amplitudes to their appropriate positions.
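The difference between the two strategies can be sketched as follows (an illustrative CuPy snippet, not the MEMQSim implementation; array layouts, the compression step, and stream management are simplified, and all names are ours):

```python
import numpy as np
import cupy as cp

def transfer_buffered(d_state, h_chunk, h_indices, stream):
    """Strategy 2: stage the decompressed chunk, copy it once, scatter on the GPU.

    d_state   -- cupy.ndarray holding the resident part of the state vector
    h_chunk   -- numpy.ndarray with the decompressed amplitudes of one chunk
    h_indices -- numpy.ndarray with their positions inside d_state
    """
    with stream:
        d_buf = cp.asarray(h_chunk)          # one host-to-device copy per chunk
        d_idx = cp.asarray(h_indices)
        d_state[d_idx] = d_buf               # GPU threads place the amplitudes

def transfer_per_element(d_state, h_chunk, h_indices, stream):
    """Strategy 1: one tiny copy per amplitude (dominated by launch overhead)."""
    with stream:
        for i, a in zip(h_indices, h_chunk):
            d_state[int(i)] = complex(a)     # each assignment triggers a small H2D copy
```

The single staged copy is what keeps the buffer strategy within a few percent of the synchronous lower bound reported below, whereas the per-element variant issues one copy per amplitude.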
We present some preliminary results on the time taken by different data movement strategies between the CPU and GPU, as depicted in Table 1. The synchronous strategy transfers a complete data chunk through a single CUDA memory copy operation, and thus represents the minimum time necessary for the CPU-GPU transfer. As indicated, the host-to-device time of the asynchronous (per-element) strategy is approximately 870 times longer than the synchronous time; the discrepancy is caused by the overhead of initiating many small CUDA memory copy operations. As for the buffer strategy, although it demands additional memory space, it significantly boosts the data movement speed: the time needed for the buffer strategy is only about 1.03x that of the synchronous version. By employing a state-of-the-art data compressor, we extrapolate that on average 5 more qubits can be simulated without slowing down the original quantum circuit simulation.
## 3. Conclusion and Future Work
In this extended abstract, we have introduced a highly memory-efficient state vector simulation of quantum circuits premised on data compression, harnessing the capabilities of both CPUs and GPUs. We have elucidated the inherent challenges in architecting this system, while concurrently proposing our tailored solutions. Moreover, we have delineated our preliminary implementation and deliberated upon the potential for integration with other GPU-oriented simulators. In forthcoming research, we aim to present a more comprehensive set of results, bolstering the assertion of the efficacy and performance of our approach.
|
2309.09859 | RIS-Assisted Energy Harvesting Gains for Bistatic Backscatter Networks:
Performance Analysis and RIS Phase Optimization | Inexpensive tags powered by energy harvesting (EH) can realize green
(energy-efficient) Internet of Things (IoT) networks. However, tags are
vulnerable to energy insecurities, resulting in poor communication ranges,
activation distances, and data rates. To overcome these challenges, we explore
the use of a reconfigurable intelligent surface (RIS) for EH-based IoT
networks. The RIS is deployed to enhance RF power at the tag, improving EH
capabilities. We consider linear and non-linear EH models and analyze
single-tag and multi-tag scenarios. For single-tag networks, the tag's maximum
received power and the reader's signal-to-noise ratio with the optimized RIS
phase-shifts are derived. Key metrics, such as received power, harvested power,
achievable rate, outage probability, bit error rate, and diversity order, are
also evaluated. The impact of RIS phase shift quantization errors is also
studied. For the multi-tag case, an algorithm to compute the optimal RIS
phase-shifts is developed. Numerical results and simulations demonstrate
significant improvements compared to the benchmarks of no-RIS case and random
RIS-phase design. For instance, our optimal design with a \num{200}-element RIS
increases the activation distance by \qty{270}{\percent} and \qty{55}{\percent}
compared to those benchmarks. In summary, RIS deployment improves the energy
autonomy of tags while maintaining the basic tag design intact. | Diluka Galappaththige, Fatemeh Rezaei, Chintha Tellambura, Sanjeewa Herath | 2023-09-18T15:21:09Z | http://arxiv.org/abs/2309.09859v1 | RIS-Assisted Energy Harvesting Gains for Bistatic Backscatter Networks: Performance Analysis and RIS Phase Optimization
###### Abstract
Inexpensive tags powered by energy harvesting (EH) can realize green (energy-efficient) Internet of Things (IoT) networks. However, tags are vulnerable to energy insecurities, resulting in poor communication ranges, activation distances, and data rates. To overcome these challenges, we explore the use of a reconfigurable intelligent surface (RIS) for EH-based IoT networks. The RIS is deployed to enhance RF power at the tag, improving EH capabilities. We consider linear and nonlinear EH models and analyze single-tag and multi-tag scenarios. For single-tag networks, the tag's maximum received power and the reader's signal-to-noise ratio with the optimized RIS phase-shifts are derived. Key metrics, such as received power, harvested power, achievable rate, outage probability, bit error rate, and diversity order, are also evaluated. The impact of RIS phase shift quantization errors is also studied. For the multi-tag case, an algorithm to compute the optimal RIS phase-shifts is developed. Numerical results and simulations demonstrate significant improvements compared to the benchmarks of no-RIS case and random RIS-phase design. For instance, our optimal design with a \(200\)-element RIS increases the activation distance by \(270\,\%\) and \(55\,\%\) compared to those benchmarks. In summary, RIS deployment improves the energy autonomy of tags while maintaining the basic tag design intact.
Bistatic backscatter communication (BiBC), Reconfigurable intelligent surface (RIS), Performance analysis.
## I Introduction
### _The Problems with Energy Harvesting Backscatter Tags_
Parcel tracking using passive electronic tags is one of the many potential applications of the Internet of Things (IoT). The global parcel volume surpassed \(131\) billion in \(2020\), showing a \(27\,\%\) year-over-year increase. In the United States alone, \(59\) million parcels were generated daily in \(2021\), projected to reach \(25\)-\(40\) billion with a \(5\,\%\)-\(10\,\%\) annual growth rate from \(2022\)-\(2027\). Similar growth trends are observed globally. Barcode-based tracking is currently employed, but electronic tag-based tracking offers advantages such as enhanced labor productivity, throughput, warehousing efficiency, and real-time data accuracy for quality control. Tags without batteries are particularly suitable because of their cost-effectiveness and compact size. Their applications include medical and healthcare, agriculture, livestock, logistics, retail chains, and passive IoT networks [1, 2, 3]. These tags have low-cost and low-power circuits with limited processing capabilities. They rely on backscatter modulation, a process described in detail in [4, 5], where they reflect radio-frequency (RF) signals to communicate with the reader.
Passive tags encounter two main problems related to their reliance on RF energy harvesting (EH) for power: activation failure and energy outage (EO). Activation failure occurs when the tag fails to reach the activation threshold (\(P_{b}\)), typically around \(-20\,\mathrm{dBm}\) [6], required to initiate the EH circuitry [7]. Imperfections in the matching network between the tag's antenna and the EH circuit can cause this failure. The matching network aims to align the complex impedance of the EH circuit with the antenna's impedance, optimizing power transfer and minimizing signal reflections. However, the EH circuit's impedance depends on the incident input power due to nonlinear devices, leading to reduced circuit efficiency with changes in input power. The second problem is EO. Ambient energy sources are unpredictable with RF power density values as low as \(1\sim 100\,\mu\mathrm{W/cm^{2}}\) and varying with distance [3]. As a result, there is a risk of an EO where the tag does not reach the activation threshold. These problems cause ultra-low power (\(\mathrm{nW}\)-\(\mu\mathrm{W}\)), short communication ranges (\(\leq 6\,\mathrm{m}\)), short activation distances, and low data rates (\(\leq 1\,\mathrm{bps/Hz}\)). It is clear that all these problems are initiated whenever the incident RF energy is low. Addressing that issue is the main focus of this paper.
Backscatter networks can be categorized into three types: monostatic, bistatic, and ambient. In monostatic systems, the reader and emitter are co-located, resulting in doubled path loss [8]. Ambient systems rely on reflecting existing RF signals, which are highly unpredictable. Bistatic systems, on the other hand, offer better support for applications such as warehouses (Fig.1). These systems deploy dedicated RF emitters, either single or multiple, to provide energy to the tags and enable backscatter modulation. By optimizing the locations of multiple emitters, a larger area can be covered (Fig.1), maximizing coverage and performance. Dedicated emitters have advantages over ambient signals, including predictability, reduced interference, control over the system, and knowledge of ambient signal parameters [9]. However, the high cost, complexity, and transmit powers associated with dedicated emitters can be problematic.
These considerations motivate the following questions: 1) what is the best way to increase the chance of the incident RF power on the tag exceeding \(P_{b}\)? 2) How can that goal be reached without increasing dedicated RF emitters' cost and energy expenditure? |
2309.07874 | Ca$^2$Lib: Simple and Accurate LiDAR-RGB Calibration using Small Common
Markers | In many fields of robotics, knowing the relative position and orientation
between two sensors is a mandatory precondition to operate with multiple
sensing modalities. In this context, the pair LiDAR-RGB cameras offer
complementary features: LiDARs yield sparse high quality range measurements,
while RGB cameras provide a dense color measurement of the environment.
Existing techniques often rely either on complex calibration targets that are
expensive to obtain, or extracted virtual correspondences that can hinder the
estimate's accuracy. In this paper we address the problem of LiDAR-RGB
calibration using typical calibration patterns (i.e. A3 chessboard) with
minimal human intervention. Our approach exploits the planarity of the target
to find correspondences between the sensors measurements, leading to features
that are robust to LiDAR noise.
Moreover, we estimate a solution by solving a joint non-linear optimization
problem. We validated our approach by carrying out quantitative and comparative
experiments with other state-of-the-art approaches. Our results show that our
simple schema performs on par or better than other approaches using complex
calibration targets. Finally, we release an open-source C++ implementation at
\url{https://github.com/srrg-sapienza/ca2lib} | Emanuele Giacomini, Leonardo Brizi, Luca Di Giammarino, Omar Salem, Patrizio Perugini, Giorgio Grisetti | 2023-09-14T17:22:49Z | http://arxiv.org/abs/2309.07874v1 | # _Ca\({}^{2}\)Lib_: Simple and Accurate LiDAR-RGB Calibration
###### Abstract
In many fields of robotics, knowing the relative position and orientation between two sensors is a mandatory precondition to operate with multiple sensing modalities. In this context, the pair LiDAR-RGB cameras offer complementary features: LiDARs yield sparse high quality range measurements, while RGB cameras provide a dense color measurement of the environment.
Existing techniques often rely either on complex calibration targets that are expensive to obtain, or extracted virtual correspondences that can hinder the estimate's accuracy.
In this paper we address the problem of LiDAR-RGB calibration using typical calibration patterns (i.e. A3 chessboard) with minimal human intervention. Our approach exploits the planarity of the target to find correspondences between the sensors measurements, leading to features that are robust to LiDAR noise. Moreover, we estimate a solution by solving a joint non-linear optimization problem.
We validated our approach by carrying out quantitative and comparative experiments with other state-of-the-art approaches. Our results show that our simple schema performs on par or better than other approaches using complex calibration targets. Finally, we release an open-source C++ implementation at [https://github.com/srrg-sapienza/ca2lib](https://github.com/srrg-sapienza/ca2lib)
## I Introduction
The ability to fuse readings from heterogeneous sensors is often beneficial in many robotics and perception applications. In particular, LiDAR and RGB sensors exhibit a strong compatibility: the former captures high-precision sparse range readings, while the latter provides dense color intensity measurements.
These properties make the integration of the two sensors well suited for the task of _depth estimation_. Historically, stereo-based solutions leverage the known relative offset between two cameras, along with concepts from epipolar geometry, to estimate a depth value for every pixel in an image. Despite their popularity, due to their optical nature, these approaches suffer in texture-less regions and in areas where the depth exceeds a maximum value determined by the baseline of the stereo. While the texture-less problem has been partially solved by the usage of active stereo sensors (i.e. Realsense D435, Kinect), the maximum range still poses a challenge. On the contrary, LiDARs operate using the Time-of-Flight (TOF) principle, which provides accurate range measurements on non-reflective surfaces even at large distances.
These considerations led the community to investigate the problem of _depth-completion_, namely estimating a dense depth image by combining an accurate sparse depth measurement with an RGB intensity image. Multiple publicly available datasets like KITTI and VOID [20][21] allowed the community to tackle this problem either by fully leveraging the sparse depth measurement (unguided) or by fusing RGB features (guided). Furthermore, in the field of 3D reconstruction, recent findings show that coupling the two sensors may lead to a more robust and accurate trajectory estimate [2].
Moreover, to accomplish any of these tasks, one needs to know the relative offset between the two sensors.
This work aims at solving the task of LiDAR-RGB calibration, namely, estimating the relative offset (extrinsic parameters) between the two sensors, using their raw measurements.
The core idea behind this multi-modal calibration is to find spatial correspondences between the sensors' measurements. Most approaches rely on one or more calibration patterns to establish common features between the sensors; however, these
Fig. 1: Reprojection of a LiDAR point cloud on a fisheye RGB camera rigidly attached to the former. The offset between the sensors leads to shadows on parts of the image.
patterns are often complex or expensive to produce [1]. The main contribution of this paper is a versatile calibration toolbox that allows one to estimate the extrinsic parameters between LiDAR and RGB with minimal user intervention, using a simple calibration checkerboard target. We leverage a joint non-linear formulation of the problem to achieve high-accuracy results even with a minimum of three measurements. The requirement for our method is to use a calibration pattern (e.g. Checkerboard, ChAruCO [5]) that must be observed by both sensors during the acquisition. We exploit the planarity of the target to find a common observation used to estimate the extrinsic parameters. Moreover, we release an open-source implementation of our toolbox.
## II Related Work
This section delves into LiDAR-RGB calibration and explores the two main classes of approaches: _target-based_ and _target-less_. As the name suggests, target-based approaches require the user to place artificial markers that both the camera and LiDAR can easily detect. This contrasts with target-less methods, which free the user from this task. The core idea of calibration is common to the two classes of approaches: computing common features between heterogeneous measurements and estimating the transformation that minimizes the distance between corresponding features.
First, an overview of _target-less_ approaches is presented: Pandey _et al._ presents an automatic data-driven approach based upon the maximization of mutual information between the sensor-measured surface intensities [13]. The authors exploit the correlation coefficient for the reflectivity and intensity values of many scan-image pairs using different calibration parameters. However, shadows of objects or colored surfaces that completely absorb infrared light might result in weaker correlation between scan-image pairs. Yoon _et al._ proposes a calibration method using region-based object pose estimation. Objects are segmented in both measurements, then a 3D mesh is generated from the LiDAR measurements, while images are used to reconstruct the scene using Structure from Motion (SfM). The two models are then registered together to acquire an initial guess on the relative pose. The final solution is obtained iteratively by finding correspondences between the reconstructed objects from both measurements [22]. In recent years, learning-based methods have also been developed in this field: Lv _et al._ proposes a real-time self-calibration network that predicts the extrinsic parameters by constructing a cost volume between RGB and LiDAR features [11], while Sun _et al._ first estimates an initial guess by solving a hand-eye calibration problem [18]. Moreover, the guess is fine-tuned by segmenting the image-cloud pair and by aligning the distances between centroids. The advantage of target-less methods is that they can be used without preparing the environment. This comes at the cost of lower accuracy and robustness when compared to their target-based counterparts.
_Target-based_ methods estimate the relative pose using an observed known structure. Given the difference of resolution for the two sensors, it is highly unlikely that correspondences within the measurements can be established directly. For this reason, _point-to-point_ methods tend to process LiDAR measurements to implicitly obtain virtual points1 easily detectable from an RGB sensor. For instance, Park _et al._ utilizes a specially designed polygonal planar calibration board with known lengths of adjacent sides [14]. By estimating the 3D corresponding points from the LiDAR, vertices of the board can be determined as the meeting points of two projected sides. The vertices, along with the corresponding points detected from the color image, are used for calibration. Pusztai _et al._ introduces a methodology that utilizes cubic objects with predetermined side lengths [15, 16]. The corners of the cubes are estimated by initially detecting each side of the box and subsequently determining their intersection points. Furthermore, the corners along with their corresponding RGB image are employed to calibrate the system by solving Iterative Corresponding Point (ICP). Zhou _et al._ proposes a single-shot calibration method requiring a checkerboard [23]. The target is detected both in the RGB image and in the LiDAR measurement, using RANSAC [4] for the latter. Furthermore, the four edges of the checkerboard are estimated and aligned to compute the relative offset between the two sensors. Toth _et al._ introduces a fully automatic calibration technique that leverages the utilization of spheres, enabling accurate detection in both point clouds and camera images [19]. Upon successful detection, the algorithm aligns the set of sphere centers using SVD. Beltran _et al._ presents a methodology that utilizes a custom calibration target equipped with 4 holes and AruCO markers specifically designed for monocular detection [1]. The methodology employs a set of techniques for each sensor to estimate the center points of the holes. Subsequently, the relative offset between sensors is determined by aligning the set of centers obtained from each sensor. Li _et al._ adopt a similar approach while using a checkerboard with 4 holes [10]. Fan _et al._ propose a two-stage calibration method using an auxiliary device with distinctive geometric features [3]. The method extracts lines from images and LiDAR point clouds, providing an initial estimation of the external parameters. Nonlinear optimization is then applied to refine these parameters. In the work of Singandhupe _et al._, the authors first extract planar information from RGB and LiDAR measurements; then, two grids of points are extracted from the computed planar patches and aligned using a customized ICP algorithm [17].
Footnote 1: Points that are not explicitly detected, but estimated from the LiDAR measurement.
Although these approaches provide relatively accurate results with few measurements, care should be taken during the estimation of virtual correspondences, as they can cause significant errors in the estimation step. Moreover, these custom targets often require precise construction or expensive manufacturing.
Another group of approaches does not directly solve the calibration problem using point-to-point correspondences, but rather exploits the planarity of the target to reduce the
feasible set of solutions using _plane-to-plane_ constraints. Mirzaei _et al._ addresses the challenge of accurate initial estimates by dividing the problem into two sub-problems and analytically solving each to obtain precise initial estimates [12]. The authors then refine these estimates through iterative minimization. They also discuss the identifiability conditions for accurate parameter estimation. Finally, in a method similar to our proposal, Kim _et al._ combine observed normals to first estimate the relative orientation with SVD and then iteratively estimate an initial guess of the relative translation by minimizing the pairwise planar distances between measurements [8]. The translation is then refined by solving a non-linear optimization problem using Levenberg-Marquardt (LM). Despite its simplicity, this method decouples the estimation of orientation and translation, thus leading to potential losses in accuracy while also increasing the number of required measurements.
Compared with the state of the art, we propose:
* a formulation for joint nonlinear optimization that couples relative rotation and translation using a plane-to-plane metric;
* an extensible framework that decouples the optimization from target detection. Currently supports Checkerboard and _ChARuCO_ patterns of typical A3-A4 sizes, easily obtainable from commercial printers;
* the possibility to handle different camera models and distortion;
* an open-source C++ implementation.
## III Our Approach
In this section, we will provide a detailed and comprehensive description of our method. First we describe the preliminaries required to understand our approach, then every component of the pipeline is described, following the procedure from the acquisition of the measurements up to the computation of the relative poses between the two sensors (extrinsic parameter).
**Plane Representation:** Let \(\pi=(\hat{\mathbf{n}},d)\) be a 3D plane, where \(\hat{\mathbf{n}}\in\mathbb{S}^{2}\) represents the unit vector orthogonal to the plane and \(d\in\mathbb{R}\) is the shortest distance of the plane with respect to the origin. Applying a transform \(\mathbf{X}\in\mathbb{SE}(3)\) to a plane \(\pi\) yields new coefficients \(\pi^{\prime}\) as follows:
\[\mathbf{X}\pi=\left\{\begin{array}{ll}\hat{\mathbf{n}}^{\prime}=&\mathbf{R} \mathbf{n}\\ d^{\prime}=&d+(\mathbf{R}\mathbf{n})^{t}\mathbf{t}\end{array}\right. \tag{1}\]
Here \(\mathbf{X}=\langle\mathbf{R};\mathbf{t}\rangle\) is represented by a rotation matrix \(\mathbf{R}\in\mathbb{SO}(3)\), and the translation vector \(\mathbf{t}\in\mathbb{R}^{3}\).
If the transformation is modified by a small local perturbation \(\mathbf{\Delta X}=(\mathbf{\Delta R}|\mathbf{\Delta t})\) then we can rewrite:
\[(\mathbf{X}\boxplus\mathbf{\Delta X})\pi=\left\{\begin{array}{ll}\tilde{ \mathbf{n}}=&\mathbf{\Delta R}\mathbf{R}\mathbf{n}\\ \tilde{d}=&d^{\prime}+\mathbf{n}^{t}\mathbf{R}^{t}\mathbf{\Delta R}^{t} \mathbf{\Delta t}\end{array}\right. \tag{2}\]
Deriving the result with respect to \(\mathbf{\Delta X}\) leads to the following Jacobian:
\[\frac{\partial(\mathbf{X}\boxplus\mathbf{\Delta X})\pi}{\partial\mathbf{ \Delta X}}=\left[\begin{array}{cc}&0_{3\times 3}&-[\mathbf{R}\mathbf{n}]_{ \times}\\ &\mathbf{n}^{t}\mathbf{R}^{t}&0_{1\times 3}\end{array}\right]_{4\times 6} \tag{3}\]
The distance between two planes depends both on the difference between their normals and the signed distance of the planes from the origin. These quantities can be captured by a 4D error vector \(e_{p}\) expressing the _plane-to-plane_ error metric:
\[\mathbf{p}(\pi_{k}) =-\mathbf{n}_{k}d_{k} \tag{4}\] \[e_{p}(\pi_{i},\pi_{j}) =\begin{bmatrix}\mathbf{n}_{i}^{t}(\mathbf{p}(\pi_{i})-\mathbf{p} (\pi_{j}))\\ \mathbf{n}_{j}-\mathbf{n}_{i}\end{bmatrix}. \tag{5}\]
Here \(\mathbf{p}(\pi_{k})\) is the point on the plane closest to the origin of the reference system, and it is obtained by taking a point along the normal direction \(\mathbf{n}\) at distance \(d\).
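As a small numerical illustration of Eq. (1) and the plane-to-plane error of Eqs. (4)-(5), the NumPy sketch below (ours; variable names and values are illustrative and not taken from the released C++ toolbox) maps a plane through an SE(3) transform and evaluates the 4D error against a second plane.

```python
import numpy as np

def transform_plane(R, t, n, d):
    """Apply X = (R, t) in SE(3) to a plane (n, d) following Eq. (1)."""
    n_new = R @ n
    d_new = d + n_new @ t
    return n_new, d_new

def plane_to_plane_error(n_i, d_i, n_j, d_j):
    """4D error of Eqs. (4)-(5): closest-point difference projected on n_i,
    stacked with the difference of the normals."""
    p_i = -n_i * d_i
    p_j = -n_j * d_j
    return np.concatenate(([n_i @ (p_i - p_j)], n_j - n_i))

# Example: a plane observed by the LiDAR, mapped into the camera frame.
R = np.eye(3)
t = np.array([0.1, 0.0, -0.05])
n_l, d_l = np.array([0.0, 0.0, 1.0]), 2.0
n_c, d_c = transform_plane(R, t, n_l, d_l)
print(plane_to_plane_error(n_c, d_c, np.array([0.0, 0.0, 1.0]), 1.95))
```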
**Pinhole Model (RGB):** Let \(\mathbf{p}\) be a point expressed in the camera frame and \(\mathbf{K}\) be the camera matrix. Assuming any lens distortion effects have been previously corrected, the projection of \(\mathbf{p}\) on the image plane is computed as
\[\pi_{\mathrm{c}}(\mathbf{p}) =\phi(\mathbf{K}\mathbf{p}) \tag{6}\] \[\mathbf{K} =\begin{bmatrix}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\\ 0&0&1\end{bmatrix}\] (7) \[\phi(v) =\frac{1}{v_{z}}\begin{bmatrix}v_{x}\\ v_{y}\end{bmatrix} \tag{8}\]
where \(\phi(v)\) represents the homogeneous division and \(\pi_{\mathrm{c}}(\mathbf{p})\) the pinhole projection function. For simplicity, we detail only the pinhole camera projection; however, the same principle applies to more complex camera models.
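For completeness, Eqs. (6)-(8) can be written directly as a few lines of NumPy (a plain sketch with made-up intrinsics, not code from the toolbox):

```python
import numpy as np

def pinhole_project(K, p):
    """Project a 3D point in the camera frame via Eq. (6): homogeneous division of K p."""
    v = K @ p
    return v[:2] / v[2]

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(pinhole_project(K, np.array([0.2, -0.1, 2.0])))  # pixel coordinates (u, v)
```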
**Projection by ID (LiDAR):** Let \(\mathbf{p}\) be a point detected by the LiDAR and expressed in its frame. Its projection is computed as:
\[\pi_{\mathrm{l}}(\mathbf{p})=\mathbf{A}\psi(\mathbf{p}) \tag{9}\] \[\mathbf{A}=\begin{bmatrix}f_{x}&0&c_{x}\\ 0&1&0\end{bmatrix} \tag{10}\] \[\psi(v)=\begin{bmatrix}\operatorname{atan2}(v_{y},v_{x})\\ \mathrm{id}(v)\\ 1\end{bmatrix} \tag{11}\]
where \(\mathrm{id}(v)\) denotes the LiDAR beam (ring) index associated with the point.
A parametric circular patch around the user's selection is used to estimate a plane using RANSAC and, concurrently, the calibration target detection is attempted on the RGB image. Once the target is detected, the RGB plane is computed by solving the ICP. If the user is satisfied with both LiDAR and RGB planes, they are stored for processing.
Whereas a straightforward rank analysis of the Jacobians reveals that just 3 measurements are sufficient to constrain a solution, it is well known from estimation theory that the accuracy grows with the number of measurements.
Once the set of measurements are acquired, we jointly estimate the relative orientation and translation of the LiDAR with respect to the RGB sensor \(\mathbf{X}\in\mathbb{SE}(3)\) by solving the following nonlinear minimization problem:
\[\mathbf{X}=\underset{\mathbf{X}\in\mathbb{SE}(3)}{\operatorname{argmin}}\sum_ {i\in\mathbf{Z}}\underbrace{\|\mathbf{X}\pi_{1}^{i}-\pi_{c}^{i}\|^{2}}_{e_{p}} \tag{12}\]
where \(e_{p}\) represents the plane-to-plane error.
During acquisition, it may happen that the user accepts one or more wrongly estimated measurements. Due to the quadratic nature of the error terms, these _outliers_ are often over-weighted, resulting in wrong estimates. To account for this factor, as described in [6], we employ a Huber _M-estimator_ \(\rho(\cdot)\) that weights measurements differently based on their error. We rewrite Eq. (12) as follows:
\[\mathbf{X}=\underset{\mathbf{X}\in\mathbb{SE}(3)}{\operatorname{argmin}}\sum _{i\in\mathbf{Z}}\rho(\|\mathbf{X}\pi_{1}^{i}-\pi_{c}^{i}\|). \tag{13}\]
To solve Eq. (13), we employ the Gauss-Newton (GN) algorithm implemented in the srrg2_solver [6].
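To give a feeling for how Eq. (13) can be prototyped, the sketch below stacks the plane-to-plane residuals and hands them to SciPy's robust least-squares with a Huber loss; this is our own simplification (axis-angle parameterization, component-wise robust loss) and not the Gauss-Newton implementation in srrg2_solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(x, lidar_planes, cam_planes):
    """x = [rx, ry, rz, tx, ty, tz]; stack the 4D plane-to-plane errors of Eq. (5)."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    t = x[3:]
    res = []
    for (n_l, d_l), (n_c, d_c) in zip(lidar_planes, cam_planes):
        n = R @ n_l
        d = d_l + n @ t                 # Eq. (1): LiDAR plane mapped into the camera frame
        p, p_c = -n * d, -n_c * d_c
        res.append(np.concatenate(([n @ (p - p_c)], n_c - n)))
    return np.concatenate(res)

def calibrate(lidar_planes, cam_planes):
    sol = least_squares(residuals, np.zeros(6), loss="huber", f_scale=0.05,
                        args=(lidar_planes, cam_planes))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]

if __name__ == "__main__":
    # Toy data: three orthogonal planes seen by both sensors, offset by a small translation.
    lidar = [(np.array([0.0, 0.0, 1.0]), 2.00),
             (np.array([0.0, 1.0, 0.0]), 1.00),
             (np.array([1.0, 0.0, 0.0]), 0.50)]
    cam = [(np.array([0.0, 0.0, 1.0]), 2.05),
           (np.array([0.0, 1.0, 0.0]), 1.02),
           (np.array([1.0, 0.0, 0.0]), 0.48)]
    R, t = calibrate(lidar, cam)
    print(np.round(t, 3))
```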
## IV Experimental Evaluation
In this section, we describe the experiments we conducted to establish the quality of our calibration toolbox. We perform quantitative experiments in the simulated environment provided by [1] to compare our estimates with ground truth, while we also conduct qualitative and quantitative experiments on real scenarios using our acquisition system. We directly compare our results with [8], as it is the work which is closest to ours. In addition, we compare to [1], which produces accurate results relying on a very complex target (CNC printed).
### _Synthetic Case_
We conducted experiments on the _Gazebo_ simulator [9] to evaluate the accuracy and robustness of our approach, injecting different noise levels into the sensor measurements. We also study how the number of observations affects the final results. The setup of the scene includes a Velodyne HDL-64 LiDAR, a BlackFly-S RGB sensor and a \(6\times 8\) checkerboard target with corner size of \(0.2\) meters. We randomly generate and acquire \(53\) valid2 measurements.
Footnote 2: A valid measurement is one for which both LiDAR and RGB sensor are able to detect the target
To quantify the impact of the number of measurements on the accuracy of our approach, we run the calibration procedure with an increasing number of measurements \(w_{s}=[3\ldots 39]\) and at three different LiDAR noise levels \(\sigma_{1}\) (\(0\) mm, \(7\) mm and \(14\) mm). For every \(w_{s}\), we sample \(40\) sets of measurements.
From Tab. I, we observe a steady decrease of error for every noise level, reaching an average of \(2.6\) mm translation error in the intermediate noise case. In the case of \(3\) measurements, the high uncertainty is due to the potentially poorly conditioned system when using planes that have similar normals. Nonetheless, we compare our best result with \(3\) measurements against the best results of the methods presented in [1] and [8]. Tab. II shows the results.
### _Real Case_
In this section, we describe the experiments conducted on real measurements. We perform a quantitative test on our acquisition system shown in Fig. 2, which is equipped with an Ouster OS0-128 LiDAR with a resolution of \(128\times 1024\), a RealSense T-265 stereo camera and two Manta-G145 RGB cameras arranged in a wide horizontal stereo configuration.
Since no ground-truth information is available, we take advantage of the stereo extrinsics to provide an estimate of the calibration error. The offset between multiple cameras is measured using optical calibration procedures, which typically reach subpixel precision.
In the first experiment, we consider the LiDAR and the Realsense T-265 sensor which provides factory calibrated intrinsic/extrinsic parameters for both cameras. The task of the experiment is to demonstrate the accuracy of the calibrator in real case scenarios and to understand how the number of measurements considered affects the quality of the solution.
As for the synthetic case, we first acquire a set of \(17\) cloud-image LiDAR RGB measurements for both cameras. Moreover, we perform \(40\) calibrations with \(w_{s}\) randomly selected measurements with \(w_{s}\in\{3,15\}\). Finally, for every \(w_{s}\), we combine the computed extrinsics for both cameras to obtain an estimate of the stereo transform. Assuming approximately symmetrical errors in the two cameras, Fig. 3 shows the results of this experiment. We were able to obtain at best
Fig. 2: Acquisition system used for the Real Case experiments.
an average error of \(7.1\) mm in translation and \(0.01\) rads in orientation.
The second experiment is conducted using the wide stereo setup, for which we also calibrate the intrinsics and extrinsics of the cameras, providing expected results in a typical scenario. The acquisition procedure is the same as in the first experiment and Fig. 4 shows the experimental result, where we obtain the best solution with \(4.6\) mm in translation and \(0.002\) rads in orientation.
Moreover, Fig. 1 and Fig. 5 show the reprojection onto the right camera of the fisheye and of the wide-baseline RGB pair, respectively. In the latter, the large parallax between the sensors leads to strong occlusion effects, which have been mitigated with a hidden point removal algorithm [7].
In summary, our evaluation indicates that our method is capable of generating extrinsic estimates that are comparable or superior to those obtained using other state-of-the-art approaches. It is important to note that careful consideration is required when selecting a minimal number of measurements. However, our experiments clearly demonstrate that the accuracy of these estimates improves as the number of measurements increases.
## V Conclusion
In summary, our paper introduces a simple and effective method for accurately estimating extrinsic parameters between LiDARs and RGB sensors. By leveraging the inherent planarity of standard calibration patterns, we establish common observations between these sensors, greatly simplifying the calibration procedure. Our experiments show that planar features mitigate the LiDAR noise, leading to accurate results even with common A3/A4 calibration patterns. Finally, we also release an open source C++ implementation to benefit the community.
TABLE II: Best solutions obtained by calibration using \(N=3\) measurements.

| Method | \(e_{t}\) (cm) | \(e_{r}\) (\(10^{-2}\) rad) |
| --- | --- | --- |
| Beltran _et al._ [1] | 0.82 | 0.50 |
| Kim _et al._ [8] | 10.2 | 129.56 |
| Ours | **0.11** | **0.25** |
Fig. 4: Average camera-wise calibration error in the LiDAR-Manta case.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \(\sigma_{1}=0\) & \(\sigma_{1}=8\mathrm{e}^{-3}\) & \(\sigma_{1}=16\mathrm{e}^{-3}\) & \(\sigma_{c}=0\) & \(\sigma_{c}=14\mathrm{e}^{-3}\) \\ & mean & stdev & mean & stdev & mean & stdev \\ \hline \multicolumn{2}{l}{**No. Measurements**} & & & & & \\ \hline
**3** & 41.761 & 104.362 & 20.790 & 25.124 & 57.849 & 112.365 \\
**4** & 10.872 & 17.941 & 12.206 & 12.363 & 14.940 & 11.681 \\
**5** & 6.492 & 7.997 & 8.350 & 9.076 & 9.115 & 5.675 \\
**10** & 4.591 & 3.458 & 5.759 & 4.974 & 5.849 & 1.989 \\
**20** & 2.575 & 1.981 & 3.646 & 2.564 & 4.123 & 1.139 \\
**30** & 2.673 & 1.263 & 2.867 & 1.659 & 3.735 & 0.878 \\
**39** & 2.091 & 0.883 & 2.666 & 1.206 & 3.261 & 0.413 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Average translation error in millimeters with different noise levels and number of measurements.
Fig. 3: Average camera-wise calibration error in the LiDAR-T265 case.
## References
* [1] A. A. Barreira, A. A. Barreira, and A. A. Barreira. _et al._. _The r |
2310.20370 | Exotic Meson candidates in COMPASS data | One of the prime goals of the COMPASS experiment at CERN is the study of the
light meson spectrum, with a particular emphasis on the search for exotic
states. The focus of this paper is on signals of the lightest hybrid candidate
$\pi_1(1600)$ with spin-exotic quantum numbers $J^{PC}=1^{-+}$ in several decay
channels such as $\pi^-\pi^+\pi^-$, $\eta^{(\prime)}\pi^-$, $\omega\pi^-\pi^0$,
$\pi^-\pi^+\pi^-\eta$, and $K_S K_S \pi$. In addition, we highlight new results
for the $K^-\pi^+\pi^-$ final state, which indicate a supernumerary state with
respect to the constituent quark model with $J^{P}=0^-$. | David Spülbeck | 2023-10-31T11:28:11Z | http://arxiv.org/abs/2310.20370v2 | # Exotic Meson candidates in COMPASS data
###### Abstract
One of the prime goals of the COMPASS experiment at CERN is the study of the light meson spectrum, with a particular emphasis on the search for exotic states. The focus of this paper is on signals of the lightest hybrid candidate \(\pi_{1}(1600)\) with spin-exotic quantum numbers \(J^{PC}=1^{-+}\) in several decay channels such as \(\pi^{-}\pi^{+}\pi^{-}\), \(\eta^{(\prime)}\pi^{-}\), \(\omega\pi^{-}\pi^{0}\), \(\pi^{-}\pi^{+}\pi^{-}\eta\), and \(K_{S}K_{S}\pi\). In addition, we highlight new results for the \(K^{-}\pi^{+}\pi^{-}\) final state, which indicate a supernumerary state with respect to the constituent quark model with \(J^{P}=0^{-}\).
David Spülbeck
e-mail: [email protected]
## 1 Introduction
The constituent quark model (QM) describes a meson as a \(q\bar{q}^{\prime}\)-pair, where only the two quarks contribute to the total quantum numbers \(J^{PC}\). However, QCD predicts the existence of exotic, i.e. non-\(q\bar{q}^{\prime}\), mesons in the form of multiquark systems, glueballs or hybrids. The latter contain an excited gluonic field, which contributes to the total quantum numbers of the system. There are two sufficient signatures for the observation of exotic mesons: (i) its quantum numbers are forbidden within the QM, e.g. \(0^{--}\), \((odd)^{-+}\) and \((even)^{+-}\) (spin-exotic), or (ii) more states than predicted by the QM are observed (supernumerary). The broad and overlapping states in the light-quark sector require large data sets and state-of-the-art partial-wave analyses using large wave sets in order to identify the quantum numbers of the contributing states.
## 2 Meson spectroscopy at COMPASS
The COMPASS experiment is dedicated to the investigation of the structure and dynamics of light hadrons [1]. In the scope of its program for light meson spectroscopy, the world's largest data sample of diffractive dissociation reactions has been recorded using a negative hadron beam at \(190\,\mathrm{GeV}/c\), consisting mainly of \(\pi^{-}\) (\(96.8\%\)) and \(K^{-}\) (\(2.4\%\)), impinging on a liquid hydrogen target [2]. This allows for precision measurements of established resonances as well as the search for new states. Isovector resonances of the \(a_{J}\) and \(\pi_{J}\) families in the unflavoured sector and \(K_{J}^{(*)}\) states in the strange sector can be accessed.
To disentangle the underlying resonances for a selected final state, a partial-wave analysis (PWA) is performed, which can be separated into two analysis stages [3]. In the first stage, the partial-wave decomposition (PWD), the data is grouped into bins of \(m_{X}\), the invariant mass, and \(t^{\prime}\), the reduced four-momentum transfer to the target. For each bin the strength
and relative phase of each partial wave are determined using an extended log-likelihood fit. In order to do so, the full amplitude per partial wave is separated into the decay amplitude and a transition amplitude. The decay amplitude is calculated using the isobar model, in which the decay of a resonance \(X\) is described via a sequence of two-body decays, and the transition amplitudes are the fit parameters. In the second analysis stage, the resonance-model fit (RMF), the transition amplitudes are parameterized by the sum of a resonant and a non-resonant component. The former is usually approximated by either the sum of relativistic Breit-Wigner amplitudes or the K-matrix approach and depends only on the invariant mass, whereas the latter takes the background dynamics in \(t^{\prime}\) into account. A phenomenological parametrization is used for the non-resonant component.
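As a schematic illustration of the resonance-model fit, the snippet below (ours, not COMPASS analysis code) models one transition amplitude as a fixed-width relativistic Breit-Wigner plus a coherent non-resonant term and evaluates the resulting intensity; the mass and width are set near the \(\pi_{1}(1600)\) values quoted later, and real RMFs use mass-dependent widths, production factors, and the full \(t^{\prime}\) dependence.

```python
import numpy as np

def breit_wigner(m, m0, gamma0):
    """Fixed-width relativistic Breit-Wigner amplitude (schematic)."""
    return (m0 * gamma0) / (m0**2 - m**2 - 1j * m0 * gamma0)

def intensity(m, m0=1.600, gamma0=0.590, c_res=1.0, c_bkg=0.3, phi=0.8, slope=2.0):
    """|resonance + coherent non-resonant background|^2 for one partial wave."""
    resonant = c_res * breit_wigner(m, m0, gamma0)
    nonres = c_bkg * np.exp(1j * phi) * np.exp(-slope * (m - 1.0))  # ad-hoc background shape
    return np.abs(resonant + nonres) ** 2

masses = np.linspace(1.0, 2.5, 6)  # GeV/c^2
print([round(float(intensity(m)), 3) for m in masses])
```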
### Exotics in the light unflavoured sector
Several models predict the lightest hybrid mesons to have quantum numbers \((0,\mathbf{1},2)^{-+}\) or \(1^{--}\) [4], which includes one spin-exotic configuration known as the \(\pi_{1}\). Since signals in this \(1^{-+}\) sector have been observed in several channels and experiments, recent lattice QCD simulations focused on the decay channels of the \(\pi_{1}\). The result was a dominant branching to \(b_{1}(1235)\pi\) and comparatively suppressed branchings to \(f_{1}(1285)\pi\), \(\rho\pi\), \(\eta^{(\prime)}\pi\), \(f_{1}(1420)\pi\) and \(K^{*}\bar{K}\). With COMPASS data, the \(\pi_{1}\) and its decays via these channels are currently being studied.
\(\boldsymbol{\rho(770)\pi}\) - The flagship is the \(\pi^{-}\pi^{+}\pi^{-}\) final state. Based on 46 M selected events, the full PWA has been performed in 11 \(t^{\prime}\)-bins taking 88 waves into account [6]. The analysis of the \(\pi^{-}\pi^{+}\pi^{-}\) COMPASS data allowed for three important findings regarding the \(\pi_{1}(1600)\)-signal in the \(\rho(770)\pi\)-channel. Firstly, by covering a wide \(t^{\prime}\)-range and using a much larger wave set, COMPASS could explain apparent contradictions between previous experiments as analysis artefacts [7]. Secondly, by splitting the data into 11 \(t^{\prime}\)-bins, a separation of resonant and non-resonant components in this particular wave was possible, which is necessary for a reliable extraction of the resonance parameters. From a Breit-Wigner model, a mass of \(m_{\pi_{1}}=1600^{+110}_{-60}\) MeV/\(c^{2}\) (sys) and a width of \(\Gamma_{\pi_{1}}=590^{+100}_{-230}\) MeV/\(c^{2}\) (sys) were extracted. The result of the RMF for the lowest and highest \(t^{\prime}\)-bin is shown in Fig. 1 (left and middle, respectively) for spin projection quantum numbers \(M^{e}=1^{+}\) [6]. Thirdly, a freed-isobar PWA was performed for this spin-exotic wave by removing the isobar (\(\rho\)) line-shape from the decay amplitude and binning the isobar mass as well. For each \(m_{2\pi}\)-bin a complex fit parameter was
Figure 1: PWA result of the \(1^{-+}1^{+}\rho(770)\pi^{-}P\)-wave: Main analysis and RMF in the lowest (left) and highest (middle) \(t^{\prime}\)-bins [6], Argand diagram (right) showing an exemplary result of the freed-isobar (\(\rho\)) PWA [7].
introduced accounting for the dynamics of the isobar. The result for one (\(m_{3\pi}\),\(t^{\prime}\))-bin is shown in form of the Argand diagram in Fig. 1 (right) where three values of \(m_{2\pi}\) are given in \(\mathrm{GeV}/c^{2}\). It shows the clear signature for the decay via the \(\rho(770)\pi\) channel and by that the validity of the isobar model is demonstrated.
\(\boldsymbol{\eta^{(\prime)}\pi}\) - These two channels have been analysed by other experiments before with a surprising result: the presence of two spin-exotic states with rather close masses: the \(\pi_{1}(1400)\), observed only in the \(\eta\pi^{-}\)-channel, and the \(\pi_{1}(1600)\), observed only in the \(\eta^{\prime}\pi^{-}\)-channel. At COMPASS these two channels are measured via the decays \(\eta^{(\prime)}\to\pi^{-}\pi^{+}\pi^{0}(\eta)\) and \(\pi^{0}(\eta)\to\gamma\gamma\). The PWD was performed on 115 k (\(\eta\pi^{-}\)) and 40 k (\(\eta^{\prime}\pi^{-}\)) events without any \(t^{\prime}\)-binning [8]. The RMF of the COMPASS data, performed in [9] as a coupled channel fit using the K-Matrix approach, requires only one pole to describe the data. An analysis of Crystal Barrel data confirmed this finding [10]. The resulting resonance parameters (\(m_{\pi_{1}}=1564\pm 24\pm 86\) MeV\(/c^{2}\) and \(\Gamma_{\pi_{1}}=492\pm 54\pm 102\) MeV\(/c^{2}\)[9]) are compatible with the ones of the \(\pi_{1}(1600)\) extracted by COMPASS for the \(\pi^{-}\pi^{+}\pi^{-}\) final state. Currently the analysis is redone including the full data set, which doubles the number of exclusive events and allows us to take into account the \(t^{\prime}\)-dynamics. In addition, we will extend the mass range to higher masses, allowing us to apply constraints from Regge theory [11] in the analysis.
\(\boldsymbol{b_{1}(1235)\pi}\) - Based on much smaller data sets, previous experiments observed a clear signal in the spin-exotic sector for this channel. In [12] even a second state at higher masses (\(\sim\)2.0 GeV\(/c^{2}\)) was suggested which still needs confirmation. At COMPASS this channel is studied by performing the PWA on the \(\omega\pi^{-}\pi^{0}\) system. Experimentally the \(\omega\) is detected via the decay into \(\pi^{+}\pi^{-}\pi^{0}\). Based on 720 k exclusive events, which are grouped into four \(t^{\prime}\)-bins, the PWD has been performed resulting in clear signals in the spin-exotic sector and in the expected mass region of the \(\pi_{1}(1600)\). For the highest \(t^{\prime}\)-bin the \(b_{1}\pi\) S-wave intensity and relative phase with respect to the \(\rho\omega\) D-wave are shown in Fig. 2 (left) and (middle) for the isobar decay \(b_{1}\to\omega\pi\) via S-wave. For the same isobar but decay via D-wave, the \(b_{1}\pi\) S-wave intensity is shown in Fig. 2 (right). A RMF is currently being performed in order to extract resonance parameters.
\(\boldsymbol{f_{1}(1285)\pi^{-}\&K^{*}\bar{K}}\) - The \(f_{1}(1285)\pi^{-}\) channel is analysed at COMPASS via a four-body PWA of the \(\pi^{-}\pi^{+}\pi^{-}\eta\) system based on 620 k events. For the events with at least one \(\pi^{-}\pi^{+}\eta\) combination inside the \(f_{1}(1285)\)-range, the invariant mass is plotted in Fig. 3 showing a broad
and peaking structure in the resonant region. The \(K^{*}\bar{K}\) channel is analysed by performing a PWA of the \(K_{S}\,K_{S}\,\pi^{-}\) system where the two \(K_{S}\) are experimentally detected via the decay into \(\pi^{+}\pi^{-}\). We have selected 240 k exclusive events. The invariant mass distribution is shown in Fig. 5. Conclusions on the presence of the \(\pi_{1}(1600)\) shall be drawn based on the results of the PWA, which is currently work in progress for both systems.
### Exotics in the strange sector
Using the \(K^{-}\) component of the beam, the light strange sector can be accessed as well. Our new analysis of the \(K^{-}\pi^{+}\pi^{-}\) final state identifies eight resonance-like signals. The most interesting one is compatible with the \(K(1630)\), which is not established in the PDG, and has \(J^{P}=0^{-}\). In this sector, only two excited states are predicted by the QM below \(2\,\mathrm{GeV}/c^{2}\), but three states are observed experimentally. The lightest \(K(1460)\) and the heaviest \(K(1830)\) can be assigned to ordinary \(q\bar{q}^{\prime}\) states, which makes the \(K(1630)\) a supernumerary candidate. In order to achieve a stable fit for the \(0^{-}\rho(770)KP\)-wave, which is the only robust one in this sector, the parameters of the well-known \(K(1460)\) were fixed to the PDG values. In a systematic study, the third resonance was removed from the RMF, resulting in a worse \(\chi^{2}_{\mathrm{red.}}\). Translating this into a significance yields a value of 8.3\(\sigma\) for the presence of three states. The intensity distribution as well as the result of the RMF is shown in Fig. 5 for the \(0^{-}\rho(770)KP\)-wave in the lowest \(t^{\prime}\)-bin.
## 3 Conclusions
Having recorded the world's largest data set of diffractive dissociation reactions with a \(\pi^{-}\) beam, COMPASS has entered the era of precision spectroscopy of light mesons. Using all recorded data allows us to employ large wave sets and to perform a binning in both the invariant mass and the squared momentum transfer. We report on new investigations of the spin-exotic \(\pi_{1}(1600)\) in different decay channels. The decay branchings will be crucial for the model interpretation of this state. Using data taken with a \(K^{-}\) beam, we perform a first PWA of the strange meson spectrum in COMPASS and find evidence for a supernumerary state with \(0^{-}\) quantum numbers.
2309.13801 | A Formalized Extension of the Substitution Lemma in Coq | The substitution lemma is a renowned theorem within the realm of
lambda-calculus theory and concerns the interactional behaviour of the
metasubstitution operation. In this work, we augment the lambda-calculus's
grammar with an uninterpreted explicit substitution operator, which allows the
use of our framework for different calculi with explicit substitutions. Our
primary contribution lies in verifying that, despite these modifications, the
substitution lemma continues to remain valid. This confirmation was achieved
using the Coq proof assistant. Our formalization methodology employs a nominal
approach, which provides a direct implementation of the alpha-equivalence
concept. The strategy involved in variable renaming within the proofs presents
a challenge, especially in exploring the implications of our
extension to the grammar of the lambda-calculus. | Maria J. D. Lima, Flávio L. C. de Moura | 2023-09-25T01:15:39Z | http://arxiv.org/abs/2309.13801v1 | # A Formalized Extension of the Substitution Lemma in Coq
###### Abstract
The substitution lemma is a renowned theorem within the realm of \(\lambda\)-calculus theory and concerns the interactional behaviour of the metasubstitution operation. In this work, we augment the \(\lambda\)-calculus's grammar with an uninterpreted explicit substitution operator, which allows the use of our framework for different calculi with explicit substitutions. Our primary contribution lies in verifying that, despite these modifications, the substitution lemma continues to remain valid. This confirmation was achieved using the Coq proof assistant. Our formalization methodology employs a nominal approach, which provides a direct implementation of the \(\alpha\)-equivalence concept. The strategy involved in variable renaming within the proofs presents a challenge, especially in exploring the implications of our extension to the grammar of the \(\lambda\)-calculus.
## 1 Introduction
In this work, we present a formalization of the substitution lemma [6] in a general framework that extends the \(\lambda\)-calculus with an explicit substitution operator using the Coq proof assistant [25]. The source code is publicly available at
[https://flaviomoura.info/files/msubst.v](https://flaviomoura.info/files/msubst.v)
The substitution lemma is an important result concerning the composition of the substitution operation, and is usually presented as follows in the context of the \(\lambda\)-calculus:
Let \(t,u\) and \(v\) be \(\lambda\)-terms, \(x\neq y\) and \(x\notin FV(v)\), where \(FV(v)\) is the set of free variables of \(v\).
Then \(\{y:=v\}\{x:=u\}t=\{x:=\{y:=v\}u\}\{y:=v\}t\).
This is a well-known result already formalized in the context of the \(\lambda\)-calculus [8]. Nevertheless, in the context of \(\lambda\)-calculi with explicit substitutions its formalization is not trivial due to the interaction between the metasubstitution and the explicit substitution operator. Our formalization is done in a nominal setting that uses the MetaLib1 package of Coq, but no particular explicit substitution calculus is taken into account because the expected interaction between the metasubstitution operation and the explicit substitution constructor is the same regardless of the calculus. The formalization was done with Coq (platform) version 8.15.2, which already comes with the Metalib package. The novel contributions of this work are twofold:
Footnote 1: [https://github.com/plclub/metalib](https://github.com/plclub/metalib)
1. The formalization is modular in the sense that no particular calculus with explicit substitutions is taken into account. Therefore, we believe that this formalization could be seen as a generic framework for proving properties of these calculi that use the substitution lemma in the nominal setting [17, 21, 22];
2. A solution to a circularity problem in the proofs is given. It adds an axiom to the formalization that allows a rewrite step inside a let expression. Such a rewrite step is problematic and does not seem to have a trivial solution.
## 2 A syntactic extension of the \(\lambda\)-calculus
In this section, we present the framework of the formalization, which is based on a nominal approach [13] where variables use names. In the nominal setting, variables are represented by atoms that are structureless entities with a decidable equality:
Parameter eq_dec : forall x y : atom, {x = y} + {x <> y}.
therefore different names mean different atoms and different variables. The nominal approach is close to the usual paper and pencil notation used in \(\lambda\)-calculus, whose grammar of terms is given by:
\[t::=x\mid\lambda_{x}.t\mid t \tag{1}\]
where \(x\) represents a variable which is taken from an enumerable set, \(\lambda_{x}.t\) is an abstraction, and \(t\)\(t\) is an application. The abstraction is the only binding operator: in the expression \(\lambda_{x}.t\), \(x\) binds in \(t\), called the scope of the abstraction. This means that all free occurrence of \(x\) in \(t\) is bound in \(\lambda_{x}.t\). A variable that is not in the scope of an abstraction is free. A variable in a term is either bound or free, but note that a varible can occur both bound and free in a term, as in \((\lambda_{y}.y)\)\(y\).
The main rule of the \(\lambda\)-calculus, named \(\beta\)-reduction, is given by:
\[(\lambda_{x}.t)\ u\rightarrow_{\beta}\{x:=u\}t \tag{2}\]
where \(\{x:=u\}t\) represents the result of substituting all free occurrences of variable \(x\) in \(t\) with \(u\) in such a way that renaming of bound variable may be done in order to avoid the variable capture of free variables. We call \(t\) the body of the metasubstitution, and \(u\) its argument. In other words, \(\{x:=u\}t\) is a metanotation for a capture free substitution. For instance, the \(\lambda\)-term \((\lambda_{x}\lambda_{y}.x\ y)\)\(y\) has both bound and free occurrences of the variable \(y\), and in order to \(\beta\)-reduce it, one has to replace (or substitute) the free variable \(y\) for all free occurrences of the variable \(x\) in the term \((\lambda_{y}.x\ y)\). But a straight substitution will capture the free variable \(y\), _i.e._ this means that the free occurrence of \(y\) before the \(\beta\)-reduction will become bound after the \(\beta\)-reduction step. A renaming of bound variables may be done to avoid such a capture, so in this example, one can take an \(\alpha\)-equivalent2 term, say \((\lambda_{z}.x\ z)\), and perform the \(\beta\)-step correctly as \((\lambda_{x}\lambda_{y}.x\ y)\ y\rightarrow_{\beta}\lambda_{z}.y\ z\). Renaming of variables in the nominal setting is done via a name-swapping, which is formally defined as follows:
Footnote 2: A formal definition of this notion will be given later in this section.
\[(x\ y)\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
In the previous example, one could apply a swap to avoid the variable capture in a way that, a swap is applied to the body of the abstraction before applying the metasubstitution to it: \((\lambda_{x}\lambda_{y}.x\ y)\ y\rightarrow_{\beta}\{x:=y\}((y\ z)(\lambda_{y}.x\ y))=\{x:=y\}( \lambda_{z}.x\ z)=\lambda_{z}.y\ z\). Could we have used a variable substitution instead of a swapping in the previous example? Absolutely. We could have done the reduction as \((\lambda_{x}\lambda_{y}.x\ y)\ y\rightarrow_{\beta}\{x:=y\}(\{y:=z\}(\lambda_{y}. x\ y))=\{x:=y\}(\lambda_{z}.x\ z)=\lambda_{z}.y\ z\), but as we will shortly see, variable substitution is not stable modulo \(\alpha\)-equivalence, while the swapping is, thereby rendering it a more fitting choice when operating with \(\alpha\)-classes.
In what follows, we will adopt a mixed-notation approach, intertwining metanotation with the equivalent Coq notation. This strategy aids in elucidating the proof steps of the upcoming lemmas, enabling a clearer and more detailed comprehension of each stage in the argumentation. The corresponding Coq code for the swapping of variables, named _vswap_, is defined as follows:
Definition_vswap_ (\(x\):_atom_) (\(y\):_atom_) (\(z\):_atom_) := if (\(z\) == \(x\)) then \(y\) else if (\(z\) == \(y\)) then \(x\) else \(z\).
therefore, the swap \((\!(x\ y)\!)z\) is written in Coq as _vswap_\(x\ y\ z\). As a short example to acquaint ourselves with the Coq notation, let us show how we will write the proofs:
Lemma_vswap_id_: \(\forall\ x\ y\), _vswap_\(x\ x\ y=y\).
**Proof.** The proof is by case analysis, and it is straightforward in both cases, when \(x=y\) and \(x\neq y\). \(\Box\)
### An explicit substitution operator
The extension of the swap operation to terms require an additional comment because we will not work with the grammar (1), but rather, we will extend it with an explicit substitution operator:
\[t::=x\ |\ \lambda_{x}.t\ |\ t\ |\ [x:=u]t \tag{4}\]
where \([x:=u]t\) represents a term with an operator that will be evaluated with specific rules of a substitution calculus. The intended meaning of the explicit substitution is that it will simulate the metasubstitution. This formalization aims to be a generic framework applicable to any calculi with explicit substitutions using a named notation for variables. Therefore, we will not specify rules about how one can simulate the metasubstitution, but it is important to be aware that this is not a trivial task as one can easily lose important properties of the original \(\lambda\)-calculus [19, 15].
Calculi with explicit substitutions are formalisms that deconstruct the metasubstitution operation into finer-grained steps, thereby functioning as an intermediary between the \(\lambda\)-calculus and its practical implementations. In other words, these calculi shed light on the execution models of higher-order languages. In fact, the development of a calculus with explicit substitutions faithful to the \(\lambda\)-calculus, in the sense of the preservation of some desired properties, was the main motivation for such a long list of calculi with explicit substitutions invented in the last decades [2, 24, 7, 11, 10, 18, 12, 9, 16].
The following inductive definition corresponds to the grammar (4), where the explicit substitution constructor, named \(n\_sub\), has a special notation. Instead of writing \(n\_sub\ t\ x\ u\), we will write \([x:=u]\ t\) similarly to (4). Accordingly, \(n\_sexp\) denotes the set of nominal \(\lambda\)-expressions equipped with an explicit substitution operator, which, for simplicity, we will refer to as just "terms".
Inductive_\(n\_sexp\) : Set :=
\(n\_var\) (\(x\):_atom_)
\(n\_abs\) (\(x\):_atom_) (\(t\):_n_sexp_)
\(n\_app\) (\(t1\):_n_sexp_) (\(t2\):_n_sexp_)
\(n\_sub\) (\(t1\):_n_sexp_) (\(x\):_atom_) (\(t2\):_n_sexp_).
The _size_ of a term, also written as \(|t|\), and the set \(\mathit{fv}\_nom\) of the free variables of a term are defined as usual:
Fixpoint \(\mathit{size}\)\((t:n\_sexp):nat:=\)
match \(t\) with
\(|\)\(n\_var\)\(x\)\(\Rightarrow\)\(1\)
\(|\)\(n\_abs\)\(x\)\(t\)\(\Rightarrow\)\(1\)\(+\)\(\mathit{size}\)\(t\)
\(|\)\(n\_app\)\(t1\)\(t2\)\(\Rightarrow\)\(1\)\(+\)\(\mathit{size}\)\(t1\)\(+\)\(\mathit{size}\)\(t2\)
\(|\)\(n\_sub\)\(t1\)\(x\)\(t2\)\(\Rightarrow\)\(1\)\(+\)\(\mathit{size}\)\(t1\)\(+\)\(\mathit{size}\)\(t2\)
end.
Fixpoint \(\mathit{fv}\_nom\)\((t:n\_sexp):atoms:=\)
match \(t\) with
\(|\)\(n\_var\)\(x\)\(\Rightarrow\)\(\{\{x\}\}\)
\(|\)\(n\_abs\)\(x\)\(t1\)\(\Rightarrow\)\(remove\)\(x\)\((\mathit{fv}\_nom\)\(t1)\)
\(|\)\(n\_app\)\(t1\)\(t2\)\(\Rightarrow\)\(\mathit{fv}\_nom\)\(t1\) `union` \(\mathit{fv}\_nom\)\(t2\)
\(|\)\(n\_sub\)\(t1\)\(x\)\(t2\)\(\Rightarrow\)\((remove\)\(x\)\((\mathit{fv}\_nom\)\(t1))\) `union` \(\mathit{fv}\_nom\)\(t2\)
end.
The action of a permutation on a term, written \((x\)\(y)t\), is inductively defined as in (3) with the additional case for the explicit substitution operator:
\[(x\ y)t:=\left\{\begin{array}{ll}(\!(x\ y)\!)v,&\text{if }t\text{ is the variable }v;\\ \lambda_{(\!(x\ y)\!)z}.(x\ y)t_{1},&\text{if }t=\lambda_{z}.t_{1};\\ (x\ y)t_{1}\ (x\ y)t_{2},&\text{if }t=t_{1}\ t_{2};\\ [(\!(x\ y)\!)z:=(x\ y)t_{2}](x\ y)t_{1},&\text{if }t=[z:=t_{2}]t_{1}.\end{array}\right.\]
The corresponding Coq definition is given by the following recursive function:
Fixpoint \(\mathit{swap}\)\((x\):\(atom)\)\((y\):\(atom)\)\((t\):\(n\_sexp):=\)
match \(t\) with
\(|\)\(n\_var\)\(z\)\(\Rightarrow\)\(n\_var\)\((\mathit{vswap}\)\(x\)\(y\)\(z)\)
\(|\)\(n\_abs\)\(z\)\(t1\)\(\Rightarrow\)\(n\_abs\)\((\mathit{vswap}\)\(x\)\(y\)\(z)\)\((\mathit{swap}\)\(x\)\(y\)\(t1)\)
\(|\)\(n\_app\)\(t1\)\(t2\)\(\Rightarrow\)\(n\_app\)\((\mathit{swap}\)\(x\)\(y\)\(t1)\)\((\mathit{swap}\)\(x\)\(y\)\(t2)\)
\(|\)\(n\_sub\)\(t1\)\(z\)\(t2\)\(\Rightarrow\)\(n\_sub\)\((\mathit{swap}\)\(x\)\(y\)\(t1)\)\((\mathit{vswap}\)\(x\)\(y\)\(z)\)\((\mathit{swap}\)\(x\)\(y\)\(t2)\)
end.
The _swap_ function has many interesting properties, but we will focus on the ones that are more relevant to the proofs related to the substitution lemma. Nevertheless, all lemmas can be found in the source code of the formalization3. The next lemmas are simple properties that are all proved by induction on the structure of term \(t\):
Footnote 3: [https://flaviumoura.info/files/msubst.v](https://flaviumoura.info/files/msubst.v)
Lemma \(\mathit{swap\_neqq}\): \(\forall\)\(x\)\(y\)\(z\)\(w\), \(z\neq w\rightarrow\mathit{vswap}\ x\ y\ z\neq\mathit{vswap}\ x\ y\ w\).
Lemma \(\mathit{swap}\_size\_eq:\)\(\forall\)\(x\)\(y\)\(t\), \(\mathit{size}\)\((\mathit{swap}\)\(x\)\(y\)\(t)\) = \(\mathit{size}\)\(t\).
Lemma \(\mathit{swap}\_symmetric:\)\(\forall\)\(t\)\(x\)\(y\), \(\mathit{swap}\)\(x\)\(y\)\(t\) = \(\mathit{swap}\)\(y\)\(x\)\(t\).
Lemma \(\mathit{swap}\_involutive:\)\(\forall\)\(t\)\(x\)\(y\), \(\mathit{swap}\)\(x\)\(y\)\((\mathit{swap}\)\(x\)\(y\)\(t)\) = \(t\).
Lemma_shuffle_swap_: \(\forall\ w\ y\ z\ t\), \(w\neq z\to y\neq z\to\) (_swap_\(w\ y\ (\)_swap_\(y\ z\ t)_) = (_swap_\(w\ z\ (\)_swap_\(w\ y\ t)_)).
Lemma_swap_equivariance_: \(\forall\ t\ x\ y\ z\ w\), _swap_\(x\ y\ (\)_swap_\(z\ w\ t)\) = _swap_ (_vswap_\(x\ y\ z\)) (_vswap_\(x\ y\ w\)) (_swap_\(x\ y\ t\)).
2. If \(t\) is an abstraction, say \(t=\lambda_{z}.t_{1}\), then we have by induction hypothesis that if \(x^{\prime}\notin(x\ y)t_{2}\) then \((\!(x_{0}\ y_{0})\!)x^{\prime}\notin(x_{0}\ y_{0})(x\ y)t_{2}\) for any term \(t_{2}\) with the same size as \(t_{1}\), and any variables \(x,y,x_{0}\) and \(y_{0}\). At this point it is important to notice that a structural induction would generate an induction hypothesis with \(t_{1}\) only, which is not strong enough to prove the goal \((\!(x\ y)\!)x^{\prime}\notin fv\_nom((x\ y)\lambda_{z}.t_{1})\) that has \((x\ y)t_{1}\) (and not \(t_{1}\) alone!) after the propagation of the swap. In addition, we have by hypothesis that \(x^{\prime}\notin fv\_nom(t_{1})\backslash\{z\}\). This means that either \(x^{\prime}=z\) or \(x^{\prime}\notin fv\_nom(t_{1})\), and there are two subcases: 1. If \(x^{\prime}=z\) then the goal is \((\!(x\ y)\!)z\notin fv\_nom((x\ y)\lambda_{z}.t_{1})\Leftrightarrow(\!(x\ y)\!)z\notin fv\_nom(\lambda_{(\!(x\ y)\!)z}.(x\ y)t_{1})\Leftrightarrow(\!(x\ y)\!)z\notin fv\_nom((x\ y)t_{1})\backslash\{(\!(x\ y)\!)z\}\), and we are done by lemma \(notin\_remove\_3\).4 Footnote 4: This is a lemma from the Metalib library and it states that forall (x y : atom) (s : atoms), x = y -> y 'notin' remove x s. 2. Otherwise, \(x^{\prime}\notin fv\_nom(t_{1})\), and we conclude using the induction hypothesis taking \(x_{0}=x\), \(y_{0}=y\) and the universally quantified variables \(x\) and \(y\) of the internal swap as the same variable (it does not matter which one).
3. The application case is straightforward from the induction hypothesis.
4. In the case of the explicit substitution, _i.e._ when \(t=[z:=t_{2}]t_{1}\), we have to prove that \((\!(x\ y)\!)x^{\prime}\notin fv\_nom((x\ y)([z:=t_{2}]t_{1}))\). We then propagate the swap over the explicit substitution operator and, by the definition of \(fv\_nom\), we have to prove that both \((\!(x\ y)\!)x^{\prime}\notin(fv\_nom((x\ y)t_{1}))\backslash\{(\!(x\ y)\!)z\}\) and \((\!(x\ y)\!)x^{\prime}\notin fv\_nom((x\ y)t_{2})\). 1. In the former case, the hypothesis \(x^{\prime}\notin fv\_nom(t_{1})\backslash\{z\}\) generates two subcases, either \(x^{\prime}=z\) or \(x^{\prime}\notin fv\_nom(t_{1})\), and we conclude with the same strategy as in the abstraction case. 2. The latter case is straightforward by the induction hypothesis.
The other direction is also true, but we skip the proof that is also by induction on the size of term \(t\):
Lemma _notin_fv_nom_remove_swap: \(\forall\ t\ x^{\prime}\ x\ y\), _vswap_\(x\ y\ x^{\prime}\) '_notin_' _fv_nom_ (_swap_\(x\ y\ t\)) \(\to x^{\prime}\) '_notin_' _fv_nom_\(t\).
### \(\alpha\)-equivalence
As usual in the standard presentations of the \(\lambda\)-calculus, we work with terms modulo \(\alpha\)-equivalence. This means that \(\lambda\)-terms are identified up to renaming of bound variables. For instance, all terms \(\lambda_{x}.x\), \(\lambda_{y}.y\) and \(\lambda_{\varepsilon}.z\) are seen as the same term which corresponds to the identity function. Formally, the notion of \(\alpha\)-equivalence is defined by the following inference rules:
\[\begin{array}{c}\dfrac{}{x=_{\alpha}x}\ (\mathit{aeq\_var})\qquad\dfrac{t_{1}=_{\alpha}t_{2}}{\lambda_{x}.t_{1}=_{\alpha}\lambda_{x}.t_{2}}\ (\mathit{aeq\_abs\_same})\qquad\dfrac{x\neq y\quad x\notin fv(t_{2})\quad t_{1}=_{\alpha}(y\ x)t_{2}}{\lambda_{x}.t_{1}=_{\alpha}\lambda_{y}.t_{2}}\ (\mathit{aeq\_abs\_diff})\\[2ex]\dfrac{t_{1}=_{\alpha}t_{1}^{\prime}\quad t_{2}=_{\alpha}t_{2}^{\prime}}{t_{1}\ t_{2}=_{\alpha}t_{1}^{\prime}\ t_{2}^{\prime}}\ (\mathit{aeq\_app})\qquad\dfrac{t_{1}=_{\alpha}t_{1}^{\prime}\quad t_{2}=_{\alpha}t_{2}^{\prime}}{[x:=t_{2}]t_{1}=_{\alpha}[x:=t_{2}^{\prime}]t_{1}^{\prime}}\ (\mathit{aeq\_sub\_same})\\[2ex]\dfrac{t_{2}=_{\alpha}t_{2}^{\prime}\quad x\neq y\quad x\notin fv(t_{1}^{\prime})\quad t_{1}=_{\alpha}(y\ x)t_{1}^{\prime}}{[x:=t_{2}]t_{1}=_{\alpha}[y:=t_{2}^{\prime}]t_{1}^{\prime}}\ (\mathit{aeq\_sub\_diff})\end{array}\]
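As a small illustration of these rules (this worked instance is ours, not part of the Coq development), the identification \(\lambda_{x}.x=_{\alpha}\lambda_{y}.y\) mentioned above is obtained with one application of \(\mathit{aeq\_abs\_diff}\):

\[\dfrac{x\neq y\qquad x\notin fv(y)=\{y\}\qquad x=_{\alpha}(y\ x)y}{\lambda_{x}.x=_{\alpha}\lambda_{y}.y}\ (\mathit{aeq\_abs\_diff})\]

where the last premise holds by \(\mathit{aeq\_var}\) because \((y\ x)y=x\).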
The following lemma relates composed swaps with \(\alpha\)-equivalence and will be used below: if \(z\notin fv\_nom(t)\) and \(x\notin fv\_nom(t)\) then \((z\,x)(x\,y)t=_{\alpha}(z\,y)t\).
Initially, observe the similarity of the left hand side (LHS) of the \(\alpha\)-equation with the lemma _shuffle_swap_:
\(\forall w\ y\ z\ t,w\neq z\to y\neq z\rightarrow(w\ y)((y\ z)t)=(w\ z)((w\ y)t)\)
In order to use it, we need to have that both \(z\neq y\) and \(x\neq y\). We start by comparing \(z\) and \(y\):
1. If \(z=y\) then the right hand side (RHS) reduces to \(t\) because the swap is trivial, and the LHS also reduces to \(t\) since swap is involutive.
2. When \(z\neq y\) we proceed by comparing \(x\) and \(y\): 1. If \(x=y\) then both sides of the \(\alpha\)-equation reduce to \((z\,y)t\), and we are done. 2. Finally, when \(x\neq y\), we can apply the lemma _shuffle_swap_, and use lemma _aeq_swap_ to reduce the current goal to \((z\,x)t=_{\alpha}t\), and we conclude by lemma _swap_reduction_ since neither \(z\) nor \(x\) is in the set of free variables of the term \(t\). \(\Box\)
## 3 The metasubstitution operation of the \(\lambda\)-calculus
As presented in Section 2, the main operation of the \(\lambda\)-calculus is the \(\beta\)-reduction (2) that expresses how to evaluate a function applied to an argument. The \(\beta\)-contractum \(\{x:=u\}t\) represents a capture-free substitution in the sense that no free variable becomes bound by the application of the metasubstitution. This operation is at the meta level because it is outside the grammar of the \(\lambda\)-calculus (and hence its name). In [6], Barendregt defines it as follows:
\[\{x:=u\}t=\left\{\begin{array}{ll}u,&\mbox{if $t=x$;}\\ y,&\mbox{if $t=y$ and $x\neq y$;}\\ \{x:=u\}t_{1}\ \{x:=u\}t_{2},&\mbox{if $t=t_{1}\ t_{2}$;}\\ \lambda_{y}.(\{x:=u\}t_{1}),&\mbox{if $t=\lambda_{y}.t_{1}$.}\end{array}\right.\]
where the so-called "Barendregt's variable convention" is assumed:
If \(t_{1},t_{2},\ldots,t_{n}\) occur in a certain mathematical context (e.g. definition, proof), then in these terms all bound variables are chosen to be different from the free variables.
This means that we are assuming both \(x\neq y\) and \(y\notin fv(u)\) in the case \(t=\lambda_{y}.t_{1}\). This approach is very convenient in informal proofs because it avoids having to rename bound variables. In order to formalize the capture-free substitution, _i.e._ the metasubstitution, there are different possible approaches. In our case, we perform a renaming of bound variables whenever the metasubstitution is propagated inside a binder, and there are two binders: abstractions and explicit substitutions.
Let \(t\) and \(u\) be terms, and \(x\) a variable. The result of substituting \(u\) for the free occurrences of \(x\) in \(t\), written \(\{x:=u\}t\), is defined as follows:
\[\{x:=u\}t=\left\{\begin{array}{ll}u,&\mbox{if }t=x;\\ y,&\mbox{if }t=y\ (x\neq y);\\ \{x:=u\}t_{1}\ \{x:=u\}t_{2},&\mbox{if }t=t_{1}\ t_{2};\\ \lambda_{x}.t_{1},&\mbox{if }t=\lambda_{x}.t_{1};\\ \lambda_{z}.(\{x:=u\}((y\ z)t_{1})),&\mbox{if }t=\lambda_{y}.t_{1},\ x\neq y,\ z\notin fv(t)\cup fv(u)\cup\{x\};\\ {[x:=\{x:=u\}t_{2}]t_{1}},&\mbox{if }t=[x:=t_{2}]t_{1};\\ {[z:=\{x:=u\}t_{2}]\{x:=u\}((y\ z)t_{1})},&\mbox{if }t=[y:=t_{2}]t_{1},\ x\neq y,\ z\notin fv(t)\cup fv(u)\cup\{x\}.\end{array}\right. \tag{5}\]
and the corresponding Coq code is as follows:
(* The fresh atom z avoids capture whenever the substitution goes under a binder. *)
Function subst_rec_fun (t : n_sexp) (u : n_sexp) (x : atom) {measure size t} : n_sexp :=
match t with
| n_var y => if (x == y) then u else t
| n_abs y t1 => if (x == y) then t else let (z,_) :=
    atom_fresh (fv_nom u `union` fv_nom t `union` {{x}}) in n_abs z (subst_rec_fun (swap y z t1) u x)
| n_app t1 t2 => n_app (subst_rec_fun t1 u x) (subst_rec_fun t2 u x)
| n_sub t1 y t2 => if (x == y) then n_sub t1 y (subst_rec_fun t2 u x) else let (z,_) :=
    (* the fresh name must also avoid the free variables of t, as in definition (5) *)
    atom_fresh (fv_nom u `union` fv_nom t `union` {{x}}) in n_sub (subst_rec_fun (swap y z t1) u x) z (subst_rec_fun t2 u x) end.
Note that this function is not structurally recursive due to the swaps in the recursive calls, and that is why we need to provide the size of the term \(t\) as the measure parameter. Alternatively, a structurally recursive version of the function \(subst\_rec\_fun\) can be found in the file _nominal.v_ of the _Metalib_ library6. It takes as an explicit parameter the size of the term in which the substitution will be performed, and hence one has to deal with the size of the term in each recursive call. We write \(\{x:=u\}t\) instead of \(subst\_rec\_fun\ t\ u\ x\), and refer to it just as "metasubstitution".
Footnote 6: [https://github.com/plclub/metalib](https://github.com/plclub/metalib)
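To see the renaming in action on a tiny instance (this example is ours; the fresh name \(z\) stands for whatever atom \(\mathit{atom\_fresh}\) returns), consider substituting \(y\) for \(x\) under an abstraction that binds \(y\):

\[\{x:=y\}(\lambda_{y}.(x\ y))=\lambda_{z}.(\{x:=y\}((y\ z)(x\ y)))=\lambda_{z}.(\{x:=y\}(x\ z))=\lambda_{z}.(y\ z),\]

so the free occurrence of \(y\) being substituted is not captured, whereas a naive propagation without renaming would produce \(\lambda_{y}.(y\ y)\).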
The following lemma states that if \(x\notin fv(t)\) then \(\{x:=u\}t=_{\alpha}t\). In informal proofs the conclusion of this lemma is usually stated as a syntactic equality, i.e. \(\{x:=u\}t=t\) instead of the \(\alpha\)-equivalence, but the function \(subst\_rec\_fun\) renames bound variables whenever the metasubstitution is propagated inside an abstraction or an explicit substitution, even when the metasubstitution has no effect on the subterm to which it is propagated, as long as the variable of the metasubstitution and the one of the binder (abstraction or explicit substitution) are different from each other. That is why the syntactic equality does not hold here.
\(\mbox{\tt Lemma}\)\(m\_subst\_notin\): \(\forall\ t\ u\ x,\ x\ '\mathit{notin}\)' \(fv\_nom\)\(t\rightarrow\{x:=u\}t\) =\(a\ t\).
**Proof.** The proof is done by induction on the size of the term \(t\) using \(n\_sexp\_induction\) defined above. The interesting cases are the abstraction and the explicit substitution. We focus on the abstraction case, _i.e._ when \(t=\lambda_{y}.t_{1}\), where the goal to be proven is \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\lambda_{y}.t_{1}\). We consider two cases:
1. If \(x=y\) then the result is trivial because both the LHS and the RHS are equal to \(\lambda_{y}.t_{1}\).
2. If \(x\neq y\), we have to prove that \(\lambda_{z}.\{x:=u\}((y\ z)t_{1})=_{\alpha}\lambda_{y}.t_{1}\), where \(z\) is a fresh name not in the set \(fv\_nom(u)\cup fv\_nom(\lambda_{y}.t_{1})\cup\{x\}\). The induction hypothesis expresses the fact that every term with the same size as the body \(t_{1}\) of the abstraction satisfies the property to be proven: \(\forall t^{\prime},|t^{\prime}|=|t_{1}|\rightarrow\forall u\ x^{\prime}\ x_{0}\ y_{0},x^{\prime}\notin fv((x_{0}\ y_{0})t^{\prime})\rightarrow\{x^{\prime}:=u\}((x_{0}\ y_{0})t^{\prime})=_{\alpha}(x_{0}\ y_{0})t^{\prime}\). Therefore, according to the definition of the metasubstitution (function \(subst\_rec\_fun\)), the variable \(y\) will be renamed to \(z\), and the metasubstitution is propagated inside the abstraction resulting in the following goal:
\(\lambda_{z}.\{x:=u\}((z\,y)t_{1})=_{\alpha}\lambda_{y}.t_{1}\). Since \(z\notin fv\_nom(\lambda_{y}.t_{1})=fv\_nom(t_{1})\setminus\{y\}\), there are two cases to consider, either \(z=y\) or \(z\neq y\) (and in the latter case \(z\notin fv\_nom(t_{1})\)): 1. \(z=y\): In this case, we are done by the induction hypothesis taking \(x_{0}=y_{0}=y\), for instance. 2. \(z\neq y\): In this case, we can apply the rule \(aeq\_abs\_diff\), resulting in the goal \(\{x:=u\}((y\,z)t_{1})=_{\alpha}(y\,z)t_{1}\) which holds by the induction hypothesis, since \(|(z\,y)t_{1}|=|t_{1}|\) and \(x\notin fv\_nom((y\,z)t_{1})\) because \(x\neq z\), \(x\neq y\) and \(x\notin fv\_nom(t_{1})\).
The explicit substitution case is also interesting, _i.e._ if \(t=[y:=t_{2}]t_{1}\), but it follows a similar strategy used in the abstraction case for \(t_{1}\). For \(t_{2}\) the result follows from the induction hypothesis.
The following lemmas concern the expected behaviour of the metasubstitution when the metasubstitution's variable is equal to the abstraction's variable. Their proofs are straightforward from the definition \(subst\_rec\_fun\). The corresponding version when the metasubstitution's variable is different from the abstraction's variable will be presented later.
Lemma\(m\_subst\_abs\_eq\): \(\forall\ u\ x\ l\), \(\{x:=u\}(n\_abs\ x\ t)=n\_abs\ x\ t\).
Lemma\(m\_subst\_sub\_eq\): \(\forall\ u\ x\ l1\ t2\), \(\{x:=u\}(n\_sub\ l1\ x\ t2)=n\_sub\ t1\ x\ (\{x:=u\}t2)\).
We will now prove some stability results for the metasubstitution w.r.t. \(\alpha\)-equivalence. More precisely, we will prove that if \(t=_{\alpha}t^{\prime}\) and \(u=_{\alpha}u^{\prime}\) then \(\{x:=u\}t=_{\alpha}\{x:=u^{\prime}\}t^{\prime}\), where \(x\) is a variable and \(t,t^{\prime},u\) and \(u^{\prime}\) are terms. This proof is split into two cases: firstly, we prove that if \(u=_{\alpha}u^{\prime}\) then \(\{x:=u\}t=_{\alpha}\{x:=u^{\prime}\}t,\forall x,t,u,u^{\prime}\); secondly, we prove that if \(t=_{\alpha}t^{\prime}\) then \(\{x:=u\}t=_{\alpha}\{x:=u\}t^{\prime},\forall x,t,t^{\prime},u\). These two cases are then combined through the transitivity of the \(\alpha\)-equivalence relation. Nevertheless, this task was not straightforward. Let's follow the steps of our first trial.
Lemma \(aeq\_m\_subst\_in\_trial\): \(\forall\ t\ u\ u^{\prime}\ x\), \(u=a\ u^{\prime}\rightarrow(\{x:=u\}t)=a\ (\{x:=u^{\prime}\}t)\).
**Proof.** The proof is done by induction on the size of term \(t\), and we will focus on the abstraction case, _i.e._\(t=\lambda_{y}.t_{1}\). The goal in this case is \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u^{\prime}\}(\lambda_{y}.t_{1})\).
1. If \(x=y\) then the result is trivial by lemma \(m\_subst\_abs\_eq\).
2. If \(x\neq y\) then we need two fresh names in order to propagate the metasubstitution inside the abstractions on each side of the \(\alpha\)-equation. Let \(x_{0}\) be a fresh name not in the set \(fv\_nom(u)\cup fv\_nom(\lambda_{y}.t_{1})\cup\{x\}\), and \(x_{1}\) be a fresh name not in the set \(fv\_nom(u^{\prime})\cup fv\_nom(\lambda_{y}.t_{1})\cup\{x\}\). After propagating the metasubstitution we need to prove \(\lambda_{x_{0}}.\{x:=u\}((y\ x_{0})t_{1})=_{\alpha}\lambda_{x_{1}}.\{x:=u^{\prime}\}((y\ x_{1})t_{1})\), and we proceed by comparing \(x_{0}\) and \(x_{1}\): 1. If \(x_{0}=x_{1}\) then we are done by the induction hypothesis. 2. Otherwise, we need to apply the rule \(aeq\_abs\_diff\) and the goal is \(\{x:=u\}((y\ x_{0})t_{1})=_{\alpha}(x_{0}\ x_{1})(\{x:=u^{\prime}\}((y\ x_{1})t_{1}))\). But in order to proceed we need to know how to propagate the swap inside the metasubstitution, which is the content of the following lemma:
Lemma\(swap\_m\_subst\): \(\forall\ t\ u\ x\ y\ z\), \(swap\ y\ z\ (\{x:=u\}t)=a\ (\{(v\_swap\ y\ z\ x):=(swap\ y\ z\ u)\}(swap\ y\ z\ t))\).
**Proof.** We write the statement of the lemma in metanotation before starting the proof:
\(\forall t\ u\ x\ y\ z,(y\ z)(\{x:=u\}t)=_{\alpha}\{(y\ z)x:=(y\ z)u\}(y\ z)t\)
The proof is by induction on the size of the term \(t\), and again we will focus only on the abstraction case, _i.e._ when \(t=\lambda_{w}.t_{1}\). The goal in this case is \((y\ z)(\{x:=u\}(\lambda_{w}.t_{1}))=_{\alpha}\{(y\ z)x:=(y\ z)u\}((y\ z)\lambda_ {w}.t_{1})\), and we proceed by comparing \(x\) and \(w\).
1. If \(x=w\) the \(\alpha\)-equality is trivial.
2. If \(x\neq w\) then we need a fresh name, say \(w_{0}\), to be able to propagate the metasubstitution inside the abstraction on the LHS of the \(\alpha\)-equation. The variable \(w_{0}\) is taken such that it is not in the set \(fv\_nom(u)\cup fv\_nom(\lambda_{w}.t_{1})\cup\{x\}\), and we get the goal \(\lambda_{(y\;z)w_{0}}.(y\;z)(\{x:=u\}(w\;w_{0})t_{1})=_{\alpha}\{(\!(y\;z)\!)x :=(y\;z)u\}(\lambda_{(y\;z)\!)w_{*}}.(y\;z)t_{1})\). Now we propagate the metasubstitution over the abstraction in the RHS of the goal. Since \(x\neq w\) implies \((\!(y\;z)\!)x\neq(y\;z)w\), we need another fresh name, say \(w_{1}\), not in the set \(fv\_nom((y\;z)u)\cup fv\_nom(\lambda_{(y\;z)w_{*}}.(y\;z)t_{1})\cup\{(\!(y\;z)\!)x\}\), and after the propagation we need to prove that \(\lambda_{(\!(y\;z)\!)w_{0}}.(y\;z)(\{x:=u\}(w\;w_{0})t_{1})=_{\alpha}\lambda_{ w_{1}}.\{(\!(y\;z)\!)x:=(y\;z)u\}((w_{1}\;(\!(y\;z)\!)w)((y\;z)t_{1}))\). We consider two cases: either \(w_{1}=(\!(y\;z)\!)w_{0}\) or \(w_{1}\neq(\!(y\;z)\!)w_{0}\). In the former case, we apply the rule \(aeq\_abs\_same\) and we are done by the induction hypothesis. When \(w_{1}\neq(\!(y\;z)\!)w_{0}\), the application of the rule \(aeq\_abs\_diff\) generates the goal \[(w_{1}\;(\!(y\;z)\!)w_{0})(y\;z)(\{x:=u\}(w\;w_{0})t_{1})=_{\alpha}\{(\!(y\;z )\!)x:=(y\;z)u\}((w_{1}\;(\!(y\;z)\!)w)((y\;z)t_{1}))\] (6) We can use the induction hypothesis to propagate the swap inside the metasubstitution, and then we get an \(\alpha\)-equality with metasubstitution as main operation on both sides, whose corresponding components are \(\alpha\)-equivalent. In a more abstract way, we have to prove an \(\alpha\)-equality of the form \(\{x:=u\}t=_{\alpha}\{x:=u^{\prime}\}t^{\prime}\), where \(t=_{\alpha}t^{\prime}\) and \(u=_{\alpha}u^{\prime}\), but this is exactly what we were trying to prove in the previous lemma.
Therefore, we are in a circular problem because both \(aeq\_m\_subst\_in\_trial\) and \(swap\_m\_subst\) depend on each other to be proved!
Our solution to this problem consists in taking advantage of the fact that \(\alpha\)-equivalent terms have the same set of free variables (see lemma \(aeq\_fv\_nom\)), and noting that the external swap in the LHS of (6) was generated by the application of the rule \(aeq\_abs\_diff\) because the abstractions have different bindings. Let's go back to the proof of lemma \(aeq\_m\_subst\_in\): Lemma \(aeq\_m\_subst\_in\): \(\forall\ t\ u\ u^{\prime}\ x\), \(u\) =a \(u^{\prime}\rightarrow(\{x:=u\}t)\) =a \((\{x:=u^{\prime}\}t)\).
**Proof.** We go directly to the abstraction case. When \(t=\lambda_{y}.t_{1}\), the goal is \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u^{\prime}\}(\lambda_{y}.t_{1})\). If \(x\neq y\) then the fresh name needed for the LHS must not belong to the set \(fv\_nom(u)\cup fv\_nom(\lambda_{y}.t_{1})\cup\{x\}\), while the fresh name for the RHS must not belong to \(fv\_nom(u^{\prime})\cup fv\_nom(\lambda_{y}.t_{1})\cup\{x\}\). These sets differ only by the subsets \(fv\_nom(u)\) and \(fv\_nom(u^{\prime})\). Nevertheless, these subsets are equal because \(u\) and \(u^{\prime}\) are \(\alpha\)-equivalent (see lemma \(aeq\_fv\_nom\)). Concretely, the current goal is as follows:
(let (z, _) := atom_fresh (union (fv_nom u) (union (fv_nom (n_abs y t1)) (singleton x))) in n_abs z (subst_rec_fun (swap y z t1) u x)) =a (let (z, _) := atom_fresh (union (fv_nom u') (union (fv_nom (n_abs y t1)) (singleton x))) in n_abs z (subst_rec_fun (swap y z t1) u' x))
where the sets \(fv\_nom(u)\) and \(fv\_nom(u^{\prime})\) appear in different _let_ expressions, each one is responsible for generating one fresh name. But since these sets are equal, if one could replace \(fv\_nom(u)\) by \(fv\_nom(u^{\prime})\) (or vice-versa) then only one fresh name is generated after evaluating the _atom_fresh_ function. Nevertheless, the only way that we managed to do such replacement was by adding the following axiom:
Axiom Eq_implies_equality: forall t1 t2, t1 =a t2 -> fv_nom t1 = fv_nom t2.
This axiom is similar to lemma \(aeq\_\)_fv_nom_ where the set equality [=] was replaced by the syntactic (Leibniz) equality =. Now, we can generate just one fresh name and propagate the metasubstitution on both sides of the goal, and we are done by the induction hypothesis. The case of the explicit substitution is similar, and with this strategy we avoid both the rules \(aeq\_\)_\(abs\_\)_diff_ and \(aeq\_\)_\(sub\_\)_diff_ that introduce swappings. \(\Box\)
The next lemma, named \(aeq\_m\_subst\_out\), will benefit from the strategy used in the previous proof, but it is not straightforward.
Lemma\(aeq\_\)_m_subst_out: \(\forall\)_t t' u x, t =a t'_ \(\rightarrow\) (\(\{x:=u\}t\)) =a (\(\{x:=u\}t\)')._
**Proof.** The proof is by induction on the size of the term \(t\). Note that induction on the hypothesis \(t\) =\(a\)\(t\)' does not work due to a similar problem involving swaps that appears when structural induction on \(t\) is used. The abstraction and the explicit substitution are the interesting cases.
In the abstraction case, we need to prove that \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u\}t^{\prime}\), where \(\lambda_{y}.t_{1}=_{\alpha}t^{\prime}\) by hypothesis. Therefore, \(t^{\prime}\) must be an abstraction, and according to our definition of \(\alpha\)-equivalence there are two possible subcases:
1. In the first subcase, \(t^{\prime}=\lambda_{y}.t_{2}\), where \(t_{1}=_{\alpha}t_{2}\), and hence the current goal is \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u\}(\lambda_{y}.t_{2})\). We proceed by comparing \(x\) and \(y\): 1. If \(x=y\) then, we are done by using twice lemma \(m\_subst\_abs\_eq\). 2. When \(x\neq y\), then we need to propagate the metasubstitution on both sides of the goal. On the LHS, we need a fresh name that is not in the set \(fv(u)\cup fv(\lambda_{y}.t_{1})\cup\{x\}\), while for the RHS, the fresh name cannot belong to the set \(fv(u)\cup fv(\lambda_{y}.t_{2})\cup\{x\}\). From the hypothesis \(t_{1}=_{\alpha}t_{2}\), we know, by lemma \(aeq\_\)_-\(fv\_nom\)_, that the sets \(fv\_\)_nom\((t_{1})\) and \(fv\_\)_nom\((t_{2})\) are equal. Therefore, we can take just one fresh name, say \(z\), and propagate both metasubstitutions over abstractions with the same binding, and we conclude with the induction hypothesis.
2. In the second subcase, \(t^{\prime}=\lambda_{y_{0}}.t_{2}\), where \(t_{1}=_{\alpha}(y_{0}\ y)t_{2}\) and \(y\neq y_{0}\). The current goal is \[\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u\}(\lambda_{y_{0}}.t_{2})\] and we proceed by comparing \(x\) and \(y\): 1. If \(x=y\) then the goal simplifies to \(\lambda_{y}.t_{1}=_{\alpha}\{x:=u\}(\lambda_{y_{0}}.t_{2})\) by lemma \(m\_subst\_abs\_eq\), and we pick a fresh name, say \(w\), that is not in the set \(fv\_nom(u)\cup fv\_nom(\lambda_{y_{0}}.t_{2})\cup\{y\}\), and propagate the metasubstitution on the RHS of the goal, resulting in the new goal \(\lambda_{y}.t_{1}=_{\alpha}\lambda_{w}.\{y:=u\}((y_{0}\ w)t_{2})\). Note that the metasubstitution on the RHS has no effect on the term \((y_{0}\ w)t_{2}\) because \(y\neq y_{0}\), \(y\neq w\) and \(y\) does not occur free in \(t_{2}\), and we conclude by hypothesis. 2. If \(x\neq y\) then we proceed by comparing \(x\) and \(y_{0}\) on the RHS, and the proof, when \(x=y_{0}\), is analogous to the previous subcase. When both \(x\neq y\) and \(x\neq y_{0}\) then we need to propagate the metasubstitution on both sides of the goal \(\{x:=u\}(\lambda_{y}.t_{1})=_{\alpha}\{x:=u\}(\lambda_{y_{0}}.t_{2})\). We have that \(\lambda_{y}.t_{1}=_{\alpha}\lambda_{y_{0}}.t_{2}\) and hence the sets \(fv\_nom(\lambda_{y}.t_{1})\) and \(fv\_nom(\lambda_{y_{0}}.t_{2})\) are equal. Therefore, only one fresh name, say \(x_{0}\), that is not in the set \(fv\_nom(u)\cup fv\_nom(\lambda_{y_{0}}.t_{2})\cup\{x\}\), is enough to fulfil the conditions for propagating the metasubstitutions on both sides of the goal, and we are done by the induction hypothesis.
3. The explicit substitution operation is also interesting, but we will not comment because we are running out of space.
As a corollary, one can join the lemmas \(aeq\_m\_subst\_in\) and \(aeq\_m\_subst\_out\) as follows:
Corollary_aeq\_m\_subst\_eq_: \(\forall\ t\ t^{\prime}\ u\ u^{\prime}\ x\), \(t=a\ t^{\prime}\to u=a\ u^{\prime}\rightarrow(\{x:=u\}t)=a\ (\{x:=u^{\prime}\}t^{\prime})\).
Now, we show how to propagate a swap inside metasubstitutions using the decomposition of the metasubstitution provided by the corollary \(aeq\_m\_subst\_eq\).
Lemma_swap\(\_subst\_rec\_fun\)_: \(\forall\ x\ y\ z\ t\ u\), swap \(x\ y\ (\{z:=u\}t)=a\ (\{(vswap\ x\ y\ z):=(swap\ x\ y\ u)\}(swap\ x\ y\ t))\).
**Proof.** Firstly, we write the lemma in metanotation: \(\forall x\ y\ z\ t\ u,(x\,y)(\{z:=u\}t)=_{\alpha}\{(x\,y)z:=(x\,y)u\}((x\,y)t)\). Next, we compare \(x\) and \(y\), since the case \(x=y\) is trivial. When \(x\neq y\), the proof proceeds by induction on the size of the term \(t\). The tricky cases are the abstraction and the explicit substitution, and we comment just on the former. If \(t=\lambda_{y^{\prime}}.t_{1}\) then we must prove that \((x\ y)(\{z:=u\}(\lambda_{y^{\prime}}.t_{1}))=_{\alpha}\{(x\,y)z:=(x\,y)u\}((x\ y)(\lambda_{y^{\prime}}.t_{1}))\). Firstly, we compare the variables \(y^{\prime}\) and \(z\) according to the definition of the metasubstitution:
1. When \(y^{\prime}=z\) the metasubstitution is erased according to the definition (5) on both sides of the goal and we are done.
2. When \(y^{\prime}\neq z\) then the metasubstitutions on both sides of the goal need to be propagated inside the corresponding abstractions. In order to do so, a new name need to be created. Note that in this case, it is not possible to create a unique name for both sides because the two sets are different. In fact, in the LHS the fresh name cannot belong to the set \(fv\_nom(\lambda_{y^{\prime}}^{\prime}t_{1})\cup fv\_nom(u)\cup\{z\}\), while the name of the RHS cannot belong to the set \(fv\_nom((x\ y)\lambda_{y^{\prime}}^{\prime}t_{1})\cup fv\_nom((x\ y)u)\cup\{(x \,y)z\}\). Let \(x_{0}\) be a fresh name that is not in the set \(fv\_nom(\lambda_{y^{\prime}}^{\prime}t_{1})\cup fv\_nom(u)\cup\{z\}\), and \(x_{1}\) a fresh name that is not in the set \(fv\_nom((x\ y)\lambda_{y^{\prime}}^{\prime}t_{1})\cup fv\_nom((x\ y)u)\cup\{(x \,y)z\}\). After the propagation of the metasubstitutions, we have to prove that \(\lambda_{(x\,y)y\in 0}.((x\ y)(\{z:=u\}((y^{\prime}\ x_{0})t_{1}))=_{\alpha} \lambda_{x_{1}}.(\{(x\,y)z:=(x\,y)u\}(((x\,y)y^{\prime})\ x_{1})((x\,y)t_{1}))\). We proceed by comparing \(x_{1}\) with \((x\ y)x_{0}\). 1. If \(x_{1}=(\!(x\ y)\!)x_{0}\) then we use the induction hypothesis to propagate the swap inside the metasubstitution in the LHS, and we get the goal \(\{(x\,y)\!z:=(x\ y)u\}((x\ y)((y^{\prime}\ x_{0})t_{1}))=_{\alpha}\{(x\ y)\} \!z:=(x\,y)u\}((((x\,y)y)^{\prime}\ )\ (((x\,y)\!)x_{0}))((x\,y)t_{1}))\) that is proved by the swap equivariance lemma \(swap\_equivariance\). 2. If \(x_{1}\neq(\!(x\ y)\!)x_{0}\) then by the rule \(aeq\_abs\_diff\) we have to prove that the variable \((\!(x\ y)\!)x_{0}\) is not in the set of free variables of the term \(\{(\!(x\ y)\!)z:=(x\ y)u\}(((x\,y)y^{\prime}\ x_{1})(x\ y)t_{1})\) and that \((x\,y)(\{z:=u\}((y^{\prime}\ x_{0})t_{1}))=_{\alpha}(x_{1}\ ((x\,y)x_{0}))((\{x\,y )z:=(x\,y)u\}((((x\,y)y^{\prime}\ x_{1})((x\,y)t_{1}))\). The former condition is routine. The later condition is proved using the induction hypothesis twice to propagate the swaps inside the metasubstitutions on each side of the \(\alpha\)-equality. This swap has no effect on the variable \(z\) of the metasubstitution because \(x_{1}\) is different from \((\!(x\,y)\!)z\), and \(x_{0}\) is different from \(z\). Therefore we can apply lemma \(aeq\_m\_subst\_eq\), and each generated case is proved by routine manipulation of swaps.
\(\Box\)
The following two lemmas, together with lemmas \(m\_subst\_abs\_eq\) and \(m\_subst\_sub\_eq\), are essential in simplifying the propagation of metasubstitutions. They are presented here because they depend on lemma \(swap\_subst\_rec\_fun\).
Lemma \(m\_subst\_abs\_neq\): \(\forall\ t\ u\ x\ y\ z\), \(x\neq y\to z\) `notin` \(fv\_nom\ u\) `union` \(fv\_nom\ (n\_abs\ y\ t)\) `union` \(\{\{x\}\}\rightarrow\{x:=u\}(n\_abs\ y\ t)\) =a \(n\_abs\ z\ (\{x:=u\}(swap\ y\ z\ t))\).
Lemma \(m\_subst\_sub\_neq\): \(\forall\ t1\ t2\ u\ x\ y\ z\), \(x\neq y\to z\) `notin` \(fv\_nom\ u\) `union` \(fv\_nom\ ([y:=t2]t1)\) `union` \(\{\{x\}\}\rightarrow\{x:=u\}([y:=t2]t1)\) =a \([z:=(\{x:=u\}t2)](\{x:=u\}(swap\ y\ z\ t1))\).
In the pure \(\lambda\)-calculus, the substitution lemma is probably the first non-trivial property. In our framework, we have defined two different substitution operators, namely, the metasubstitution denoted by \(\{x:=u\}t\) and the explicit substitution, written as \([x:=u]t\). In what follows, we present the main steps of our proof of the substitution lemma for \(n\_sexp\) terms, _i.e._ for nominal terms with explicit substitutions.
Lemma \(m\_subst\_lemma\): \(\forall\ t1\ t2\ t3\ x\ y\), \(x\neq y\to x\) `notin` \((fv\_nom\ t3)\rightarrow\)
\((\{y:=t3\}(\{x:=t2\}t1))=\alpha\ (\{x:=(\{y:=t3\}t2)\}(\{y:=t3\}t1))\).
**Proof.** The proof is by induction on the size of \(t1\). The interesting cases are the abstraction and the explicit substitution. We focus on the former, _i.e._ \(t1=\lambda_{z}.t_{1}^{\prime}\), whose initial goal is
\(\{y:=t_{3}\}(\{x:=t_{2}\}(\lambda_{z}.t_{1}^{\prime}))=_{\alpha}\ \{x:=\{y:=t_{3}\}t_{2}\}(\{y:=t_{3}\}(\lambda_{z}.t_{1}^{\prime}))\)
assuming that \(x\neq y\) and \(x\notin fv\_nom(t_{3})\). The induction hypothesis generated by this case states that the lemma holds for any term of the size of \(t_{1}^{\prime}\), _i.e._ any term with the same size of the body of the abstraction. We start comparing \(z\) with \(x\) aiming to apply the definition of the metasubstitution on the LHS of the goal.
1. When \(z=x\), the subterm \(\{x:=t_{2}\}\lambda_{x}.t_{1}^{\prime}\) reduces to \(\lambda_{x}.t_{1}^{\prime}\) by lemma \(m\_subst\_abs\_eq\), and then the LHS reduces to \(\{y:=t_{3}\}\lambda_{x}.t_{1}^{\prime}\). The RHS \(\{x:=\{y:=t_{3}\}t_{2}\}(\{y:=t_{3}\}\lambda_{x}.t_{1}^{\prime})\) also reduces to it because \(x\) occurs free neither in \(\lambda_{x}.t_{1}^{\prime}\) nor in \(t_{3}\), and we are done.
2. When \(z\neq x\), we compare \(y\) with \(z\). 1. When \(y=z\), the subterm \(\{y:=t_{3}\}(\lambda_{z}.t_{1}^{\prime})\) can be simplified to \(\lambda_{z}.t_{1}^{\prime}\) by lemma \(m\_subst\_abs\_eq\). On the LHS, we propagate the internal metasubstitution over the abstraction taking a fresh name \(w\) not in the set \(fv\_nom(\lambda_{z}.t_{1}^{\prime})\cup fv\_nom(t_{3})\cup fv\_nom(t_{2})\cup\{x\}\), where the goal is \(\{z:=t_{3}\}(\lambda_{w}.(\{x:=t_{2}\}((z\ w)t_{1}^{\prime})))=_{\alpha}\ \{x:=\{z:=t_{3}\}t_{2}\}(\lambda_{z}.t_{1}^{\prime})\). We proceed by comparing \(z\) and \(w\): 1. If \(z=w\) then the current goal simplifies to \(\{w:=t_{3}\}(\lambda_{w}.(\{x:=t_{2}\}t_{1}^{\prime}))=_{\alpha}\ \{x:=\{w:=t_{3}\}t_{2}\}(\lambda_{w}.t_{1}^{\prime})\). We can propagate the metasubstitution on the RHS, and there is no need for a fresh name since the variable \(w\) fulfils the condition required by lemma \(m\_subst\_abs\_neq\). We conclude with lemmas \(aeq\_m\_subst\_in\) and \(m\_subst\_notin\). 2. If \(z\neq w\) then we can propagate the metasubstitutions on both sides of the goal taking \(w\) as the fresh name that fulfils the conditions of lemma \(m\_subst\_abs\_neq\). We proceed with \(aeq\_abs\_same\), and conclude by the induction hypothesis. 2. If \(y\neq z\) then we follow a similar strategy that avoids unnecessary generation of fresh names. In this way, we take a fresh \(w\) that is not in the set \(fv\_nom(t_{3})\cup fv\_nom(t_{2})\cup fv\_nom(\lambda_{z}.t_{1}^{\prime})\cup\{x\}\cup\{y\}\), and propagate the metasubstitution inside the abstraction, resulting in the goal \(\lambda_{w}.(\{y:=t_{3}\}(\{x:=t_{2}\}((z\ w)t_{1}^{\prime})))=_{\alpha}\lambda_{w}.(\{x:=\{y:=t_{3}\}t_{2}\}(\{y:=t_{3}\}((z\ w)t_{1}^{\prime})))\). We conclude by the induction hypothesis. \(\Box\)
## 4 Conclusion and Future work
In this work, we presented a formalization of the substitution lemma in a framework that extends the \(\lambda\)-calculus with an explicit substitution operator. Calculi with explicit substitutions are important frameworks to study properties of the \(\lambda\)-calculus and have been extensively studied in the last decades [2, 3, 4, 5, 10].
The formalization is modular in the sense that the explicit substitution operator is generic and could be instantiated with any calculi with explicit substitutions in a nominal setting. Despite the fact that our definition of metasubstitution, called \(subst\_rec\_fun\), performs a renaming with a fresh name whenever it is propagated inside a binding structure (either an abstraction or an explicit substitution in our case), we showed how to avoid unnecessary generation of fresh names that could result in a circular problem in the proofs. Several auxiliary (minor) results were not included in this document, but they are numerous and can be found in the source file of the formalization that is publicly available at [https://flaviumoura.info/files/msubst.v](https://flaviumoura.info/files/msubst.v)
As future work, we intend to get rid of the axiom \(Eq\_implies\_equality\). The natural candidate for this would be the use of generalized rewriting, _i.e._ setoid rewriting, but it is not clear whether generalized rewriting allows a rewrite step in a let expression. Another possibility is the implementation of the metasubstitution using recursors [23, 14]. In addition, we plan to integrate this formalization with another one related to the Z property7 to prove confluence of calculi with explicit substitutions [21, 22], as well as other properties in the nominal framework [17].
Footnote 7: [https://cicm-conference.org/2021/cicm.php?event=fmm&menu=general](https://cicm-conference.org/2021/cicm.php?event=fmm&menu=general)
|
2310.20416 | Linear-nonlinear duality for circuit design on quantum computing
platforms | The unitary description of beam splitters (BSs) and optical parametric
amplifiers (OPAs) in terms of the dynamical Lie groups $SU(2)$ and $SU(1,1)$
has a long history. Recently, an inherent duality has been proposed that
relates the unitaries of both optical devices. At the physical level, this
duality relates the linear nature of a lossless BS to the nonlinear Parametric
Down-Conversion (PDC) process exhibited by an OPA. Here, we argue that the
duality between BS and PDC can instead be naturally interpreted by analyzing
the geometrical properties of both Lie groups, an approach that explicitly
connects the dynamical group description of the optical devices with the
aforementioned duality. Furthermore, we show that the BS-PDC duality can be
represented through tensor network diagrams, enabling the implementation of a
PDC as a circuit on a standard quantum computing platform. Thus, it is feasible
to simulate nonlinear processes by using single-qubit unitaries that can be
implemented on currently available digital quantum processors. | William E. Salazar, Omar Calderón-Losada, John H. Reina | 2023-10-31T12:45:22Z | http://arxiv.org/abs/2310.20416v1 | # Linear-nonlinear duality for circuit design on quantum computing platforms
###### Abstract
The unitary description of beam splitters (BSs) and optical parametric amplifiers (OPAs) in terms of the dynamical Lie groups \(SU(2)\) and \(SU(1,1)\) has a long history. Recently, an inherent duality has been proposed that relates the unitaries of both optical devices. At the physical level, this duality relates the linear nature of a lossless BS to the nonlinear Parametric Down-Conversion (PDC) process exhibited by an OPA. Here, we argue that the duality between BS and PDC can instead be naturally interpreted by analyzing the geometrical properties of both Lie groups, an approach that explicitly connects the dynamical group description of the optical devices with the aforementioned duality. Furthermore, we show that the BS-PDC duality can be represented through tensor network diagrams, enabling the implementation of a PDC as a circuit on a standard quantum computing platform. Thus, it is feasible to simulate nonlinear processes by using single-qubit unitaries that can be implemented on currently available digital quantum processors.
+
Footnote †: John H. Reina: Electronic Address: [email protected]
## 1 Introduction
Quantum technologies, in particular quantum computing, have seen a remarkable surge in recent years, and their potential to revolutionize various fields, including computing, communication, and sensing, has garnered significant interest [1, 2, 3, 4, 5, 6]. Among all the different quantum-technological platforms, photonic-based ones have become crucial in the continuous advance of novel technologies, mainly due to their high fidelity, scalability, and low error rates [7, 8, 9, 10].
Two essential devices in any optical-based quantum platform are beam splitters and optical parametric amplifiers. Beam splitters create superpositions, whereas OPAs are used to create entangled photon pairs through squeezing [11, 12, 13, 14, 15]. Quantum technologies, particularly quantum computing, have extended and exploited both resources on equal footing to argue for a quantum advantage [16, 17, 18, 19, 20, 9]. However, although equally relevant, the underlying physical realizations of the two optical devices differ significantly. Lossless beam splitters are passive devices constructed using simple crystals, which conserve the total number of photons entering the modes. In contrast, the non-linear crystal used in parametric down-conversion is an active device that does not conserve the number of photons. The active nature of the latter is what distinguishes the two optical devices at the physical level.
In non-photonic quantum platforms [22, 23, 24, 25, 26, 27, 28], even if the underlying physical implementations are different, the role played by a 50:50 lossless BS can be identified with the one played by a Hadamard gate. This identification is in fact a bijection in the KLM scheme of linear optical quantum computing [29, 30], where there is a one-to-one correspondence between \(Y\)-rotation unitary gates and beam splitters of varying transmittance. Alongside this identification of a beam-splitter-like device in non-photonic quantum computing platforms, we may also ask which device, and which associated unitaries, could play the role of a parametric amplifier in non-photonic platforms.
To address this question, we explore fundamental differences between photonic and non-photonic platforms, and how they utilize these essential devices. In photonic platforms, OPAs are used in the so-called PDC non-linear process, but such non-linear interactions are much less common in non-photonic systems. Instead, in this work we provide an alternative route that dispenses with the necessity of linking non-linear processes to PDC, thus extending the notion of parametric amplification to non-photonic quantum technological platforms. We present a physically-motivated argument relating the counting statistics of both an OPA (non-linear device) and a lossless BS (linear device). Given the common use of beam splitters as rotation gates in quantum computing schemes [29, 30], we aim to extend the concept of a parametric amplifier beyond photonic-based qubit systems. Therefore, here we inquire whether such an optical non-linear process has an equivalent in the realm of practical integrated qubit systems.
The organization of this paper is as follows: In Sec. 2, we establish a connection, based on their common Lie-group structure, between the unitary associated with a two-mode lossless beam splitter of transmittance \(\eta\) and the unitary of a two-mode lossless parametric amplifier of gain \(g\). The connection uses a physically intuitive argument that leverages the well-known \(SU(2)\) and \(SU(1,1)\) group structures of the BS and PDC, respectively. In Sec. 3, we propose a circuit model protocol to simulate the action of a PDC in terms of the action of a BS, at the level of matrix elements. This protocol is constructed by encoding the bosonic occupation numbers and using a suitable teleportation protocol to swap the number of photons entering into one of the modes. Using this protocol, we introduce the concept of a parametric amplifier (up to \(q\))-gate in Sec. 4, which is defined as a unitary capable of simulating the generation of entangled pairs of photons binary encoded in qubits, up to \(q\)-order. By extrapolating the action of a PDC to non-photonic-based quantum platforms, this protocol enables the implementation of PDC-based entanglement generation on other quantum platforms. Finally, in Sec. 5, we summarize our main findings and provide an outlook for future research directions.
## 2 Beam splitters as Parametric-Down converters in Euclidean time
We begin by establishing some basic results of the representation theory of the Lie groups \(SU(2)\) and \(SU(1,1)\). In particular, we focus on their interpretation in the context of optical devices. To set the grounds, let \(a\) and \(b\) denote the annihilation operators of the bosonic modes (light beams) entering a two-mode optical device (see Fig. 1). At the operator level, the action of the particular optical device is to rotate the two modes, and for a BS of transmission \(\eta\),
\[\begin{pmatrix}a_{out}\\ b_{out}\end{pmatrix}\,\rightarrow\begin{pmatrix}\cos(\theta)&\sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix}, \tag{1}\]
with \(\cos^{2}\theta=\eta\). On the other hand, for a PA of gain \(g\) one obtains,
\[\begin{pmatrix}a_{out}\\ b_{out}\end{pmatrix}\,\rightarrow\begin{pmatrix}\cosh(\phi)&\sinh(\phi)\\ \sinh(\phi)&\cosh(\phi)\end{pmatrix}\begin{pmatrix}a\\ b\end{pmatrix} \tag{2}\]
with \(\cosh^{2}(\phi)=g\).
The two-mode bosonic algebra can be used to construct realizations of the \(\mathfrak{su}(2)\) and \(\mathfrak{su}(1,1)\) algebras via the Schwinger mapping [31, 32] (see Appendix A). In terms of \(\{J_{x},J_{y},J_{z}\}\) and \(\{K_{x},K_{y},K_{z}\}\), the generators for \(\mathfrak{su}(2)\) and \(\mathfrak{su}(1,1)\) respectively, the unitaries \(U_{\rm BS}^{\eta}\) describing a BS with transmittance \(\eta\), and \(U_{\rm PDC}^{g}\) describing a parametric amplifier of gain \(g\), are respectively given by the following one-parameter family of curves in the \(SU(2)\), and \(SU(1,1)\) group manifolds,
\[U_{\rm BS}^{\eta}=e^{2i\theta J_{y}}\,,\quad U_{\rm PDC}^{g}=e^{2i\phi K_{y}}. \tag{3}\]
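For reference, the usual two-mode (Schwinger-type) realizations behind these generators read (Appendix A fixes the exact conventions of the paper; the expressions below are the standard textbook choice and should be read with that caveat)

\[J_{x}=\tfrac{1}{2}(a^{\dagger}b+ab^{\dagger}),\quad J_{y}=\tfrac{1}{2i}(a^{\dagger}b-ab^{\dagger}),\quad J_{z}=\tfrac{1}{2}(a^{\dagger}a-b^{\dagger}b),\]
\[K_{x}=\tfrac{1}{2}(a^{\dagger}b^{\dagger}+ab),\quad K_{y}=\tfrac{1}{2i}(a^{\dagger}b^{\dagger}-ab),\quad K_{z}=\tfrac{1}{2}(a^{\dagger}a+b^{\dagger}b+1),\]

so that \(U_{\rm BS}^{\eta}=e^{\theta(a^{\dagger}b-ab^{\dagger})}\) conserves the total photon number, while \(U_{\rm PDC}^{g}=e^{\phi(a^{\dagger}b^{\dagger}-ab)}\) is the familiar two-mode squeezing operator that only conserves the photon-number imbalance.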
The group-theoretic interpretation of optical devices has been used in quantum optics, especially in the description of \(SU(1,1)\) vs. \(SU(2)\) interferometers [33, 34, 35, 36]. On the algebraic level, \(\mathfrak{su}(2)\) and \(\mathfrak{su}(1,1)\) have the same complexification in \(\mathfrak{sl}(2,C)\), which means that their representation theory is identical. However, as groups, \(SU(1,1)\) is non-compact while \(SU(2)\) is compact. This difference between the dynamical groups of the two optical devices can be understood in terms of their respective group manifolds, since there is a local isomorphism between \(SU(1,1)\) and the group of Lorentz transformations in 2+1 dimensions \(SO(1,2)\). This means that the unitary of a parametric amplifier of gain \(g\) can be locally identified as a one-parameter curve in the \(SO(1,2)\) manifold (see Fig. 2(a)). After an imaginary time rotation, \(SO(1,2)\) transforms into \(SO(3)\) (see Fig. 2(b)), which is the isometry group of 3-dimensional Euclidean space. At the local level, \(SO(3)\) is again isomorphic to \(SU(2)\), which is the dynamical group of beam splitters. Within this interpretation, both \(U_{\rm BS}^{\eta}\) and \(U_{\rm PDC}^{g}\) are dual to each other via a Wick rotation. The Euclidean time rotation leads to the exact relation
\[\left\langle l,s\,|U_{\rm PDC}^{g}|\,n,m\right\rangle=\frac{1}{\sqrt{g}}\left \langle l,m|U_{\rm BS}^{1/g}|n,s\right\rangle, \tag{4}\]
between the matrix elements of both unitaries (see Appendix A.1).
Figure 1: Schematics of a (a) two-mode beam splitter (BS) and an (b) optical parametric amplifier (PDC). The purple beam indicates the pump beam undergoing down-conversion. The bosonic operators of the input light beams are rotated by the action of each optical device as dictated by the unitaries in the equations (1) and (2), respectively. For a BS of transmittance \(\eta\), the input modes are rotated by a \(SU(2)\) unitary, while for a PDC of gain \(g\), the input modes are rotated by a \(SU(1,1)\) unitary that does not preserve the total number of photons entering the optical device.
We note that while the connection between the BS and PDC transition amplitudes was previously presented in [37], we now propose that this duality arises naturally from the geometry of both \(SU(1,1)\) and \(SU(2)\) Lie groups.
Returning to the concept of duality, and since the transmittance of a lossless beam splitter ranges within \(0\leq\eta=g^{-1}\leq 1\), the relationship between matrix elements in Eq. (4) implies that the duality only relates the action of beam splitters of arbitrary transmittance with parametric amplifiers in the high gain regime, \(g\geq 1\).
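Before moving to the circuit construction, Eq. (4) can be checked numerically by exponentiating the generators above in a truncated two-mode Fock space. The sketch below is not part of the original derivation: the cutoff, the gain value, and the real-phase conventions are assumptions, and the truncation of the non-compact \(SU(1,1)\) exponential introduces small errors that shrink as the cutoff grows.

```python
import numpy as np
from scipy.linalg import expm

def destroy(N):
    """Truncated annihilation operator: a|n> = sqrt(n)|n-1>."""
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

N = 30                               # Fock cutoff per mode (assumed large enough)
g = 2.0                              # parametric gain, g >= 1
r = np.arccosh(np.sqrt(g))           # cosh^2(r) = g
theta = np.arccos(np.sqrt(1.0 / g))  # cos^2(theta) = eta = 1/g

a = destroy(N)
I = np.eye(N)
A, B = np.kron(a, I), np.kron(I, a)  # mode operators on the two-mode space

# All matrices are real, so the transpose acts as the Hermitian conjugate.
U_bs = expm(theta * (A.T @ B - A @ B.T))   # exp(2i*theta*J_y): lossless BS, eta = 1/g
U_pdc = expm(r * (A.T @ B.T - A @ B))      # exp(2i*phi*K_y): two-mode squeezer, gain g

idx = lambda n, m: n * N + m               # index of |n, m> in the kron basis

def check(l, s, n, m):
    lhs = U_pdc[idx(l, s), idx(n, m)]
    rhs = U_bs[idx(l, m), idx(n, s)] / np.sqrt(g)
    print(f"<{l},{s}|U_PDC|{n},{m}> = {lhs:+.6f}   dual BS value = {rhs:+.6f}")

for elem in [(0, 0, 0, 0), (1, 1, 0, 0), (1, 1, 1, 1), (0, 1, 0, 1)]:
    check(*elem)
```

With these conventions the \(\langle 1,1|U_{\rm PDC}^{g}|1,1\rangle\) entry vanishes at \(g=2\), which is the PDC counterpart of the Hong-Ou-Mandel dip discussed later in the text.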
## 3 Diagrammatic interpretation and composite encoding
The relationship between the BS and PDC one-parameter families of curves shown in the previous section (as a result of a Wick rotation) suggests the possibility of realizing the effects due to each of the optical devices in terms of the other, at least at the level of the matrix elements. In particular, by exchanging the number of photons injected in a mode, a beam splitter becomes a parametric amplifier with reciprocal transmittance (Fig. 3(a)). In a tensor network language, this observation suggests that the identified duality can be represented graphically, as shown in Fig. 3(b), where the PDC box (left) represents a two-mode parametric amplifier of gain \(g\), and the BS box (right) denotes a lossless BS of transmittance \(1/g\). In both cases, each wire denotes a single mode, and the integer labels correspond to the number of photons either injected (\(n\), \(m\) and \(n\), \(s\), respectively; left wires) or measured (\(l\), \(s\) and \(l\), \(m\), respectively; right wires) at each box.
The concrete realization of the duality in Eq. (4) implies the exchange of the number of photons in the lower mode of the BS (see Fig. 3(b)). Physically, this swap operation, which acts directly on the occupation number representation, does not respect causality, i.e. we would have to measure even before preparing the initial state. However, we can get around this apparent limitation by encoding the bosonic occupations of both modes in qubits. Specifically, by performing a binary coding \(E_{B}\): \(\mathbb{N}\rightarrow\{0,1\}^{Q}\) over the number of photons entering each mode, where \(Q\) denotes the number of qubits, the "problematic" SWAP can be transformed into a teleportation, in the diagrammatic (categorical) sense [38], between the encoded qubits and some ancillae. This process is explicitly illustrated in Fig. 4. Note that the representation of the occupancy number is nothing more than a shorthand for vectors in the symmetric sector of the multiparticle Hilbert space. We cannot directly perform the SWAP on the occupation number states. Instead, we will encode these classical integers into the states of the qubits using a canonical binary coding \(E_{B}\), given by
\[|n\rangle\otimes|m\rangle\rightarrow|x_{N-1}x_{N-2}\ldots x_{0}\rangle \otimes|y_{M-1}y_{M-2}\ldots y_{0}\rangle \tag{5}\]
where \(N=\lceil\log_{2}(n)\rceil\) and \(M=\lceil\log_{2}(m)\rceil\) are the lengths of the binary strings representing the occupancy number on each mode. The number of qubits required for binary coding grows as \(\mathcal{O}(\log_{2}(nm))\). Note that for states with only one boson per mode, binary coding is optimal in the sense that the number of qubits is equal to the number of modes.
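A minimal sketch of the bookkeeping behind \(E_{B}\) (as an assumption of the sketch, we use \(\lceil\log_{2}(n+1)\rceil\) bits per mode so that the value \(n\) itself fits, with at least one bit even for the vacuum):

```python
from math import ceil, log2

def binary_encode(n, m):
    """Map the occupation numbers (n, m) to the bit strings of a canonical binary encoding."""
    width = lambda k: max(1, ceil(log2(k + 1)))
    return format(n, f"0{width(n)}b"), format(m, f"0{width(m)}b")

print(binary_encode(1, 1))  # ('1', '1'): one qubit per mode, as noted above
print(binary_encode(5, 2))  # ('101', '10'): the register size grows as O(log2(nm))
```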
The action of a particular optical device cannot be performed directly on the binary encoded states, but only on the physically symmetrized states. As an example, consider the occupation number state \(|1,1\rangle\) and
Figure 3: BS-PDC duality: (a) The probability amplitude of measuring \(s\) and \(l\) photons at the output of a parametric amplifier of gain \(g\), given that \(n\) and \(m\) entering it, is dual to the probability amplitude of detecting \(m\), \(l\) photons, given \(n\), \(s\) entering a lossless beam splitter of transmittance \(1/g\). (b) Equivalence at the level of matrix elements, this implies a swap of the number of photons entering and leaving the lower mode. The equality in the diagram on panel (b) must be interpreted modulo the multiplication by \(1/\sqrt{g}\).
Figure 2: Beamsplitter as Euclidean Parametric Amplifier. A PDC of gain \(g\) is given by a point on the one-parameter curve over the group manifold \(SU(1,1)\). Locally, \(SU(1,1)\cong SO(1,2)\), and the parametric amplifier acts transitively on the two-dimensional hyperbolic space (as represented in panel (a)). After the wick rotation \(g\rightarrow\eta\), the one-parameter curve in \(SO(1,2)\) transforms into a curve in \(SO(3)\) (locally \(SU(2)\)) that acts transitively on the two-dimensional sphere (as represented in panel (b)). This identifies each beam splitter as a Euclidean parametric amplifier with “inverse temperature” \(\eta=g^{-1}\).
let \(\left|a\right\rangle\) denote a single particle state entering the first mode, analogously let \(\left|b\right\rangle\) denote a single particle state entering the second mode. For a BS, the action on each single particle state is equivalent to a \(R_{y}(\theta)\) rotation:
\[U_{BS}^{\eta}\left|1,1\right\rangle=R_{y}(\theta)\otimes R_{y}(\theta)\left| \psi^{+}\right\rangle, \tag{6}\]
with \(\left|\psi^{+}\right\rangle=\left(\left|ab\right\rangle+\left|ba\right\rangle \right)/\sqrt{2}\). We will refer to the symmetrization1 procedure as the physical encoding \(E_{P}\), and the binary-encoded states must be transformed through the composite encoding \(E_{P}\circ E_{B}^{-1}\) to enter the optical device (see Fig. 5). The difference and relevance between physical and binary encoding becomes clear in the second part of the algorithm. First, the occupancy states are encoded with the binary encoding to swap the in and out occupancy number in the second mode, then the physical encoding is used to pass these states into the optical device. We want to point out that although we use canonical binary encoding, this encoding could be replaced by any other suitable encoding [40] to improve the number of physical qubits needed for an experimental implementation of the protocol.
Footnote 1: The action of the symmetrizer \(S\) on a product of single-particle states.
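To make the physical encoding concrete on the smallest example, the first-quantized amplitude implied by Eq. (6) can be evaluated directly; the explicit \(R_{y}\) matrix convention below is an assumption of the sketch, and the result reproduces the \(\langle 1,1|U_{\rm BS}^{\eta}|1,1\rangle=2\eta-1\) entry of the occupation-number picture.

```python
import numpy as np

def ry(theta):
    """Single-particle rotation acting on the mode labels |a>, |b>."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

eta = 0.5                            # 50:50 beam splitter
theta = np.arccos(np.sqrt(eta))
psi_plus = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)   # (|ab> + |ba>)/sqrt(2)

amp = psi_plus @ np.kron(ry(theta), ry(theta)) @ psi_plus
print(amp, 2 * eta - 1)              # both vanish at eta = 1/2 (the HOM dip)
```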
## 4 The q-PDC concept
There is a fundamental constraint for a possible simulation of a parametric amplifier in terms of a circuit model, namely the non-particle conserving (active) nature of the latter. For an arbitrary initial state with \(n\) and \(m\) photons entering the first and second modes, respectively, the action of a lossless parametric amplifier produces a superposition of states with a constant imbalance between the number of photons, i.e. \(n-m\) is constant. This implies that we cannot directly construct a unitary \(U_{\mathrm{PDC}}\) capable of reproducing the exact matrix elements of a PDC for arbitrary initial two-mode photonic states. The reason is simple: since \(U_{\mathrm{PDC}}\) does not preserve the total number of photons, any representation must have strictly infinite dimensions. Even with this limitation, on the physical level we are not interested in arbitrary matrix elements, but rather in the pair production of entangled photons from the vacuum state, that is
\[U_{\mathrm{PDC}}^{g}\left|0,0\right\rangle=\sum_{l\in\mathbb{N}}c_{l}(g)\left| l,l\right\rangle, \tag{7}\]
where the \(c_{l}(g)\) transition amplitudes are given by \(c_{l}(g)=\tanh\left(\cosh^{-1}\sqrt{g}\right)^{l}/\sqrt{g}\), and their profile is shown in Fig. 6. As can be seen in such a figure, for a fixed gain \(g\), the probability of pair production of multi-photon entangled states decreases rapidly with the number of photons. Motivated by this observation, we define the parametric amplifier up to \(q\) gate \(U_{PDC,q}^{g}\) as the unitary which is able to reproduce the vacuum probability amplitudes of a lossless parametric amplifier of gain \(g\) up to \(q\) entangled multi-photon states, that is
\[U_{\mathrm{PDC},q}^{g}\left|0,0\right\rangle=\sum_{l=0}^{q}\tilde{c}_{l}(g) \left|l,l\right\rangle, \tag{8}\]
with \(|\tilde{c}_{l}(g)|^{2}=|c_{l}(g)|^{2}\). From our definition, there exists a hierarchy of \(q\)-parametric amplifiers, given by \(U_{\mathrm{PDC},1}^{g}\subseteq U_{\mathrm{PDC},2}^{g}\subseteq U_{\mathrm{PDC},3}^{g}\subseteq\cdots\subseteq U_{\mathrm{PDC},q}^{g}\) where
Figure 4: Circuit model scheme for the composite encoding in Fig. 5. Different states in the occupation basis will contribute with different numbers of qubits to the binary encoding, this number difference is represented by the color code: (blue) for the number of photons entering the second mode and (red) for the number of photons entering the first mode. To make the composite encoding a unitary gate, we have included a number of ancilla qubits that are not affected by the action of the optical device. To exchange the number of occupants in the lower mode, we need to introduce two additional quantum registers of \(\left\lceil\log m\right\rceil\) qubits each. The two additional quantum registers are used to create \(\left\lceil\log m\right\rceil\) EPR pairs represented by the cups at the beginning of the circuit (see [39]) and later to measure the \(\left\lceil\log m\right\rceil\) caps at the end of the circuit, _i.e._\(\left\lceil\log m\right\rceil\) measurements in the Bell basis.
Figure 5: Composite encoding scheme of the occupation number representation into the physical qubit states. Binary encoded states are transformed into physical states by the composite map \(E_{p}\circ E_{B}^{-1}\colon\mathcal{H}^{O(\log_{2}(nm))}\to\mathcal{S}( \mathcal{H}^{\otimes(n+m)})\).
the \(q\rightarrow\infty\) limit coincides with the complete infinite-dimensional unitary representation of the parametric amplifier \(U_{\text{PDC}}^{g}\). Note that our previous definition of the parametric amplifier up to order \(q\) can be extended to arbitrary initial states other than vacuum by replacing the sum in the superposition with states with non-zero imbalance, namely
\[U_{\text{PDC},q}^{g}\ket{n,m}=\sum_{l=0}^{q}\tilde{c}_{l}(g)\ket{n-m+l,l} \tag{9}\]
where \(|\tilde{c}_{l}(g)|^{2}=|\langle n-m+l,l|U_{\text{PDC}}^{g}|n,m\rangle|^{2}\).
The effect of a parametric amplifier compared to that of a lossless beam splitter can be better interpreted from Fig. 7. For a given initial state, the parametric amplifier creates a superposition of states that preserve the photon imbalance between the two modes. The total number of states with fixed photon imbalance is not limited, and therefore the superposition created by the amplifiers consists of an infinite number of terms (see the diagonal lines with the blue dots in Fig. 7(b)). On the other hand, for a lossless beam splitter, the total number of photons is conserved and the generated superposition has a finite number of states (see Fig. 7(a)). By defining a parametric amplifier up to order \(q\), we impose a cutoff to make the superposition finite.
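As a rough quantification of what such a cutoff keeps (this is our own back-of-the-envelope check; the gain values are arbitrary illustrative choices), the vacuum-output weight retained by the first \(q\) pair terms follows directly from the \(c_{l}(g)\) profile:

```python
import numpy as np

def c(l, g):
    """Pair-creation amplitude <l,l|U_PDC^g|0,0> = tanh^l(arccosh(sqrt(g))) / sqrt(g)."""
    return np.tanh(np.arccosh(np.sqrt(g))) ** l / np.sqrt(g)

for g in (1.5, 2.0, 5.0):
    probs = np.array([abs(c(l, g)) ** 2 for l in range(200)])
    for q in (1, 2, 3):
        print(f"g = {g}: a q = {q} truncation keeps {probs[:q + 1].sum():.4f} of the weight")
```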
Note that under these conditions, \(U_{PDC,q}^{g}\) clearly admits a finite-dimensional matrix representation. What is not so clear is that this representation can be built from a universal gate set (in other words, that it is unitary). To argue for unitarity, we will use the BS-PDC duality between matrix elements in Eq. (4). Specifically, over the multi-particle photonic Fock space, the BS unitary is decomposed as the direct sum of finite dimensional unitaries \(U_{\text{BS},[n]}^{\eta}\) (one for each fixed total number of photons),
\[U_{\text{BS}}^{\eta}=\bigoplus_{n=0}U_{\text{BS},[n]}^{\eta}. \tag{10}\]
For a given state \(\ket{l,l}\) with total photon number \(2l\), the BS-PDC duality in Eq. (4) implies that the transition amplitudes for a parametric amplifier with gain \(g\) can be evaluated from the matrix elements of \(U_{\text{BS},[2l]}^{\eta}\). However, since each term in the superposition given by Eq. (8) has a different total number of photons, we have to switch from different \(U_{\text{BS},[2l]}^{\eta}\) depending on the initial input state. In practice, this can be done by adding an ancilla that controls the respective representation of the BS action. Once this process is complete, we can lift the unitarity in Eq. (8) by the BS-PDC duality result.
For the simplest but non-trivial case, the parametric amplifier up to the generation of a single pair of entangled photons \(U_{\text{PDC},1}^{g}\), the implementation in terms of a circuit model is shown in Fig. 8. In this case we need a total of five qubits: \(q_{1}\) and \(q_{2}\) are the main ones used to encode the vacuum and the one photon per mode state, _e.g._\(\ket{0,0},\ket{1,1}\); \(q_{3}\) and \(q_{4}\) are necessary to create the EPR pair for the teleportation part of the algorithm, and \(q_{0}\) is ancilla qubit used, as mentioned before, to guarantee the unitarity of \(U_{\text{PDC},q}^{g}\) by controlling the transition from the zero particle \(U_{\text{BS},[0]}^{\eta}\cong 1\) to the two particle \(U_{\text{BS},[2]}^{\eta}\cong R_{y}(\theta)\otimes R_{y}(\theta)\) action of the BS.
Figure 6: Probability of pair-creation of multi-photon entangled states for a parametric down-conversion device in the high-gain \(g>1\) regime. The probability of multi-photon creation from the vacuum always decreases with the number of entangled photons. However, as the parametric gain increases, the probability distribution becomes broader.
Figure 7: Non-vanishing transition amplitudes between different photon number states in a (a) lossless Beam splitter and a (b) parametric amplifier. For a different number of photons \((n,m)\) entering each mode we obtain a different realization of a PDC and BS. For a BS, the Casimir element \(J^{2}=\frac{N}{2}\left(\frac{N}{2}+1\right)\), and transition amplitudes that do not conserve the total number of photons \(N=n+m\) are forbidden. For a PDC, the realization of the Casimir element of \(SU(1,1)\), \(K^{2}=\frac{n-m}{2}\left(\frac{n-m}{2}+1\right)\), and transition amplitudes which do not conserve the imbalance of photons between the first and second modes are forbidden instead (Blue lines). The PDC/BS duality relates both representations by swapping the Casimirs, i.e. by reflecting the lines in the finite-dimensional case (panel a) through the vertical Axis (panel b).
It is worth noting that for higher values of \(q\), the number of additional ancilla needed for the circuit model implementation of the algorithm grows in two different ways. First, the symmetrization procedure for \(N\) qubits requires a number of ancilla that grows quadratically with \(N\) (see for example [41]). In addition, for larger numbers of qubits we need a larger number of ancilla to control the finite-dimensional representation of the beam splitter action.
By simulating the circuit model implementation of the parametric amplifier up to \(q=1\) shown in Fig. 8, we were able to evaluate different parametric amplifier transition probabilities for different values of the gain \(g\) using only the unitary of a BS. Figure 9 shows the main results in good agreement with the exact analytical values for the transition amplitudes given by the BS-PDC duality in Table 1. As an interesting result, note in Fig. 9 that for the \(\langle 1,1|U_{\mathrm{PDC},1}^{g}|1,1\rangle\) case there is a dip in the probability amplitude at \(g=2\). From the BS perspective, this corresponds to the
\begin{table}
\begin{tabular}{|c|c|} \hline Beam splitter & Parametric amplifier \\ \hline \(\langle 0,0|U_{\mathrm{BS}}^{\eta}|0,0\rangle=1\) & \(\langle 0,0|U_{\mathrm{PDC}}^{g}|0,0\rangle=1/\sqrt{g}\) \\ \(\langle 1,1|U_{\mathrm{BS}}^{\eta}|1,1\rangle=(2\eta-1)\) & \(\langle 1,1|U_{\mathrm{PDC}}^{g}|1,1\rangle=\frac{2-g}{g^{3/2}}\) \\ \(\langle 0,1|U_{\mathrm{BS}}^{\eta}|0,1\rangle=\sqrt{\eta}\) & \(\langle 0,1|U_{\mathrm{PDC}}^{g}|0,1\rangle=1/g\) \\ \hline \end{tabular}
\end{table}
Table 1: Exact transition amplitudes for a parametric amplifier device of gain \(g\) compared to the corresponding transition amplitudes for a beamsplitter of transmittance \(\eta=g^{-1}\). The equalities between these transition amplitudes are provided by the duality presented between both optical devices in terms of the Wick rotation, as mentioned in Sec. 2.
Figure 8: Circuit model implementation of the parametric amplifier with \(q=1\). We use barriers to divide the entire circuit into steps corresponding to each part of the algorithm discussed in Section 3. In the first step we initialize an EPR pair on the qubits \(q_{3}\) and \(q_{4}\), and swap the states for the qubits \(q_{3}\) and \(q_{2}\), these two operations are necessary for the teleportation part of the protocol. Then we couple the main qubits (\(q_{1}\) and \(q_{2}\)) to the ancilla \(q_{0}\) via a CNOT. The ancilla controls the corresponding action of the BS in the composite encoding for each fixed number of photons entering the device, i.e. for \(|0,0\rangle\) we have the identity, while for \(|1,1\rangle\) we have the double RY rotation like in Eq. (6). Finally, we perform a Bell basis measurement on qubits \(q_{2}\) and \(q_{3}\), which correspond to the caps in the algorithm diagram (see Fig.4).
Figure 9: Probability amplitudes for \(U_{\mathrm{PDC},1}^{g}\) obtained by simulation with the quantum circuit in Fig. 8 and compared with the exact analytical values computed with \(U_{\mathrm{PDC}}^{g}\) and reported in Table 1. Each curve shows the calculated or simulated value for different values of the parametric gain \(g\): initial state \(|1,1\rangle\) (blue curve), \(|2,0\rangle\) (green curve), and \(|1,0\rangle\) (orange curve). For \(|\langle 1,1|U_{\mathrm{PDC}}|1,1\rangle|^{2}\) and the same number of shots, the fluctuations around the theoretical value are damped. Note that except for the case \(|1,1\rangle\), \(U_{\mathrm{PDC},q}^{g}\) acts on initial states with non-vanishing imbalance; this is done as a sanity check for the validity of our algorithm. The simulation was done with the Qiskit module [42] using the Aer simulator, for a total of 2000 shots.
Mandel (HOM) interference dip caused by the indistinguishability of the two photons entering the BS with transmission \(\eta=1/2\)[13, 11, 43]. This result is particularly relevant because it links the dip in the active parametric down-conversion process with the dip in the passive BS. From the perspective of the parametric amplifier, the dip is characterized by a reduction in the two-photon coincidence rate due to interferometric suppression. The connection between these two phenomena highlights the essential role of indistinguishability in quantum interference and entanglement, and underscores the importance of understanding the fundamental principles of quantum optics.
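To make the \(g=2\) dip concrete, the analytical expressions of Table 1 can be evaluated numerically; the short Python sketch below is our own illustration (variable and function names are ours, not part of the simulation code of this work) and simply traces the survival probabilities as a function of the gain.

```python
import numpy as np

def p_00(g):
    # |<0,0|U_PDC|0,0>|^2 = 1/g: vacuum persistence probability
    return 1.0 / g

def p_11(g):
    # |<1,1|U_PDC|1,1>|^2 = ((2-g)/g^(3/2))^2, which vanishes at g = 2
    return ((2.0 - g) / g**1.5) ** 2

def p_01(g):
    # |<0,1|U_PDC|0,1>|^2 = 1/g^2 for a single seed photon in one mode
    return (1.0 / g) ** 2

gains = np.linspace(1.0, 4.0, 301)
dip_gain = gains[np.argmin(p_11(gains))]
print("the |1,1> -> |1,1> probability vanishes near g =", dip_gain)
```

The \(|1,1\rangle\to|1,1\rangle\) probability vanishes exactly at \(g=2\), mirroring the HOM dip of a balanced beam splitter with \(\eta=1/2\).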
## 5 Conclusions
In this study, we provide a novel interpretation of an existing finding regarding the Lie group structure of parametric amplifiers and beam splitters. We identify BSs with transmittance \(\eta\) as the Euclidean time variant of parametric amplifiers with gain \(g=1/\eta\), using the imaginary time (Wick) rotation typically employed in quantum field theories. This discovery establishes a connection at the matrix element level for the relevant unitaries. Moreover, we have used tensor diagrammatic methods to interpret this duality and developed an algorithm for calculating the matrix elements of both optical devices. The algorithm encodes photonic occupation numbers and uses an advanced teleportation protocol to exchange the number of photons in one of the modes. According to our algorithm, we have devised the notion of a PDC gate, with precision level \(q\). This unitary operation can replicate the first \(q\) terms of the transition probabilities of a real parametric amplifier, which operates on any number state.
Our findings offer a novel method of creating PDC-like gates for non-photon-based computing devices. This opens up opportunities for investigating algorithm implementations that were previously limited to photon-based platforms. We suggest further investigating the relationship between the Wick rotation argument and the phenomenon of multi-mode beam splitter interference, focusing on transition amplitudes in parametric amplifiers.
For the multi-mode scenario, there exist superselection rules that can nullify transition amplitudes based on the principle of permutation symmetry in the input ports (see, e.g. [44]). This implies an inherent symmetry for the interferometric cancellation of PDC amplitudes in the parametric amplifier perspective. Multi-mode transition amplitudes are important due to their direct relationship to evaluating permanents, a computational problem well known to be classically #P-hard (see, e.g. [45, 46]). In addition to using duality to connect the transition amplitudes of a single parametric amplifier to those of a single beam splitter, it would be interesting to establish a direct connection between the counting statistics of beam splitter networks such as the Zeilinger architecture [47] with the analogous networks of parametric amplifiers. Clarifying these correlations could yield valuable insights in this field.
## Acknowledgments
We acknowledge partial support from the Norwegian Ministry of Education and Research through the QTECNOS consortium (NORPART 2021-10436/CI 71331). We also acknowledge the use of open-source SDK for quantum development (Qiskit) [48].
|
2305.00570 | How to Build an Optical Filter with an Atomic Vapor Cell | The nature of atomic vapors, their natural alignment with interatomic
transitions, and their ease of use make them highly suited for spectrally
narrow-banded optical filters. Atomic filters come in two flavors: a filter
based on the absorption of light by the Doppler broadened atomic vapor, i.e., a
notch filter, and a bandpass filter based on the transmission of resonant light
caused by the Faraday effect. The notch filter uses the absorption of resonant
photons to filter out a small spectral band around the atomic transition. The
off-resonant part of the spectrum is fully transmitted. Atomic vapors based on
the Faraday effect allow for suppression of the detuned spectral fraction.
Transmission of light originates from the magnetically induced rotation of
linear polarized light close to an atomic resonance. This filter constellation
allows selective acceptance of specific light frequencies. In this manuscript,
we discuss these two types of filters and elucidate the specialties of atomic
line filters. We also present a practical guide on building such filter setups
from scratch and discuss an approach to achieve an almost perfect atomic
spectrum backed by theoretical calculations. | Denis Uhland, Helena Dillmann, Yijun Wang, Ilja Gerhardt | 2023-04-30T20:37:06Z | http://arxiv.org/abs/2305.00570v1 | # How to Build an Optical Filter with an Atomic Vapor Cell
###### Abstract
The nature of atomic vapors, their natural alignment with interatomic transitions, and their ease of use make them highly suited for spectrally narrow-banded optical filters. Atomic filters come in two flavors: a filter based on the absorption of light by the Doppler broadened atomic vapor, i.e., a notch filter, and a bandpass filter based on the transmission of resonant light caused by the Faraday effect. The notch filter uses the absorption of resonant photons to filter out a small spectral band around the atomic transition. The off-resonant part of the spectrum is fully transmitted. Atomic vapors based on the Faraday effect allow for suppression of the detuned spectral fraction. Transmission of light originates from the magnetically induced rotation of linear polarized light close to an atomic resonance. This filter constellation allows selective acceptance of specific light frequencies. In this manuscript, we discuss these two types of filters and elucidate the specialties of atomic line filters. We also present a practical guide on building such filter setups from scratch and discuss an approach to achieve an almost perfect atomic spectrum backed by theoretical calculations.
## 1 Introduction
One of the first experiments in atomic physics was the observation of atomized alkali salts in flames [1]. Over time, evacuated glass cylinders filled with a small amount of an atomic element such as sodium, potassium, or rubidium replaced those open flame experiments. The advantages of vapor cells are their robustness, the freedom to manufacture them in almost arbitrary geometries, the flexible choice of the atomic species, and their convenient handling. All this makes them a great tool for a wide range of applications. To name a few, they can be put into use as magnetic [2, 3, 4] and electric field sensors [5], as storage media in quantum optics [6, 7], and also as atomic line filters [8, 9, 10], which is the topic of this tutorial article.
Experiments have shown that atomic vapors effectively block light on specific lines [1]. Such effects occur when optically resonant light gets absorbed by the atom. In contrast to solids or liquids, atoms in their vapor phase act as an absorbing medium and do not experience the same spectral broadening. The broadening is usually limited by the atom's velocity distribution, also known as Doppler broadening. The underlying Maxwell-Boltzmann velocity distribution ranges from a few hundred to a few thousand meters per second. The resulting GHz-wide bandwidth is at least two orders of magnitude narrower than good commercial dichroic filters. Subsequently, vapor cells present a useful instrument to block unwanted spectral ranges. These "Doppler filters" act as narrow-band notch filters, which only block the light near and on an atomic transition, but are transparent to all other spectral components, limited solely by the residual absorption of the vapor cell's windows.
The optical properties of atomic vapor cells change under the influence of an external magnetic field. The Zeeman effect [11] splits the atomic energy structure into sub-levels with different optical transition frequencies. Augusto Righi used this effect and observed that individual polarization components of the light are differently affected by the magnetically shifted sub-levels [12, 13]. This narrow-band notch filter became known as the "Righi-filter". Since it compares well to the other filters described in this manuscript, we will not dive into any detailed discussions.
In 1898, Macaluso and Corbino studied polarized light from the sun. In their experiments, the light passed a flame that included some amount of sodium. A second linear polarizer placed behind the sodium flame suppressed ("crossed-out") the light effectively. They realized that the light overcame the blockade of the second polarizer when a magnetic field was introduced [14, 15]. That was only possible if the linear polarized light performed a rotation. This effect became known as the "Macaluso-Corbino effect". In the 1950s, this effect allowed Yngve Ohman to implement a narrow-band atomic filter [8], surpassing the quality figures of the earlier developed Lyot-filters [16], which were commonly used for astronomical observations 1. In an independent report later, Kohler and Novick linked the Macaluso-Corbino filter [18] to the Faraday effect. Unlike the Macaluso-Corbino effect, which describes the rotation
of the polarization in the vicinity of an atomic transition (resonant), the Faraday effect is not necessarily based on the presence of atomic resonances but rather describes the effective rotation due to a longitudinal magnetic field in any medium. The Faraday effect can also appear without external magnetic fields as long as the interaction induces some magnetic effects on, e.g., the surface of a ferromagnet. Due to the more general definition, the term "Faraday filter" gained popularity. Those Faraday filters suppress all frequencies of light except the ones in proximity to an optical atomic transition. Therefore, such filters are suited for GHz-wide band-pass filters. Research on Faraday filters became more popular in the 1970s and 1980s [19, 20, 9, 21]. Later, novel names such as the "Faraday anomalous dispersion optical filter" (FADOF) were coined [10, 22].
For any atomic filter arrangement, the orientation of the external magnetic field can be arbitrary. Common choices are a _longitudinal_ magnetic field along the laser axis or a _transverse_ magnetic field, where the magnetic field is orthogonal to the propagation of the light. In both cases, the excitation light experiences resonant birefringence effects [23]. A _transverse_ magnetic field leads to the Voigt effect, discovered by Woldemar Voigt in 1887 [24]. This yields linear birefringence, which makes the vapor act like a resonant lambda half waveplate. Filters based on the Voigt effect were realized by using alkali metals in vapor cells like cesium [25] or rubidium [26, 27]. On the other hand, a _longitudinal_ magnetic field causes circular birefringence, which rotates the polarization of the light in the vicinity of an atomic resonance. This manuscript focuses on circular birefringence and shows how vapor cells can act as filters when utilizing the Faraday effect for optical filtering.
Atomic filters have a wide range of applications. Starting from astronomical, mostly solar, observations [8, 28], the field went over laser locking applications [29] to optical communication [30]. In a chain of experiments, filters with different types of metals were under study like absorptive sodium filters [31], filters around the sodium doublet [19], or filters based on rubidium [10], mercury [32], potassium [33] and cesium [34, 35]. Several non-alkali atoms like bismuth [20] or samarium [21] also found their way into the vapor cell for filtration purposes. Research attempts on excited atomic state Faraday filters were also reported [36, 37, 38, 39], partly utilizing so-called "see-through hollow cathode lamps".
To date, the range of applications extends to fields where atomic vapor cells perform light detection and ranging (LIDAR) [40, 41, 42, 43] and terrestrial seawater observations by the Brillouin-lidar project [38, 39]. Groups like the Solar Activity Magnetic Monitor (SAMM and SAMNet) were founded for space weather research and forecast and developed a flare warning system based on vapor filters [44]. Filter combinations are also possible, which can eliminate different spectral bands [45]. A model for maximizing the transmission with arbitrary magnetic field directions was recently reported [46]. All alkali atoms except atomic lithium have been used as atomic line filters. Studies of magneto-rotational effects in lithium were so far only performed with cold atomic ensembles [47].
This manuscript starts with an introduction to Doppler filters. We first discuss the
basic principles for absorptive filters and show how external parameters like the temperature can be used to tweak the filtration ability of the vapor cell. We then introduce a longitudinal external magnetic field, which leads to the Faraday effect. We then show a theoretical analysis of the Faraday effect for sodium vapor and discuss the fundamentals of an atomic Faraday filter. The theory part revolves around sodium vapor because of its illustrative spectrum. In the last part, we apply those theoretical methods and show an exemplary setup of a Doppler and a Faraday filter based on rubidium. Rubidium is the medium of choice due to its experimental significance.
## 2 The Absorptive Filter
This section focuses on the Doppler filter and discusses how external parameters affect the vapor's filtering ability. To set up the Doppler filter, one only needs to put a hot vapor cell in the pathway of the laser beam. The nature of atoms and their ability to absorb light allows hot atomic vapors to act like a notch filter for specific frequencies. It suppresses spectral components around an atomic transition while transmitting far-detuned frequencies. Figure 1a) shows the fundamental absorption for a single atom represented by the two-level system.
The filtration effect then results from the transfer of atoms from the ground state \(|g\rangle\) to the excited state \(|e\rangle\). The energy states result due to the interplay of the momentum
Figure 1: **Two types of atomic filters.** The Doppler filter absorbs light on and near an atomic transition, and acts as a notch filter. The Faraday filter suppresses all light except a small range around the atomic transitions and acts as a bandpass filter. **a)** Atomic notch filter with an atomic vapor cell with length \(L\) and at a temperature \(T\). The scanning range of the laser is several tens of GHz across the atomic resonance. For far-detuned laser light, the atomic vapor is transparent. **b)** In the Faraday filter, the atomic vapor cell is placed between two crossed polarizers. With an applied longitudinal magnetic field \(B\), linear polarized light is rotated by the atomic vapor (arrows). Thereby, the light can traverse both crossed polarizers. Resonant light gets transmitted while far-detuned light is suppressed by the polarizers.
of the sodium valence electron and nuclear spin momentum, known as the hyperfine structure. The quantum numbers for the sodium ground state \(|g\rangle\) are \(l=0\) and \(s=1/2\) for the electron (j = l + s), and \(i=3/2\) for the nucleus. This adds to \(f=i\pm j=3/2\pm 1/2=1\) and 2 for the whole atom. The same calculation applies for the excited state \(|e\rangle\), where \(l=1\) leads to \(f=0,1,2,3\) for the sodium D\({}_{2}\) line. The excited atoms relax over time and emit light at all solid angles. A spatially limited detection in the forward direction ensures that this spontaneous emission does not significantly affect the detection of the atomic spectrum.
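As a small illustration of this coupling arithmetic (our own snippet, not part of the original work), the allowed hyperfine quantum numbers \(f=|i-j|,\dots,i+j\) can be enumerated directly:

```python
def hyperfine_f(i, j):
    """Allowed total angular momenta f = |i - j|, ..., i + j in integer steps."""
    f, values = abs(i - j), []
    while f <= i + j + 1e-9:          # small tolerance for float arithmetic
        values.append(f)
        f += 1.0
    return values

# sodium: nuclear spin i = 3/2, electronic j = 1/2 (ground) and j = 3/2 (excited)
print(hyperfine_f(1.5, 0.5))   # [1.0, 2.0]
print(hyperfine_f(1.5, 1.5))   # [0.0, 1.0, 2.0, 3.0]
```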
Several variables influence the role of the atoms as an active optical filter medium, like the choice of the atomic species, the length of the vapor cell \(L\), and the temperature \(T\). A temperature change affects the number of atoms in the vapor phase in a defined volume and their temperature-dependent velocity. With the temperature, the number of atoms in the vapor phase of the active medium changes in an almost exponential fashion [48] and follows the Antoine equation [49]. Therefore, it is possible to change the number density of accessible atoms by several orders of magnitude when changing the temperature only a bit. We will see below that this change in the density immediately affects the absorbance of the medium. At the same time, the average velocity of the atoms increases, which leads to a broadened absorption spectrum. This so-called Doppler broadening of the atomic vapor is a dominant feature for the edge-steepness of the absorption feature, given by the Maxwell-Boltzmann distribution \(f(v)\ \mathrm{d}^{3}v=\left(\frac{m}{2\pi kT}\right)^{3/2}\,\exp\left(-\frac{mv^{2} }{2kT}\right)\,\mathrm{d}^{3}v\).
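To get a feeling for the numbers involved, the Doppler width that follows from this velocity distribution can be estimated with a few lines of Python; the snippet below is a rough sketch of ours (the sodium mass and wavelength are standard textbook values, and the formula is the usual Gaussian FWHM of a Doppler-broadened line):

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant, J/K
c = 2.99792458e8          # speed of light, m/s
u = 1.66053906660e-27     # atomic mass unit, kg

def doppler_fwhm(nu0, temperature_K, mass_u):
    """Gaussian FWHM of a Doppler-broadened line at the given temperature."""
    m = mass_u * u
    return nu0 * np.sqrt(8.0 * np.log(2) * k_B * temperature_K / (m * c**2))

nu0_Na = c / 589.0e-9     # sodium D2 transition frequency
print(doppler_fwhm(nu0_Na, 150.0 + 273.15, 22.99) / 1e9, "GHz")   # roughly 1.6 GHz
```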
Following up on the Doppler broadened absorption feature, we can describe the intensity loss of the light which passes through the vapor cell by Beer-Lambert's law. When the incident light in front of the cell shows an intensity of \(I_{\mathrm{in}}\), the light intensity right after the vapor cell drops to \(I_{\mathrm{out}}\). The laser intensity reduces exponentially while traversing the cell due to the interaction with the atomic vapor. Each atom in the optical beam path can be associated with an atom-specific and wavelength-dependent extinction cross-section \(\sigma\). The number of accessible atoms in the optical beam path is given by the atomic density \(\rho\) and the optical path length \(d\). Beer-Lambert's law then reads:
\[log_{10}\left(\frac{I_{\mathrm{out}}(d)}{I_{\mathrm{in}}(0)}\right)=-\rho \cdot\sigma\cdot d \tag{1}\]
We represent the exponential suppression of light in the course of the cell with the decadic intensity ratio (effective transmission), commonly known as the "optical density" \(OD\).
\[log_{10}\left(\frac{I_{\mathrm{in}}(0)}{I_{\mathrm{out}}(d)}\right)=OD \tag{2}\]
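In code, the relation between number density, cross-section, path length, and suppression is a one-liner; the sketch below is our own and only uses, for illustration, one of the optical-density values quoted later in this section:

```python
def optical_density(rho, sigma, d):
    """Decadic optical density OD = rho * sigma * d, cf. equations (1) and (2)."""
    return rho * sigma * d

def transmission(od):
    """Intensity ratio I_out / I_in for a given decadic optical density."""
    return 10.0 ** (-od)

# illustrative value from the text: OD ~ 11.3 for sodium at 150 C
print(transmission(11.3))   # ~5e-12, i.e. essentially complete suppression on resonance
```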
Figure 2 shows examples of the sodium absorption features for different vapor temperatures and fixed cell length (100 mm) around the sodium D\({}_{2}\) line (\(\approx 589.0\) nm). The spectrum shows two major absorption lines. Each of the two broadened absorption features includes three transitions around the D\({}_{2}\) line, which can not be resolved for
high temperatures (splitting is below 100 MHz). For low temperatures (100-140\({}^{\circ}\)C), the sodium ground state splitting (hyperfine splitting) is \(\Delta\nu\approx 1.77\) GHz apart. Those features will merge for higher temperatures. The dots at the bottom of the plot indicate the positions of the hyperfine transition frequencies. The corresponding edge-steepness (10-90%) is around 875 MHz for atomic sodium at 150\({}^{\circ}\)C as given by the Doppler broadening of the sodium vapor.
Along with the increase in temperature and the velocity of the atoms, the vapor pressure also increases. The vapor pressure and the suppression of the transmitted light follow exponential laws. Minor temperature changes impact the vapor's transmittance significantly due to this "double-exponential" temperature dependence. For example, the optical density of a sodium vapor cell at 100 \({}^{\circ}\)C is less than 0.3, at 150 \({}^{\circ}\)C is 11.3, and at 200 \({}^{\circ}\)C is 215.7. Another increase in temperature by just 5 \({}^{\circ}\)C, to 205 \({}^{\circ}\)C, increases the decadic absorbance to 279.6.
At temperatures below 200 \({}^{\circ}\)C, the spectral width of the sodium absorption line is dominated by the Doppler broadening, following a Gaussian profile. Any increase in temperature "pulls" the spectrum down. Therefore, in a linear representation, the absorption spectrum gets broader, with virtually no change in the edge steepness. This intervenes with the natural linewidth of the atomic transition, which underlies
Figure 2: **Temperature dependency of the sodium absorption features around the D\({}_{2}\) transition.** The dots at the bottom denote the frequency of the hyperfine transitions. All six hyperfine transitions are visualized and color coded. The splitting of these six transitions is in the MHz range and is barely distinguishable by the dots. Due to the typical Gaussian lineshape caused by the Doppler broadening, the linewidth appears increased on the linear scale. At elevated temperatures, the wings of the spectrum are dominated by the natural line width of the intrinsic atomic transition.
a Lorentzian shape. For higher temperatures, e.g. above 200 \({}^{\circ}\)C for sodium, the absorbance in the cell is so high that the underlying fundamental Lorentzian transition of sodium starts to alter the wings of the spectrum beyond the Doppler broadening. Those features scale proportional to \(1/\Delta^{2}\), where \(\Delta\) is the detuning. The lineshape is the convolution of the Gaussian and Lorentzian, known as the Voigt line profile.
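For modeling, such a profile is readily available in standard numerical libraries; the following sketch (our own, with illustrative width parameters roughly of the order of the sodium D\({}_{2}\) line) evaluates the Voigt profile as the convolution of the Doppler (Gaussian) and natural (Lorentzian) contributions:

```python
import numpy as np
from scipy.special import voigt_profile

def lineshape(detuning_hz, fwhm_gauss_hz, fwhm_lorentz_hz):
    """Voigt profile: Doppler (Gaussian) convolved with the natural (Lorentzian) line."""
    sigma = fwhm_gauss_hz / (2.0 * np.sqrt(2.0 * np.log(2)))   # Gaussian standard deviation
    gamma = fwhm_lorentz_hz / 2.0                              # Lorentzian half width
    return voigt_profile(detuning_hz, sigma, gamma)

detuning = np.linspace(-10e9, 10e9, 2001)                      # detuning axis in Hz
profile = lineshape(detuning, fwhm_gauss_hz=1.6e9, fwhm_lorentz_hz=10e6)
```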
Atomic filters in the Doppler configuration have been part of numerous research projects on light filtration. An early achievement was the filtration of light using spectral lamps [50]. Here, the authors used atomic vapor cells to filter the spectral components for the two isotopes of naturally abundant rubidium, \({}^{85}\)Rb and \({}^{87}\)Rb. The approach was to set up a vapor cell with the isotope \({}^{85}\)Rb, which suppressed the spectral response of this isotope, allowing the observation of the isotope \({}^{87}\)Rb. The development of comparable methods allowed the usage of vapor cells for light detection and ranging (LIDAR). The establishment of such filters with barium and other systems began in the early 1980s [51]. Other applications, like Raman spectroscopy, desired narrow-band rejection filters, such that low Raman shifts were accessible [52, 53]. In microscopy, atomic Doppler filters allow the detection of the luminescence of DNA strains by suppressing the excitation laser light and even enhancing the photon detection efficiency by 15% against the best available commercial filter [54].
This summary of the Doppler filter and absorption lines should have brought the general concept closer to the reader. In the following section of this tutorial, we manipulate the Doppler spectrum by external magnetic influences, which inevitably introduces the Faraday filter.
## 3 The Faraday Filter
As for the Doppler filter, the central element of the Faraday filter is a hot vapor cell. Figure 1b shows the fundamental principle of a Faraday filter. Two linear polarizers sandwich the vapor cell. The polarization axes are aligned orthogonal to each other. This configuration ensures the suppression of the linear polarized light, which gets blocked by the second polarizer. If a longitudinal magnetic field is applied, it rotates the linear polarized light when it travels through the atomic vapor. This effect has a narrow-band nature, which makes atomic vapor filters competitive candidates for bandpass filters.
### Theory of the Faraday Filter - from Absorption to Rotation
We suppose the light with frequency \(\omega\) travels along the z-axis. That means the polarization of the light is in the \(x\) and \(y\) plane, which means for the initial wave:
\[\vec{E}(z,\omega)=E_{0,x}\ \exp[i(\omega t-kz)]\hat{e}_{x}+E_{0,y}\ \exp[i( \omega t-kz)]\hat{e}_{y} \tag{3}\]
Now, the light passes the atomic vapor. The response of the absorbing atomic vapor to the light is known as susceptibility. Resonant light gets absorbed by the optically active medium. We should therefore define the complex susceptibility \(\chi_{\pm}=\chi^{\prime}_{\pm}+i\chi^{\prime\prime}_{\pm}\).
The complex susceptibility acts on each circular component of the linear polarized light individually. We can immediately link the response of the atomic vapor to the refractive index with \(n=\sqrt{1+\chi}\). The refractive index also appears in the expression of the wavevector, which reads \(|\vec{k}|=n\omega/c\). Similar to the susceptibility, the expression of the refractive index consists of a real and imaginary part that acts on the different circular parts individually, \(n_{\pm}=\nu_{\pm}+i/2\kappa_{\pm}\). We can modify the expression of the incident light (equation 3) to:
\[\vec{E}(z,\omega)=E_{0,x}\ \exp\left[i\left(\omega t-\frac{n_{x}\omega}{c}z \right)\right]\hat{e}_{x}+E_{0,y}\ \exp\left[i\left(\omega t-\frac{n_{y}\omega}{c}z\right)\right]\hat{e}_{y} \tag{4}\]
Consider the beginning of the cell at \(z=0\), which defines the incoming wave \(\vec{E}_{in}\) as:
\[\vec{E}_{in}(0,\omega)=E_{0,x}\ \exp\left[i\omega t\right]\hat{e}_{x}+E_{0,y} \ \exp\left[i\omega t\right]\hat{e}_{y} \tag{5}\]
If we align the first polarizer into the x-plane, equation 5 simplifies to:
\[\vec{E}_{in}(0,\omega)=E_{0,x}\ \exp\left[i\omega t\right]\hat{e}_{x} \tag{6}\]
The same idea applies to the light, which leaves the vapor cell after length \(L\). Regardless of the polarization before the cell, we do have to consider that a rotation occurred. So the polarization of the light after the vapor cell \(\vec{E}_{out}(L,\omega)\) again projects onto the \(x\) and \(y\) plane. Since it is still a plane wave, it shows the same form as in equation 4 but for \(z=L\). Furthermore, we can express the field by its circular components, \(\hat{e}_{x}=-1/\sqrt{2}\left(\hat{e}^{+}-\hat{e}^{-}\right)\) and \(\hat{e}_{y}=i/\sqrt{2}\left(\hat{e}^{+}+\hat{e}^{-}\right)\). In the end, we measure the effective transmission after the second polarizer, which has to be aligned in the \(y\) direction. Therefore, we project \(\vec{E}_{out}\) to \(\hat{e}_{y}\). We use the identity \(e^{-ix}+e^{+ix}=2\cos x\), the mean of the absorption coefficients \(\Delta\kappa=1/2(\kappa_{+}+\kappa_{-})\) and the difference of the refractive indices \(\Delta n=n_{+}-n_{-}\) for the circular light components. The expression of the effective transmission after the second polarizer is then:
\[T=\left|\frac{\vec{E}_{out}\cdot\hat{e}_{y}}{\vec{E}_{in}}\right|^{2}=\frac{1}{4}\bigg{(}\underbrace{e^{-\kappa_{+}L}+e^{-\kappa_{-}L}}_{\text{Righi effect}}-\underbrace{2\cos\left(\frac{\omega}{c}\Delta nL\right)\cdot e^{-\Delta\kappa L}}_{\text{Macaluso-Corbino effect}}\bigg{)} \tag{7}\]
This result shows two terms, the Righi and the Macaluso-Corbino effect. The Righi effect describes the absorption (\(\kappa_{\pm}\)) of the circular light components by the Zeeman levels. The Macaluso-Corbino effect, however, includes an angle in the cosine term, which is the rotation angle of the polarized light for a given frequency \(\omega\).
\[\Phi(\omega)=\frac{\omega}{c}\Delta nL \tag{8}\]
The difference of the refractive indices on the circular components is proportional to the rotation angle \(\Phi\). The circular light components of the light interact differently with the atomic vapor. This leads to an overall shift resulting in a net rotation of the polarization. Figure 3 visualizes this. The calculation above follows [55] and is accessible as an open-source program called ElecSus [56, 57].
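Given the absorption coefficients and refractive indices of the two circular components, e.g. as computed by ElecSus, equation 7 can be evaluated directly; the helper below is a minimal sketch with our own variable names and is not the ElecSus implementation:

```python
import numpy as np

def faraday_transmission(omega, kappa_p, kappa_m, n_p, n_m, L, c=2.99792458e8):
    """Crossed-polarizer transmission of equation (7) from the two circular components."""
    righi = np.exp(-kappa_p * L) + np.exp(-kappa_m * L)          # pure absorption terms
    dn = n_p - n_m                                               # circular birefringence
    dk = 0.5 * (kappa_p + kappa_m)                               # mean absorption coefficient
    macaluso = 2.0 * np.cos(omega / c * dn * L) * np.exp(-dk * L)
    return 0.25 * (righi - macaluso)
```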
The peak transmission, \(T_{\rm max}\), is one of the crucial parameters for optimizing the filter. The simple optimization of the peak transmission often also increases the side wings of the filter. At the same time, the rejection ratio for such unwanted spectral components gets reduced. Subsequently, the "equivalent noise bandwidth" (ENBW) was termed [10], which describes the achievable signal-to-noise ratio for white incident light. The ENBW is defined as:
\[\rm ENBW=\frac{1}{T_{\rm max}}\int_{-\infty}^{\infty}T(\omega)d\omega \tag{9}\]
However, this is not sufficient for the optimal operation of the filter. The ENBW can be optimized while the maximum transmission remains small and negligible. For that, an additional optimization parameter has been established [58]. The so-called "figure of merit" (FOM) maximizes the peak transmission while reducing the ENBW and can be calculated by taking the ratio of the peak transmission and ENBW. The higher this value is, the better the overall performance of the filter. Maximizing this value is equivalent to finding the best operation point for the filter.
\[\rm FOM=T_{\rm max}/ENBW \tag{10}\]
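Both quality figures follow from a single numerical integration over a sampled transmission spectrum; the sketch below is our own and assumes a detuning axis `omega` and a transmission array of equal length:

```python
import numpy as np

def enbw_and_fom(omega, transmission):
    """Equivalent noise bandwidth (equation 9) and figure of merit (equation 10)."""
    t_max = np.max(transmission)
    enbw = np.trapz(transmission, omega) / t_max
    return enbw, t_max / enbw
```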
### The Impact of the Magnetic field - the Zeeman Effect
Filters in the Faraday configuration use a magnetic field \(\vec{B}\) parallel to the wavevector \(\vec{k}\) of the excitation light. The latter defines the quantization axis. With the interaction Hamiltonian, we can calculate how an external magnetic field splits and shifts the atomic energy states.
\[H=H_{0}+H_{\rm HFS}+H_{\rm Zeeman} \tag{11}\]
The Hamiltonian \(H_{0}\) describes the unperturbed case and consists of a kinetic and a potential term. The hyperfine Hamiltonian (\(H_{\rm HFS}\)) describes the interaction between the momenta of the nucleus and the electron \(\vec{i}\) and \(\vec{j}\) times a coupling constant \(a_{hf}\). Also, consider that external influences, like a magnetic field, can decouple the spin interaction. In such a case, each spin contribution of the atom, which carries a magnetic moment, couples individually to the magnetic field, denoted by the magnetic quantum number \(m\). The multiplicity of the splitting of \(f\) states scales like \(2f+1\). An example of the sodium ground state would be the \(F=1\) state, which splits into three sub-states with the magnetic quantum numbers \(m_{f}=-1,0,1\), known as the Zeeman splitting, illustrated in figure 3 f). Summarized, the interaction between the nucleus, represented by \(i\), and the electron, represented by \(j\), and finally, the projection of the momenta to a magnetic field defines the interaction Hamiltonian:
\[H_{int}=H_{HFS}+H_{Zeeman}=\frac{a_{hf}}{\hbar^{2}}\vec{i}\cdot\vec{j}+\mu_{B} \vec{B}\cdot\frac{g_{s}\vec{s}+g_{l}\vec{l}+g_{i}\vec{i}}{\hbar} \tag{12}\]
Here, \(g\) represents the Lande factors of the spins, and \(\mu_{B}\) is the Bohr magneton. Each of the Zeeman levels experiences a shift proportional to the magnetic field \(\propto\mu_{B}m_{f}g_{f}B\).
The transition between these hyperfine states follows momentum conservation and shows a dependence on the polarization of the excitation light. If the light shows a right-handed circular polarization, the electron's momentum gets "kicked" by the absorbed photon, which increases the magnetic quantum number from \(m_{f}\to m_{f}+1\). The same idea applies to left circular polarized light, which would decrease the magnetic quantum number \(m_{f}\to m_{f}-1\). Linear polarized light, which oscillates perpendicular to the quantization axis, would not be absorbed. Therefore, the Zeeman splitting of the absorption Doppler spectrum only shows an impact on the circular transitions for longitudinal magnetic fields.
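For orientation, the size of this weak-field shift is easy to estimate; the short sketch below is our own and uses, as an example, the sodium ground-state value \(g_{f}=1/2\) for \(F=2\):

```python
mu_B = 9.2740100783e-24    # Bohr magneton, J/T
h = 6.62607015e-34         # Planck constant, J s

def zeeman_shift_hz(g_f, m_f, B_tesla):
    """Weak-field Zeeman shift mu_B * g_f * m_f * B, expressed as a frequency."""
    return mu_B * g_f * m_f * B_tesla / h

# sodium ground state F = 2 (g_f = 1/2), outermost sub-level m_f = 2, at 2500 G = 0.25 T
print(zeeman_shift_hz(0.5, 2, 0.25) / 1e9, "GHz")   # ~3.5 GHz
```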
### Working Principle of the Faraday Filter
The example of a sodium Faraday filter is very suitable for a theoretical discussion and explanation of the fundamental working principle of a Faraday filter. The Doppler broadened absorption line splits into the two Zeeman components. When the field is sufficiently large, it opens a window in which the absorption is low, but the optical rotation simultaneously amounts to \(90^{\circ}\). Therefore, no Righi effect limits the transmission of the filter. The spectrum therefore appears as a peak on a pedestal. Furthermore, the left-most and right-most side displays a higher transmission than the 50% limited Righi-type side-plateaus. Due to the large splitting and the opened transparency window, where no Doppler absorption occurs, the transmission in the center exceeds the side wings. This operation is called the center operation of the Faraday filter.
For higher temperatures and low magnetic fields, the rims of the Doppler absorption spectrum exhibit strong optical rotation. Then the so-called wing operation of a Faraday filter occurs. It is evident that in this case the anomalous dispersion of the Faraday filter is not relevant for most of the vapor filters. In the past, the misconception that anomalous dispersion is the dominant effect in those filters gained acceptance, as testified by the name "FADOF". Nowadays, this concept is disproved [59].
Figure 3b shows the Doppler broadened absorption spectrum for sodium at 153 \({}^{\circ}\)C. An external magnetic field (here 2500 G) splits the hyperfine states into Zeeman components, which shift towards higher and lower frequencies. As mentioned earlier, the transitions between ground and excited Zeeman states are polarization dependent. The orientation of the atomic dipole vs. the amplitude of the linear polarized excitation light is orthogonal and therefore suppresses any interatomic \(\pi\) transitions. However, excitations involving circularly polarized light are allowed. As linear polarized light is the superposition of the two circular components \(\pi=\sigma^{+}+\sigma^{-}\), it can drive the transitions which require circular polarizations. The even split between the circular components explains why the two broad Zeeman features in figure 3b) drop to roughly 50% transmission when linear polarized light traverses the cell.
Absorption features relate to the light's dispersion given by the Kramers-Kronig relation, which brings the refractive index into play (as shown in figure 3c). When near resonant linear polarized light traverses the vapor cell, each circular light component (\(\sigma^{+}\)
Figure 3: **General working principle of Faraday filters.** a) adapted from [47].
and \(\sigma^{-}\)) experiences a different refractive index (\(n^{+}\) and \(n^{-}\)). The dispersion relations of these two circular components show a frequency gap proportional to the Zeeman splitting \(\sim\mu B\). The difference between the two dispersion curves represents the optical rotation in the cell, as figure 3d visualizes. The resulting sign of the difference in the dispersion curve indicates the direction of the polarization rotation. Of course, the sign changes when reversing the magnetic field direction.
The highest rotation angle does not automatically mean the highest transmission through the second polarizer. The effective rotation depends on the magnetic field strength and the length of the vapor cell. The transmission through the second polarizer is maximum when the polarization rotation is \(\pi/2\) (or integer multiples of this value).
The Faraday transmission spectrum in figure 3e) shows some interesting aspects. We first focus on the point where the effective rotation of the polarized light seems to be zero, see the zero crossing in figure 3d). One might think that when no polarization rotation occurs, the second polarizer blocks all the light. However, we can see a transmission of roughly 25% in figure 3e). What is happening here? To answer this, we first have to clarify the structure of the Zeeman splitting. The two Zeeman features gather either \(\sigma^{+}\) transitions for positive detunings or \(\sigma^{-}\) transitions for negative detunings. For a detuning around -3 GHz, there would not be a net rotation because the \(\sigma^{-}\) light gets completely absorbed. The light then consists solely of \(\sigma^{+}\) light, which does not interact with the atomic medium since the Zeeman levels addressed by \(\sigma^{+}\) light are shifted away to the positive side of the detuning. Therefore, the \(\sigma^{+}\) portion of the light reaches the second polarizer unperturbed. Due to its circular nature, only half of it traverses the linear polarizer. As initially explained, the initial linear polarized light consists of an equal ratio of both circular polarizations. Subsequently, only a quarter of the initial light passes the second polarizer. That is why the transmission only shows 25% at -3 GHz detuning in the Faraday transmission spectrum and also explains the 1/4 prefactor in equation 7.
Another interesting point is the slightly increased transmission features at the side wings of the Faraday transmission spectrum, marked by the dotted line (around \(\pm 7\) GHz). At those points, the Zeeman spectrum in figure 3a) shows that neither of the \(\sigma^{+}\) and \(\sigma^{-}\) components is fully absorbed (only roughly half of each). That means only half of the \(\sigma^{\pm}\) light interacts with the Zeeman states. The other half of the light which hits the second polarizer is still the initial \(\sigma^{\pm}\) light and undergoes a polarization rotation, following equation 8. That portion of light tops up the initial 25% of the transmitted light by an additional 20%.
## 4 Experimental Guide
### The Doppler Configuration
We now continue with the experimental implementation of an atomic vapor filter. In the previous section, we discussed filters based on sodium. While the sodium filter is
an excellent example to introduce the fundamental working principles, filters based on rubidium and cesium are significantly easier to implement in the lab. The reason for this is the higher vapor density at low temperatures for higher-ordered alkali metals. Therefore, these alkali metals allow for building a more dominant absorbing filter at convenient temperatures. For the rest of this manuscript, we exemplary discuss the implementation of a rubidium Doppler and Faraday filter. Compared to other alkali atoms, rubidium allows the use of lower temperatures and magnetic fields to build an efficient atomic vapor filter. Moreover, rubidium resonant lasers are common these days. Dick and Shay reported such a rubidium-based filter in 1991 [10].
Experiments in the Doppler configuration represent one of the simplest setups for atomic spectroscopy equipment-wise. Such experiments only need three parts: the vapor cell, the light source, and the detector. This straightforward setting is part of the following paragraphs, in which we also discuss further experimental subtleties to circumvent complications.
Laser diodes are mass-produced in spectral regions around 785 nm. These diodes can be pre-selected and fine-tuned to the spectral region of interest. For atomic rubidium, these are the typical D-line transitions around 794.978 nm and 780.242 nm. We selected the rubidium D\({}_{2}\)-line since it is closer to the characteristic operation wavelength of the laser diodes. Another advantage of the D\({}_{2}\)-line is its absorption strength, which is twice as high as that of the D\({}_{1}\)-line due to the higher absorption oscillator strength [60]. The atomic spectra in this manuscript are recorded far below the optical saturation of the atoms. The probing light intensity has to be so weak that no multiple excitations or higher non-linear effects can occur. A few \(\mu\)W of optical laser power are sufficient for a beam radius of roughly 3.5 mm.
The laser source continuously scans across the rubidium D\({}_{2}\) line in a sawtooth shape with 10 Hz. The width of the absorption spectrum varies with the temperature and magnetic field settings. In our case, the width of the spectrum and, therefore, the required mode-hop-free tuning range comes down to 10-15 GHz. This scanning range is hard to achieve with a bare laser diode. It is advantageous to use an external cavity diode laser in the Ricci-Hansch design [61]. By carefully adjusting the operating temperature and the current feed-forward, it is simple to achieve a mode-hop-free tuning range exceeding 30 GHz. This type of laser can be either self-built or commercially bought. The frequency scanning speed is 11 Hz.
The robustness and easy handling of such tunable lasers do not automatically guarantee a successful recording of an atomic spectrum. There are two problems with such laser systems: the non-linearity of the frequency scan and power fluctuations due to the change of frequency and polarization drifts. It is required to record a "ruler" along with the spectrum. We decided to build a free space Fabry-Perot etalon and record its transmission spectrum along with the atomic absorption spectrum. To linearize the absorption spectrum, we apply a peak finding routine to the recorded cavity peaks and fit an Airy-transmission function to the peaks' displacement. D. Pizzey and coworkers [62] published a more extensive explanation of this method. The second problem is power
fluctuation induced by the frequency change of the scanning laser or due to polarization drifts in fibers. One reason why polarization drifts occur is when the adjustment of the optical axis set by the fiber dock is slightly off. In this case, even the slightest
Figure 4: **Setup and Data Acquisition:** Basic setup with intensity stabilization by a feedback loop. The photodiode (PD) records the background intensity and corrects, via a PID controller, the RF amplitude of an acousto-optical modulator with a 350 MHz input. For the excitation beam, we utilize the -1 order. The free spectral range (FSR) of a cavity is the reference to even out the hysteresis of the laser frequency scan. Two photodiodes record the spectra for both polarizations, \(I_{x}\) and \(I_{y}\). \(I_{y}\) becomes important for the Faraday filter. \(I_{x}+I_{y}\) reproduces the Doppler spectrum.
temperature change impacts the intensity of the light coming out of the fiber. A polarimeter helps to adjust the fiber dock and minimize those intensity fluctuations. There are several methods to even out the intensity fluctuations caused by the laser frequency scan. One is to even out the intensity fluctuations of the spectrum by the evaluation algorithm, explained in [62]. Our approach was to use a feedback loop to an intensity controller of an acousto-optical modulator. Figure 4 shows a general scheme for the data acquisition and evaluation.
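A minimal version of the peak-based frequency linearization can be written in a few lines; the routine below is our own sketch (it assumes the etalon trace and its free spectral range are known, and uses simple linear interpolation between peaks rather than the full Airy fit described above):

```python
import numpy as np
from scipy.signal import find_peaks

def linearize_frequency_axis(time_axis, etalon_trace, fsr_hz, min_height=0.5):
    """Map the scan-time axis to relative frequency, using the etalon peaks as a ruler."""
    peaks, _ = find_peaks(etalon_trace, height=min_height * etalon_trace.max())
    peak_freqs = np.arange(len(peaks)) * fsr_hz   # consecutive peaks are one FSR apart
    # linear interpolation between peaks; np.interp clamps outside the outermost peaks
    return np.interp(time_axis, time_axis[peaks], peak_freqs)
```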
The continuously scanning and stabilized laser light then passes the vapor cell. There are several companies and optic suppliers which manufacture such glass cells. For simplicity, we suggest using a 75 mm to 100 mm long cylindrical cell with an excess amount of rubidium of natural abundance. A simple cell out of borosilicate glass and no buffer gas (vacuum) is sufficient. Under ambient conditions (around 20 \({}^{\circ}\)C), the vapor density is already sufficiently high when using rubidium and a short vapor cell (around 50 mm). The absorption under such conditions can reach 60% for selected atomic transitions. At elevated temperatures around 50\({}^{\circ}\)C, a 50 mm long vapor cell shows an absorption of around 90-99%. Such conditions make the observations of an absorption spectrum easy. We settle for a 100 mm long evacuated rubidium vapor cell. For convenient laser alignment, one can observe the scattered light of the cell with an infrared viewer. The cell lights up when the laser scans over an atomic resonance.
Such simple experiments already work under ambient conditions. We already discussed the interplay of temperature and atom density and its importance to optimal settings of the atomic filter. Those factors can be controlled by heating the vapor cell to its desired temperature. Here, there are a few things one should pay attention to. One of them is the heat gradient inside the vapor cell. Most atoms gather at the coldest section of the vapor cell and get stuck at the cell wall. Therefore, it is good practice to heat the cell homogeneously but keep the vapor cell's windows slightly warmer. Otherwise, the rubidium starts to condense on the cell windows, which leads to variations of the beam shape and, in the worst case, blocks the laser completely.
For our experiments, we have chosen to directly heat the optical windows of the cell with electric heat cartridges. Copper blocks, which surround both ends of the vapor cell, heat the area around the window in a homogeneous manner. These blocks also feature sockets for temperature sensors. A second pair of windows attached directly to the copper block prevents unwanted airflow and cooling effects of the cell windows. The rest of the cell, including the cell's filling stem, is thermally isolated against the environment. It can take up to a couple of hours to reach thermal equilibrium, which implies a small temperature gradient across the cell. When working with higher temperatures, the air convection around the outer windows might lead to intensity fluctuations. Therefore, we decided to build an even larger enclosure around the vapor cell assembly, which is required to record accurate spectra.
The electric heaters can induce an unwanted magnetic field in the vapor cell. Therefore, we perform heating with an alternating current. We decided to use a simple heater with AC-heat cartridges (230 V), tens of watts of electrical power, and a control
transformer. Another possibility to reduce the influence of magnetic fields is thick film resistors (Vishay), which operate up to 100 \({}^{\circ}\)C. A control loop, ideally connected to a PID controller (in our case, Model CN8EPT-330-EIP, Omega), performs a continuous temperature readout and stabilizes it.
The laser light traverses the atomic vapor cell through its optical windows. The wider the beam waist is, the more atoms get excited by the resonant light and contribute to the measured spectrum. The power of the laser should not saturate the sample and should be below the saturation intensity \(I_{\rm sat}\approx\)3.6 mW/cm\({}^{2}\). After the cell, a large-area amplified silicon photodiode records the transmission. No special modifications of the diode in terms of noise properties or speed are required. Any standard model photodetector from any optics supplier is sufficient. Depending on the beam size and intensity distribution of the beam profile, it might be worthwhile to access different gain settings on the photodiode. Our photodiodes feature different gain settings up to 70 dB (our gain setting was 40 dB).
An oscilloscope records the absorption spectrum. We record three channels simultaneously: the absorption spectrum, the background fluctuations, and the Fabry-Perot's transmission. The trigger signal comes from the laser when each frequency scanning loop restarts. We average over eight recordings to reduce the noise level of the absorption spectra. Figure 4 shows a screenshot of the oscilloscope and its settings. The laser intensity and the spacing of the Fabry-Perot comb-like peaks change in the course of the recording. We correct for the non-linear frequency axis and the intensity fluctuations using the above-mentioned schemes, as demonstrated in figure 4.
### Observations - Doppler Filter
Rubidium consists of two different isotopes (\({}^{85}\)Rb and \({}^{87}\)Rb). Due to the ground state splitting (known as the "clock-transition"), both isotopes have two spectral components. That is why we observe the transitions of one isotope on the inner two dips (\({}^{85}\)Rb) and the other on the outer two dips (\({}^{87}\)Rb). Those four absorption dips each contain three hyperfine transitions for the rubidium D\({}_{2}\) line. These are indistinguishable due to the weak splitting of the excited state, which is on the order of hundreds of MHz, compared to the splitting of the two ground states, which is 6.8 GHz for \({}^{87}\)Rb and 3.0 GHz for \({}^{85}\)Rb. The origin of this splitting lies in the addition of quantum numbers of each transition.
For our rubidium vapor cell of 100 mm length, a temperature around 50 \({}^{\circ}\)C is sufficient to reach an absorbance above 90%. At elevated temperatures of 80 \({}^{\circ}\)C, we observe that the two isotopes \({}^{87}\)Rb and \({}^{85}\)Rb can not be resolved anymore in the left part of the spectrum on a linear scale. For temperatures above 145 \({}^{\circ}\)C, the spectral components merge into a 10 GHz wide single absorption feature. In comparison to the transmission spectra of the rubidium \(D_{1}\) line, the temperatures have to be increased by roughly 10\({}^{\circ}\)C to end up in similar situations.
As outlined above, we even out the spectra for the fluctuating changing light intensity and the non-linear frequency scan range. As the recording is straightforward,
we expect the spectra to be well approximated by theory. A convenient tool for this is the simplified version of the atomic line spectra in the Wolfram Research demonstrations project 2 or the Python package ElecSus 3. With such tools, it is easy to extract the cell temperature of a recorded spectrum. In our experiment, we observe roughly a \(10^{\circ}\)C deviation between the temperature of the copper blocks and the atomic vapor temperature determined by the fit. Such deviations are often reported in the literature, also for Faraday filters (e.g. [10]).
Footnote 2: [https://demonstrations.wolfram.com/SpectraOfTheDLinesOfAlkaliVapors/](https://demonstrations.wolfram.com/SpectraOfTheDLinesOfAlkaliVapors/)
Footnote 3: [https://github.com/durham-qlm/ElecSus](https://github.com/durham-qlm/ElecSus)
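A generic way to extract the vapor temperature from such a spectrum is a least-squares fit of a model to the linearized data; the sketch below is our own and assumes a callable `model_transmission(detuning, T)`, e.g. a wrapper around one of the tools above, rather than the exact routine used here:

```python
from scipy.optimize import curve_fit

def fit_temperature(detuning, measured, model_transmission, T_guess=350.0):
    """Least-squares fit of the vapor temperature to a measured Doppler spectrum."""
    popt, pcov = curve_fit(model_transmission, detuning, measured, p0=[T_guess])
    return popt[0], pcov[0, 0] ** 0.5   # best-fit temperature and its 1-sigma uncertainty
```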
The linearization of the absorption spectrum shows good results when we compare the theoretical model with the data. The calculated residuals are all in the lower percentage range when we focus on the position and width of the absorption spectrum. The steeper slopes are naturally more sensitive and show higher deviations from the theory. We attribute those deviations to residual noise and the averaging of the spectra.
Figure 5: **Spectrum of a Doppler filter on the rubidium D\({}_{2}\) line:** The frequency range of the laser spans roughly 20 GHz wide. For the 100 mm cell and a laser power of 15 \(\mu\)W, the fitted model suggests a temperature of \(T=79.11\pm 0.02^{\circ}\)C. The calculated residuals go as low as \(\pm 0.2\%\) for the plateaus and are below \(\pm 1\%\) at the steeper edges.
### Introducing a Magnetic Field: Towards the Faraday Filter
Up to this point, the focus of this section was on the Doppler spectrum. We have not strictly defined the incident polarization of the excitation light, nor did we account for residual magnetic fields. The incident light polarization was of no urgent importance for the Doppler case. In the absence of a magnetic field, no further splitting of the hyperfine states occurs, and therefore, the initial energy states are all degenerate. Also, the earth's magnetic field does not have a significant impact on the spectrum. This makes the absorption features in the Doppler configuration polarization independent. The vapor cell sits in a solenoid with around 3000 windings of 0.75 mm enameled copper wire. This coil is able to provide a field of approximately 500 G. This magnetic field is more than sufficient for rubidium (or cesium) with a cell length of 10 mm or larger. In comparison, sodium requires magnetic fields up to 2000-2500 G for an optimally working Faraday filter, owing to the lower vapor density. In the past, experiments with permanent magnets were performed [63, 64, 65]. Such attempts are limited due to two factors: a) the magnetic field is limited for large cells, and b) since the cell is heated, one has to account for the magnet's Curie temperature. A good amount of the heat does not reach the atomic vapor cell but gets distributed over the entire experimental configuration, including the magnet.
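As a rough consistency check (our own estimate; the coil length and drive current below are assumed values for illustration, not specifications of our setup), the ideal long-solenoid formula \(B=\mu_{0}NI/L\) already gives fields of this order:

```python
import numpy as np

mu_0 = 4.0 * np.pi * 1e-7   # vacuum permeability, T m / A

def solenoid_field_gauss(turns, length_m, current_A):
    """Ideal long-solenoid estimate B = mu_0 * N * I / L, returned in gauss."""
    return mu_0 * turns * current_A / length_m * 1e4

# assumed numbers for illustration: 3000 turns over 15 cm, driven with 1.5 A
print(solenoid_field_gauss(3000, 0.15, 1.5))   # ~380 G, the order of the quoted ~500 G
```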
### The Laser
In the experiment, the laser scans across the rubidium D\({}_{2}\) line. Compared to the Doppler spectrum, the scan range should be larger for the Faraday filter. The wings of the Faraday spectrum show a shallower slope, unlike the sharp Doppler-limited components. Ideally, the mode-hop-free detuning range of the laser exceeds 16 GHz for the rubidium D\({}_{2}\) Faraday filter.
### The Influence of Polarization
To define the polarization of the light before the vapor cell, we apply a polarizing beam splitter on a fixed optical mount. That way, we ensure that only linear polarized light enters the vapor cell.
It is convenient to monitor both linear components of the excitation light behind the vapor cell. Therefore, a second polarizing beam splitter is placed right behind the vapor cell. Two photodiodes record both outputs of the polarized beam splitter simultaneously. That way, we can reconstruct the Doppler signal by adding them together. The alignment of both polarizers has to be such that one photodiode does not detect any light without an external magnetic field (polarizers cross out the light). Ideally, one performs this mechanical alignment with far-off resonant light so that small residual magnetic fields do not have any influence. The crossed-polarizers are also an indicator to catch and prevent saturation effects. If the laser intensity is too high, one observes some polarization changes close to an atomic transition. In this case,
unexpected light breaks through to the photodiode, characterized by small revival peaks in the absorption spectrum. When reducing the intensity below saturation, the revival peaks vanish. Ideally, the extinction ratio of the two polarizers reaches up to five orders of magnitude. Often, this is limited by some residual birefringence of the vapor cell's windows. The extinction ratio is also commonly diminished by a large beam due to lateral variations of the window's retardance.
Sometimes it is worthwhile to exchange the polarized beam splitter with a calcite beam displacer. Those elements split the incident polarization into two different linear components. Behind the cell, the resulting spectra can be analyzed independently of the incident polarization of light. This is an interesting option for experiments in quantum optics where e.g. single photons are encoded on a polarization basis [66].
The theory section already pointed out that the polarization of the incident light is a critical point for the Faraday filter. Small variations of the polarization result in changes in the recorded spectrum. Such small variations can occur due to the residual birefringence of the vapor cell windows. A \(\lambda/2\) and a \(\lambda/4\) waveplate in the right setting correct for the birefringence effects. This becomes important when the spectrum should match the theoretical predictions.
### Observations - The Faraday Filter
The measured Faraday spectrum corresponds closely to the spectrum observed by Dick and Shay [10]. Unlike the Doppler absorption spectrum, the Faraday spectrum consists of four different components. These are simply the rims of the Doppler spectrum. The closer the spectral components are to the spectral line, the higher the optical rotation. Note that the Doppler and the Faraday spectrum were recorded on two different days and can show minor deviations in the set temperature.
## 5 Conclusion
In conclusion, we have outlined a hands-on guide on how to understand and apply the foundations of optical filters based on the interaction of light with hot atomic vapors. We focused on the Doppler and the Faraday filter, highlighted their narrow-band character and implementation, and provided a guide on how to build such a setup so that it follows the theoretical predictions closely.
Both filters are easy to implement. Most notably, the Doppler filter is considered a fundamental atomic physics experiment. This interlinks our article to the more fundamental view of Pizzey and coworkers [62].
Our view on atomic filters was limited to a small number of magneto-optical effects; a large collection of these effects has been outlined before [15]. Still, we hope that the experimental description facilitates a greater variety of experiments that utilize atomic vapor filters. We see a large number of possible experimental implementations, ranging from quantum optics to microbiology, where such filters might provide a crucial advantage: increasing the photon detection efficiency by a margin [54] allows the laser light intensity to be decreased, which keeps biomatter viable for longer. Space weather applications, in particular the prediction of solar flares, use atomic vapor filters in telescopes to measure the magnetic and Doppler fields of such events [67, 44]. Another interesting application could be the analysis of Stokes parameters, which can be used to build an effective polarization measurement tool [68].
## 6 Acknowledgments
The project was funded by the Deutsche Forschungsgemeinschaft under project GE 2737/5-1 and by the Bundesministerium für Bildung und Forschung (13N15972).
|
2309.03384 | Measuring Website Password Creation Policies At Scale | Researchers have extensively explored how password creation policies
influence the security and usability of user-chosen passwords, producing
evidence-based policy guidelines. However, for web authentication to improve in
practice, websites must actually implement these recommendations. To date,
there has been limited investigation into what password creation policies are
actually deployed by sites. Existing works are mostly dated and all studies
relied on manual evaluations, assessing a small set of sites (at most 150,
skewed towards top sites). Thus, we lack a broad understanding of the password
policies used today. In this paper, we develop an automated technique for
inferring a website's password creation policy, and apply it at scale to
measure the policies of over 20K sites, over two orders of magnitude (135x)
more sites than prior work. Our findings identify the common policies deployed,
potential causes of weak policies, and directions for improving authentication
in practice. Ultimately, our study provides the first large-scale understanding
of password creation policies on the web. | Suood Alroomi, Frank Li | 2023-09-06T22:09:32Z | http://arxiv.org/abs/2309.03384v1 | # Measuring Website Password Creation Policies At Scale
###### Abstract.
Researchers have extensively explored how password creation policies influence the security and usability of user-chosen passwords, producing evidence-based policy guidelines. However, for web authentication to improve in practice, websites must actually implement these recommendations. To date, there has been limited investigation into what password creation policies are actually deployed by sites. Existing works are mostly dated and all studies relied on manual evaluations, assessing a small set of sites (at most 150, skewed towards top sites). Thus, we lack a broad understanding of the password policies used today. In this paper, we develop an automated technique for inferring a website's password creation policy, and apply it at scale to measure the policies of over 20K sites, over two orders of magnitude (\(\sim\)135x) more sites than prior work. Our findings identify the common policies deployed, potential causes of weak policies, and directions for improving authentication in practice. Ultimately, our study provides the first large-scale understanding of password creation policies on the web.
Online Authentication; Password Policies; Account Creation; Authentication Guidelines
counter to modern standards, acceptance of short passwords is widespread, with over half of sites allowing passwords of six characters or shorter, and an unexpected 12% lacking any minimum length requirements. Furthermore, 30% of sites do not support certain recommended characters in passwords, including spaces and special characters. We also observe only about 12% of sites using password blocklists, resulting in the majority of sites being vulnerable to password spraying attacks (Sutton et al., 2017; Wang et al., 2018). Overall, only a minority of sites fully adhere to common guidelines, with most sites adhering to more dated guidelines. We also observe that top-ranked sites tend to support stronger policy parameters. Through case studies of weak policy parameters, we identify how web frameworks and default configurations may be driving factors.
Ultimately, our study illuminates the state of modern password creation policies at scale for the first time, while also highlighting authentication security and usability problems requiring attention and identifying directions for improving authentication in practice.
## 2. Related Work
Here we summarize prior work measuring real-world password policies and studies that relied upon automated account creation.
### Password Policy Measurements
Over the past 15 years, multiple studies have manually investigated the password policies used by real-world websites. Several initial studies (Kohn et al., 2007; Wang et al., 2018; Wang et al., 2018) were very limited in scale (considering up to 10 sites). At a larger scale, Kuhn et al. manually surveyed the password policies of 69 domains in 2007 and then again in 2009 (Wang et al., 2018). The authors noted that 45% of the websites changed their password policy in the two-year span. These changes included more widely imposing password complexity and length requirements, although policies on many sites remained weak. Similarly, in 2010, Florencio et al. explored the factors that influence the password policies employed by websites (Kohn et al., 2007). The authors manually characterized the password policies of 75 US websites, finding that factors related to monetization seemingly correlated inversely with policy strength. The study was replicated seven years later in 2017 by Mayer et al., using the same set of websites along with 67 additional German websites (Mayer et al., 2018). This work replicated the earlier observations, and observed that overall, password policies on US websites had increased in strength over time, and were stronger than those on German sites. In 2015, Wang et al. also compared the password policies between 30 Chinese websites and 20 English-language sites (Wang et al., 2018). They observed several Chinese websites requiring digit-only passwords, and policies on English sites were overall more stringent.
At the largest scale, in 2010, Bonneau et al. conducted an extensive manual evaluation of the password policies on 150 domains chosen from the Alexa Top 500 sites (Banneau et al., 2010). They found that half of the websites enforced a minimum password length of 6, and 18% had no length restrictions. Furthermore, few sites disallowed common dictionary words for passwords. Due to password reuse by users across websites, the authors also highlighted the potential negative externalities caused by websites with weaker password policies, impacting the passwords chosen by users even on sites employing more secure policies. This concept was empirically explored further by Preibusch and Bonneau through a game-theoretic model using the same dataset (Piez et al., 2017). In 2017, Seitz et al. also characterized the potential for password reuse across sites by contrasting the password policies across 100 German sites (Seitz et al., 2018), finding that the policies were not diverse enough to mitigate the risk of password reuse. They were able to construct passwords that could be accepted across 99% of the sites. Most recently, Lee et al. (Lee et al., 2018) manually investigated 120 top English websites, finding that over half did not blocklist common passwords. Overall, less than a quarter of the sites followed security and usability password policy recommendations.
A primary limitation of these studies is that they manually analyzed website password policies. As a consequence, the studies were small-scale, with the largest involving only 150 sites, and the characterized sites heavily skewed towards top sites (summarized in Table 8 of Appendix A). Furthermore, most studies were over a decade ago, making their observations dated. The web has expanded significantly since then, and our understanding of secure password policies has also substantially evolved (including updates to modern authentication recommendations, such as NIST's latest password policy guidance released in 2017 (Kohn et al., 2007)). Thus, a more modern view of website password policies is needed. Our study leverages automation to provide the largest-scale picture of web password creation policies today, encapsulating a diverse population of websites across different rankings.
### Account Creation Studies
Several studies have used automated account creation for different measurements. DeBlasio et al. automatically created honey accounts on websites to detect potential credential theft (DeBlasio et al., 2018). They successfully created accounts across 2.3K websites, detecting 19 potential cases of website credential compromise. Recently, Drakonakis et al. investigated how websites handle cookies during authentication workflows (Drakonakis et al., 2018). They attempted automated account creation and login across 1.5M domains, successfully creating accounts on 25K domains in total. They found half of the domains vulnerable to cookie-hijacking attacks. While our automated account creation process shares similarities with the prior work, we designed our method from the ground up, as our end-to-end empirical method required overcoming distinct challenges, such as more extensive account creation activity and inferring password policies.
## 3. Method and Implementation
Here, we describe our method for automatically inferring password policies. At a high level, we attempt multiple account signups on a website using different passwords, observing which accounts are successfully created to identify password policy parameters. As shown in Figure 1, we first discover a website's account signup workflow. To do so, we search for account signup forms (Section 3.2) across a website's pages to detect an account signup page (Section 3.3). Then, we execute our policy inference process, which attempts multiple account signups with different passwords (Section 3.4) while evaluating whether the signup is successful (Section 3.5). Based on which signup attempts (and the associated passwords) succeed, we infer the password policy parameters (Section 3.6). To conduct our measurements, we train two machine learning classifiers, one for signup form detection (Section 3.2) and another for classifying signup attempt success (Section 3.5). Other
components of our method rely on keyword-based heuristics (Section 3.1 and Appendix A), particularly for identifying potential account signup URLs and form fields. We will share our measurement data and code to vetted researchers upon request, as otherwise these could potentially be used in online abuse.
### Ground-Truth Analysis
Modern websites and their authentication workflows are diverse, in both design and implementation. As a consequence, we require heuristics throughout our method for discovering and analyzing website account creation (as have prior work conducting similar automated account creation (Han et al., 2017; Chen et al., 2018)). These heuristics include keywords for classifying webpages and HTML elements. We additionally train machine learning classifiers for complex labeling tasks.
To identify keywords for our specific method in a systematic, language-agnostic, and data-driven fashion, as well as to train our classifiers, we manually analyzed 2800 domains randomly sampled from the Tranco Top 1M (Tranco, 2017) (from June 6, 2021). We identified whether each domain supports account creation (26% did), and if so, we analyzed the characteristics of its account signup workflow (including the location of its signup pages and forms). We refer to this dataset as our _ground-truth data_. For extracting relevant keywords, we applied keyword ranking algorithms to identify the top keywords prevalent in positive cases but uncommon in negative cases, agnostic to any specific language (details in Appendix A). We discuss training our classifiers in the following sections.
### Detecting Account Signup Forms
To assess a site's password policies, we first identify its signup page and form. To distinguish account signup forms from others (e.g., login, newsletter), we use a binary SVM classifier. For its features, we use the presence of signup-related keywords (chosen from our ground-truth data, discussed in Appendix A) in the HTML form's title, ID, class, and action, as well as the numbers of form inputs in total and password-type inputs. For training data, we manually labeled the HTML forms in our ground-truth data. We trained our model using Python's sklearn (Payr et al., 2015), selecting hyperparameters using grid search. Evaluating our model with 10-fold cross-validation, we observe an average accuracy of 94.7% (errors discussed in Appendix C.1). Note that while false negatives will cause us to skip evaluating sites, false positives will result in unsuccessful attempts to evaluate them (which we detect and filter out).
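To make the feature construction concrete, the sketch below shows how such keyword and structural features could feed a scikit-learn SVM; the keyword list, feature layout, toy forms, and labels are illustrative placeholders, not the classifier trained in this work.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative signup keywords; the study derives its list from ground-truth data.
SIGNUP_KEYWORDS = ["signup", "register", "create", "join"]

def form_features(form):
    """Turn a parsed HTML form (represented here as a dict) into a feature vector."""
    text = " ".join(form.get(attr, "") for attr in ("title", "id", "class", "action")).lower()
    keyword_hits = [int(kw in text) for kw in SIGNUP_KEYWORDS]
    return keyword_hits + [form.get("num_inputs", 0), form.get("num_password_inputs", 0)]

# Hypothetical labeled forms (1 = account signup form, 0 = other form).
forms = [
    {"id": "register-form", "action": "/signup", "num_inputs": 5, "num_password_inputs": 2},
    {"id": "login-form", "action": "/login", "num_inputs": 2, "num_password_inputs": 1},
    {"class": "newsletter", "action": "/subscribe", "num_inputs": 1, "num_password_inputs": 0},
    {"title": "Create account", "action": "/users/new", "num_inputs": 4, "num_password_inputs": 2},
]
labels = [1, 0, 0, 1]

X = [form_features(f) for f in forms]
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "kernel": ["linear", "rbf"]}, cv=2)
grid.fit(X, labels)
print("best hyperparameters:", grid.best_params_)
```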
### Discovering Account Signup Pages
Given a domain, our method starts by searching for its signup page, identified by the presence of a signup form (from Section 3.2). This process proceeds as follows until a signup page is found.
1. We search for a signup form on the domain's landing page.
2. We next crawl URL links found on the landing page that contain common keywords for account signup or login URLs. We call these candidate URLs as they likely contain an account signup or login form. Keywords are selected using ground-truth data (see Appendix A), with separate keywords for signup and login URLs. We use login URLs as they often contain links to a signup page (for users without an account). On login URLs, we attempt to detect a signup form, otherwise we collect further candidate signup URLs (now ignoring candidate login URLs). For each page, we visit at most four candidate URLs to avoid excessively crawling a domain. (In our ground-truth data, we observed that this threshold was sufficient for discovering signup URLs, as most pages had few, if any, candidate signup or login URLs.)
3. Finally, we query the Google search engine for the domain's account signup pages (using ScraperAPI (Sci et al., 2017)). Our search query includes the domain along with "account OR register OR sign+up OR create", constructed using the most frequent keywords in the HTML titles of real signup pages in our ground-truth data. Given the search results, we again consider candidate signup and login URLs, crawling up to 4 candidate URLs in search of a signup page/form (using the same method for identifying candidate URLs and processing them as done with URL links on the domain's landing page). (We observed that this crawling threshold was sufficient on our ground-truth dataset.)
4. If no signup form has been found by this point, we record the domain as lacking a signup page.
We note that our crawler is non-interactive and does not simulate user actions on a page. Some sites require an action for the signup form to fully appear (e.g., clicking a "signup" button, or clicking through multi-page forms). However, in our ground-truth data, this behavior is not widespread, and automating it would be challenging.
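The crawl order described above can be summarized in a short sketch; the helpers fetch_page, find_signup_form, and extract_links, as well as the keyword lists, are hypothetical stand-ins for the actual crawler components.

```python
SIGNUP_WORDS = ("signup", "register", "join", "create")  # illustrative keywords
LOGIN_WORDS = ("login", "signin")
MAX_CANDIDATES = 4  # at most four candidate URLs are crawled per page

def candidate_urls(links, words):
    """Keep links whose URL contains a relevant keyword, capped at MAX_CANDIDATES."""
    return [u for u in links if any(w in u.lower() for w in words)][:MAX_CANDIDATES]

def discover_signup_page(domain, fetch_page, find_signup_form, extract_links):
    # Step 1: check the landing page itself.
    landing = fetch_page(f"https://{domain}/")
    if find_signup_form(landing):
        return landing
    # Step 2: follow candidate signup/login links from the landing page.
    for url in candidate_urls(extract_links(landing), SIGNUP_WORDS + LOGIN_WORDS):
        page = fetch_page(url)
        if find_signup_form(page):
            return page
        # Login pages often link onward to a signup page.
        for url2 in candidate_urls(extract_links(page), SIGNUP_WORDS):
            page2 = fetch_page(url2)
            if find_signup_form(page2):
                return page2
    # Step 3 (search-engine query) would follow here; otherwise record no signup page.
    return None
```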
### Attempting Account Signups
With a domain's signup page, we next fill out and submit the signup form. By testing different passwords across multiple signup attempts, we will infer the domain's password policy (discussed in
Figure 1. Illustration of the stages of our password policy measurement method.
Section 3.6). Automatically filling and submitting a signup form encounters two key challenges.
First, we must identify signup form fields and provide acceptable values/actions. We classify them based on the HTML input element's name, class, and ID, using relevant keywords identified in our ground-truth data (see Appendix A). For common form fields (e.g., name, email), we use either pre-selected values (not real user data) or the Faker Python library (Faktoris et al., 2017) to generate synthetic data. We handle the password field specifically, as discussed in Section 3.6. For unrecognized fields, we generate a random string as a last resort. Some forms offer multiple button elements (e.g., signup and single sign-on buttons). We identify the account signup button using keywords derived from our ground-truth data (see Appendix A).
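A minimal sketch of the field-filling step, assuming the Faker library for synthetic values; the keyword map and field names below are illustrative rather than the keyword set derived from our ground-truth data.

```python
import random
import string

from faker import Faker

fake = Faker()

# Illustrative mapping from field-name keywords to synthetic-value generators.
FIELD_GENERATORS = {
    ("email", "mail"): fake.email,
    ("user", "login"): fake.user_name,
    ("first", "fname"): fake.first_name,
    ("last", "lname"): fake.last_name,
    ("phone", "tel"): fake.phone_number,
}

def fill_value(field_name, password):
    """Pick a value for one signup form field based on its name/id/class string."""
    name = field_name.lower()
    if "pass" in name:
        return password  # supplied by the policy-inference logic
    for keywords, generator in FIELD_GENERATORS.items():
        if any(kw in name for kw in keywords):
            return generator()
    # Last resort for unrecognized fields: a random string.
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=10))

print(fill_value("user_email", "Mx7-c54@"))  # synthetic email address
print(fill_value("nickname", "Mx7-c54@"))    # falls back to a random string
```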
A second challenge is that many signup workflows require completing a CAPTCHA. In our ground-truth data, we identified CAPTCHAs on at least 49% of signup forms. We aimed to overcome CAPTCHAs to significantly increase our likelihood of successfully assessing sites. Given our measurement's scale and ethical concerns1 with human-driven CAPTCHA solvers (discussed in Section 3.10), we opted to rely on an automated CAPTCHA solver, AZcaptcha2(Faktoris et al., 2017). We identify CAPTCHAs during the signup process through fingerprinting the HTML/JavaScript code used by the CAPTCHA implementations supported by AZcaptcha, and pass the extracted CAPTCHAs to AZcaptcha to solve. (During our full measurement, AZcaptcha correctly solved 94% of all CAPTCHAs we encountered, with failure cases discussed in Appendix C.1.)
Footnote 1: Prior automated account creation work skipped sites with CAPTCHAs (Kraemer et al., 2017) or used human CAPTCHA-solvers at a small scale (Kraemer et al., 2017).
Footnote 2: AZcaptcha (Faktoris et al., 2017) advertised an automated OCR-based method. We note that AZcaptcha’s price point is also significantly lower than human-driven CAPTCHA solvers, reinforcing AZcaptcha’s automation claims.
### Determining Signup Success
Websites vary widely in response to submitting an account signup form, and behavior differs depending on the signup success. For example, some sites redirect to another page, while others display a message. To determine if a signup attempt is successful, we develop an ensemble decision tree classifier that operates on features of the webpage returned upon form submission. We collected training data from signup attempts on 160 domains in our ground-truth data. Our features include the presence of a signup form (detected as in Section 3.2), keywords in the page and URL, and the similarity of the page and its URL with those before form submission. We then trained an XGBoost decision tree ensemble model with 100 trees, selecting hyperparameters using grid search. Evaluated using 4-fold cross-validation, we observe a 91.3% accuracy. Note that classification errors primarily result in consistent successes or failures across all attempts for a domain, which we detect and filter out.
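For illustration, a compact sketch of how post-submission page features could drive such an XGBoost ensemble; the feature encoding and toy observations are hypothetical, not the trained model or its real feature values.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Hypothetical feature vectors for the page returned after form submission:
# [signup form still present, success-keyword hits, error-keyword hits,
#  page similarity to pre-submission page, URL similarity to pre-submission URL]
X = np.array([
    [0, 2, 0, 0.20, 0.30],  # redirected to a "welcome" page -> success
    [1, 0, 3, 0.90, 1.00],  # same form redisplayed with errors -> failure
    [0, 1, 0, 0.40, 0.50],
    [1, 0, 1, 0.95, 1.00],
])
y = np.array([1, 0, 1, 0])

model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=2).mean())
```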
### Inferring Password Policies
The prior sections discussed our method for finding signup pages, as well as completing, submitting, and determining the submission outcome for the signup forms. To infer the password policy, we perform multiple signup attempts where we provide consistent signup information except we vary the passwords provided systematically, allowing us to determine the password policy parameters based on which passwords are accepted or rejected. We determine whether a password is accepted based on the form submission outcome. However, form submission may fail due to other information we provide, rather than just the passwords. In such cases, as we provide consistent signup information across signup attempts, we will observe consistent signup failures for a domain, independent of the passwords tested, and we can subsequently filter out such domains from our analysis. Also, a successful account signup results in a created account. To minimize the account-related resources we require of domains, we constructed our method to reduce the number of accounts created, as discussed further in Section 3.10.
#### 3.6.1. Password Policy Parameters
We evaluate the following password creation policy parameters, which encapsulate all policy parameters investigated by prior work (Faktoris et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017), which fall into three classes. The first class involves password **lengths**:
* **Length** (\(L_{min}\), \(L_{max}\)): The minimum and maximum password lengths allowed, respectively. We conservatively consider \(L_{min}\in[0,32]\) and \(L_{max}\in[6,128]\).
The second class of parameters is **restrictive**, as they require that all passwords exhibit certain character structure.
* **Digits** (\(DIG_{min}\)): The minimum number of digits required. We consider \(DIG_{min}\in[0,2]\).
* **Uppercase Letters** (\(UPP_{min}\)): The minimum number of uppercase letters required. We consider \(UPP_{min}\in[0,2]\).
* **Lowercase Letters** (\(LOW_{min}\)): The minimum number of lowercase letters required. We consider \(LOW_{min}\in[0,2]\).
* **Special Symbols** (\(SPS_{min}\)): The minimum number of special symbols required. We consider \(SPS_{min}\in[0,2]\).
* **Combination, 3 out of 4** (\(R_{cmb34}\)): Passwords must exhibit 3 out of 4 character classes (_Digits_, _Uppercase Letters_, _Lowercase Letters_, _Special Symbols_).
* **Combination, 2 out of 4** (\(R_{cmb24}\)): Passwords must exhibit 2 out of 4 classes (same classes as \(R_{cmb34}\)).
* **Combination, 2 out of 3** (\(R_{cmb23}\)): Passwords must exhibit 2 out of 3 classes (_Digits_, _Letters_, _Special Symbols_).
* **Combination of Words** (\(R_{2\_word}\)): Passwords must have multiple words, where a word is defined as a string of three or more letters (any case), delimited by digits or special symbols. (Footnote 3: We assume that if \(R_{2\_word}=True\), then \(L_{min}\geq 10\) (whereas in theory, \(L_{min}\) could be between 7 and 9). We argue that this is a reasonable assumption as requiring such word structure without allowing longer passwords would overly constrain user password selection, especially as the average word is 4.7 characters (Krizhevsky et al., 2017).)
* **No Arbitrary Special Symbols** (\(R_{no\_a\_sps}\)): Passwords cannot have arbitrary special characters (considering less popular special characters not accounted for by parameters \(P_{spn1}-P_{spn4}\)).
* **Letter Start** (\(R_{lstart}\)): A password must start with a letter. (Prior work observed such positional restrictions (Kraemer et al., 2017).)
A final class of parameters is **permissive**, allowing certain password characteristics without requiring them.
* **Dictionary Words** (\(P_{dict}\)): Common dictionary words (e.g., apple) are permitted within the password, where a word is at least 3 letters.
* **Sequential Characters** (\(P_{seq}\)): Logical sequences of 3+ characters (e.g., 123, abc) are permitted in the password.
* **Repeated Characters** (\(P_{rep}\)): 3+ consecutively repeated characters are permitted in the password.
* **Long-Digit Passwords (\(P_{longd}\))**: All-digit max-length passwords are permitted (observed before on Chinese websites (Shan et al., 2017).)
* **Short-Digit Passwords (\(P_{shortd}\))**: All-digit min-length passwords are permitted (used along with \(P_{longd}\) to determine length's role in accepting digit-only passwords)
* **Personal Information/Identifiers (\(P_{id}\))**: Personal information (e.g., username) is permitted in the password.
* **Space (\(P_{space}\))**: Whitespaces are permitted in the password.
* **Emojis (\(P_{emoji}\))**: Emojis are permitted in the password.
* **Unicode Letters (\(P_{unicd}\))**: Unicode characters (e.g., accented characters) are permitted in the password.
* **Popular Special Symbols (\(P_{spn1}\)-\(P_{spn4}\))**: The four most popular special symbols (each evaluated individually) are permitted in the password. We derive this list of top special symbols by analyzing 10M passwords in a popular password dataset (Bahdan et al., 2017).
* **Breached Passwords (\(P_{br}\))**: A common password from a known password leak is permitted.
#### 3.6.2. Inference Algorithm
With many parameters to infer, we require an efficient algorithm that evaluates a limited number of test passwords. We describe our algorithm here, with further details (including correctness and efficiency) and an example in Appendix B.
**Algorithm Steps.** At a high level, our inference algorithm operates by first finding one acceptable password (chosen in a specific fashion). Then, we evaluate each policy parameter one by one, testing passwords that are modifications of the original admissible password where only the specific parameter's dimension is changed, to determine that parameter's value. The order of parameter evaluation is specifically chosen to isolate the impact of just that parameter and minimize the number of successful account signups. Concretely, our algorithm operates in five steps.
**Step 1. Admissible Password:** First we must find an admissible password to seed our exploration, which satisfies the restrictive parameters (e.g., minimum class requirements) and all permissive parameters (e.g., avoiding the relevant password characteristic such as repeated letter and number sequences.)
For a given length \(l\), we identify that there exists only a small set of passwords (which we call the _safe set_) for which one password will satisfy any possible parameter combination. If a website accepts passwords of length \(l\), then the safe set must contain at least one acceptable password.
While we consider a variety of parameters, the safe set is small because a password can satisfy multiple restrictive parameters simultaneously (e.g., contain multiple characters of all classes, satisfying all minimum class and class combination parameters), and also satisfy all permissive parameters by avoiding the relevant password characteristic (i.e., avoiding certain characters and sequences).
We manually construct the safe sets for lengths \(l\in[6,32]\), shown in Table 1, covering the range of lengths that we conservatively assume a site must accept (based on our \(L_{min}\) and \(L_{max}\) assumptions). As seen in Table 1, the safe set for a given length contains passwords covering all restrictive parameter combinations, while also satisfying all permissive parameters. Note that for short lengths, fewer restrictive parameters can be concurrently satisfied, so the safe set is larger. The largest safe set contains 10 passwords (for \(l=6\)), while for lengths 8 or larger, the safe set consists of only two passwords (with and without special characters).
We search for an admissible password through the safe sets in increasing length order, first testing passwords with special characters within each safe set. Whether the admissible password found contains a special character already determines our first restrictive parameter \(R_{no\_a\_sps}\) (if arbitrary special characters are disallowed). In subsequent steps, we modify this admissible password along a single parameter's dimension and identify whether the modified password remains accepted, revealing the parameter's value.
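A sketch of this search is given below; the abbreviated SAFE_SETS contents and the try_signup oracle (which submits a signup attempt and reports success) are placeholders for the precomputed safe sets and the signup machinery described earlier.

```python
# Abbreviated, illustrative safe sets; the real sets are precomputed for l in [6, 32].
SAFE_SETS = {
    6: ["M-7c4@", "Mx7c54"],
    8: ["Mx7-c54@", "Mx7zc543"],
    10: ["MxT7zc54-@", "MxT7zc5439"],
}

def find_admissible_password(try_signup):
    """Scan safe sets in increasing length order, testing special-character passwords first."""
    for length in sorted(SAFE_SETS):
        # Candidates containing a special character come first, so that the
        # accepted password directly reveals whether R_no_a_sps is in effect.
        candidates = sorted(SAFE_SETS[length], key=lambda pwd: pwd.isalnum())
        for pwd in candidates:
            if try_signup(pwd):
                return pwd, length
    return None, None
```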
**Step 2. Restrictive Parameters:** With an admissible password of length \(l\) (and \(R_{no\_a\_sps}\) determined, which indicates whether arbitrary special characters are allowed), we then evaluate the restrictive parameters first, as determining these reveal the constraints enforced on any further tests. To determine the value of a restrictive parameter, we modify the admissible password to only violate that parameter, observing whether the modified password is accepted. If so, then the restrictive parameter is in effect.
_1) Combination of Words (\(R_{2\_word}\))_: If \(R_{2\_word}=True\), the admissible password must contain a two-word structure, delimited by a non-letter character (if not, then we already know \(R_{2\_word}=False\)). To test \(R_{2\_word}\), we modify the admissible password by moving the non-letter delimiter to the password end, eliminating the two-word structure (e.g., Admissible Password: MxT7zc54-@, Modified Password: MxTzc54-@7). If this modified password is no longer accepted, \(R_{2\_word}=True\), otherwise \(False\). This modification does not affect other parameters as the length and character composition remain identical, and there are no other positional restrictions on middle-of-password characters. Permissive parameters are also not affected as the modification does not introduce a character sequence related to a permissive parameter (e.g., sequential/repeated characters, dictionary word).
Table 1. The safe set of passwords for different lengths \(L\). For each password, the table indicates which restrictive parameter configurations are satisfied (columns: Password, \(L\), \(R_{no\_a\_sps}\), \(LOW\), \(UPP\), \(DIG\), \(SPS\)). Note that all passwords satisfy the class combination parameters, \(R_{lstart}\), and \(R_{2word}\) (if \(L\geq 10\)). Permissive parameters are also all inherently satisfied. For \(L>10\), the safe set is identical to that for \(L=10\), except with passwords padded with arbitrary letters and digits to length.
2) Letter Start (\(R_{1start}\))_: All our admissible passwords begin with a letter. To assess \(R_{1start}\), we move the first non-letter character in the admissible password to the start (e.g., Admissible Password: Mx7-c54@, Modified Password: 7Mx-c54@). If accepted, \(R_{1start}=False\), otherwise \(True\). If \(R_{2word}=True\), we take care to avoid moving the two-word delimiter (e.g., Admissible Password: Mx7?zcS4t1, Modified Password: 4MxT7zcSt1), as all admissible passwords have multiple non-letter characters (see Table 1). This modification does not affect other parameters as the length and character composition remain identical, and the only other positional restriction remains satisfied. Also, moving the non-letter characters does not introduce a character sequence affecting a permissive parameter.
_3) Character Class Minimums (\(DIG_{min}\), \(UPP_{min}\), \(LOW_{min}\), \(SPS_{min}\))_:
To find the character class minimum for class \(C\) (where \(C\) is either digits, uppercase letters, lowercase letters, or special symbols), we modify the admissible password to contain no \(C\) characters, by replacing \(C\) characters with characters of other classes (e.g., if \(C=LOW\), Admissible Password: Mx7-c54@, Modified Passwords: MX7-C54@). If accepted, \(C_{min}=0\). Otherwise, we modify the admissible password to contain only one \(C\) character (e.g., if \(C=LOW\), Admissible Password: Mx7-c54@, Modified Passwords: MX7-c54@). If accepted, \(C_{min}=1\), otherwise \(C_{min}=2\).
To avoid conflicting with other restrictive parameters, our default replacement policy is to swap between lowercase and uppercase characters (to not impact \(R_{2word}\) and \(R_{1start}\)), and between digits and special symbols (to not affect \(R_{2word}\)). If \(R_{no\_a\_sps}=True\) (no special characters allowed), digits are instead replaced with any letters (note here that if \(R_{2word}=True\), then \(DIG_{min}\geq 1\)).
In most cases, all class combination parameters (\(R_{cmb23}\), \(R_{cmb24}\), \(R_{cmb54}\)) remain satisfied without further consideration. As seen in Table 1, most admissible passwords already have four character classes, so three classes remain after eliminating one class in the admissible password. A few admissible passwords have only three character classes (none have fewer classes), either because they are short (specifically, \(l=6\)) or because \(R_{no\_a\_sps}=True\) (so only three classes are allowed). For \(l=6\) admissible passwords, there are two characters of each class, and we can replace the second \(C\) character with one from the missing class, following the default replacement policy for the first character (e.g., if \(C=UPP\), Admissible Password: Mx-c5@, Modified Password: mx-c1@). This preserves \(R_{1start}\) while maintaining 3 distinct classes. When \(R_{no\_a\_sps}=True\), the class combination parameters either implicitly imply class minimums which we will correctly infer (e.g., \(R_{cmb34}=True\) means there needs to be one character of each class), or will remain satisfied (the modified password still has two classes).
_4) Combination Requirements (\(R_{cmb23}\), \(R_{cmb24}\), \(R_{cmb34}\))_:
To evaluate the final set of restrictive parameters, the class combination requirements, we modify the admissible password to have fewer classes and test for acceptance.
We start by identifying required character classes based on the other restrictive parameters. \(R_{2word}\) and \(R_{1start}\) both require letters; we select the required case based on class minimums, selecting lowercase letters by default. Similarly, \(R_{2word}\) requires either digits or special characters; we select which based on class minimums and \(R_{no\_a\_sps}\), selecting digits by default.
For modifying our admissible password, we replace all characters of non-required classes with those of a required class (replacing with lowercase letters if no class is required). If letters are required at certain positions, we replace any letters of a non-required class with letters of the required class (likewise between digits and special characters). This modified password has the minimum number of classes while adhering to other restrictive parameters, without impacting length or permissive parameters (e.g., if \(UPP_{min}\geq 1\), Admissible Password: Mx7-c54@, Modified Password: MXZNCSZA). If the modified password is accepted, we can determine the class combination parameters given the required classes in the password (in the prior example, there are no class combination requirements).
However, if not accepted, then an explicit class combination requirement is in effect. We determine its configurations based on the properties of the rejected modified password, as follows:
* _All non-letters of one class (e.g., if \(DIG_{min}\geq 1\), \(R_{1start}=False\), Admissible Password: Mx7-c54@, Rejected Modified Password: 32729041)_. Here, the other restrictive parameters require a single non-letter class. We test a new modification of the admissible password with only that non-letter class and letters of one case, using lowercase by default (e.g., New Modified Password: a2729041). If this new password is accepted, \(R_{cmb23}=R_{cmb24}=True\) (and \(R_{cmb34}=False\)), otherwise only \(R_{cmb34}=True\).
* _All non-letters of both classes (e.g., if \(DIG_{min}\geq 1\), \(SPS_{min}\geq 1\), \(R_{lstart}=False\), Admissible Password: Mx7-c54@, Rejected Modified Password: 157-824@)_. Here, we can immediately infer that only \(R_{cmb34}=True\) as a two-class password was rejected.
* _All letters of one class/case (e.g., if \(UPP_{min}\geq 1\), Admissible Password: Mx7-c54@, Rejected Modified Password: MXZNCSZA)._ We test a new modified password with letters of both cases (e.g., New Modified Password: MxZNCSZA). If accepted, only \(R_{cmb24}=True\). If not, move to the following case.
* _All letters of both classes/cases (e.g., if \(UPP_{min}\geq 1\), Admissible Password: Mx7-c54@, Rejected Modified Password: MxZNCSZA)._ If both letter cases are required, we know \(R_{cmb23}=R_{cmb34}=True\). Otherwise, we test a new modified password with letters of only one case (whichever is required, defaulting to lowercase letters) and digits (e.g., New Modified Password: M32NCSZA). If accepted, only \(R_{cmb23}=True\), otherwise only \(R_{cmb34}=True\).
* _Contains one non-letter class and one letter-class (e.g., if \(UPP_{min}\geq 1\), \(DIG_{min}\geq 1\), Admissible Password: Mx7-c54@, Rejected Modified Password: MX71CS41)_. Here, we can immediately infer that only \(R_{cmb34}=True\) as the two-class password was rejected.
**Step 3. Length Parameters:** Having now determined the restrictive parameter values that constrain password structure, we can construct passwords of different lengths that satisfy the restrictive parameters (while implicitly satisfying all permissive parameters by avoiding associated characters and sequences). We can then determine the password length minimum and maximum by using binary search to test the acceptance of passwords of varying length (within the ranges \(L_{min}\in[0,32]\) and \(L_{max}\in[6,128]\)). For example, to evaluate \(L_{max}\), we first construct and test a password of length 67 (the halfway point of our range). If accepted, we recursively explore \(L_{max}\) within the upper half ([68, 128]); otherwise, we explore the lower half ([6, 66]), following the logic of binary search.
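A sketch of the length search, assuming a helper build_password(l) that constructs a policy-compliant password of length l and a try_signup oracle that reports whether the signup attempt succeeded:

```python
def infer_max_length(build_password, try_signup, lo=6, hi=128):
    """Binary search for L_max: the longest accepted password length in [lo, hi]."""
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2          # first probe is length 67 for [6, 128]
        if try_signup(build_password(mid)):
            best = mid                # accepted; search the upper half
            lo = mid + 1
        else:
            hi = mid - 1              # rejected; search the lower half
    return best

# L_min is inferred symmetrically, searching [0, 32] for the shortest accepted length.
```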
We detail our password construction algorithm in Appendix B. At a high-level, the restrictive parameters provide a set of required characters and positional constraints, and we satisfy these constraints first before adding additional characters to construct a
password of an evaluated length \(l\). We start constructing a password using characters required by the class minimums, then using characters of other not-yet-used classes to satisfy class combination requirements (adhering to \(R_{no\_a\_sps}\)). If \(R_{lstart}\) and/or \(R_{2word}\) are true, we satisfy these positional constraints at the start of the password, again first using allowed characters of classes required by the class minimums and combination requirements (and any remaining required characters are added after the positional constraints). At this point, our partially-constructed password is the shortest that satisfies all restrictive parameters. If its length already exceeds the evaluated length \(l\), we consider \(l\) an unacceptable length. Otherwise, we pad the password with arbitrary letters and digits to length \(l\) (e.g., if \(UPP_{min}=DIG_{min}=1\) and other restrictive parameters are false, Constructed Length-11 Password: M74k3jCbE43).
**Step 4. Permissive Parameters:** Next, we determine the permissive parameters (i.e., what is allowed in passwords). To do so, we inject the character(s) associated with a permissive parameter (e.g., emoji, dictionary word) into an admissible password, while still satisfying restrictive, length, and other permissive parameters, and test if the modified password is accepted. If so, then the permissive parameter is true, and the associated characters are permitted.
_1) Permitted Characters (\(P_{space}\), \(P_{unicd}\), \(P_{emoji}\), \(P_{spn1}\)-\(P_{spn4}\)):_ We first generate an admissible password of maximum length (described in Step 3). We then test a modified password where a non-essential character (i.e., one not used to satisfy a restrictive parameter) is replaced with the evaluated character (if not possible, then the parameter value is inherently false) (e.g., for \(P_{spn4}\), Generated Password: Mx7-a1p5b2, Modified Password: Mx7-a1p5b). For \(P_{space}\), we require that the whitespace character is not at the start or end of the password. This modified password remains adherent to restrictive, length, and other permissive parameters. If accepted, the permissive parameter value is true.
_2) Permitted Sequences (\(P_{rep}\), \(P_{seq}\), \(P_{dict}\), \(P_{id}\)):_ Here, we construct a password with the evaluated sequence and test for acceptance. For repeated characters (\(P_{rep}\)) the sequence is three repeating consecutive characters (\(e.g.,\) 111, aaa, or AAA), and for sequential characters (\(P_{seq}\)) it is abc, 123, or ABC. For both parameters, we select one as permitted by other policy parameters.
For dictionary words (\(P_{dict}\)), we identify the longest word (up to 8 characters) permitted in a password as constrained by other policy parameters, and test the inclusion of the most common English word (Kumar et al., 2018). For personal identifiers (\(P_{id}\)), the evaluated sequence is a subset of the username used during account creation. We choose our usernames to be a 3-letter name followed by 5 random digits, and the sequence is the 3-letter portion of the username (e.g., if the registered username is joe31426, we evaluate the acceptance of the sequence "joe" in the password).
We first construct the shortest password \(P\) that satisfies the restrictive requirements (as done in Step 3). If the evaluated sequence can be added to the end of \(P\) while remaining within \(L_{max}\), we simply test this augmented password, padding if necessary to reach \(L_{min}\) (e.g., to test for \(P_{seq}\) if \(L_{min}=6\), \(L_{max}=64\) and the shortest password satisfying restrictive parameters is: AQ16-@, Modified Password: AQ16-@abc). This augmentation does not affect restrictive parameters (nor length and other permissive constraints).
However, it is possible that appending the sequence to \(P\) does not fit within \(L_{max}\). In such cases, \(P\) must already be near \(L_{max}\)-length (as we only require appending 3 characters). Instead, we must construct the evaluated sequence using characters already existing in \(P\). We find the most common class \(C\) in \(P\) amongst lowercase letters, uppercase letters, and digits (for \(P_{dict}\) and \(P_{id}\), we only consider the two letter classes). We then rearrange the characters in \(P\) to cluster \(C\) characters together. If three (or more) \(C\) characters are consecutive, we replace them with the evaluated sequence. Otherwise, we add the \(C\) characters necessary to form a 3-\(C\)-character substring, again replacing this with the evaluated sequence. By using the most common class, we minimize the additional characters that may need to be added (e.g., to test for \(P_{seq}\) if \(L_{max}=8\) and the shortest password satisfying restrictive parameters is: AQ16-@, Modified Password: ABC16-@). If the password cannot be constructed within length \(L_{max}\), it is inherently false.
If restrictive parameters do not specify positional constraints, the rearrangement of \(P\)'s characters does not violate any restrictive parameters (nor length or other permissive parameters). If \(R_{lstart}\) or \(R_{2word}\) specify positional constraints, we handle each specifically. We ensure that the rearranged password starts with a letter if \(R_{lstart}=True\). If \(R_{2word}=True\), then \(P\) contains a two-word structure, which must have at least 3 characters of one letter class. We cluster three letters of this class as one of the 3-letter words, and replace it with the evaluated sequence.
_3) Long and Short Digit Passwords (\(P_{longd}\), \(P_{shortd}\)):_ We generate digit-only passwords of lengths \(L_{max}\) and \(L_{min}\), respectively (without sequences/repetition). Here, restrictive parameters are ignored to explore exceptions for all-digit passwords (e.g., if \(L_{min}=6\), Attempted Password: 147036).
_4) Breached Passwords (\(P_{br}\)):_ With other parameters determined, we test the highest-ranked breached password (Beck et al., 2018) satisfying them (e.g., if \(DIG_{min}\geq 1\), \(R_{lstart}=False\), \(L_{min}\geq 6\), all other restrictive parameters are false and permissive parameters are true, Attempted Password: 123456, the most popular password in (Beck et al., 2018) satisfying the policy parameters). If this password is accepted, \(P_{br}=True\), otherwise false.
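A small sketch of how the breached-password candidate could be chosen; the ranked password list and the satisfies_policy helper are illustrative placeholders for the leaked-password dataset and the inferred policy check.

```python
# Popularity-ranked common passwords (illustrative excerpt of a leaked-password list).
COMMON_PASSWORDS = ["123456", "password", "123456789", "qwerty", "abc123"]

def pick_breached_candidate(policy, satisfies_policy):
    """Return the highest-ranked leaked password that the inferred policy allows."""
    for pwd in COMMON_PASSWORDS:
        if satisfies_policy(pwd, policy):
            return pwd
    return None  # no common password fits the inferred policy; the test is skipped
```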
**Step 5. Sanity Check:** Given an inferred policy, we test one final password that should not succeed (e.g., too short, violates restrictive parameters), as a sanity check. A detected success indicates a policy inference error, which we can filter out. (We also filter out other errors, where all attempts are successes or failures, and those where trailing attempts all fail, as discussed later.)
**Algorithm Efficiency.** Our algorithm systematically evaluates a website's password policy in an efficient fashion that avoids brute-force guessing passwords. As we can pre-compute the safe sets for our full range of explored lengths, and all policy parameters have a limited range of values (including length, which is efficiently investigated through binary search), we can determine the bounds on the number of passwords tested, as well as the bounds on the number of successful passwords accepted by a website. Table 2 depicts these bounds for each step of our inference algorithm, as well as for the entire algorithm. In the worst-case, our method will create up to 37 accounts on a website, with at most 105 account signup attempts (in most cases, the number of attempts and accounts created is significantly lower). We note that we prioritized fewer accounts created, as the impact of a failed account signup attempt on a website is much lower. Also, there is precedence in the research community
for creating test accounts for measurement purposes; existing studies on password policies also created multiple accounts to evaluate policy parameters, but did so manually (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019; Chen et al., 2019).
**Algorithm Correctness.** Appendix B describes how each parameter is correctly evaluated in isolation. To further ensure correctness, we tested our inference algorithm on a thousand randomly-generated valid policies, observing only correct inferences.
### Measurement Implementation
We implement our measurement method using Selenium browser automation (Beng et al., 2015) with headless Chrome instances.4 To minimize the computational load we induce on websites, as well as avoid triggering anti-bot detection, we rate limit our crawling of a domain to at most one page load every 30 seconds, and at most one account signup attempt every 30 minutes. We also use a pool of 14 proxies, switching to a new proxy for each signup attempt to provide IP diversity. Given the rate limiting, we highly parallelize our analysis across sites, such that sites are assessed in a round-robin fashion.
Footnote 4: When crawling with a headless browser, websites may detect and block such a crawler. However, when debugging our method, we tested full browser instances and did not observe higher crawling success, likely because many sites either do not block crawlers or apply anti-bot techniques that are similarly effective on full browsers.
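A minimal sketch of the crawler setup with Selenium and headless Chrome, enforcing the per-domain page-load rate limit; the proxy address is a placeholder, and the real system additionally rotates across its proxy pool per signup attempt.

```python
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

class PoliteCrawler:
    """Headless-Chrome crawler that enforces a per-domain page-load rate limit."""

    def __init__(self, proxy="203.0.113.10:8080", min_interval=30):
        opts = Options()
        opts.add_argument("--headless")
        opts.add_argument(f"--proxy-server=http://{proxy}")
        self.driver = webdriver.Chrome(options=opts)
        self.min_interval = min_interval
        self.last_load = 0.0

    def get(self, url):
        # Wait so that at most one page load happens every min_interval seconds.
        wait = self.min_interval - (time.time() - self.last_load)
        if wait > 0:
            time.sleep(wait)
        self.driver.get(url)
        self.last_load = time.time()
        return self.driver.page_source
```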
### Limitations
Our measurement method is best-effort, relying on multiple heuristics. It can exhibit false negatives, missing some sites with account signups, such as those with complex workflows (e.g., multi-page forms), user verification (email or phone) prior to signup form submission, registration fees, or offline membership (details in Appendix C.1). Furthermore, our evaluation may fail on sites that can detect our measurements (e.g., sites deploying anti-bot defenses) or where our machine learning models misclassify. However, as our method follows a consistent workflow for account signup attempts, we can filter out errors where all attempts are detected as successful or failures, which is infeasible, as well as those where trailing attempts are all failures (as this is highly unlikely, as discussed in Section 4). Also, our final method step involves testing the inferred policy, further reducing the likelihood of false positives.
Our measurements also assume static policy parameters, rather than dynamic rules, such as if a site were to enforce password strength requirements. To evaluate whether password strength enforcement occurs at scale, we calculated the strength of all accepted passwords on successfully evaluated sites using password strength estimator zxcvbn (Zhu et al., 2019). We observe that for 94% of sites, the weakest accepted password was rated 2 or lower (out of 4), which is considered a relatively weak password (ranging from "too guessable" to "somewhat guessable"). Thus, it is unlikely that most sites are enforcing high password strength requirements.
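The strength check can be reproduced with the zxcvbn Python port, assuming the passwords accepted per site during inference are available; the example passwords below are illustrative.

```python
from zxcvbn import zxcvbn  # Python port of the zxcvbn strength estimator

def weakest_accepted_score(accepted_passwords):
    """Return the zxcvbn score (0-4) of the weakest password a site accepted."""
    return min(zxcvbn(pwd)["score"] for pwd in accepted_passwords)

# Hypothetical passwords accepted by one site during policy inference.
print(weakest_accepted_score(["Mx7-c54@", "147036", "MxT7zc54-@"]))
```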
Due to our method limitations, our evaluated sites may skew from domains with complex or unique workflows, as our analyzed domains use single-step account creation workflows, specific common keywords, and do not require verification or payment for signups. While our work does not comprehensively evaluate all sites (similar to all prior automated account creation works, including those investigating authentication (Han et al., 2016; Chen et al., 2017)), our dataset (discussed in Section 4) is still orders of magnitude larger and more diverse (including across rankings) than prior studies, serving as more generalizable empirical grounding. Furthermore, as detailed in Appendix C.3, we manually investigated the password policies of a random sample of domains that our method does not handle, and found that our study's core findings generalize to these domains.
### Alternative Measurement Approaches
While our automated account creation process is similar to prior work (Chen et al., 2017), our task involves distinct challenges (e.g., password policy inference), so we designed our method in a data-driven fashion from scratch. In comparison, while prior work applied rule-based heuristics for keyword selection, form detection, and verification, we applied machine learning techniques for such tasks. Our signup discovery process also uses search engine results to improve discovery. Our efforts resulted in effective account creation automation, even compared to prior work (see Appendix C.2).
We initially explored non-blackbox methods for assessing password policies, which could reduce website interactions. However, we manually evaluated a random sample of 200 signup websites and identified significant limitations.
**Mining Textual Policy Descriptions:** Only 25% of sampled sites provided policy descriptions (prior work observed 22% (Beng et al., 2015), as well as inconsistencies between policies and their descriptions (Kang et al., 2017)). Such descriptions are also diverse, often displayed only upon user action, and require natural language processing, yet often still do not describe all policy parameters (e.g., password blocklisting).
**Inspecting Client-Side Policy Checks:** Only 10% of sampled sites had client-side JavaScript password policy checks, which were custom implemented per site, inhibiting automated analysis.
**Analyzing Strength Meters:** Only 11% of sampled sites displayed password meters (recent work found only 19% on top English sites (Kang et al., 2017)). Prior work has also observed widespread custom meter designs (Zhu et al., 2019), inhibiting automated analysis. Furthermore, sites typically use password meters as nudges instead of enforcing strength requirements (Chen et al., 2017; Chen et al., 2019), and various policy facets (e.g., blacklisting, allowed characters) may not be factored into strength meters.
**Using Password Resets:** One might assess password policies through password reset workflows. However, we did not log into accounts to avoid account activity (as discussed in Section 3.10). Furthermore, many sites prevent choosing a new password similar to previous ones, which would interfere with policy inference. Finally, sites exhibit diverse password recovery workflows, often requiring user verification, complicating automated analysis.
\begin{table}
\begin{tabular}{l c c}
\hline \hline
Algorithm Step & \# Attempts & \# Successes \\
\hline
Step 1: Admissible password & [1, 65] & [1, 1] \\
Step 2: Restrictive parameters & [4, 13] & [0, 9] \\
Step 3: Length parameters & [11, 12] & [0, 12] \\
Step 4: Permissive parameters & [2, 14] & [0, 14] \\
Step 5: Sanity check & [1, 1] & [0, 1] \\
\hline
Total: Whole algorithm & [19, 105] & [1, 37] \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Bounds on the number of account signup attempts and successes required by our method, per domain.
### Ethics
As our study involves evaluating a large number of websites, there are several important ethical considerations. It is impractical to obtain consent from all sites. Furthermore, obtaining consent could negatively impact the scientific validity of our study, as websites may opt-out in a biased manner, may change their policies in light of our investigation, or may specifically block our measurements. Thus, we do not seek consent from the studied sites, and must carefully design our measurement methods. We extensively explored various measurement methods (as detailed in Section 3.9). Here, we discuss the concerns with our resulting approach, the potential harm associated with our study, and our mitigations.
To assess the password policies on websites, we attempt multiple account signups in an automated fashion, succeeding for some attempts. Prior studies have performed similar automated account creation (Han et al., 2017; Han et al., 2018), and we draw inspiration from their ethical considerations in designing our method. The potential harm that this activity causes for websites includes the computational resources incurred by the website in processing our signup attempts and created accounts. To limit the resources that websites must expend due to our study, we constructed our password inference algorithm to reduce the number of attempts and successful accounts registered. For successfully created accounts, we never access, verify, or use those accounts. We also crawl websites and attempt account signups in a heavily rate-limited fashion, ensuring that a website receives at most one attempt every half hour (and in most cases, attempts occur even less frequently). We believe that for websites supporting account registrations, this rate of signup attempts and the number of accounts created requires a limited amount of storage and load on websites, and should not tax even small websites. Furthermore, there is precedence in the research community for creating small numbers of test accounts for measurement purposes; existing studies on password policies also created test accounts to evaluate policy parameters, but did so manually (Beng et al., 2015; Han et al., 2018; Han et al., 2018; Han et al., 2018; Han et al., 2018; Han et al., 2018; Han et al., 2018) (e.g., Seitz et al. (2018) created up to 15 accounts per site). As part of our account creation method, we solve CAPTCHAs using an automated CAPTCHA solver. We avoid human-driven CAPTCHA solvers due to ethical issues identified with such services (Stein et al., 2018).
From the legal perspective, we consulted our organization's general counsel, as our methods may be contrary to some websites' policies and terms of services, which we are unable to explicitly check for all sites in our study. General counsel reviewed this study and determined that the legal risk is minimal, with support from judicial precedence, and that there lacked damages incurred by websites. Our organization's administration also reviewed and approved this study. Finally, there are no human subjects concerns with this study (as such, we were not reviewed by our organization's Institutional Review Board). No real user data was used for this study, and our study did not interact with any individuals.
## 4. Results
Here, we apply our measurement method to evaluate the password policies of websites in the Tranco Top 1M. We analyze the top password policies, the values of the various policy parameters, adherence to modern guidelines, and differences across rankings.
### Aggregate Measurement Results
We conducted our large-scale measurement in Dec. 2021, evaluating password policies across Tranco Top 1M (Dec. 13). Appendix Figure 4 visualizes the site population at each method stage.
Out of the 1M domains, we find signup pages on 141K domains (14.12%). While we could successfully submit one signup attempt (including CAPTCHA solving) on 59K domains, we were able to fully evaluate (across multiple attempts) 26K domains. Finally, we filter out domains where all signup attempts are reported as successes or failures (as this is not feasible, especially with our sanity check signup attempt), or where all trailing attempts are failures (we test permissive parameters last, and as discussed shortly, it is highly unlikely that any site truly does not permit all tested characters/structures). This filtering leaves us with 20,119 domains for which we successfully analyze password policies. We manually validated our results are accurate on a random sample of 100 evaluated sites. We note that this population is two orders of magnitude larger than prior work (as discussed in Section 2), providing large-scale data on password policies for the first time.
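The final filtering step can be summarized by the short Python sketch below; the per-domain outcome lists and the number of trailing permissive-parameter attempts are illustrative assumptions rather than details of our pipeline.

```python
# Minimal sketch of the domain-filtering step, assuming each domain maps to the
# ordered list of signup-attempt outcomes (True = reported success). The value
# of num_trailing_permissive is an assumption for illustration.

def keep_domain(outcomes, num_trailing_permissive=3):
    """Return True if a domain's results are consistent enough to analyze."""
    if not outcomes:
        return False
    # Drop domains where every attempt succeeded or every attempt failed: with
    # the sanity-check attempt included, neither outcome is plausible.
    if all(outcomes) or not any(outcomes):
        return False
    # Drop domains whose trailing (permissive-parameter) attempts all failed,
    # since truly rejecting all tested characters/structures is highly unlikely.
    if not any(outcomes[-num_trailing_permissive:]):
        return False
    return True

results = {
    "example-a.test": [True, False, True, True, True],
    "example-b.test": [True, True, True, True, True],     # all successes -> dropped
    "example-c.test": [True, True, False, False, False],  # trailing failures -> dropped
}
analyzable = [d for d, o in results.items() if keep_domain(o)]
print(analyzable)  # ['example-a.test']
```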
Our analyzed sites are also broadly distributed across rankings (unlike prior work's focus on top sites), with a slight skew towards lower-ranked sites, as shown in Appendix Figure 3. Across each 100K ranking interval, our final dataset contains between 1.4K-3.7K sites (and between 12.1K-19.2K signup sites found). In the subsequent discussion of our results, we separately consider our evaluated sites that are within the top 10K, 100K, and 1M (full dataset). Here, our results for Top X sites represent only the domains that we evaluated within the Top X ranking, rather than all Top X sites (as we did not evaluate all sites).
### Top Policies
To start, we group websites with identical password policy configurations (across all policy parameters), and consider the top password policies observed among our websites. Table 3 lists the top 15 policies observed across our 20K websites (spanning the
\begin{table}
\begin{tabular}{|c|c|c|} \hline Rank & Policy & \% \\ \hline
1 & \(L_{min}=1\) & 8.3 \\
2 & \(L_{min}=6\) & 7.1 \\
3 & \(L_{min}=5\), \(L_{max}=40\) & 4.1 \\
4 & \(L_{min}=8\) & 3.4 \\
5 & \(L_{min}=5\) & 2.9 \\
6 & \(L_{min}=12\) & 2.8 \\
7 & \(L_{min}=4\) & 1.2 \\
8 & \(L_{min}=8\), \(R_{cmb34}=T\) & 0.8 \\
9 & \(L_{min}=8\), \(L_{max}=72\) & 0.8 \\
10 & \(L_{min}=7\) & 0.7 \\
11 & \(L_{min}=4\), \(L_{max}=40\) & 0.5 \\
12 & \(L_{min}=8\), \(I_{long}=F\), \(P_{short}=F\) & 0.5 \\
13 & \(L_{min}=8\), \(LOW_{min}=UPP_{min}=DIG_{min}=1\) & 0.4 \\
14 & \(L_{min}=4\), \(L_{max}=20\) & 0.3 \\
15 & \(L_{min}=6\), \(L_{max}=100\), \(P_{emoj}=F\) & 0.3 \\ \hline \end{tabular}
\end{table}
Table 3. Top 15 password policies for all evaluated sites. For each policy, unless specified otherwise, \(L_{max}=128\), minimum required characters of a class is 0, restrictive parameters are false, and permissive parameters are true.
Tranco top 1M sites), and the percent of sites using those policies. Among the top policies, the majority (11 of 15) are simple policies, only constraining the password length without further restrictions. Surprisingly, the most popular policy (8.3% of sites) allowed passwords of any length without any constraints. Such a policy allows even single character passwords (we manually verify this behavior on a sample of sites), which are extremely weak passwords. Other top policies allow short passwords (e.g., 4, 5, and 6 characters). In addition, 5 top policies also cap the password's length (including one that limits passwords to only 20 characters). Other password constraints are less prominent in top policies, with only 4 of the top 15 policies applying any non-length constraints.
We find that policy popularity among sites exhibits a long-tail distribution. While the most popular policy was seen on 8.3% of sites, the top 10 policies cover only 32.1% of sites, with a total of 11,184 distinct policy configurations. Most policies appear on only one site, which highlights enormous diversity in the policies deployed (with implications for guidelines, password usability, and password managers, as will be discussed in Section 6).
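The grouping underlying Table 3 and the long-tail observation can be reproduced with a few lines of Python; the policy dictionaries and parameter names below are placeholders for the inferred configurations, not measurement data.

```python
# Sketch of the policy-grouping analysis: sites with identical configurations
# are grouped, and the coverage of the most common policies is computed.
from collections import Counter

policies = {                                # placeholder inferred policies
    "site1.test": {"L_min": 1, "L_max": 128},
    "site2.test": {"L_min": 6, "L_max": 128},
    "site3.test": {"L_min": 1, "L_max": 128},
    "site4.test": {"L_min": 8, "L_max": 72, "DIG_min": 1},
}

# group identical configurations via a hashable canonical form
counts = Counter(tuple(sorted(p.items())) for p in policies.values())

total = len(policies)
top = counts.most_common(10)
coverage = sum(c for _, c in top) / total
print(f"{len(counts)} distinct policies; top 10 cover {coverage:.1%} of sites")
for config, c in top:
    print(dict(config), f"{c / total:.1%}")
```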
### Policy Parameters Values
Here, we evaluate individual password policy parameters. As the top 15 policies (Section 4.2) capture only a third of our sites, their parameters do not necessarily reflect an aggregate perspective.
#### 4.3.1. Length
Figure 2(a) plots the CDF of the minimum password lengths enforced by password policies across our websites (Top 1M). As also seen with top policies, we find that a non-trivial fraction of sites (\(\sim\)12%) allow single-character passwords. The most prevalent minimum length is 5, seen at nearly 40% of sites. Only 25% of sites require passwords of length 8 or longer, as recommended by most modern guidelines (11; 20; 21; 34; 41), and \(\sim\)10% require lengths of 10 or more.
Figure 2(b) similarly depicts the CDF of the maximum password lengths allowed by our websites. We observe that 36% of sites do not cap the password length (or allow at least 128 characters). The most common cap was 40 characters, observed at about 10% of sites. For other sites, the maximum length widely varied, although we notice prevalent use of lengths 20, 72, and 100. Overall, nearly 60% of sites allowed passwords of at least 64 characters, as recommended by many current guidelines (11; 21; 41). We also find that a small portion of sites (1.7%) do not allow passwords longer than 10 characters, which is shorter than some sites' _minimum_ lengths.
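The length statistics above are simple aggregations over the per-site policies; a small sketch of the computation (with synthetic values) is given below.

```python
# Sketch of the minimum-length statistics; the L_min values are synthetic
# placeholders, not measurement data.
import numpy as np

min_lengths = np.array([1, 4, 5, 5, 5, 6, 8, 8, 12])

print(f"{np.mean(min_lengths >= 8):.0%} of sites require length 8 or longer")

# empirical CDF: fraction of sites whose minimum length is <= x
xs = np.sort(min_lengths)
cdf = np.arange(1, len(xs) + 1) / len(xs)
for x, p in zip(xs, cdf):
    print(x, round(p, 2))
```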
**Case Study:**\(L_{min}=1\). We manually investigated 475 detected sites and verified the correctness of our measurements. Through analyzing the Javascript libraries and links embedded on these sites, we identified that the common pattern exhibited was simply accepting any non-empty password field, without applying password length logic. Interestingly, while this logic was customized for the majority of sites, we observed the prevalence of several web frameworks across these sites that we manually confirmed do not support password length constraints by default, such as WooCommerce (19% of such sites) and XenForo (1%).
**Case Study:**\(L_{min}=5\). We investigated the most common minimum length of 5 (38% of sites). Manually investigating a sample of 500 domains, we found 85% using the Shopify platform. We confirmed with Shopify customer support that their default length minimum was 5, indicating the influence a platform can have.
#### 4.3.2. Restrictive Parameters
In Table 4, we display the percent of sites requiring a minimum number of class characters, for each character class. We see that the vast majority of sites (82-86%) do not enforce such requirements, with special characters being least likely to be required and digits being most likely. Of the remaining sites that do, approximately half require one character of a class, while another half require two (or more). We note that higher numbers of required characters of a class increase the complexity in creating passwords, which prior research has demonstrated can ultimately
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
 & & \multicolumn{1}{c|}{**Lower**} & **Upper** & **Digit** & **Special** \\ \hline
\multirow{3}{*}{0} & 10K & 78.4 & 81.7 & 71.2 & 80.3 \\ \cline{2-6}
 & 100K & 79.2 & 79.4 & 76.3 & 82.5 \\ \cline{2-6}
 & 1M & 84.1 & 83.7 & 82.0 & 86.3 \\ \hline
\multirow{3}{*}{1} & 10K & 10.6 & 10.1 & 20.7 & 14.9 \\ \cline{2-6}
 & 100K & 11.6 & 10.9 & 14.5 & 9.8 \\ \cline{2-6}
 & 1M & 8.3 & 8.7 & 10.0 & 7.0 \\ \hline
\multirow{3}{*}{2} & 10K & 11.1 & 8.2 & 9.7 & 9.2 \\ \cline{2-6}
 & 100K & 9.2 & 9.7 & 9.2 & 7.7 \\ \cline{2-6}
 & 1M & 7.5 & 7.6 & 8.0 & 6.8 \\ \hline
\end{tabular}
\end{table}
Table 4. For different character classes, we list the percent of sites in the Tranco Top 10K, 100K, and 1M (full dataset) that require a certain number of characters of that class.
Figure 2. CDFs of password minimum and maximum length requirements, for all sites in our dataset (Top 1M) as well as those ranked in the top 10K and 100K.
diminish the security and usability of passwords (Kumar et al., 2017), and is no longer recommended by many guidelines (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017).
Similarly, Table 5 lists the prevalence of the remaining restrictive requirements. Derived from these results, we observed a similar prevalence of character class combinations (15% of distinct sites have at least one required combination, considering all combination possibilities) as with character class minimums (with 11% of sites using both character class minimums and class combination requirements). Furthermore, as seen in Table 5, we note that a non-trivial portion of sites (2.4%) require word structure in passwords, while 2.9% of sites require passwords to begin with a letter. Thus, many sites are not as permissive as recommended (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017).
**Case Study: Required Word Structure and Letter Start.** We manually investigated 100 domains requiring a two-word structure as well as domains enforcing letter start, confirming our inference. We did not identify common platforms or frameworks, but many sites used form validation JS libraries (e.g., jQuery Validation, FormCheck.js, Knockout Validation) to enforce a password regex.
#### 4.3.3. Permissive Parameters
Finally, we evaluate the prevalence of permissive parameter values for our sites, as shown in Table 5. Two widely recommended password policies (Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017; Kumar et al., 2017) are disallowing users to choose dictionary words and common breached passwords. We observe limited deployment of such password blocking though, as 72% of sites permit dictionary words as passwords and 88% allow breached passwords. Certain password structures are also often discouraged (Kumar et al., 2017), however we detect limited prevention of these patterns as well. Approximately 71% of sites permit sequences, repeating characters, and personal identifiers (e.g., username) in passwords, and 78% allow all-digit passwords. Recent password guidelines (Kumar et al., 2017; Kumar et al., 2017) also recommend allowing various types of characters. We observe over 30% of sites do not support spaces, Unicode, or emojis in passwords, and about 30% disallow one of the four most popular special characters (".", "!", "_", and "#").
**Case Study: Accepting Popular Passwords.** We assess whether sites accept popular passwords using the top four passwords in a password breach dataset (Kumar et al., 2017). We list these passwords and their acceptance by sites across ranking ranges in Table 6: 39% of sites accepted the top password and nearly half accepted one of the top four passwords. These sites may be vulnerable to password spraying attacks (Kumar et al., 2017; Kumar et al., 2017) as their policies permit users to choose popular passwords. We note that most restrictive parameters and password blocklisting would disallow such passwords.
### Adherence to Standards and Guidelines
Over time, various organizations have released password policy guidelines. Here, we assess the extent to which sites adhere to these guidelines. In Table 7, we list 9 prominent guidelines in order of publication year, including different security levels offered by some. Appendix Table 9 summarizes these recommendations. While we can determine if a site's policy adheres to a standard, we do not know if the site's owners explicitly chose to follow the standard.
We observe that NIST's 2004 guidelines have been most widely adopted, with 42.1% of sites adhering. Meanwhile, 30.8% of sites' policies satisfy NIST 2017's guidelines, although 16.7% of sites exhibit policies that follow NIST's old 1985 recommendation. These results indicate the staying power of recommendations, as old NIST guidelines are still observed on many sites, even more than 5 years after updated guidelines were released. Similarly, fewer websites adhere to the latest guidelines of Germany's BSI compared to older ones.
Across NIST and DISA guidelines, we also observe that stronger security levels are significantly less adopted. For example, only 5.5% of sites have policies satisfying NIST 2004 Level 2, compared to 42.1% for Level 1. We also see low adoption of stricter password guidelines, such as those of US CERT, NCSC, and OWASP. Notably, these guidelines and higher security levels generally required stricter length requirements (particularly \(L_{min}=8\)), and checks against dictionary words and breached passwords. This suggests incentives to adopt stronger policies are ineffective and the costs of deploying these strong policy parameters are non-trivial.
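An adherence check reduces to comparing each inferred policy against a guideline's parameter thresholds. The sketch below illustrates this for NIST 2017's recommendations; the thresholds encoded here (minimum length 8, allowing at least 64 characters, no composition requirements, blocking breached passwords) are our reading of that guideline for illustration, and the study's exact criteria are those summarized in the appendix.

```python
# Hedged sketch of a guideline-adherence check; parameter names follow the
# policy notation used in this paper, and the thresholds are assumptions.

def satisfies_nist2017(policy):
    no_composition = all(policy.get(k, 0) == 0
                         for k in ("LOW_min", "UPP_min", "DIG_min", "SPC_min"))
    return (policy.get("L_min", 0) >= 8
            and policy.get("L_max", 128) >= 64
            and no_composition
            and not policy.get("P_breach", True))   # breached passwords blocked

example = {"L_min": 8, "L_max": 128, "P_breach": False}
print(satisfies_nist2017(example))   # True
```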
### Variation by Website Rankings
Here we consider how password policies differ across websites ranked within the Tranco Top 10K, 100K, and 1M.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Password** & **Rank** & **10K** & **100K** & **1M** \\ \hline
123456 & 1 & 21 & 39 & 39 \\ \hline
123456789 & 2 & 26 & 46 & 40 \\ \hline
qwerty & 3 & 22 & 38 & 42 \\ \hline
password & 4 & 27 & 49 & 48 \\ \hline
 & **Top 4** & 27 & 51 & 53 \\ \hline
\end{tabular}
\end{table}
Table 6. Percentages of signup sites accepting the top four most popular passwords (based on a breach dataset (Kumar et al., 2017)).
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & **10K** & **100K** & **1M** \\ \hline
**Restrictive Parameters** & & & \\ \hline
Requires 2 Words & 4.8 & 3.9 & 2.4 \\ \hline
Requires No Arbitrary Special & 4.3 & 3.0 & 1.8 \\ \hline
Any 3 of 4 Classes & 6.7 & 6.4 & 7.4 \\ \hline
Any 2 of 4 Classes & 14.9 & 11.0 & 9.1 \\ \hline
Any 2 of 3 General Classes & 10.1 & 10.0 & 9.3 \\ \hline
Starting With a Letter & 1.7 & 2.0 & 2.9 \\ \hline
**Permissive Parameters** & & & \\ \hline
Dictionary Words & 83.7 & 80.1 & 72.0 \\ \hline
Sequential Characters & 84.1 & 79.1 & 71.7 \\ \hline
Repeated Characters & 82.2 & 79.8 & 71.1 \\ \hline
Short Digit-only & 38.9 & 66.2 & 78.0 \\ \hline
Long Digit-Only & 57.2 & 69.4 & 78.2 \\ \hline
Personal Identifier & 84.6 & 78.9 & 71.4 \\ \hline
Space & 75.5 & 73.3 & 69.0 \\ \hline
Unicode & 69.7 & 71.3 & 67.7 \\ \hline
Emoji & 59.6 & 65.8 & 64.4 \\ \hline
Breach Password & 84.1 & 84.8 & 88.2 \\ \hline
1st Popular Special = . & 82.7 & 78.5 & 70.0 \\ \hline
2nd Popular Special = ! & 83.7 & 77.7 & 69.6 \\ \hline
3rd Popular Special = \_ & 84.1 & 78.3 & 69.7 \\ \hline
4th Popular Special = \# & 82.2 & 76.4 & 69.4 \\ \hline
\end{tabular}
\end{table}
Table 5. Policy parameter values for all sites within the Tranco Top 10K, 100K, and Top 1M (full population). For both restrictive and permissive parameters, we list the percent of sites where the parameter value is _True_.
**Length.** Figure 2 shows the CDFs of minimum and maximum password lengths, respectively, for all three groups. We observe that in all graphs, the CDFs for top-ranked sites skew towards longer lengths, which is recommended for stronger passwords. The median minimum password length for top 10K sites is 8 characters, compared to 5 and 6 characters for the top 100K and all sites, respectively. Similarly, while about 40% of all sites allow long passwords that are at least 128 characters, 50% and 55% of top 100K and top 10K sites do, respectively (although a higher portion of top-ranked sites cap passwords at 20 or fewer characters than among all sites).
**Restrictive and Permissive Parameters.** Table 5 depicts the parameter values for all three ranking ranges, showing the percent of sites within each population where a parameter value is true. We observe that overall, top sites are more likely to enforce restrictions on the password (e.g., \(R_{cmb24}\) is true for 15% of top 10K sites, compared to 9% of all sites). Top sites are generally more permissive in which special characters they accept, including periods, exclamation marks, underscores, pound signs, spaces, and Unicode characters (although slightly fewer top sites accept emojis compared to all sites). Surprisingly, top sites are also more permissive of oft-discouraged password patterns, including dictionary words, sequential and repeated characters, and the inclusion of personal identifiers. However, top sites are significantly less likely to accept all-digit passwords, accepted by only 39-57% of top 10K sites compared to 78% of all sites. Top sites are also slightly less likely to allow breached passwords compared to all websites (84% of the top 10K versus 88% for all). Overall, top sites apply more password composition requirements but also permit more characters/structures (except all-digit passwords).
**Adherence to Guidelines.** Table 7 lists the adherence to common guidelines across ranking ranges. We observe that across all guidelines, higher-ranked sites generally exhibit higher adherence, suggesting that they are more likely to follow recommendations. However, the most recent guidelines are still only adopted by a minority of sites across all three ranking ranges (see Section 4.4).
## 5. Comparison with Prior Findings
Prior works on assessing website password policies are small-scale and largely dated (Beng et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) (see Section 2). Here, we compare our results with prior findings, to understand how policies may have changed over time, and the insights afforded by a large-scale perspective.
**Top Policies and Parameter Values.** Prior work assessed policy parameter values, rather than top policies, likely due to small sample sizes. In comparison, our large-scale study identified the top policies, most of which enforced only length constraints, as well as a long tail of policies which are mostly unique to a site.
_Length:_ A recent 2022 analysis of 120 top English sites observed that a minimum length of 8 was most frequently enforced, followed by lengths 6 and 5 (Chen et al., 2018). We observe the same for our top 10K sites, with 40% of sites requiring length 8 passwords, 30% requiring length 6, and 7% requiring length 5. However, when considering the top 1M sites, length 5 was the most prevalent, on nearly 40% of sites. Meanwhile, length 6 and length 8 passwords were required by approximately 15% of sites each. Further, (Beng et al., 2018; Chen et al., 2018) observed few sites without length requirements, but at scale, we observed this policy at nearly a quarter of the sites. Thus, our large-scale measurement identified shorter password length minimums on most sites than reported by recent studies focused on top sites.
Prior work observed widespread use of length caps (note, (Chen et al., 2018) did not investigate length maximums). Seitz et al. (Seitz et al., 2018) observed an average max length of 43 characters, and Wang et al. (Wang et al., 2018) did not observe any max lengths greater than 64. In contrast, we observe over a third of all sites allowing 128+ character passwords, with a median length cap of 86 (with even fewer sites using length caps among top-ranked sites). As these prior studies are over a half decade ago and of limited scale, it seems likely that sites today have broadly shifted towards accepting longer passwords.
_Restrictive and Permissive Parameters_: Few works systematically characterized restrictive and permissive parameters, with most highlighting case studies rather than comprehensive analysis. However, prior work (Beng et al., 2018; Chen et al., 2018; Chen et al., 2018) observed between 30-50% of sites enforced several restrictive parameters. We observe a smaller fraction, with only 1.8-9.3% of sites employing any given restrictive parameter, although top-ranked sites employed restrictive parameters more. Thus when considering websites at scale, restrictive parameters are less prevalent overall. Earlier work from 2010 (Beng et al., 2018) also found few sites performing dictionary checks. However, we observed a modest rate today, at 28% ((Chen et al., 2018) observed 41% on top English sites).
**Adherence to Standards and Guidelines.** Prior work mostly predates modern password guidelines (Beng et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018) (e.g., NIST 2017, BSI 2019), and did not provide comprehensive comparisons of password policies against the standards prevalent at the time of each study's publication.
**Variation by Website Ranking.** Prior work (Chen et al., 2018; Chen et al., 2018) looked at several US university websites, and found that top-ranked sites had weaker policies than lower-ranked ones, although policies were evaluated using an entropy metric with notable limitations (Chen et al., 2018; Chen et al., 2018). In contrast, our site population is orders of magnitude larger and has substantially broader ranking coverage, and we observe stronger policy characteristics for top sites (e.g., longer length requirements, broader adherence to modern recommendations).
\begin{table}
\begin{tabular}{|c|c c c|} \hline
**Standard Name** & **1M** & **100K** & **10K** \\ \hline
NIST 1985 (Low) & 16.7 & 22.1 & 27.4 \\ \hline
NIST 1985 (Med) & 7.6 & 12.8 & 15.4 \\ \hline
NIST 1985 (High) & 3.8 & 7.4 & 9.1 \\ \hline
NIST 2004 (Lvl 1) & 42.1 & 65.3 & 77.9 \\ \hline
NIST 2004 (Lvl 2) & 5.5 & 7.5 & 6.7 \\ \hline
BSI 2005 & 4.8 & 6.7 & 6.7 \\ \hline
US CERT 2009 & 0.3 & 0.4 & 0.5 \\ \hline
DISA 2014 (Med) & 4.7 & 8.8 & 10.1 \\ \hline
DISA 2014 (High) & 0.1 & 0.1 & 0.5 \\ \hline
NIST 2017 (Should) & 30.8 & 40.8 & 34.6 \\ \hline
NIST 2017 (Shall) & 1.8 & 3.16 & 3.9 \\ \hline
NCSC 2018 & 0.7 & 1.2 & 2.4 \\ \hline
BSI 2019 & 14.6 & 22.3 & 32.7 \\ \hline
BSI 2020 & 5.9 & 7.5 & 7.2 \\ \hline
OWASP & 1.3 & 2.9 & 4.3 \\ \hline
\end{tabular}
\end{table}
Table 7. Percent of sites satisfying different guidelines, across the Tranco Top 10K, 100K, and 1M (full population).
## 6. Concluding Discussion
In this study, we conducted the largest evaluation of website password creation policies to date, assessing over 20K sites (\(\sim\)135x more sites than prior work). Our results revealed the state of modern web authentication, and identified insecure policies deployed (especially outside of the top sites). Of note, we observed that 75% of sites allow shorter passwords than the recommended 8 characters [(11; 20; 21; 34; 41)] (with 12% allowing single-character passwords) and 40% cap password lengths below the 64 characters recommendation [(11; 21; 41)]. Meanwhile, 15% of sites enforce character constraints, which is no longer recommended [(21; 34; 41)]. Only 12%-28% of sites employ password blocking, as widely advocated [(11; 20; 21; 34; 41)]. Finally, a third of sites did not support certain password characters as suggested [(21; 41)], including whitespaces needed for passphrases. Ultimately, only a minority of sites adhered to modern guidelines overall. Here, we synthesize our findings into lessons for moving web authentication forward.
**Improving Software Defaults and Implementation Support**. Our case studies in Section 4.3 identified that insecure password policy decisions were closely aligned with the default configurations of popular web software (such as WooCommerce and Shopify). These findings demonstrate the influence of software defaults on web authentication, but also illuminate a potential remediation path: if popular web software implemented recommended password policy configurations by default, many websites could be moved to stronger password policies. For example, _nearly half_ of our sites with password length minimums below the recommended 8 characters [(11; 20; 34; 41; 12)] use the Shopify platform and its default 5-character minimum. Thus, if Shopify increased its default minimum length to 8 characters, potentially a third of our sites would become newly aligned with modern guidelines. We are currently in the process of communicating with platforms identified as offering weak default configurations to encourage such changes.
Related to defaults is the feature support provided by popular web software. We observed in Section 4.3.3 that only a minority of sites blocked passwords with certain characteristics, which is widely recommended [(11; 20; 21; 34; 41)]. We hypothesize that this arises partly because many popular web platforms do not provide full support for such blocking, so web developers would need to implement such functionality themselves. For example, both Python's Django library 5 and the WordPress CMS 6 by default do not support all password checks. By implementing such features (and enabling them by default) for popular web frameworks (many of which are open-source), our community can meaningfully improve web authentication.
Footnote 5: [https://docs.djangorproject.com/en/4.2/topics/auth/passwords/](https://docs.djangorproject.com/en/4.2/topics/auth/passwords/)
Footnote 6: [https://www.wpbgeminer.com/jplugins/how-to-force-strong-password-on-users-in-wordpress](https://www.wpbgeminer.com/jplugins/how-to-force-strong-password-on-users-in-wordpress)
**Promoting Modern Password Guideline Adoption**. Our analysis in Section 4.4 revealed that many sites exhibit policies satisfying password guidelines, but primarily more dated versions. This result provides evidence that password guidelines do generally inform the policy decisions of many websites. However, there must be barriers inhibiting the adoption of more recent recommendations.
A lack of awareness may be one barrier. Here, education and outreach efforts can help inform websites about current guidelines. Prior work on web administrator notifications [(27; 28; 49; 50)] demonstrated that such outreach efforts can drive the remediation of security issues at scale. Future work can also investigate the resources available about web authentication, and identify information sources that should be updated with current recommendations.
In addition, in Section 4.4, we saw different guidelines from various organizations, with sometimes conflicting recommendations. For example, NIST 2017 [(21)] and OWASP [(41)] guidelines avoid password complexity requirements, unlike BSI 2020 [(20)]. A unified password guideline would provide more consistent and clear recommendations to web administrators around the world. We also uncovered that some guidelines (e.g., OWASP, NCSC 2018) are rarely adopted, suggesting that these guidelines are overly strict or lack visibility and incentives to drive adoption.
Even if adopting a new policy, a remaining challenge is the policy update process. How should websites handle passwords created under the old policy? If old passwords are left as is, the new policy's benefits are not realized. Meanwhile, forced password resets are often onerous to users (as seen with the password resets during data breaches). Future work should investigate effective processes for upgrading password creation policies, and integrate them into existing web software. Organizations releasing password guidelines also must be cognizant of the high burden imposed upon websites when adopting new policies, and guidelines must be released with care (e.g., BSI released two guidelines only one year apart [(19; 20)]).
**Standardizing Password Creation Policies to Promote Usability**. In Section 4.2, we observed that websites exhibit wildly diverse policies, with many policies unique to one site. This heterogeneity is likely a usability burden during password creation, where users do not know what constraints are enforced on chosen passwords across different sites. This is especially true as we found that few sites explicitly document their password policies (from Section 3.9). Standardizing password policies would significantly reduce this user friction, providing a unified policy across the web.
Such standardization would benefit password managers as well, as many password managers assist users by automatically generating random and strong passwords. To do so correctly, they must generate a password valid under a site's policy, which is inhibited by the diversity of real-world site policies. For example, some sites disallow long passwords or require certain character compositions (from Section 4), which may not be satisfied by a password manager's randomly generated password. We note that even with the absence of standardization, our results help inform password managers of the common policy constraints enforced by most sites. For example, we found that passwords of length 12-16 are the most likely to be accepted, permitted by 96-98% of sites. Our measurement dataset can also be inputted directly to password managers to provide the specific constraints on the sites that we analyzed.
**Future Research Directions.** Our study highlights avenues for future investigation. One direction is in improving upon our measurement techniques. While our collected dataset is significantly larger than those of prior work [(8; 14; 16; 17; 25; 26; 29; 30; 39; 44; 54)], we still successfully analyzed only a minority of sites with account signups. Expanding measurement coverage would allow for more generalizable findings and more extensive analysis of authentication policies across different site characteristics. Similarly, longitudinal measurements could afford insights into policy evolution. Future work could also investigate which website characteristics
correlate with secure and usable password policies, such as website categories, geographic regions, and languages.
## 7. Acknowledgements
We thank the anonymous reviewers for their constructive feedback. The first author was supported by the Kuwait University Scholarship. This work was also supported in part by the National Science Foundation award CNS-20555549. The opinions expressed in this paper do not necessarily reflect those of the research sponsors.
|
2309.12289 | Real-Time Capable Decision Making for Autonomous Driving Using Reachable Sets | Despite large advances in recent years, real-time capable motion planning for autonomous road vehicles remains a huge challenge. In this work, we present a decision module that is based on set-based reachability analysis: First, we identify all possible driving corridors by computing the reachable set for the longitudinal position of the vehicle along the lanelets of the road network, where lane changes are modeled as discrete events. Next, we select the best driving corridor based on a cost function that penalizes lane changes and deviations from a desired velocity profile. Finally, we generate a reference trajectory inside the selected driving corridor, which can be used to guide or warm start low-level trajectory planners. For the numerical evaluation we combine our decision module with a motion-primitive-based and an optimization-based planner and evaluate the performance on 2000 challenging CommonRoad traffic scenarios as well as in the realistic CARLA simulator. The results demonstrate that our decision module is real-time capable and yields significant speed-ups compared to executing a motion planner standalone without a decision module. | Niklas Kochdumper, Stanley Bak | 2023-09-21T17:52:20Z | http://arxiv.org/abs/2309.12289v1 | # Real-Time Capable Decision Making for Autonomous Driving Using Reachable Sets
###### Abstract
Despite large advances in recent years, real-time capable motion planning for autonomous road vehicles remains a huge challenge. In this work, we present a decision module that is based on set-based reachability analysis: First, we identify all possible driving corridors by computing the reachable set for the longitudinal position of the vehicle along the lanelets of the road network, where lane changes are modeled as discrete events. Next, we select the best driving corridor based on a cost function that penalizes lane changes and deviations from a desired velocity profile. Finally, we generate a reference trajectory inside the selected driving corridor, which can be used to guide or warm start low-level trajectory planners. For the numerical evaluation we combine our decision module with a motion-primitive-based and an optimization-based planner and evaluate the performance on 2000 challenging CommonRoad traffic scenarios as well as in the realistic CARLA simulator. The results demonstrate that our decision module is real-time capable and yields significant speed-ups compared to executing a motion planner standalone without a decision module.
## I Introduction
A typical architecture for an autonomous driving system consists of a navigation module that plans a route (e.g. a sequence of roads that lead to the destination), a decision module that makes high-level choices like when to do a lane change or overtake another car, a motion planning module that constructs a collision-free and dynamically feasible trajectory, and a controller that counteracts disturbances like wind, model uncertainty, or a slippery road to keep the car on the planned trajectory. This paper presents a novel approach for decision making, which is based on reachable sets.
### _State of the Art_
Let us first review the state of the art for decision making and motion planning for autonomous road vehicles. Motion planning approaches can be classified into the four groups _graph search planners_, _sampling-based planners_, _optimization-based planners_, and _interpolating curve planners_. Graph search planners [1, 2, 3, 4, 5, 6] represent the search space by a finite grid, where the grid cells represent the nodes and transitions between grid cells the edges of a graph. Motion planning then reduces to the task of finding the optimal path through the graph, which can be efficiently realized using graph search algorithms such as Dijkstra [1, 2] and A* [3, 4]. The main disadvantage of this approach is the often large number of grid cells required to cover the search space, especially if spatio-temporal lattices [5, 6] are used as a grid. While for graph search planners the discrete motion primitives are deterministically defined by the grid, sampling-based planners choose motions randomly to explore the search space. For autonomous driving, this is often implemented using rapidly exploring random trees [7, 8, 9]. Disadvantages are that sampling based planners in general do not find the optimal solution in finite time, and that the determined trajectories are often jerky and therefore have low driving comfort. Optimization-based planners [10, 11, 12, 13, 14, 15] determine trajectories by minimizing a specific cost function with respect to the constraints of dynamic feasibility with the vehicle model and avoiding collisions with other traffic participants and the road boundary. The main challenge is to incorporate the non-convex collision avoidance constraints, which is often realized using mixed-integer programming [10, 11], nonlinear programming with a suitable initial guess [12, 13], or via successive convexification [14, 15]. However, all these methods are either computationally expensive or have the risk of getting stuck in local minima. Finally, interpolating curve planners construct trajectories by interpolating between a sequence of desired waypoints using clothoids [16, 17], polynomial curves [18, 19], Bezier curves [20, 21], or splines [22, 23]. Obstacle avoidance can be realized by modifying the waypoints accordingly [24],
Fig. 1: Steps for our decision making approach for an exemplary traffic scenario, where the drivable area is depicted in red, the space occupied by the other traffic participants in blue, the goal set in yellow, and the generated reference trajectory in black.
[25]. While interpolating curve planners are able to create smooth trajectories with high comfort, the solutions might not be optimal with respect to a given cost function. A more detailed overview of different motion planning approaches is provided in [26, 27].
Even though many of the above motion planners already have the ability to make decisions, it is still a common practice to separate high-level decision making from low-level trajectory planning since this usually simplifies the motion planning problems and therefore improves computational efficiency. One frequently applied method is rule-based decision making, which is often realized using state-machines [28, 29]. Another approach is topological trajectory grouping, which selects topological patterns from a pool of trajectories [30, 31]. Also hybrid automata can be used for decision making, where the discrete modes of the automaton represent the high-level decisions [32]. Yet another strategy is to apply reinforcement learning for high-level decision making [33, 34]. Finally, some recent approaches [35, 36, 37] use set-based reachability analysis to identify potential driving corridors. These methods linearize the system around a given reference path to construct the drivable area by computing the reachable set in longitudinal and lateral direction. However, this has the disadvantage that the linearization can become very inaccurate or conservative if the vehicle significantly deviates from the reference path. Our approach avoids this problem since we only compute the reachable set for the longitudinal position along the lanelet, and model changes in lateral position as discrete events.
### _Notation_
We denote vectors and scalars \(a\in\mathbb{R}^{n}\) by lowercase letters, matrices \(M\in\mathbb{R}^{m\times n}\) by uppercase letters, sets \(\mathcal{S}\subset\mathbb{R}^{n}\) by calligraphic letters, and lists \(\mathbf{L}=(L_{1},\ldots L_{n})\) by bold uppercase letters. Given two matrices \(M_{1}\in\mathbb{R}^{n\times m}\) and \(M_{2}\in\mathbb{R}^{n\times w}\), \([M_{1}\;M_{2}]\in\mathbb{R}^{n\times m+w}\) denotes their concatenation. Empty sets and lists are denoted by \(\emptyset\), and we assume that all lists are ordered. Given sets \(\mathcal{S}_{1},\mathcal{S}_{2}\subset\mathbb{R}^{n}\) and a matrix \(M\in\mathbb{R}^{m\times n}\), we require the set operations linear map \(M\,\mathcal{S}_{1}:=\{M\,s\mid s\in\mathcal{S}_{1}\}\), Minkowski sum \(\mathcal{S}_{1}\oplus\mathcal{S}_{2}:=\{s_{1}+s_{2}\,|\,s_{1}\in\mathcal{S}_{1 },\,s_{2}\in\mathcal{S}_{2}\}\), Cartesian product \(\mathcal{S}_{1}\times\mathcal{S}_{2}:=\{[s_{1}^{T}\;s_{2}^{T}]^{T}\,|\,s_{1} \in\mathcal{S}_{1},\,s_{2}\in\mathcal{S}_{2}\}\), intersection \(\mathcal{S}_{1}\cap\mathcal{S}_{2}:=\{s\,|\,s\in\mathcal{S}_{1}\wedge s\in \mathcal{S}_{2}\}\), union \(\mathcal{S}_{1}\cup\mathcal{S}_{2}:=\{s\,|\,s\in\mathcal{S}_{1}\lor s\in \mathcal{S}_{2}\}\), and set difference \(\mathcal{S}_{1}\setminus\mathcal{S}_{2}:=\{s\,|\,s\in\mathcal{S}_{1}\wedge s \not\in\mathcal{S}_{2}\}\). We represent one-dimensional sets by intervals \(\mathcal{I}=[\underline{i},\overline{i}]:=\{x\,|\,\underline{i}\leq x\leq \overline{i}\}\) and two-dimensional sets by polygons, for which all of the above set operations can be performed efficiently.
## II Problem Formulation
We represent the dynamics of the vehicle with a kinematic single track model [38, Chapter 2.2]:
\[\begin{split}\dot{x}&=v\,\cos(\varphi)\qquad\dot{v} =a\\ \dot{y}&=v\,\sin(\varphi)\qquad\dot{\varphi}=\frac{v }{\ell_{wb}}\,\tan(s),\end{split} \tag{1}\]
where the vehicle state consists of the position of the rear axle represented by \(x\), \(y\), the velocity \(v\), and the orientation \(\varphi\). The parameter \(\ell_{wb}\) is the length of the vehicle's wheelbase, and the control inputs are the acceleration \(a\) and the steering angle \(s\), which are bounded by \(a\in[-a_{\text{max}},a_{\text{max}}]\) and \(s\in[-s_{\text{max}},s_{\text{max}}]\). Moreover, an additional constraint is given by the friction circle [38, Chapter 13]
\[\sqrt{a^{2}+(v\dot{\varphi})^{2}}\leq a_{\text{max}}, \tag{2}\]
which models the maximum force that can be transmitted by the tires.
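For illustration, the following Python sketch integrates the kinematic single track model (1) with forward Euler and checks the input bounds and the friction circle (2) in every step; the parameter values and inputs are placeholders and not those used in our experiments.

```python
# Forward-Euler simulation of the kinematic single track model (1) with the
# friction-circle constraint (2); all numerical values are placeholders.
import numpy as np

l_wb, a_max, s_max, dt = 2.5, 9.81, 0.7, 0.01

def step(state, a, s):
    x, y, v, phi = state
    a, s = np.clip(a, -a_max, a_max), np.clip(s, -s_max, s_max)
    phi_dot = v / l_wb * np.tan(s)
    assert np.hypot(a, v * phi_dot) <= a_max + 1e-9, "friction circle violated"
    return np.array([x + dt * v * np.cos(phi),
                     y + dt * v * np.sin(phi),
                     v + dt * a,
                     phi + dt * phi_dot])

state = np.array([0.0, 0.0, 10.0, 0.0])     # x, y, v, phi
for _ in range(100):                        # simulate 1 s of gentle acceleration
    state = step(state, a=1.0, s=0.05)
print(state)
```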
The road network is represented by a list of lanelets \(\mathbf{L}=(\mathscr{L}_{1},\ldots,\mathscr{L}_{q})\), where each lanelet is a tuple \(\mathscr{L}=(\text{id},\text{left},\text{right},\mathbf{S},\ell_{\text{lane}},\mathcal{L})\) consisting of the lanelet identifier \(\text{id}\in\mathbb{N}_{\geq 0}\), the identifiers \(\text{left},\text{right}\in\mathbb{N}_{\geq 0}\) of the neighboring lanelets to the left and right, a list \(\mathbf{S}\) storing the identifiers of the successor lanelets, the length \(\ell_{\text{lane}}\in\mathbb{R}_{\geq 0}\) of the lanelet, and a polygon \(\mathcal{L}\subset\mathbb{R}^{2}\) representing the shape of the lanelet. In addition to the road network, one also has to consider the other traffic participants such as cars or pedestrians for motion planning. We denote the space occupied by traffic participant \(j\) at time \(t\) by \(\mathcal{O}_{j}(t)\subset\mathbb{R}^{2}\). For simplicity, we assume that \(\mathcal{O}_{j}(t)\) is known for all surrounding traffic participants. In practice, the current positions of the surrounding traffic participants are obtained via perception using computer vision [39], LiDAR [40], radar [41], or combinations [42, 43], and the future positions of the traffic participants can be determined using probabilistic [44] or set-based prediction [45].
A motion planning problem is defined by an initial state \(x_{0},y_{0},v_{0},\varphi_{0}\) for the vehicle and a goal region \(\mathscr{G}=(\mathcal{G},\tau_{\text{goal}})\) consisting of a goal set for the vehicle state \([x\;y\;v\;\varphi]\in\mathcal{G}\subseteq\mathbb{R}^{4}\) and a time interval \(\tau_{\text{goal}}=[t_{\text{start}},t_{\text{end}}]\) at which this goal set should be reached. An exemplary motion planning problem is visualized at the top of Fig. 1. The task for motion planning is to determine control inputs \(a(t)\) and \(s(t)\) that drive the car from the initial state to the goal region under consideration of the vehicle dynamics in (1) and such that the vehicle stays on the road and does not collide with other traffic participants at all times. In this paper we propose to solve motion planning problems with a novel decision making module that determines a suitable driving corridor that leads to the goal set and generates a desired reference trajectory inside this driving corridor. Our decision module can then be combined with a low-level trajectory planner that tracks the reference trajectory and generates the control input trajectories \(a(t)\) and \(s(t)\).
## III Algorithm
We now present our novel approach for decision making.
### _Simplified Vehicle Dynamics_
For the sake of computational efficiency, it is common practice to use a simplified version of the vehicle dynamics in (1) for decision making. To obtain such a simplified model, we introduce a curvilinear coordinate frame that follows the lanelet centerline, and in which \(\xi\) denotes the longitudinal position of the vehicles center along the corresponding
lanelet. Moreover, we represent the vehicle as a discrete-time system with time step size \(\Delta t\), where \(t_{i}=i\,\Delta t\) are the corresponding time points and we assume without loss of generality that the initial time is \(t_{0}=0\). Our simplified model for decision making is then given by the following double integrator for the longitudinal position \(\xi\)
\[\begin{bmatrix}\xi(t_{i+1})\\ v(t_{i+1})\end{bmatrix}=\underbrace{\begin{bmatrix}1&\Delta t\\ 0&1\end{bmatrix}}_{A}\begin{bmatrix}\xi(t_{i})\\ v(t_{i})\end{bmatrix}+\underbrace{\begin{bmatrix}0.5\,\Delta t^{2}\\ \Delta t\end{bmatrix}}_{B}a_{i}, \tag{3}\]
where \(v\) is the velocity and \(a_{i}\) is the constant acceleration of the vehicle during time step \(i\). In the remainder of the paper we will use notation \(z(t)=[\xi(t)\ v(t)]^{T}\) for the state of the simplified vehicle dynamics in (3).
### _Drivable Area_
To determine all possible driving corridors, we use reachability analysis. The reachable set \(\mathcal{R}(t)\) for system (3) is defined as the set of longitudinal positions and velocities reachable under consideration of the bounded acceleration \([-a_{\text{max}},a_{\text{max}}]\). This set can be computed by the following propagation rule:
\[\mathcal{R}(t_{i+1})=A\,\mathcal{R}(t_{i})\oplus B\,[-a_{\text{max}},a_{\text {max}}], \tag{4}\]
where we represent sets by polygons. To avoid collisions with other traffic participants, we have to consider their occupied space \(\mathcal{O}_{j}(t)\). Given a lanelet \(\mathscr{L}=(\text{id},\text{left},\text{right},\mathbf{S},\ell_{\text{lane} },\mathcal{L})\), we therefore first compute the set \(\mathcal{O}_{\text{long},\text{id},j}(t)\subset\mathbb{R}\) of longitudinal lanelet positions occupied by obstacle \(j\) at time \(t_{i}\) from the occupied space \(\mathcal{O}_{j}(t_{i})\) in the global coordinate frame. Using this set, we can compute the free space on the lanelet as follows:
\[\begin{split}\mathcal{F}_{\text{id}}(t_{i})=& \bigg{(}[0,\ell_{\text{lane}}]\setminus\bigg{(}\bigcup_{j=1}^{o} \mathcal{O}_{\text{long},\text{id},j}(t_{i})\ \oplus\\ &\Big{[}-\frac{\ell_{\text{car}}}{2}-d_{\text{min}},\frac{\ell_{ \text{car}}}{2}+d_{\text{min}}\Big{]}\bigg{)}\bigg{)}\!\!\times[0,v_{\text{ max},\text{id}}],\end{split} \tag{5}\]
where we subtract the occupied space for all \(o\) traffic participants and bloat the obstacles by the length of the ego vehicle \(\ell_{\text{car}}\) as well as by a user-defined minimum distance \(d_{\text{min}}\) we want to keep to the other traffic participants. In addition, we take the Cartesian product with the set of legal velocities \([0,v_{\text{max},\text{id}}]\), where \(v_{\text{max},\text{id}}\) is the speed limit for the current lanelet. Since the free space \(\mathcal{F}_{\text{id}}(t_{i})\) in general consists of multiple disjoint regions, we introduce the list \(\mathbf{F}_{\text{id}}(t_{i})\) that stores all these disjoint regions. The drivable area is finally given by the intersection of the reachable set with the free space on the lanelet:
\[\mathcal{D}(t_{i})=\mathcal{R}(t_{i})\cap\mathcal{F}_{\text{id}}(t_{i}). \tag{6}\]
While equations (4), (5), (6) enable us to compute the drivable area for a single lanelet, we additionally have to consider transitions between the lanelets to obtain the drivable area for the whole road network. This procedure is summarized in Alg. 1, which computes the drivable area for a single lanelet under consideration of transitions. The algorithm takes as input a list of drivable areas that correspond to transitions to the given lanelet and computes the drivable area for the lanelet as well as all possible transitions to left, right, or successor lanelets.
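To make the propagation concrete, the sketch below performs one step of (4)-(6) with shapely polygons. For brevity it treats the sets as convex, in which case the Minkowski sum with the segment \(B\,[-a_{\text{max}},a_{\text{max}}]\) equals the convex hull of the two translated polygons; the use of shapely, the convexity simplification, and all numerical values are illustrative assumptions rather than details of our implementation.

```python
# One propagation step of the drivable area, eqs. (4)-(6), on convex polygons.
from shapely.geometry import box
from shapely.affinity import affine_transform, translate
from shapely.ops import unary_union

dt, a_max = 0.1, 9.81
B = (0.5 * dt**2, dt)

def propagate(reach):                                       # eq. (4)
    mapped = affine_transform(reach, [1, dt, 0, 1, 0, 0])   # A * R(t_i)
    dx, dv = B[0] * a_max, B[1] * a_max
    return unary_union([translate(mapped, dx, dv),
                        translate(mapped, -dx, -dv)]).convex_hull

reach = box(4.9, 9.9, 5.1, 10.1)      # initial set around xi = 5 m, v = 10 m/s
free = box(0.0, 0.0, 30.0, 13.9)      # free space F_id(t_1), eq. (5)
drivable = propagate(reach).intersection(free)              # eq. (6)
print(drivable.bounds)
```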
### _Driving Corridor Selection_
Using the approach for computing the drivable area for a single lanelet together with possible transitions to other lanelets in Alg. 1, we can formulate the identification of possible driving corridors as a standard tree search problem, where each node consists of a lanelet and a corresponding drivable area. This procedure is summarized in Alg. 2 and
visualized for an exemplary traffic scenario in Fig. 1. Once we identified all driving corridors that reach the goal set, we finally have to select the best driving corridor in Line 15 of Alg. 2. For this, we use the cost function
\[J=w_{\text{change}}\,n_{\text{change}}+w_{\text{profile}}\,d_{\text{profile}}, \tag{7}\]
where \(n_{\text{change}}\) is the number of lane changes for the driving corridor, \(d_{\text{profile}}\) is the average deviation from a desired position-velocity-profile \(z_{\text{des}}(t)\), and \(w_{\text{change}},w_{\text{profile}}\in\mathbb{R}_{\geq 0}\) are user-defined weighting factors. For the desired position-velocity-profile, we choose to accelerate to the current speed limit \(v_{\text{max,id}}\) with a user-defined desired acceleration \(a_{\text{des}}\):
\[z_{\text{des}}(t_{i+1})=A\,z_{\text{des}}(t_{i})+B\,a_{i}\]
with \(z(t_{0})=[\xi_{0}\ v_{0}]^{T}\) and
\[a_{i}=\max\Big{(}-a_{\text{des}},\min\Big{(}a_{\text{des}},\frac{v_{\text{max,id}}-v(t_{i})}{\Delta t}\Big{)}\Big{)}.\]
The deviation \(d_{\text{profile}}\) is given as the minimum deviation inside the drivable area \(\mathcal{D}(t_{i})\) averaged over all time steps:
\[d_{\text{profile}}=\frac{1}{[t_{\text{end}}/\Delta t]}\sum_{i=0}^{[t_{\text{ end}}/\Delta t]}\min_{z\in\mathcal{D}(t_{i})}\|z-z_{\text{des}}(t_{i})\|_{2}.\]
If the driving corridor contains multiple drivable areas on different lanelets for the same time step, we take the minimum deviation from all these areas. Moreover, we shift the desired position-velocity-profile by the lanelet length \(\ell_{\text{lane}}\) when moving on to a successor lanelet.
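The following sketch computes the desired position-velocity profile and the resulting deviation term of the cost function (7) for a given driving corridor; the corridor, time horizon, and speed limit are placeholder values, and representing the drivable areas with shapely polygons is an illustrative choice.

```python
# Desired position-velocity profile and average deviation d_profile for eq. (7).
import numpy as np
from shapely.geometry import Point, box

dt, a_des, v_max = 0.1, 1.0, 14.0

def desired_profile(xi0, v0, n_steps):
    z = [np.array([xi0, v0])]
    for _ in range(n_steps):
        xi, v = z[-1]
        a = max(-a_des, min(a_des, (v_max - v) / dt))
        z.append(np.array([xi + dt * v + 0.5 * dt**2 * a, v + dt * a]))
    return z

corridor = [box(0.0, 0.0, 40.0, 12.0) for _ in range(31)]   # drivable areas D(t_i)
z_des = desired_profile(xi0=0.0, v0=10.0, n_steps=30)

# minimum deviation from each drivable area (zero if the point lies inside)
devs = [D.distance(Point(*z)) for D, z in zip(corridor, z_des)]
print(np.mean(devs))
```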
### _Driving Corridor Refinement_
While the driving corridor computed using Alg. 2 is guaranteed to contain at least one state sequence that leads to the goal set, it usually also contains states that do not reach the goal. To remove those states, we refine the computed driving corridor by propagating the reachable sets backward in time starting from the intersection of the final set \(\mathcal{D}(t_{\text{end}})\) with the goal set \(\mathcal{G}_{\text{long}}\) from Alg. 2. The refined drivable area is then given by the intersection of the backpropagated sets with the original drivable area, which corresponds to the following propagation rule:
\[\mathcal{D}(t_{i-1})=\mathcal{D}(t_{i-1})\cap A^{-1}\big{(}\mathcal{D}(t_{i}) \oplus B\,[-a_{\text{max}},a_{\text{max}}]\big{)},\]
where \(\mathcal{D}(t_{\text{end}})=\mathcal{D}(t_{\text{end}})\cap\mathcal{G}_{\text {long}}\). Again, we shift the drivable area by the lanelet length \(\ell_{\text{lane}}\) when moving on to a predecessor lanelet.
### _Reference Trajectory Generation_
As the last step of our approach, we generate a suitable reference trajectory inside the selected driving corridor. Optimally, we would like to drive with the desired position-velocity-profile \(z_{\text{des}}(t)\). However, this position-velocity-profile might not be located inside the driving corridor. We therefore generate the reference trajectory by choosing for each time step the point \(z(t_{i+1})\) inside the drivable area \(\mathcal{D}(t_{i+1})\) that is closest to \(z_{\text{des}}(t_{i+1})\) and reachable from the previous state \(z(t_{i})\) given the dynamics in (3):
\[z(t_{i+1})=\underset{z\in\mathcal{I}}{\text{argmin}}\,\|z-z_{\text{des}}(t_{i +1})\|_{2}\]
where
\[\mathcal{I}=\mathcal{D}(t_{i+1})\cap\big{(}A\,z(t_{i})\oplus B\,[-a_{\text{ max}},a_{\text{max}}]\big{)}.\]
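A compact sketch of this selection step is given below: the one-step reachable set from the single state \(z(t_{i})\) is a line segment, and the next reference point is the point of its intersection with the drivable area that lies closest to the desired profile. The use of shapely and the numerical values are illustrative assumptions.

```python
# Reference-point selection: closest point of the set I to the desired point.
import numpy as np
from shapely.geometry import Point, LineString, box
from shapely.ops import nearest_points

dt, a_max = 0.1, 9.81
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])

def next_reference_point(z, drivable, z_des):
    center = A @ z                                   # A z(t_i)
    segment = LineString([center - a_max * B, center + a_max * B])
    feasible = drivable.intersection(segment)        # the set I above
    return np.array(nearest_points(feasible, Point(*z_des))[0].coords[0])

drivable = box(0.0, 0.0, 30.0, 13.9)                 # D(t_{i+1})
z, z_des = np.array([5.0, 10.0]), np.array([6.05, 11.0])
print(next_reference_point(z, drivable, z_des))
```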
After generating the reference trajectory \(z(t)\) in the curvilinear coordinate frame, we have to transform it to the global coordinate frame. Here, we obtain \(x(t)\), \(y(t)\) and \(\varphi(t)\) from the lanelet centerline, and \(v(t)\) is directly given by \(z(t)\). At
Fig. 2: Trajectory planned by our decision module in combination with the optimization-based planner for the CommonRoad scenario DEU_Flensburg-73_1_T-1 visualized at times 0s, 3s, and 6s, where the ego vehicle is shown in red, the other traffic participants in blue, and the goal set in yellow.
lane changes to a left or right lanelet we interpolate between the lanelet centerlines \(x_{1}(t),y_{1}(t)\) and \(x_{2}(t),y_{2}(t)\) of the lanelets before and after the lane change as follows:
\[\begin{bmatrix}x(t)\\ y(t)\end{bmatrix}=(1-\mu(t))\begin{bmatrix}x_{1}(t)\\ y_{1}(t)\end{bmatrix}+\mu(t)\begin{bmatrix}x_{2}(t)\\ y_{2}(t)\end{bmatrix},\]
with
\[\mu(t)=\frac{1}{1+e^{-10(\delta(t)-0.5)}},\ \ \delta(t)=\frac{(t-t_{\text{init}})}{t_{ \text{fin}}-t_{\text{init}}},\]
where \(t_{\text{init}}\), \(t_{\text{fin}}\) are the start and end time for the lane change.
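A minimal sketch of this blending step is shown below, using two straight placeholder centerlines sampled over the lane-change interval.

```python
# Logistic blending between two lanelet centerlines during a lane change.
import numpy as np

def blend(center1, center2, t, t_init, t_fin):
    delta = (t - t_init) / (t_fin - t_init)
    mu = 1.0 / (1.0 + np.exp(-10.0 * (delta - 0.5)))
    return (1.0 - mu) * center1 + mu * center2

t = np.linspace(0.0, 2.0, 21)
center1 = np.stack([10.0 * t, np.zeros_like(t)], axis=1)        # x1(t), y1(t)
center2 = np.stack([10.0 * t, 3.5 * np.ones_like(t)], axis=1)   # x2(t), y2(t)
ref = blend(center1, center2, t[:, None], t_init=0.0, t_fin=2.0)
print(ref[[0, 10, 20]])   # starts near lane 1, halfway in between, ends near lane 2
```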
## IV Improvements
We now present several improvements for our algorithm.
### _Traffic Rules_
So far, the only traffic rule we considered is the speed limit. However, our approach makes it easy to already incorporate many additional traffic rules on a high-level during driving corridor generation. For example, traffic rules such as traffic lights, no passing rules, or the right-of-way can simply be considered by removing the space that is blocked by the traffic rule from the free space \(\mathcal{F}_{\text{id}}(t)\) on the lanelet. Other rules such as keeping a safe distance to the leading vehicle can be considered by adding the corresponding rule violations with a high penalty to the cost function in (7). Incorporating traffic rule violations into the cost function ensures that our decision module can find a solution even if the initial state violates the rule. This can for example happen if another traffic participant performs an illegal cut-in in front of the ego vehicle, which makes it impossible to keep a safe distance at all times.
### _Partially Occupied Lanelets_
Often, other traffic participants only occupy a small part of the lateral space on the lanelet, for example if a bicycle drives on one side of the lane. Classifying the whole lanelet as occupied in those cases would be very conservative and prevent the decision module from finding a feasible solution in many cases. A crucial improvement for the basic algorithm in Sec. III is therefore to check how much of the lateral space is occupied by the other traffic participants, and only remove the parts where the remaining free lateral space is too small to drive on from the free space in (5). We additionally correct the final reference trajectory accordingly to avoid intersections with other traffic participants that only partially occupy the lateral space of the lanelet.
### _Cornering Speed_
Our algorithm in Sec. III assumes that the vehicle can drive around corners with arbitrary speed, which obviously is a wrong assumption since the vehicle speed in corners is limited by the friction circle in (2). Therefore, we now derive a formula for the maximum corner speed given the lanelet curvature \(\Delta\varphi/\Delta\xi\), which we use as an artificial speed limit for all lanelets. We assume that we drive the corner with constant velocity (\(a=0\)), in which case (2) simplifies to
\[(v\dot{\varphi})^{2}\stackrel{{\text{(1)}}}{{=}}\left(\frac{v^{2}}{\ell_{\text{wb}}}\tan(s)\right)^{2}\leq a_{\text{max}}^{2}. \tag{8}\]
Additionally assuming a constant steering angle (\(\dot{s}=0\)), we can estimate the change in orientation as
\[\Delta\varphi=\int_{t=0}^{t=\Delta t}\dot{\varphi}\,dt\stackrel{{\text{(1)}}}{{=}}\frac{v}{\ell_{\text{wb}}}\tan(s)\,\Delta t=\frac{\Delta\xi}{\ell_{\text{wb}}}\tan(s), \tag{9}\]
where we used the estimation \(\Delta t\approx\Delta\xi/v\). Combining (8) and (9) finally yields
\[v\leq\sqrt{a_{\text{max}}\,\Delta\xi/\Delta\varphi} \tag{10}\]
for the maximum corner velocity. In the implementation we compute \(\Delta\varphi/\Delta\xi\) for each segment of the lanelet and take the maximum over all segments.
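The artificial speed limit can be evaluated directly on the polyline centerline of a lanelet, as in the following sketch; the centerline and \(a_{\text{max}}\) are placeholders, and the curvature estimate assumes no wrap-around of the heading angle.

```python
# Corner speed limit (10) from the maximum curvature over all centerline segments.
import numpy as np

a_max = 9.81

def corner_speed_limit(centerline):
    diffs = np.diff(centerline, axis=0)
    seg_len = np.linalg.norm(diffs, axis=1)              # delta xi per segment
    headings = np.arctan2(diffs[:, 1], diffs[:, 0])
    dphi = np.abs(np.diff(headings))                     # delta phi between segments
    curvature = dphi / seg_len[1:]
    return np.sqrt(a_max / np.max(curvature))            # v <= sqrt(a_max * dxi / dphi)

phi = np.linspace(0.0, np.pi / 2.0, 50)                  # quarter circle, radius 20 m
centerline = 20.0 * np.stack([np.sin(phi), 1.0 - np.cos(phi)], axis=1)
print(corner_speed_limit(centerline))                    # close to sqrt(9.81 * 20) ~ 14 m/s
```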
### _Minimum Lane Change Time_
Alg. 1 assumes that a single time step is sufficient to perform a lane change, which is unrealistic. We therefore now derive a formula that specifies how many time steps are required to perform a lane change. The orientation of the vehicle during the lane change is modeled as a function
\[\varphi(t)=\begin{cases}2\,\frac{\varphi_{\text{peak}}}{t_{\text{fin}}}\,t,& 0\leq t\leq t_{\text{fin}}/2\\ 2\,\varphi_{\text{peak}}-2\,\frac{\varphi_{\text{peak}}}{t_{\text{fin}}}\,t,&t_ {\text{fin}}/2<t\leq t_{\text{fin}}\end{cases}, \tag{11}\]
where \(t_{\text{fin}}\) is the time required for the lane change. The maximum peak orientation \(\varphi_{\text{peak}}\) we can choose is bounded by the friction circle (2), for which we obtain under the assumption of constant velocity (\(a=0\)):
\[(v\dot{\varphi}(t))^{2}\stackrel{{\text{(11)}}}{{=}}\left(2\,v\,\frac{\varphi_{\text{peak}}}{t_{\text{fin}}}\right)^{2}\leq a_{\text{max}}^{2}. \tag{12}\]
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline
\multirow{2}{*}{**Scenarios**} & \multicolumn{2}{c}{**Decision Module**} & \multicolumn{3}{c}{**Motion Primitives**} & \multicolumn{3}{c}{**Decision Module +**} & \multicolumn{3}{c}{**Decision Module +**} \\
 & \multicolumn{2}{c}{**Standalone**} & \multicolumn{3}{c}{**Standalone**} & \multicolumn{3}{c}{**Motion Primitives**} & \multicolumn{3}{c}{**Optimization**} \\ \cline{2-12}
 & time & solved & time & solved & collisions & time & solved & collisions & time & solved & collisions \\ \hline
overall & 106 & 100\(\%\) & 4570 & 11\(\%\) & 0\(\%\) & 302 & 60\(\%\) & 0\(\%\) & 424 & 91\(\%\) & 7\(\%\) \\
\(t_{\text{end}}\leq 4\text{s}\) & 79 & 100\(\%\) & 8324 & 9\(\%\) & 0\(\%\) & 192 & 76\(\%\) & 0\(\%\) & 343 & 93\(\%\) & 7\(\%\) \\
\(t_{\text{end}}>4\text{s}\) & 139 & 100\(\%\) & 1353 & 14\(\%\) & 0\(\%\) & 581 & 38\(\%\) & 0\(\%\) & 451 & 92\(\%\) & 7\(\%\) \\
urban & 121 & 100\(\%\) & 2288 & 5\(\%\) & 0\(\%\) & 506 & 30\(\%\) & 0\(\%\) & 451 & 92\(\%\) & 7\(\%\) \\
highway & 226 & 100\(\%\) & 894 & 48\(\%\) & 0\(\%\) & 384 & 74\(\%\) & 0\(\%\) & 621 & 89\(\%\) & 3\(\%\) \\
intersections & 98 & 100\(\%\) & 6875 & 7\(\%\) & 0\(\%\) & 300 & 58\(\%\) & 0\(\%\) & 413 & 91\(\%\) & 7\(\%\) \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Performance of different motion planners for 2000 CommonRoad traffic scenarios, where the evaluation metrics are the average computation time in milliseconds for planning a trajectory with a duration of one second, the percentage of successfully solved scenarios, and the percentage of scenarios where the vehicle collides with other cars or leaves the road.
Moreover, with the small angle approximation \(\sin(\varphi)\approx\varphi\), we obtain for the change in lateral position of the vehicle
\[\Delta\eta\overset{\text{(1)}}{=}\int_{t=0}^{t=t_{\text{fin}}}v\,\varphi(t)\,dt\overset{\text{(11)}}{=}0.5\,v\,\varphi_{\text{peak}}\,t_{\text{fin}}. \tag{13}\]
Combining (12) and (13) finally yields
\[t_{\text{fin}}\geq\sqrt{4\,\Delta\eta/a_{\text{max}}} \tag{14}\]
for the minimum time required for a lane change, where \(\Delta\eta\) is the lateral distance between the lanelet centerlines.
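A minimal sketch of how (14) can be turned into the number of planner time steps needed for a lane change (the helper name is our own):

```python
import math

def min_lane_change_steps(delta_eta, a_max, dt):
    """Minimum number of time steps for a lane change, based on (14).

    delta_eta: lateral distance between the lanelet centerlines [m]
    a_max:     friction-circle acceleration bound [m/s^2]
    dt:        time step size of the planner [s]
    """
    t_fin = math.sqrt(4.0 * delta_eta / a_max)
    return max(1, math.ceil(t_fin / dt))
```

For instance, with the assumed values \(\Delta\eta=3.5\,\mathrm{m}\), \(a_{\text{max}}=8\,\mathrm{m\,s^{-2}}\) and \(\Delta t=0.1\,\mathrm{s}\), this yields 14 time steps.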
## V Numerical Evaluation
We implemented our approach in Python, and all computations are carried out on a 3.5 GHz Intel Core i9-11900KF processor. Our implementation is publicly available on GitHub1, and we published a repeatability package that reproduces the presented results on CodeOcean2. For the parameter values of our decision module we use \(a_{\text{des}}=1\,\mathrm{m\,s^{-2}}\) for the desired acceleration, \(d_{\text{min}}=1\,\mathrm{m}\) for the minimum safe distance, and \(w_{\text{change}}=10\), \(w_{\text{profile}}=1\) for the weights of the cost function (7). Moreover, the time step size is \(\Delta t=0.1\,\mathrm{s}\) for CommonRoad and \(\Delta t=0.2\,\mathrm{s}\) for CARLA.
Footnote 1: [https://github.com/KochdumerNiklas/MotionPlanner](https://github.com/KochdumerNiklas/MotionPlanner)
Footnote 2: [https://codeocean.com/capsule/8454823/tree/v1](https://codeocean.com/capsule/8454823/tree/v1)
### _CommonRoad Scenarios_
CommonRoad [46] is a database that contains a large number of challenging motion planning problems for autonomous vehicles, and is therefore well suited to evaluate the performance of our decision module. For the experiments, we combine our decision module with two different types of motion planners, namely a motion-primitive-based planner and an optimization-based planner. To obtain the motion-primitive-based planner, we used the AROC toolbox [47] to create a maneuver automaton with 12793 motion primitives by applying the generator space control approach [48]. For the optimization-based planner, we solve an optimal control problem with the objective of tracking the reference trajectory generated by our decision module, subject to dynamical feasibility with respect to the vehicle model (1). The results for the evaluation on 2000 CommonRoad traffic scenarios are listed in Tab. I, where we aborted the planning if the computation took longer than one minute. The outcome demonstrates that our decision module standalone on average runs about 10 times faster than real-time and can generate feasible driving corridors for all scenarios. Moreover, while the computation times for running the motion-primitive-based planner standalone are very high and the success rate is consequently quite low due to timeouts, in combination with our decision module the planner is real-time capable and can solve a large number of scenarios, which nicely underscores the benefits of using a decision module. Finally, even though we do not consider any collision avoidance constraints, the optimization-based planner still produces collision-free trajectories most of the time, which can be attributed to the good quality of the reference trajectories generated by our decision module. The planned trajectory for an exemplary traffic scenario is visualized in Fig. 2, where the ego vehicle overtakes a bicycle that drives at the side of the road.
### _CARLA Simulator_
In contrast to CommonRoad scenarios, which consist of a single planning problem, for the CARLA simulator [49] we use a navigation module to plan a route to a randomly chosen destination on the map. We then follow this route by replanning a trajectory with a duration of 3s every 0.3s until the vehicle reaches the destination, where we combine our decision module with the optimization-based planner described in Sec. V-A. Moreover, since we use the high-fidelity vehicle model from CARLA, for which the kinematic single-track model in (1) is just an approximation, we additionally apply a feedback controller that counteracts model uncertainties and disturbances. For the experiments we consider the Town 1 map and use a constant velocity assumption to predict the future positions of the surrounding traffic participants. Tab. II displays the results for three different routes, and an exemplary snapshot from the CARLA simulator is shown in Fig. 3. The outcome demonstrates that our decision module performs very well as part of a full autonomous driving software stack consisting of navigation, prediction, decision making, motion planning, and control, and therefore enables robust motion planning in real-time.
## VI Conclusion
We presented a novel approach for decision making in autonomous driving, which applies set-based reachability to identify driving corridors. As we demonstrated with an extensive numerical evaluation on 2000 CommonRoad traffic scenarios, our decision module runs in real-time, can be combined with multiple different motion planners, and leads to significant speed-ups compared to executing a motion planner standalone. Moreover, our experiments in the CARLA simulator, for which we integrated our decision module into a full autonomous driving software stack, showcase that our approach also performs well in a realistic setup.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Route** & **Distance** & **Planning Problems** & **Computation Time** \\ \hline
1 & \(789\,\mathrm{m}\) & 622 & \(74\,\mathrm{ms}\) \\
2 & \(685\,\mathrm{m}\) & 489 & \(73\,\mathrm{ms}\) \\
3 & \(409\,\mathrm{m}\) & 346 & \(74\,\mathrm{ms}\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Results for the experiments in CARLA, where we specify the average computation time for our decision module to plan a trajectory with a duration of one second.
Fig. 3: Snapshot from the CARLA simulator, where the corresponding motion planning problem is visualized on the right hand side. |
2309.14820 | Three-dimensional Tracking of a Large Number of High Dynamic Objects
from Multiple Views using Current Statistical Model | Three-dimensional tracking of multiple objects from multiple views has a wide
range of applications, especially in the study of bio-cluster behavior which
requires precise trajectories of research objects. However, there are
significant temporal-spatial association uncertainties when the objects are
similar to each other, frequently maneuver, and cluster in large numbers.
Aiming at such a multi-view multi-object 3D tracking scenario, a current
statistical model based Kalman particle filter (CSKPF) method is proposed
following the Bayesian tracking-while-reconstruction framework. The CSKPF
algorithm predicts the objects' states and estimates the objects' state
covariance by the current statistical model to importance particle sampling
efficiency, and suppresses the measurement noise by the Kalman filter. The
simulation experiments prove that the CSKPF method can improve the tracking
integrity, continuity, and precision compared with the existing constant
velocity based particle filter (CVPF) method. The real experiment on fruitfly
clusters also confirms the effectiveness of the CSKPF method. | Nianhao Xie | 2023-09-26T10:36:59Z | http://arxiv.org/abs/2309.14820v1 | Three-dimensional Tracking of a Large Number of High Dynamic Objects from Multiple Views using Current Statistical Model
###### Abstract
Three-dimensional tracking of multiple objects from multiple views has a wide range of applications, especially in the study of bio-cluster behavior which requires precise trajectories of research objects. However, there are significant temporal-spatial association uncertainties when the objects are similar to each other, frequently maneuver, and cluster in large numbers. Aiming at such a multi-view multi-object 3D tracking scenario, a current statistical model based Kalman particle filter (CSKPF) method is proposed following the Bayesian tracking-while-reconstruction framework. The CSKPF algorithm predicts the objects' states and estimates the objects' state covariance by the current statistical model to improve particle sampling efficiency, and suppresses the measurement noise by the Kalman filter. The simulation experiments prove that the CSKPF method can improve the tracking integrity, continuity, and precision compared with the existing constant velocity based particle filter (CVPF) method. The real experiment on fruitfly clusters also confirms the effectiveness of the CSKPF method.
multiple object tracking, 3D reconstruction, current statistical model, bio-cluster, _Drosophila melanogaster_
## 1 Introduction
Multi-object tracking (MOT) is widely applied in traffic control, sports analysis, battlefield reconnaissance, etc. In the tracking of a bio-cluster, e.g. fish schools, bird flocks, and insect swarms [1, 2, 3], there are more objects and the motion is more dynamic than in traditional MOT scenarios, which makes general MOT methods inapplicable. Moreover, 3D trajectories of the bio-cluster are needed, which requires images from multiple synchronized cameras so that the objects can be reconstructed in 3D at the same time.
3D multi-view multi-object tracking faces two major challenges compared with single-object tracking [4] or 3D reconstruction of static objects [5]. One is the temporal association problem, i.e. how to determine the detection1 association of each object between the current frame and the previous frames on a certain view. The other is the spatial association problem, i.e. how to determine the detection association of each object on different views in a certain frame. Multi-view multi-object tracking is essentially a problem of assigning temporal-spatial associations of detections. The temporal association decides the 2D tracking while the spatial association decides the 3D reconstruction.
Footnote 1: For the sake of clarity, the concepts of _measurement_, _detection_, and _observation_ used in this paper are first clarified. On an image, the foreground pixels of objects form many _measurements_. The measurement of a certain object is its _detection_, and the state reconstructed from the detection is its _observation_.
For the temporal association problem, the detection associated with the previous frame can be constrained to a small region of the current frame using motion continuity constraints. For the spatial association problem, the detection on one view associated with another view can be constrained to the epipolar line. Assuming that the association credibility between two measurements can be defined, there are statistical methods that assign the nearest measurement to the target detection, such as the joint probability data association filter (JPDAF) and multiple hypothesis tracking (MHT), which are effective when the number of objects is not large and the measurements are sufficiently distinct [6]. Methods based on linear assignment establish a cost matrix between the target detections and the candidate measurements, and then the Hungarian algorithm is used to solve the optimal detection-measurement assignment problem [7].
The order of reconstruction and tracking is important. According to the relationship between the temporal association and the spatial association, existing multi-view multi-object tracking methods can be divided into three categories: a) Firstly, temporal associations are assigned to generate 2D trajectories on each view, and then the spatial association of these 2D trajectories is assigned to reconstruct 3D trajectories; this is the tracking-to-reconstruction method [8]. This type of algorithm cannot effectively use the 3D motion prediction of the object to deal with the occlusion problem, which often leads to trajectory interruptions and identity switches. b) Firstly, the spatial association is assigned to reconstruct the 3D points in each frame, and then the temporal association of these 3D points is assigned to link 3D trajectories; this is the reconstruction-to-tracking method [9]. This type of method often results in many illusory 3D trajectories when the measurements are not discriminative enough, because the spatial association generates many "ghost" 3D points. c) Assuming the initial 3D position of each object is known, the 3D position in the next frame is first predicted by the kinematic model and reprojected to each view using the spatial association constraints. The prediction is confirmed if there is overlap between the reprojected detection and the measurements on every view, which makes the temporal association naturally satisfy the spatial association constraints. This type of method, the tracking-while-reconstruction method [10], can effectively eliminate phantoms, is insensitive to occlusion, and can obtain continuous and reliable trajectories.
In many cases, especially for bio-clusters, it is difficult to obtain an object's kinematic model. Chen et al. [10] used the assumption of uniform linear motion to predict the object's position, which is reasonable when the frame rate is high enough. Wang et al. [11] used an LSTM network to learn the kinematic model of a single fruitfly, which makes the position prediction more reasonable but greatly increases the computational burden. Considering that an individual's motion is a variable-acceleration motion whose acceleration does not change abruptly, in this paper the current statistical model (CSM) [12] is used to predict not only the position but also the velocity of the object. We establish a tracking-while-reconstructing framework based on Bayesian inference and solve the temporal-spatial association problem by a particle filter method, where the object's states and covariance are predicted by the current statistical model. After the detections are assigned to each object, the state observations are determined. The Kalman filter (KF) is used to suppress measurement noise and improve tracking precision. The proposed method, named the current statistical model based Kalman particle filter (CSKPF) tracking algorithm, can not only build trajectories with higher integrity, continuity, and precision, but can also estimate the target velocity during tracking, which is essential for analyzing cluster behaviors.
This paper is arranged as follows: Section 1 introduces the significance and challenges of multi-view multi-object tracking, especially the 3D tracking of a large number of highly maneuvering objects, and summarizes existing temporal-spatial association methods and tracking-reconstruction frameworks. Section 2 extends the Bayesian inference framework from single-view single-object tracking to multi-view multi-object tracking-while-reconstructing. Then, the existing constant velocity based particle filter (CVPF) method is explained in the above framework, which leads to our CSKPF method. Section 3 first defines the evaluation indices for multi-view multi-object tracking, and then the tracking performance of the CVPF and CSKPF methods is compared on simulated and real data, which demonstrates the improvement of the proposed method. Finally, the paper is summarized in Section 4.
## 2 Method
In this section, the Bayesian framework for single-view single-object tracking is reviewed and the reason why it cannot be directly applied to multi-view multi-object tracking is analyzed. Then, a multi-view multi-object Bayesian inference framework to deal with the above problems is built. Finally, the existing CVPF method and the proposed CSKPF method based on this framework are explained.
### The Bayesian single-view single-object tracking framework
The discrete kinematic model and observation model of the target can be formulated as
\[\mathbf{x}_{t}=\mathbf{f}\left(\mathbf{x}_{t-1},\mathbf{v}_{t-1}\right), \mathbf{y}_{t}=\mathbf{h}\left(\mathbf{x}_{t},\mathbf{n}_{t}\right), \tag{1}\]
where \(\mathbf{x}_{t},\mathbf{y}_{t}\) are the state and observation of the target at moment \(t\), \(\mathbf{v}_{t},\mathbf{n}_{t}\) are the process noise and observation noise, and \(\mathbf{f}\) and \(\mathbf{h}\) are the state transition function and measurement function, respectively.
From the Bayesian point of view, object tracking is to estimate the current state \(\mathbf{x}_{t}\) based on the historical observations \(\mathbf{y}_{1:t}\). Assuming that the state transition is a first-order Markov process, and the _posterior_ state probability \(p\left(\mathbf{x}_{t-1}|\mathbf{y}_{1:t-1}\right)\)
at time \(t-1\) is known, then the _prior_ state at time \(t\) can be _predicted_ by the state transition model \(p\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)\) before obtaining the observation \(\mathbf{y}_{t}\) at moment \(t\), i.e.
\[p\left(\mathbf{x}_{t}|\mathbf{y}_{1:t-1}\right)=\int p\left(\mathbf{x}_{t}| \mathbf{x}_{t-1}\right)p\left(\mathbf{x}_{t-1}|\mathbf{y}_{1:t-1}\right)\,d \mathbf{x}_{t-1}. \tag{2}\]
After the observation \(\mathbf{y}_{t}\) is known, the _prior_ state can be _corrected_ to the _posterior_ state
\[p\left(\mathbf{x}_{t}|\mathbf{y}_{1:t}\right)\propto p\left(\mathbf{y}_{t}| \mathbf{x}_{t}\right)p\left(\mathbf{x}_{t}|\mathbf{y}_{1:t-1}\right), \tag{3}\]
where \(p\left(\mathbf{y}_{t}|\mathbf{x}_{t}\right)\) is the probability that the observation \(\mathbf{y}_{t}\) occurs when the state is \(\mathbf{x}_{t}\).
It seems that, by generating as many trackers as there are objects and letting each tracker run independently, the multi-view multi-object tracking problem could be solved by the above single-view single-object framework. However, on the one hand, the target detection cannot be determined before the temporal association of detections; on the other hand, the target observation cannot be reconstructed before the spatial association of detections. In brief, \(\mathbf{y}_{t}\) is unknown because the temporal-spatial association is still undetermined, which makes the _correct_ step of the Bayesian inference unimplementable. An improved Bayesian framework is designed in the next subsection to determine the temporal-spatial association, which makes the single-object Bayesian inference implementable in a multi-view multi-object tracking scenario.
### A Bayesian multi-view multi-object tracking-while-reconstruction framework
As mentioned above, the framework treats each object independently. Without loss of generality, as shown in Figure 1, only one tracker's temporal-spatial association Bayesian inference will be explained. The horizontal axis is the temporal axis driven by the kinematic model; the left half-plane represents the previous moment \(t-1\), and the right half-plane the current moment \(t\). The vertical axis is the spatial axis determined by the camera projection model; the upper half-plane is the 3D space, and the lower half-plane represents the 2D space of all views. Suppose that the state \(\mathbf{x}_{t-1}\) has been tracked according to the historical measurements. Then the _prior_ state \(\mathbf{x}_{t|t-1}\) can be predicted based on the kinematic model \(p\left(\mathbf{x}_{t}|\mathbf{x}_{t-1}\right)\). Finally, the projection of \(\mathbf{x}_{t|t-1}\) on each view can be calculated by the camera projection model. The state transfer from the third quadrant through the second and first quadrants to the fourth quadrant is constrained by the temporal association constraints and spatial association constraints.
In the fourth quadrant, the temporal-spatial association can be determined by the position and appearance similarity between the target and the candidate detections, which means the target observation \(\mathbf{y}_{t}\) becomes known. To be specific, in the fourth quadrant, if the projection \(\mathbf{P}^{v}(\mathbf{x}_{t|t-1})\) overlaps with measurement \(\chi_{v,t,k_{v}}\) on view \(v\), then the candidate \(\mathbf{z}_{t,k}=\langle\chi_{1,t,k_{1}},\cdots,\chi_{V,t,k_{V}}\rangle,k=1, \cdots,K\) is a potential temporal-spatial association. All potential candidates
Figure 1: A Bayesian multi-view multi-object tracking-while-reconstruction framework. The filled circles are 3D states and the dashed circles are their projections on each view. The solid circles are the 2D measurements, which are colored red/yellow if they are overlapping/non-overlapping with the state projection.
constitute the _candidate association group_\(\mathbf{z}_{t}=\{\mathbf{z}_{t,k}\}_{k=1}^{K}\). The credibility of the candidate association is defined as
\[w(\mathbf{z}_{t,k})=p\left(\mathbf{z}_{t,k}|\mathbf{x}_{t|t-1}\right)=\prod_{v=1 }^{V}\frac{1}{\left(\sqrt{2\pi}\sigma_{v}\right)^{-\frac{V}{2}}}\exp\big{(} \tau_{v}\left(\mathbf{x}_{t|t-1},\chi_{v,t,k_{v}}\right)-1\big{)}, \tag{4}\]
where \(\tau_{v}\left(\mathbf{x}_{t|t-1},\chi_{v,t,k_{v}}\right)\) is the appearance similarity between the projection \(\mathbf{P}^{v}(\mathbf{x}_{t|t-1})\) and the measurement \(\chi_{v,t,k_{v}}\). Thus, the target observation is the most reliable candidate association
\[\mathbf{y}_{t}=\operatorname*{arg\,max}_{k}w(\mathbf{z}_{t,k}). \tag{5}\]
According to the Bayesian inference formula (3), the corrected target state is
\[\mathbf{x}_{t}=\operatorname*{arg\,max}_{\hat{\mathbf{x}}_{t}}p\left(\hat{ \mathbf{x}}_{t}|\mathbf{y}_{1:t}\right). \tag{6}\]
The proposed framework can track multiple objects while reconstructing them. The tracker for each object runs independently in parallel, and one measurement can be associated with more than one object, which improves tracking efficiency and avoids tracking interruptions caused by image occlusion in some views.
### The constant velocity based particle filter (CVPF) method
It is difficult to solve the above tracking framework directly because there is an integral term in Equation (2). Therefore, Monte Carlo sampling is used to solve the Bayesian inference problem [13], i.e. the particle filter method. Specifically, Chen et al. [10] proposed the CVPF method (as shown in Algorithm 1) based on the constant velocity kinematic model and sampling importance resampling (SIR).
```
0: number of particles \(N_{p}\), particle variance \(\sigma^{2}\)
0: the measurements \(\mathcal{B}_{t,v}\) of each frame \(t\) and each view \(v\)
0: 3D trajectories of all objects
0: Initialize: Calculate the temporal-spatial association among \(\mathcal{B}_{1,v},\mathcal{B}_{2,v},v=1,\cdots,V\) using the method of exhaustion and obtain \(N_{2}\) matching pairs. All associated measurements on the second frame are put into the associated measurement set \(\mathcal{M}_{2,v}\). Create \(N_{2}\) new trackers, put them into the active tracker set \(\mathcal{T}_{2}\), and initialize the inactive tracker set \(\mathcal{L}_{2}\) as the empty set.
1:for\(t=3,\cdots,T\)do
2: Initialize \(\mathcal{M}_{t,v}=\emptyset\)
3:for\(\tau\) in \(\mathcal{T}_{t-1}\)do
4: Predict the state \(\mathbf{x}_{t|t-1}\) of the tracker \(\tau\) from \(\mathbf{x}_{t-1}\) using Equation (7).
5: Sample \(N_{p}\) particles \(\mathbf{x}_{t|t-1}^{*}\sim\mathcal{N}\left(\mathbf{x}_{t|t-1},\sigma^{2}\right)\) and renew their weights by \(1/N_{p}\).
6:for\(i=1,\cdots,N_{p}\)do
7: Calculate \(\mathbf{z}_{t|t-1}^{i}\) and their credibilities for \(\mathbf{x}_{t|t-1}^{i}\) using Equation (4).
8: Determine the particle weights \(w_{i}\) using Equation (9).
9:endfor
10:if\(\forall i,w_{i}=0\)then
11: Move the tracker \(\tau\) from the active tracker set \(\mathcal{T}_{t-1}\) to inactive \(\mathcal{L}_{t-1}\).
12:else
13: Estimate the state \(\hat{\mathbf{x}}_{t}\) by Equation (10).
14: Calculate \(\mathbf{z}_{t}\) and its credibilities for \(\hat{\mathbf{x}}_{t}\) using Equation (4).
15: Put the most likely association \(\mathbf{z}_{t}^{*}\) into the association measurements set \(\mathcal{M}_{t,v}\).
16: Reconstruct the most likely observation \(\mathbf{y}_{t}\) by \(\mathbf{z}_{t}^{*}\) using the stereovision method.
17:\(\mathbf{x}_{t}\leftarrow\hat{\mathbf{x}}_{t}\).
18:endif
19:endfor
20: Do temporal-spatial association among the unassociated measurements \(\mathcal{B}_{t-1,v}/\mathcal{M}_{t-1,v}\), \(\mathcal{B}_{t,v}/\mathcal{M}_{t,v}\) using method of exhaustion and initialize new trackers \(\mathcal{R}_{t}\).
21: Update the tracker set by \(\mathcal{T}_{t}\leftarrow\mathcal{T}_{t-1}\bigcup\mathcal{R}_{t}\).
22:endfor
23: Parse the trajectories of all objects from the historical states of trackers \(\mathcal{T}_{T}\) and \(\mathcal{L}_{T}\).
```
**Algorithm 1** CVPF algorithm
#### 2.3.1 State prediction
Denote the target state as \(\mathbf{x}=[x,\dot{x},y,\dot{y},z,\dot{z}]^{\top}\); the predicted state based on the constant-velocity hypothesis is
\[\mathbf{x}_{t|t-1}=\mathbf{F}\mathbf{x}_{t}, \tag{7}\]
where \(\mathbf{F}\) is the state transition matrix
\[\mathbf{F}=\left[\begin{array}{ccc}\mathbf{F}_{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{F}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{F}_{1}\end{array}\right],\;\mathrm{and}\;\; \mathbf{F}_{\mathbf{1}}=\left[\begin{array}{cc}1&\Delta t\\ 0&1\end{array}\right], \tag{8}\]
and \(\Delta t\) is the sampling interval.
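For concreteness, a small Python sketch (not from the original implementation) that builds the transition matrix (8) and performs the prediction (7):

```python
import numpy as np

def cv_transition_matrix(dt):
    """Block-diagonal transition matrix F of (8) for x = [x, vx, y, vy, z, vz]."""
    f1 = np.array([[1.0, dt],
                   [0.0, 1.0]])
    return np.kron(np.eye(3), f1)

# constant-velocity prediction, eq. (7): x_pred = F @ x_prev
x_prev = np.array([0.0, 1.0, 0.0, 0.5, 0.0, -0.2])
x_pred = cv_transition_matrix(0.1) @ x_prev
```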
#### 2.3.2 Particle sampling
Suppose there are \(N_{p}\) particles; the importance probability density function of the particle \(\mathbf{x}_{t|t-1}^{i},i=1,\cdots,N_{p}\), is \(q\left(\mathbf{x}_{t}^{i}|\mathbf{x}_{t-1}^{i},\mathbf{y}_{1:t}\right)=p\left( \mathbf{x}_{t}^{i}|\mathbf{x}_{t-1}^{i}\right)\) according to the SIR filter method. The CVPF method assumes that \(p\left(\mathbf{x}_{t}^{i}|\mathbf{x}_{t-1}^{i}\right)\) obeys a Gaussian distribution with mean value \(\mathbf{x}_{t|t-1}\) and variance \(\sigma^{2}\). By sampling \(N_{p}\) particles from the importance probability density function, \(\mathbf{x}_{t|t-1}^{i}\sim\mathcal{N}(\mathbf{x}_{t|t-1},\sigma^{2})\), and setting the particle weights to \(w_{i}=1/N_{p}\), the candidate association group \(\mathbf{z}_{t|t-1}^{i}\) of the \(i\)-th particle and its credibility can be calculated according to the tracking framework.
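A minimal sketch of this sampling step (illustrative only; the particle array layout and function name are our own choices):

```python
import numpy as np

def sample_particles(x_pred, sigma2, n_p, rng):
    """Draw N_p particles around the CV prediction with isotropic variance sigma^2."""
    cov = sigma2 * np.eye(len(x_pred))
    particles = rng.multivariate_normal(x_pred, cov, size=n_p)
    weights = np.full(n_p, 1.0 / n_p)           # reset weights to 1/N_p
    return particles, weights

rng = np.random.default_rng(0)
particles, weights = sample_particles(np.zeros(6), 0.09, 100, rng)
```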
#### 2.3.3 State correction
According to Equation (4), the particle weights are updated to
\[w_{i}=\max_{k}w(\mathbf{z}_{t|t-1,k}^{i}), \tag{9}\]
where \(\mathbf{z}_{t|t-1,k}^{i}\) is the \(k\)-th candidate association of the particle \(\mathbf{x}_{t|t-1}^{i}\). Thus, the estimated target state is
\[\hat{\mathbf{x}}_{t}=\sum_{i=1}^{n}w_{i}\mathbf{x}_{t|t-1}^{i}. \tag{10}\]
The CVPF method takes \(\hat{\mathbf{x}}_{t}\) as the state correction \(\mathbf{x}_{t}\). By reprojecting the state \(\hat{\mathbf{x}}_{t}\) to each view, the most likely association \(\mathbf{z}_{t}^{*}\) can be determined according to Equation (5), and the observation \(\mathbf{y}_{t}\) can be calculated by the stereovision method. Finally, new trackers can be initialized from the unassociated measurements.
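The weighted estimate (10) can be sketched as follows (we normalise the weights from (9) before averaging, which we assume is the intended behaviour; the all-zero-weight case is handled separately in Algorithm 1):

```python
import numpy as np

def estimate_state(particles, weights):
    """Weighted state estimate of eq. (10) from particles and their weights (9)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # assumed normalisation of the weights
    return (w[:, None] * np.asarray(particles)).sum(axis=0)
```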
### Current statistical model based Kalman particle filter (CSKPF) method
The constant-velocity hypothesis of the CVPF method makes it unsuitable for maneuvering targets. Moreover, the particle covariance is always the constant \(\sigma^{2}\), which does not reflect the state prediction uncertainty. The proposed CSKPF method (as shown in Algorithm 2) improves the CVPF method by solving these two problems.
#### 2.4.1 State prediction
The object motion is modeled by the current statistical model, which assumes that the object acceleration is bounded and obeys a modified Rayleigh distribution around the average acceleration over a past period of time. Denote the target state as \(\mathbf{x}=[x,\dot{x},\ddot{x},y,\dot{y},\ddot{y},z,\dot{z},\ddot{z}]^{\top}\) and predict the target state by
\[\mathbf{x}_{t|t-1}=\mathbf{G}\mathbf{x}_{t}+\mathbf{U}\bar{\mathbf{a}}, \tag{11}\]
and estimate its covariance by
\[\mathbf{P}_{t|t-1}=\mathbf{G}\mathbf{P}_{t-1}\mathbf{G}^{\top}+\mathbf{Q}_{t}, \tag{12}\]
where \(\bar{\mathbf{a}}\) is the current average acceleration and
\[\mathbf{G}=\left[\begin{array}{ccc}\mathbf{G}_{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{G}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{G}_{1}\end{array}\right],\;\mathrm{and}\; \mathbf{G}_{1}=\left[\begin{array}{ccc}1&\Delta t&\frac{1}{\alpha^{2}} \left(-1+\alpha\Delta t+e^{-\alpha\Delta t}\right)\\ 0&1&\frac{1}{\alpha}\left(1-e^{-\alpha\Delta t}\right)\\ 0&0&e^{-\alpha\Delta t}\end{array}\right] \tag{13}\]
is the state transition matrix,
\[\mathbf{U}=\left[\begin{array}{ccc}\mathbf{u}_{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{u}_{1}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{u}_{1}\end{array}\right],\;\mathrm{and}\;\mathbf{u}_{1}=\left[\begin{array}{c}\frac{1}{\alpha^{2}}\left(-\alpha\Delta t+\frac{\alpha^{2}(\Delta t)^{2}}{2}+1-e^{-\alpha\Delta t}\right)\\ \frac{1}{\alpha}\left(\alpha\Delta t-1+e^{-\alpha\Delta t}\right)\\ 1-e^{-\alpha\Delta t}\end{array}\right] \tag{14}\]
is the input control matrix. The the process noise estimation matrix
\[\mathbf{Q}=\left[\begin{array}{ccc}\mathbf{Q}_{1}&\mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathbf{Q}_{2}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathbf{Q}_{3}\end{array}\right],\;\mathrm{and}\;\; \mathbf{Q}_{i}=2\alpha_{i}\sigma_{i}^{2}\left[\begin{array}{ccc}q_{11}( \alpha_{i})&q_{12}(\alpha_{i})&q_{13}(\alpha_{i})\\ q_{12}(\alpha_{i})&q_{22}(\alpha_{i})&q_{23}(\alpha_{i})\\ q_{13}(\alpha_{i})&q_{23}(\alpha_{i})&q_{33}(\alpha_{i})\end{array}\right],i= 1,2,3 \tag{15}\]
keeps changing over time, where
\[\sigma_{i}^{2}=\left\{\begin{array}{ccc}\frac{4-\pi}{\pi}{[a_{\max,i}-\bar{a} _{i}(t)]}^{2}&\bar{a}_{i}(t)>0\\ \frac{4-\pi}{\pi}{[-a_{\max,i}-\bar{a}_{i}(t)]}^{2}&\bar{a}_{i}(t)<0\end{array},\right. \tag{16}\]
and
\[\begin{array}{l}q_{11}=\frac{1}{2\alpha^{5}}\left[1-e^{-2\alpha\Delta t}+2 \alpha\Delta t+\frac{3}{2}\alpha^{3}\Delta t^{3}-2\alpha^{2}\Delta t^{2}-4 \alpha\Delta te^{-\alpha\Delta t}\right]\\ q_{12}=\frac{1}{2\alpha^{4}}\left[e^{-2\alpha\Delta t}+1-2e^{-\alpha\Delta t }+2\alpha\Delta te^{-\alpha\Delta t}-2\alpha\Delta t+\alpha^{2}\Delta t^{2} \right]\\ q_{13}=\frac{1}{2\alpha^{3}}\left[1-e^{-2\alpha\Delta t}-2\alpha\Delta te^{- \alpha\Delta t}\right]\\ q_{22}=\frac{1}{2\alpha^{3}}\left[4e^{-\alpha\Delta t}-3-e^{-2\alpha\Delta t }+2\alpha\Delta t\right]\\ q_{23}=\frac{1}{2\alpha^{3}}\left[e^{-2\alpha\Delta t}+1-2e^{-\alpha\Delta t }\right]\\ q_{33}=\frac{1}{2\alpha}\left[1-e^{-2\alpha\Delta t}\right].\end{array} \tag{17}\]
The current statistical model parameter \(\alpha_{i}\) is the reciprocal of the maneuvering time constant, and \(a_{\max,i}\) is the maximum possible acceleration.
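As an illustration (not taken from the paper's code), the per-axis blocks of (13) and (14) can be assembled as follows:

```python
import numpy as np

def csm_blocks(dt, alpha):
    """Per-axis transition block G1 (13) and input block u1 (14) of the current
    statistical model for the sub-state [position, velocity, acceleration]."""
    e = np.exp(-alpha * dt)
    g1 = np.array([
        [1.0, dt, (-1.0 + alpha * dt + e) / alpha**2],
        [0.0, 1.0, (1.0 - e) / alpha],
        [0.0, 0.0, e],
    ])
    u1 = np.array([
        (-alpha * dt + 0.5 * (alpha * dt) ** 2 + 1.0 - e) / alpha**2,
        (alpha * dt - 1.0 + e) / alpha,
        1.0 - e,
    ])
    return g1, u1

# full 9x9 and 9x3 matrices used in the prediction (11)
g1, u1 = csm_blocks(0.1, 5.0)
G = np.kron(np.eye(3), g1)
U = np.kron(np.eye(3), u1.reshape(3, 1))
```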
#### 2.4.2 Particle sampling
The particle sampling of the CSKPF method is almost the same as that of the CVPF method, except that the CSKPF method assumes that \(p\left(\mathbf{x}_{t}^{i}|\mathbf{x}_{t-1}^{i}\right)\) obeys a Gaussian distribution with mean value \(\mathbf{x}_{t|t-1}\)_but_ covariance \(\mathbf{P}_{t|t-1}\), which
means the particles are dense when the covariance is small and sparse when the covariance is large. In this way, the particle sampling is more efficient.
#### 2.4.3 State correction
Similar to the CVPF method, the state estimate \(\hat{\mathbf{x}}_{t}\) and its corresponding observation \(\mathbf{y}_{t}\) can be obtained, but the CSKPF loop does not end here. The state estimate and its covariance are then corrected by the observation \(\mathbf{y}_{t}\) using the Kalman filter, i.e.
\[\mathbf{x}_{t} =\hat{\mathbf{x}}_{t}+\mathbf{K}_{t}\left(\mathbf{y}_{t}-\mathbf{ H}\hat{\mathbf{x}}_{t}\right), \tag{18}\] \[\mathbf{P}_{t} =\mathbf{P}_{t|t-1}-\mathbf{K}_{t}\mathbf{H}\mathbf{P}_{t|t-1}, \tag{19}\]
where
\[\mathbf{K}_{t}=\mathbf{P}_{t|t-1}\mathbf{H}^{\top}\left(\mathbf{H}\mathbf{P}_ {t|t-1}\mathbf{H}^{\top}+\mathbf{R}\right)^{-1} \tag{20}\]
is the Kalman gain matrix, \(\mathbf{H}\) is the observation matrix and \(\mathbf{R}\) is the observation noise matrix.
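A compact sketch of the correction step (18)-(20) (illustrative only; matrix shapes follow the nine-dimensional CSM state):

```python
import numpy as np

def kalman_correct(x_hat, P_pred, y, H, R):
    """Kalman correction of the state estimate and covariance, eq. (18)-(20)."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain, eq. (20)
    x = x_hat + K @ (y - H @ x_hat)              # corrected state, eq. (18)
    P = P_pred - K @ H @ P_pred                  # corrected covariance, eq. (19)
    return x, P
```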
#### 2.4.4 Notes
It should be noted that the current statistical model assumes that the acceleration obeys a modified Rayleigh distribution whose mean value is the prediction of the current acceleration. Therefore, the CSKPF method requires a "warm-up" period to estimate the mean acceleration. In practice, during the first \(T_{h}\) frames, a tracker uses both the CVPF and CSKPF methods to predict and correct the state, but only accepts the CVPF output. The mean acceleration is updated at every CSKPF tracker step so that the estimate of the acceleration converges to a reasonable value after \(T_{h}\) frames. When \(t\geq T_{h}\), only CSKPF runs.
As can be seen, CSKPF can not only build the trajectories of the objects but also estimate their velocity and acceleration at the same time, which is useful for motion analysis.
## 3 Experiments
In this section, the CVPF and CSKPF methods are compared based on the proposed evaluation indices on simulated data and verified on real-world fruitfly data.
### Evaluation index
The tracking accuracy index MOTA and the tracking precision index MOTP are widely used to evaluate multiple object tracking (MOT) methods [14]. Based on MOTA and MOTP, we propose more intuitive indices customized for 3D trajectory construction, i.e., tracking integrity, continuity, and precision.
Suppose that the series \(\mathcal{P}^{i}=\{\mathbf{p}_{t_{i}}^{i},\cdots,\mathbf{p}_{t}^{i},\cdots,\mathbf{p}_{T_{i}}^{i}\}\) is the \(i\)-th true trajectory, which starts at \(t_{i}\) and ends at \(T_{i}\). The \(j\)-th tracked trajectory is \(\mathcal{Q}^{j}=\{\mathbf{q}_{t_{i}}^{j},\cdots,\mathbf{q}_{t}^{j},\cdots, \mathbf{q}_{T_{i}}^{j}\}\). Note that at some moment \(t\), \(\mathbf{q}_{t}^{j}\) may not exist due to missed tracking. Define the position discrepancy between \(\mathcal{P}^{i}\) and \(\mathcal{Q}^{j}\) at moment \(t\) by
\[d_{ij}\left(t\right)=\left\{\begin{array}{cc}\left|\mathbf{p}_{t}^{i}- \mathbf{q}_{t}^{j}\right|&\text{if }\mathbf{q}_{t}^{j}\text{ exist}\\ d_{0}&\text{otherwise}\end{array}\right.,i=1,\cdots,N,j=1,\cdots,M, \tag{21}\]
where \(d_{0}\) is the threshold distance. Then, the discrepancy between \(\mathcal{P}^{i}\) and \(\mathcal{Q}^{j}\) among the whole tracking is
\[d_{ij}=\frac{1}{\left(T_{i}-t_{i}\right)}\sum_{t=t_{i}}^{T_{i}}d_{ij}(t). \tag{22}\]
**Integrity.** For the \(i\)-th true trajectory, more integral tracking means it is tracked for as long as possible. Therefore, define the tracking integrity as the proportion of frames in which the true trajectories are tracked, i.e.,
\[\mathrm{Integrity}=\frac{\sum_{i=1}^{N}\sum_{t=t_{i}}^{T_{i}}\delta\left(k_{t}^ {i}\right)}{\sum_{i=1}^{N}\left(T_{i}-t_{i}\right)}, \tag{23}\]
where \(\delta\left(\cdot\right)\) is the Dirac function.
**Continuity.** For the \(i\)-th true trajectory, more continuous tracking means it matches as few tracked trajectories as possible. Using \(\text{IDSW}\big{(}\mathbf{k}^{i}\big{)}\) to count the number of identity switches in the matching series \(\mathbf{k}^{i}\), the tracking continuity can be represented by
\[\mathrm{Continuity}=1-\frac{\sum_{i=1}^{N}\text{IDSW}\left(\mathbf{k}_{i}\right) }{\sum_{i=1}^{N}\left(T_{i}-t_{i}\right)}. \tag{24}\]
**Precision.** For the \(i\)-th true trajectory, the tracking precision is defined as the average discrepancy between the true and tracked trajectories over the frames where \(k_{t}^{i}\neq 0\), i.e.,
\[\mathrm{Precision}=\frac{\sum_{i=1}^{N}\sum_{t=t_{i}}^{T_{i}}d_{ik_{t}^{i}(t)} }{\sum_{i=1}^{N}\sum_{t=t_{i}}^{T_{i}}\sum_{j=1}^{M}b_{t,ij}}. \tag{25}\]
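As a rough illustration of the integrity and continuity indices (following the verbal definitions above; the encoding of the matching series as an integer sequence with 0 meaning "not tracked" is our own assumption):

```python
def integrity_and_continuity(matches):
    """Compute tracking integrity and continuity from matching series.

    matches[i] is the sequence k^i_t of tracker ids matched to true trajectory i,
    with 0 meaning that the trajectory is not tracked at that frame.
    """
    total = sum(len(k) for k in matches)
    tracked = sum(1 for k in matches for kt in k if kt != 0)
    switches = 0
    for k in matches:
        prev = None
        for kt in k:
            if kt != 0:
                if prev is not None and kt != prev:
                    switches += 1                 # identity switch (IDSW)
                prev = kt
    return tracked / total, 1.0 - switches / total
```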
### Simulation experiment
#### 3.2.1 Generate true trajectories
The simulation data is generated in the coordinate system shown in Figure 2. There are \(N\) objects whose positions are randomly initialized in the box of \(x_{0},y_{0},z_{0}\in[-20,20]\). The \(i\)-th object's speed is \(V_{i}(t)=6+2\sin\big{(}\frac{2\pi}{5}t+\alpha_{i,0}\big{)}\), and its heading angle is \(\xi_{i}(t)=\frac{1}{2}A_{\xi,i}\left(1+\cos\big{(}\frac{\pi}{10}t+\xi_{i,0} \big{)}\right)\) and the climbing angle is \(\gamma_{i}(t)=\frac{1}{4}A_{\gamma,i}\cos\big{(}\frac{\pi}{10}t+\gamma_{i,0} \big{)}\) at moment \(t\), where \(\alpha_{i,0},\xi_{i,0},\gamma_{i,0},\in[0,2\pi)\), \(A_{\xi,i},A_{\gamma,i}\in[-1,1]\) are randomly initialized. Then, the velocity at moment \(t\) is
\[\mathbf{v}_{i}(t)=V_{i}(t)\left[\cos\gamma_{i}\cos\xi_{i}\quad\cos\gamma_{i} \sin\xi_{i}\quad\sin\gamma_{i}\right]^{\top} \tag{26}\]
The true trajectories are integrated from their velocities.
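The trajectory generation can be sketched as follows (illustrative only; the explicit Euler integration of (26) and the function name are our own choices):

```python
import numpy as np

def simulate_trajectory(T, dt, rng):
    """Generate one simulated true trajectory as described in Sec. 3.2.1."""
    a0, xi0, g0 = rng.uniform(0.0, 2.0 * np.pi, 3)   # random phases
    A_xi, A_g = rng.uniform(-1.0, 1.0, 2)            # random amplitudes
    p = rng.uniform(-20.0, 20.0, 3)                  # random initial position
    traj = [p.copy()]
    for k in range(1, T):
        t = k * dt
        V = 6.0 + 2.0 * np.sin(2.0 * np.pi / 5.0 * t + a0)
        xi = 0.5 * A_xi * (1.0 + np.cos(np.pi / 10.0 * t + xi0))
        gam = 0.25 * A_g * np.cos(np.pi / 10.0 * t + g0)
        v = V * np.array([np.cos(gam) * np.cos(xi),
                          np.cos(gam) * np.sin(xi),
                          np.sin(gam)])               # velocity, eq. (26)
        p = p + v * dt
        traj.append(p.copy())
    return np.array(traj)

trajectory = simulate_trajectory(T=50, dt=0.1, rng=np.random.default_rng(1))
```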
#### 3.2.2 Render noised measurements
Suppose that each object is a ball with a radius of 0.5. As shown in Figure 2, two cameras are orthogonally placed, and appropriate camera parameters are selected to ensure that all objects are in the field of view of both cameras. Gaussian noise is added to each object's projection to simulate the measurement error. An example is shown in Figure 3 and Figure 4.
#### 3.2.3 Result
The performance of CVPF and CSKPF is compared for \(N=\)1, 20, 40, 60, 80, 100, 120, 140, 160 objects, with 50 repetitions for each \(N\). The particle number of both methods is \(N_{p}=100\). The particle covariance is \(\sigma^{2}=0.09\) in the CVPF method, and \(\alpha=5,a_{\max}=5\) in the CSKPF method. The time step is \(\Delta t=0.1\)s, and the tracking lasts 5 seconds.
As shown in Figure 5, when there is only one object in the scene, both methods successfully build an integral and continuous trajectory because there is no temporal-spatial association uncertainty for a single object. Nevertheless, the CSKPF method has better precision than the CVPF algorithm. As the number of objects increases, the temporal-spatial association uncertainties increase. As a result, the integrity and continuity of the two methods both show a downward
trend, and the tracking errors increase. However, the integrity and continuity deterioration of the CSKPF method is more moderate. When there are 160 objects, the CSKPF integrity and continuity can still reach about 0.85 and 0.995, respectively, while those of CVPF have dropped below 0.7 and 0.99. In terms of precision, the tracking errors of the two methods increase linearly, but that of CSKPF is always lower than that of CVPF.
In summary, the simulation experiments prove that the proposed CSKPF method can improve tracking integrity, continuity, and precision.
### Fruitflies' experiment
The tracking of a biological swarm is a challenging scenario for multi-view multi-object tracking methods. The CVPF and CSKPF algorithms are further compared on the flight data of fruitflies (_Drosophila melanogaster_) recorded in a laboratory environment.
### Data acquisition
As shown in Figure 6, the fruitflies fly in a \(400\times 400\times 400\)mm box glued from transparent acrylic boards. The box is placed between infrared high-speed digital cameras (PCO Dimax HS4 2000x2000@100FPS, Zeiss Biogon T* 35mm f/2 ZM) and an infrared light panel, which is covered with a diffuser to produce soft and flicker-free light. The two calibrated cameras are synchronized and orthogonally placed about 900 mm away from the center of the box.
Figure 4: Rendered images in (a) view 1 and (b) view 2 for \(N=60\) at moment \(t=1\). The blue regions are the box and the black dots are the objects.
Figure 5: The tracking (a) integrity, (b) continuity, (c) precision of CVPF and CSKPF method in simulated data. The solid line represents the average index of 50 experiments, and the corresponding shade surrounds one standard deviation.
Figure 6: image acquisition system.
The measurements are expressed as pixel blobs \(\chi_{v,t,k},v=1,2,t=1,\cdots,T,k=1,\cdots,K\) segmented by the Gaussian background modeling method, which is the same as in [10]. Other detection methods [15] could be used, but they are not discussed here.
#### 3.4.1 Particle weight
Define the geometrical overlap ratio of the particle \(\mathbf{x}_{t|t-1}^{i}\) and the measurement \(\chi_{v,t,k_{v}}\) by
\[\eta\left(\mathbf{x}_{t|t-1}^{i},\chi_{v,t,k_{v}}\right)=\frac{|\mathbf{P}^{v}\Upsilon(\mathbf{x}_{t|t-1}^{i})\bigcap\chi_{v,t,k_{v}}|}{|\chi_{v,t,k_{v}}|},v=1,2 \tag{27}\]
where the \(|\cdot|\) operator counts the number of pixels, \(\mathbf{P}^{v}\) is the projection matrix of camera \(v\), and \(\Upsilon(\mathbf{x}_{t|t-1}^{i})\) is a ball centered at \(\mathbf{x}_{t|t-1}^{i}\). (The diameter of the ball is set to 3 mm because the average body length of a fruitfly is about 3 mm.) Denote the most likely association at moment \(t-1\) as \(\langle\chi_{1,t-1,k_{1}^{*}},\chi_{2,t-1,k_{2}^{*}}\rangle\); the credibility of the \(k\)-th association \(\langle\chi_{1,t,k_{1}},\chi_{2,t,k_{2}}\rangle\) on view \(v\) is
\[\tau_{v,k}\left(\mathbf{x}_{t|t-1}^{i}\right)=\eta\left(\mathbf{x}_{t|t-1}^{i},\chi_{v,t,k_{v}}\right)\mathrm{NCC}\left(\chi_{v,t,k_{v}},\chi_{v,t-1,k_{v}^{*}}\right) \tag{28}\]
where \(\mathrm{NCC}(\cdot,\cdot)\) is the normalized cross-correlation between the target appearance and the measurement. The credibility of particle \(\mathbf{x}_{t|t-1}^{i}\) on view \(v\) is \(\tau_{v}=\max_{k}\tau_{v,k}\), and the weight \(w_{i}\) of the particle \(\mathbf{x}_{t|t-1}^{i}\) can be updated by Equation (4).
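The overlap ratio (27) reduces to a simple mask intersection once the particle ball has been reprojected; a minimal sketch (the boolean-mask inputs are our own assumption):

```python
import numpy as np

def overlap_ratio(projected_mask, measurement_mask):
    """Geometrical overlap ratio eta of eq. (27), with both the reprojected particle
    ball and the measurement blob given as boolean pixel masks of equal shape."""
    intersection = np.logical_and(projected_mask, measurement_mask).sum()
    return intersection / max(int(measurement_mask.sum()), 1)
```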
#### 3.4.2 Result
The performance of CVPF and CSKPF is compared on 100 frames of data. The particle number of both methods is \(N_{p}=300\). The particle covariance is \(\sigma^{2}=4\)mm in the CVPF method, and \(\alpha=1,a_{\max}=0.1\) in the CSKPF method. The time step is \(\Delta t=0.01\)s.
As shown in Figure 7, the trajectories of CSKPF are smoother than those of CVPF. Moreover, by counting the trajectories that are longer than \(x\) in Figure 8, it can be seen that there are more CSKPF trajectories than CVPF trajectories at \(30<x<50\), which means CSKPF can build more integral trajectories. It should be noted that at \(50<x<60\), the integrity of CVPF is slightly higher, because CVPF regards a trajectory that rebounds off the wall of the box as one trajectory, while CSKPF does not accept such unsmooth trajectories. In practice, only continuous trajectories are of concern. Thus, the integrity of the CSKPF method is better than that of CVPF.
One typical trajectory is shown in Figure 9, and its position and velocity curves are shown in Figure 10 and Figure 11. Obviously, the CVPF trajectory is unsmooth and even temporarily deviates from the object. According to the position curve, the deviation of CVPF occurs around frame 30. The trajectory between frames \(25\sim 36\) is reprojected to each view in Figure 12. It is obvious that the deviation is caused by another fruitfly that flew towards the target fruitfly in view 2 at frame 30. The images of the two fruitflies are too close for CVPF to distinguish, while CSKPF succeeds thanks to better motion prediction, which is evidenced by the smoother and more reasonable velocity estimate. On the other hand, better velocity estimation is important for motion analysis.
Figure 7: Trajectories of fruitflies reconstructed by the (a) CVPF and (b) CSKPF methods.
In summary, the fruitfly flight experiment demonstrates that the proposed CSKPF method can more effectively track a large number of highly maneuvering objects whose kinematic model is unknown.
## 4 Conclusion
Aiming at the multi-view multi-object tracking problem, this paper proposes the CSKPF method, which improves the existing CVPF method by introducing the current statistical model to predict the motion, using the state covariance matrix to resample particles, and using the Kalman filter to suppress the measurement noise. Both the simulation experiments and the real-world fruitfly experiments demonstrate the advantages of CSKPF over the baseline CVPF. The proposed method is also suitable for tracking other highly maneuvering objects even if the kinematic model is unknown. In future research, a missed-detection assumption can be introduced to deal with trajectory interruptions caused by missed detections, and other fast algorithms for the Bayesian inference problem, such as the unscented Kalman filter, can be applied to improve computational efficiency.
## Acknowledgments
None
|
2309.11780 | Geometric Extensions | We prove that the derived direct image of the constant sheaf with field
coefficients under any proper map with smooth source contains a canonical
summand. This summand, which we call the geometric extension, only depends on
the generic fibre. For resolutions we get a canonical extension of the constant
sheaf. When our coefficients are of characteristic zero, this summand is the
intersection cohomology sheaf. When our coefficients are finite we obtain a new
object, which provides interesting topological invariants of singularities and
topological obstructions to the existence of morphisms. The geometric extension
is a generalization of a parity sheaf. Our proof is formal, and also works with
coefficients in modules over suitably finite ring spectra. | Chris Hone, Geordie Williamson | 2023-09-21T05:01:50Z | http://arxiv.org/abs/2309.11780v1 | # Geometric extensions
###### Abstract.
We prove that the derived direct image of the constant sheaf with field coefficients under any proper map with smooth source contains a canonical summand. This summand, which we call the geometric extension, only depends on the generic fibre. For resolutions we get a canonical extension of the constant sheaf. When our coefficients are of characteristic zero, this summand is the intersection cohomology sheaf. When our coefficients are finite we obtain a new object, which provides interesting topological invariants of singularities and topological obstructions to the existence of morphisms. The geometric extension is a generalization of a parity sheaf. Our proof is formal, and also works with coefficients in modules over suitably finite ring spectra.
## 1. Introduction
This paper introduces _geometric extensions_, which are generalizations of intersection cohomology sheaves and parity sheaves. We work in the setting of constructible sheaves on algebraic varieties, and show that direct image sheaves along any resolution contain a canonical direct summand which is independent of the resolution.1 When our coefficients are \(\mathbb{Q}\), this summand is the intersection cohomology sheaf. When our coefficients are finite, we obtain a new object. Our proof is formal, and works more generally for proper maps with smooth source, and with coefficients in any suitably finite ring spectrum. The stalks of the geometric extension (with coefficients in finite fields and other ring spectra) provide subtle topological invariants of the singularities of algebraic varieties.
Footnote 1: After having discovered this statement and its proof, we became aware of the McNamara’s paper [38, §5] where a statement equivalent to one of our main theorems is proved. Our proof is almost identical to McNamara’s.
In order to motivate geometric extensions, we first recall the traditional route to intersection cohomology extensions through perverse sheaves. We then turn to an alternative approach via the Decomposition Theorem, which will motivate the consideration of geometric extensions. We then state our main result, and finally give some motivation from modular representation theory, where geometric extensions generalise the notion of a parity sheaf.
### Motivation from the Decomposition Theorem
Let \(Y\) be a complex algebraic variety, equipped with its classical (metric) topology. Inside the constructible derived category of sheaves of \(\mathbb{Q}\)-vector spaces on \(Y\) there is a remarkable abelian category of _perverse sheaves_, which is preserved by Verdier duality. The abelian category of perverse sheaves is finite length and its simple objects are the intersection cohomology extensions of simple local systems on irreducible, smooth, locally closed subvarieties. Their global sections compute intersection cohomology.
The central importance of intersection cohomology extensions becomes manifest in the Decomposition Theorem. The Decomposition Theorem states that for smooth \(X\) and any proper morphism
\[f:X\to Y\]
of complex algebraic varieties the derived direct image \(f_{*}\mathbb{Q}_{X}\) is _semi-simple_: isomorphic in the derived category to a direct sum of shifts of intersection cohomology extensions of simple local systems on strata. The Decomposition Theorem implies that the intersection cohomology of \(Y\) is a direct summand in the cohomology of any resolution. It also implies (and generalizes) fundamental ideas in the topology of complex algebraic varieties like the local and global invariant cycle theorems, semi-simplicity of monodromy and Hodge theory [7, 12, 46, 48].
The Decomposition Theorem also provides another route to intersection cohomology complexes. The constructible derived category is Krull-Schmidt: every object admits a decomposition into indecomposable summands, and this decomposition is unique. The Decomposition Theorem implies that if one considers all proper maps \(f_{i}:X_{i}\to Y\) with smooth source,
then the summands of the derived direct images \((f_{i})_{*}\mathbb{Q}_{X_{i}}\) are of a special form: they are shifts of intersection cohomology complexes.
This observation allows one to imagine an alternate version of history, where intersection cohomology complexes were discovered via the Krull-Schmidt theorem rather than through the theory of perverse sheaves2. It also naturally raises the following question:
Footnote 2: The expert will miss the adjective “of geometric origin” in this discussion. Everything we discuss in this paper will be “of geometric origin”.
_Question 1.1_.: Let \(\Lambda\) denote a ring, and let \(\Lambda_{X}\) denote the constant sheaf on \(X\) with coefficients in \(\Lambda\). What can one say about the summands of the derived direct image \(f_{*}\Lambda_{X}\), for any resolution \(f:X\to Y\)? More generally, what can one say about the summands of \(f_{*}\Lambda_{X}\) for any proper morphism with smooth source \(f:X\to Y\)?
This question should be considered the central motivation of this paper.
By the proper base change theorem, the stalks of \(f_{*}\Lambda_{X}\) record the \(\Lambda\)-cohomology of the fibres of \(f\). If Question 1.1 has an answer giving a small list of possible summands (as is the case with \(\Lambda=\mathbb{Q}\), as implied by the Decomposition Theorem) then there are basic building blocks of the cohomology of morphisms, which only depend on \(Y\) and not on the particular morphism. For example, the "support theorems" (see e.g. [8, 41, 39]) show that the fibres of certain "minimal" maps (e.g. the Grothendieck-Springer resolution or Hitchin fibration) are determined to a large extent by the base \(Y\), and the generic behaviour of the map.
### Main results
Let \(\Lambda\) denote a field or complete local ring. As above, \(\Lambda_{X}\) denotes the constant sheaf on \(X\) with coefficients in \(\Lambda\). We are able to give a partial answer to Question 1.1. Any resolution contains a canonical direct summand:
**Theorem 1.2**.: _(see Theorem 5.1) Let \(Y\) be an irreducible variety. There exists a complex \(\mathscr{E}(Y,\Lambda)\in D^{b}_{c}(Y,\Lambda)\) characterised up to isomorphism by the following:_
1. \(\mathscr{E}(Y,\Lambda)\) _is indecomposable and its support is dense;_
2. \(\mathscr{E}(Y,\Lambda)\) _is a summand inside_ \(f_{*}\Lambda_{X}\)_, for any resolution_ \(f:X\to Y\)_._
We call \(\mathscr{E}(Y,\Lambda)\) the **geometric extension** on \(Y\).
_Remark 1.3_.: When \(\Lambda=\mathbb{Q}\) then \(\mathscr{E}(Y,\Lambda)=IC(Y,\mathbb{Q})\), by the Decomposition Theorem (see Proposition 5.11).
The stalks of the geometric extension record behaviour which "has to be there in any resolution". Indeed, the proper base change theorem and Theorem 5.1 immediately imply:
**Corollary 1.4**.: _Suppose \(\Lambda\) is a field. For any resolution \(f:X\to Y\) one has_
\[\dim H^{i}(\mathscr{E}(Y,\Lambda)_{y})\leq\dim H^{i}(f^{-1}(y),\Lambda)\]
_for any \(y\in Y\)._
_Remark 1.5_.: This corollary can be used to rule out the existence of resolutions of a particular form. For example, if \(\mathscr{E}(Y,\Lambda)_{y}\) has non-zero stalks in degrees \(0\) and \(2m\) for some \(m\), then Corollary 1.4 and the existence of fundamental classes imply that any resolution of \(Y\) has to have fibres of dimension at least \(m\) over \(y\) (see Example 5.17). This can be used to prove the non-existence of semi-small resolutions: if \(\mathscr{E}(Y,\Lambda)\) is not perverse, then no semi-small resolution of \(Y\) exists (see Proposition 5.18).
_Remark 1.6_.: Theorem 5.1 has also been obtained by McNamara [38, SS5], with a very similar proof. McNamara also noticed Remark 1.5 and uses this observation to rule out the existence of semi-small resolutions of certain Schubert varieties.
In the setting of Decomposition Theorem, it is essential to take local systems into account. This is already the case for a smooth morphism with smooth target \(f:E\to X\), where the Decomposition Theorem implies that \(f_{*}\mathbb{Q}_{E}\) splits as a direct sum of its cohomology sheaves \(\mathcal{H}^{i}(f_{*}\mathbb{Q}_{E})\), and each of the resulting local systems (with stalks \(H^{i}(f^{-1}(x),\mathbb{Q})\)) are semi-simple.
When we take more general coefficients, it is no longer true that the direct image along a proper smooth morphism has to split, nor that the resulting local systems are semi-simple. It is easy to produce examples where the monodromy fails to be semi-simple when the coefficients are not of characteristic \(0\). The failure of the direct image to split in the derived category is a little more subtle. We give examples of this failure to split for \(\mathbb{P}^{1}\)-bundles (where the non-splitting is connected to the Brauer group) in Example 5.23.
This motivates us to consider geometric local systems. A **geometric local system**\(\mathscr{L}\) on \(U\) is a smooth and proper map with smooth target
\[V\xrightarrow{\mathscr{L}}U.\]
The following generalises Theorem 1.2 to take local systems into account:
**Theorem 1.7**.: _(see Theorem 5.6) Assume \(Y\) is irreducible. For any dense (smooth) \(U\subset Y\) and geometric local system \(V\xrightarrow{\mathscr{L}}U\) there is a unique complex \(\mathscr{E}(Y,\mathscr{L})\in D^{b}_{c}(Y,\Lambda)\) satisfying:_
1. \(j^{*}\mathscr{E}(Y,\mathscr{L})\cong\mathscr{L}_{*}\Lambda_{U}\) _where_ \(j:U\hookrightarrow Y\) _denotes the inclusion;_
2. \(\mathscr{E}(Y,\mathscr{L})\) _has no summands supported on the complement of_ \(U\)_;_
3. _for any proper map with smooth source_ \(f:X\to Y\) _which agrees with_ \(\mathscr{L}\) _over_ \(U\)_,_ \(\mathscr{E}(Y,\mathscr{L})\) _occurs as a summand of_ \(f_{*}\Lambda_{X}\)_._
We call \(\mathscr{E}(Y,\mathscr{L})\) the **geometric extension** of the geometric local system \(\mathscr{L}\).
_Remark 1.8_.: We explain what Theorem 5.6 says when \(\Lambda=\mathbb{Q}\). By the (smooth case of the) Decomposition Theorem, \(\mathscr{L}_{*}\mathbb{Q}_{V}\) is isomorphic to the direct sum of its intersection cohomology sheaves, \(\bigoplus\mathscr{H}^{i}(\mathscr{L}_{*}\mathbb{Q}_{V})[-i]\), and each cohomology sheaf \(\mathscr{H}^{i}(\mathscr{L}_{*}\mathbb{Q}_{V})\) is semi-simple. If we define \(IC(\mathscr{L}_{*}\mathbb{Q}_{V})\) to be \(\bigoplus IC(Y,\mathscr{H}^{i}(\mathscr{L}_{*}\mathbb{Q}_{V}))[-i]\), then \(\mathscr{E}(Y,\mathscr{L})=IC(Y,\mathscr{L}_{*}\mathbb{Q}_{V})\), by the Decomposition Theorem.
_Remark 1.9_.: Again, \(\mathscr{E}(Y,\mathscr{L})\) provides lower bounds on the cohomology of any proper morphism extending \(\mathscr{L}\). We leave it to the reader to formulate an analogue of Corollary 1.4 in this more general setting.
_Warning 1.10_.: In contrast to the setting over \(\mathbb{Q}\), we prove that \(\mathscr{E}(Y,\mathscr{L})\) is not determined by its restriction to \(U\). More precisely, using the Legendre family of elliptic curves, we produce two geometric local systems \(V^{\prime}\xrightarrow{\mathscr{L}^{\prime}}U\) and \(V^{\prime\prime}\xrightarrow{\mathscr{L}^{\prime\prime}}U\) which have the same monodromy over \(\mathbb{F}_{2}\), but whose geometric extensions are not isomorphic (see Example 5.21).
### Coefficients in ring spectra
One interesting aspect of the current paper is that the results are formal: we only need the proper base change theorem, the existence of fundamental classes, and some finiteness to ensure the Krull-Schmidt theorem. (In the body of the paper we axiomatise our setup as a **base change formalism**, and prove our results in that setting.)
Using our formalism, we deduce that our main theorems hold with coefficients in suitable stable \(\infty\)-categories. In case the reader (like the authors) is intimidated by this theory, we provide a few paragraphs of motivation as to why we are interested in this level of generality.3
Footnote 3: \(\infty\)-categories are not needed in any arguments in this paper. However, \(\infty\)-categories are needed in providing the input (the “base change formalism”) with which we work.
A major theme in homotopy theory is the consideration of generalized cohomology theories like K-theory, elliptic cohomology, Brown-Peterson cohomology and the Morava K-theories. One can think about all of these cohomology theories as lenses through which to view homotopy theory: facts which are transparent in one theory are often opaque in another. Computation plays an enormously important role, and computations are often performed using the fact that any map is homotopic to a fibration, which gives rise to useful spectral sequences.
In algebraic geometry, smooth morphisms (the algebraic geometer's fibrations) are extremely rare, and an important role is played by constructible sheaves and the six functor formalism. The spectral sequence of a fibration is replaced by the Leray-Serre spectral sequence, or its variants. The Decomposition Theorem is a very powerful tool, as it allows one to conclude that the perverse Leray-Serre spectral sequence degenerates for any proper map. Traditionally, this formalism only
encompasses cohomology, homology and their variants. The connection to cohomology is via the basic fact that the derived global sections of the constant sheaf compute cohomology.
In homotopy theory, it has been clear for decades that one can obtain generalised cohomology as the global sections of a local object. (Indeed, by Brown representability, the generalized \(E\)-cohomology of \(X\) is given by homotopy classes of maps \([X,E^{i}]\), where \(E^{i}\) represents \(i^{th}\) \(E\)-cohomology.) Thus it is natural to ask: is there some theory of constructible sheaves, which would allow one to push and pull constant \(E\)-sheaves in much the same way that one can push and pull constant sheaves in algebraic geometry? Such a theory would unify the two approaches to cohomology of the preceding two paragraphs.4
Footnote 4: For an excellent articulation of this question, see [43].
Building on the fundamental work of Lurie [33, 34, 35], such a theory has become available [47]. We believe these more general coefficients (e.g. Morava K-theories) will provide a powerful tool to study torsion phenomena in the topology of complex algebraic varieties, in much the same way as they have done in homotopy theory. It is for this reason that we work in the generality of sheaves with coefficients in certain \(\infty\)-categories. (Again, we emphasise that we only need very formal properties from this theory, and none of its internals.) However, we do not discuss any computations with these more general objects in this paper. Geometric extensions in greater generality play an important role in forthcoming work of the first author [24] and Elias and the second author [18]. The idea of taking summands in more general (motivic) cohomology theories also shows up in the work of Eberhardt [16, 15].
_Remark 1.11_.: In §1.1 we discussed two routes to intersection cohomology sheaves: one via the theory of perverse sheaves (abelian categories), and one via the Decomposition Theorem and Krull-Schmidt (additive categories). In the setting of the more exotic coefficients discussed above, one often encounters periodic cohomology theories. (This is the case for K-theory, as well as all the Morava K-theories.) It is interesting to note that (the homotopy category of) sheaves of modules over such spectra cannot support a non-trivial \(t\)-structure, so there is no analogue of perverse sheaves with these coefficients. Geometric extensions, on the other hand, make sense as long as the coefficients satisfy a Krull-Schmidt condition.
_Remark 1.12_.: Above, our discussion centered on constructible sheaves (algebraic geometry) and generalized cohomology (homotopy theory). Another major motivation for the development of a sheaf theory underlying cohomology theories is the theory of triangulated categories of motives (see [11, Introduction §A] for an excellent historical introduction). Our results have a strong motivic flavour, as the reader may have already sensed in our definition of a geometric local system. It should be emphasised, however, that morphism categories in categories of motives rarely have the finiteness conditions that we need in this paper.
_Remark 1.13_.: Ever since the discovery of intersection cohomology in the 1970s, it has been suggested that there should be a reasonable theory of intersection K-theory. Such a definition has recently been given by Padurariu [42], as a subquotient of a geometric filtration on K-theory. The notion of geometric extension with coefficients in (rationalised) \(KU\)-modules provides another possible definition of intersection K-theory (see Definition 5.24). It would be interesting to compare the two approaches.
One can also hope that there is some (abelian, exact, triangulated, stable \(\infty\),...) category \(\mathcal{C}\) associated to our space \(X\) which categorifies intersection K-theory. The current work suggests a possible route towards such a category (at least in examples). Namely, intersection K-theory is realised as a summand inside the K-theory of any resolution, and the isomorphisms for different resolutions are sometimes realised by fundamental classes of correspondences. It would be very interesting to know if the classes realizing these isomorphisms could be lifted to functors, inducing categorical idempotents on categories of coherent sheaves on resolutions.
### Motivation from Modular Representation Theory
A major motivation for the current work comes from geometric modular representation theory. In the work of Lusztig and others, geometric methods (e.g. Deligne-Lusztig theory, character sheaves, the Kazhdan-Lusztig conjecture) have played a decisive role in classical (i.e. characteristic \(0\)) representation theory. Modular geometric representation theory aims to transport these successes to modular (i.e. mod \(p\)) representation theory (see [25, 1, 51]).
In this theory the notion of a **parity sheaf** has come to play a central role. These are sheaves whose stalks and costalks vanish in either even or odd degrees. In [26] it is proved that on many varieties arising in geometric representation theory, parity sheaves are classified in the same way as intersection cohomology complexes. Their importance in geometric modular representation theory appears to stem from two sources:
1. Whilst it is extremely difficult to compute with intersection cohomology sheaves with modular coefficients, computations with parity sheaves are sometimes possible, thanks to the role of intersection forms [26, §3]. This computability is behind counter-examples to the bounds in Lusztig's conjecture arising from unexpected torsion [50, 49] and the billiards conjecture of Lusztig and the second author [36].
2. When establishing derived equivalences, it is often useful to have a good class of generators whose algebra of extensions is formal. With \(\mathbb{Q}\)-coefficients, intersection cohomology complexes often provide such objects. When working with modular coefficients, parity sheaves seem to play the role of "pure" objects, although it is still somewhat mysterious as to why (see [44, 3, 4, 2]).
The main theorem of [26] relies crucially on the vanishing of odd cohomology of the strata in a fixed stratification. These properties often hold in geometric representation theory, but can be a hindrance. For example, they can be destroyed by passing to a normal slice.
Geometric extensions address this deficiency: parity sheaves are very often geometric extensions. Consider a stratified variety \(X=\bigsqcup X_{\lambda}\) satisfying the conditions of [26, 2.1], so that the notion of a parity sheaf makes sense. In almost all examples of parity sheaves (for the constant pariversity) one has nice5 resolutions
Footnote 5: i.e. _even_ in the language of [26, §2.4]
\[\pi_{\lambda}:\widetilde{X}_{\lambda}\to\overline{X_{\lambda}}\]
such that the parity sheaf corresponding to the stratum \(X_{\lambda}\) is an indecomposable direct summand of \((\pi_{\lambda})_{*}\Lambda_{\widetilde{X}_{\lambda}}\). It follows from Theorem 5.1 that the parity sheaf coincides with the geometric extension \(\mathscr{E}(\overline{X_{\lambda}},\Lambda)\).
_Remark 1.14_.: As we remarked above, Theorems 5.1 and 5.6 provide a partial answer to our guiding Question 1.1. Namely, indecomposable summands with dense support are geometric extensions. However, our theorems say nothing about what happens on lower strata. One could hope that they are geometric extensions, but we have very limited evidence for this claim. (The issue is that, in contrast to the situation for IC and parity sheaves, we have no characterisation of the geometric extension which is intrinsic to the space.) In the setting of parity sheaves (where one does have a local characterisation in terms of stalks and costalks) it is true that all summands are parity sheaves, which can be considered a weak form of the Decomposition Theorem.
### Acknowledgements
We would like to thank Bhargav Bhatt for helpful comments, in particular he suggested Example 5.23. We would like to thank Burt Totaro for useful comments on Example 5.17. We would like to thank Roman Bezrukavnikov, Peter McNamara, Luca Migliorini, Marco Volpe and Allen Yuan for useful discussions.
## 2. Base change formalism
In this section we will describe the categorical formalism used to prove our main theorems. This section is purely \(2\)-categorical, and we shall proceed axiomatically to emphasise its formal nature. The reader comfortable with the formalism of the constructible derived category of sheaves will find nothing unfamiliar in what follows.
From here, \(\mathscr{C}\) will be a category with pullbacks and a terminal object \(*\).
**Definition 2.1**.: A **base change formalism** \(S:=(S_{*},S_{!})\) on \(\mathscr{C}\) is the data of a pair of pseudo-functors \(S_{*},S_{!}\) from \(\mathscr{C}\) to the \(2\)-category \(\mathbf{Cat}\), and a lax natural transformation of pseudofunctors \(c:S_{!}\to S_{*}\). These two functors \(S_{*}\) and \(S_{!}\) strictly agree on objects, and the object components of \(c\) are the identity functor. For any object \(X\) and morphism \(f\) in \(\mathscr{C}\) we abbreviate \(S_{X}=S_{*}(X)=S_{!}(X)\), \(f_{*}=S_{*}(f)\), \(f_{!}=S_{!}(f)\), and \(c_{f}:f_{!}\to f_{*}\) for the component of \(c\) at \(f\). We require the following:
(BC1) For all morphisms \(f\), \(f_{*}\) admits a left adjoint \(f^{*}\), and \(f_{!}\) admits a right adjoint \(f^{!}\).
In view of (BC1), we say a morphism \(f\) in \(\mathscr{C}\) is **proper** if \(c_{f}:f_{!}\to f_{*}\) is an isomorphism, and **etale** if there exists an isomorphism \(f^{*}\cong f^{!}\).
For the remaining two conditions, we fix a pullback square:

(PS)
\[\begin{CD}X^{\prime}@>{\tilde{g}}>>X\\ @V{\tilde{f}}VV@VV{f}V\\ Y^{\prime}@>{g}>>Y\end{CD}\]
Our final conditions are the following:

(BC2) In (PS), if \(f\) is etale (resp. proper), then \(\tilde{f}\) is also etale (resp. proper).

(BC3) In (PS), the induced base change morphisms
\[g^{*}f_{*}\to\tilde{f}_{*}\tilde{g}^{*}\qquad\text{and}\qquad\tilde{f}_{!}\tilde{g}^{!}\to g^{!}f_{!}\]
are both isomorphisms if \(f\) is proper, or if \(g\) is etale (see Remark 2.4 for the definition of these base change morphisms).
_Remark 2.2_.: In many settings (constructible sheaves on complex varieties, etale sheaves, \(D\)-modules,...) one encounters a "6-functor formalism". Usually this manifests as a collection of triangulated (or \(\infty-\)) categories, with six functors
\[f^{*},f_{*},f_{!},f^{!},\mathcal{H}\mathrm{om},\otimes\]
satisfying a raft of relations (see e.g. [12]). There has been recent progress on axiomatizing what a six functor formalism is, particularly in the setting of \(\infty-\)categories (e.g. [11, 37, 32, 20]). As far as we are aware this process is ongoing and there is still no generally accepted definition. We need very little from the theory, and have tried to isolate the key features we require in Definition 2.1. The reader should have little trouble adapting other settings (e.g. etale sheaves, or \(D\)-modules) to our axioms.
The following will be a recurring example throughout this paper.
**Example 2.3**.: Let \(\mathscr{C}\) be the category of complex algebraic varieties, and let \(S_{X}=D_{c}^{b}(X,k)\) be the constructible derived category of sheaves on \(X(\mathbb{C})\) with coefficients in a field \(k\). (It is important that the derived category is used here, since \(f^{!}\) does not exist in general as a functor on abelian categories.) In this framework, the notions of etale and proper match their topological definitions, hence are closed under pullbacks giving (BC2). Our third axiom (BC3) goes under the name of proper base change in the literature (e.g. [28, Proposition 2.5.11].)
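To see what (BC3) says concretely in this example: for a proper map \(f:X\to Y\), a point \(i_{y}:\{y\}\hookrightarrow Y\) with fibre \(X_{y}=f^{-1}(y)\), and any \(\mathscr{F}\in D_{c}^{b}(X,k)\), proper base change identifies the stalk of the pushforward with the cohomology of the fibre:

\[\mathcal{H}^{i}\bigl(i_{y}^{*}f_{*}\mathscr{F}\bigr)\cong H^{i}(X_{y},\mathscr{F}|_{X_{y}}).\]

This is the form in which the axiom is used below, for instance in the proof of Corollary 5.2.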
_Remark 2.4_.: Explicitly, the data of a base change formalism is an assignment of a category \(S_{X}\) to each object \(X\) in \(\mathscr{C}\), functors \(f_{!}:S_{X}\to S_{Y}\) and \(f_{*}:S_{X}\to S_{Y}\) for each morphism \(f:X\to Y\), and coherent isomorphisms \(f_{*}\circ g_{*}\cong(f\circ g)_{*}\), \(f_{!}\circ g_{!}\cong(f\circ g)_{!}\), along with natural transformations \(c_{f}:f_{!}\to f_{*}\), satisfying some compatibilities. We will suppress these compatibility 2-isomorphisms for \(f_{*}\) and \(f_{!}\), but the reader should bear in mind that they are a critical part of our input data, as they supply the middle maps used for the base change morphisms of (BC3):
\[g^{*}f_{*}\stackrel{{\eta}}{{\rightarrow}}\tilde{f}_{*}\tilde{f} ^{*}g^{*}f_{*}\cong\tilde{f}_{*}\tilde{g}^{*}f^{*}f_{*}\stackrel{{ \epsilon}}{{\rightarrow}}\tilde{f}_{*}\tilde{g}^{*}\]
\[\tilde{f}_{!}\tilde{g}^{!}\stackrel{{\eta}}{{\rightarrow}}\tilde {f}_{!}\tilde{g}^{!}f^{!}f_{!}\cong\tilde{f}_{!}\tilde{f}^{!}g^{!}f_{!} \stackrel{{\epsilon}}{{\rightarrow}}g^{!}f_{!}\]
### The convolution isomorphism
**Definition 2.5**.: Consider a pull-back square, with \(g\) proper:

\[\begin{CD}X\times_{Y}X^{\prime}@>{\tilde{f}}>>X^{\prime}\\ @V{\tilde{g}}VV@VV{g}V\\ X@>{f}>>Y\end{CD}\]

We have the following natural isomorphism of functors, which we call the **convolution isomorphism**:

\[\mathcal{H}\mathrm{om}(f_{!}\_,g_{*}\_)\xrightarrow{\;\sim\;}\mathcal{H}\mathrm{om}(\tilde{g}^{*}\_,\tilde{f}^{!}\_).\]

This is defined as the composition of the following isomorphisms:
\[\mathcal{H}\mathrm{om}(f_{!}\_,g_{*}\_)\rightarrow\mathcal{H}\mathrm{om}(\_,f^{!}g_{*}\_)\rightarrow\mathcal{H}\mathrm{om}(\_,f^{!}g_{!}\_)\rightarrow\mathcal{H}\mathrm{om}(\_,\tilde{g}_{!}\tilde{f}^{!}\_)\rightarrow\mathcal{H}\mathrm{om}(\_,\tilde{g}_{*}\tilde{f}^{!}\_)\rightarrow\mathcal{H}\mathrm{om}(\tilde{g}^{*}\_,\tilde{f}^{!}\_).\]
By evaluating this isomorphism on objects \(F\) in \(S_{X}\) and \(G\) in \(S_{X^{\prime}}\), we obtain the pointwise convolution isomorphism:
\[\mathcal{H}\mathrm{om}(f_{!}F,g_{*}G)\xrightarrow{\tau_{f,g}}\mathcal{H}\mathrm{om}(\tilde{g}^{*}F,\tilde{f}^{!}G)\]
_Remark 2.6_.: In general, computing morphisms between the functors \(f_{!}\) and \(g_{*}\) on \(Y\) is hard, and our convolution isomorphism transforms this into a problem with easier functors \(\tilde{g}^{*}\) and \(\tilde{f}^{!}\) on a more complicated space \(X\times_{Y}X^{\prime}\).
In what follows, it will be important to be able to study the convolution isomorphism locally. Consider the following diagram, where \(j:U\to Y\) is etale, and all squares are pullbacks:
The main observation of this section is that the convolution isomorphism is etale local:
**Proposition 2.7**.: _The following diagram commutes, where the horizontal maps are our convolution isomorphisms, and the vertical maps are restriction followed by base change:_
The verification of this proposition is deferred to Proposition 6.3 in the Appendix.
## 3. Orientations and Duality
The key aspect of our main theorem (Theorem 5.6) is that for a map \(f:X\to Y\) with smooth source, the dense summand of \(f_{*}\mathbf{1}_{X}\) on \(Y\) is determined by the generic behaviour of the map. To prove this, we will build a comparison morphism for two maps that agree on an open set. The convolution isomorphism of Definition 2.5 turns this problem into giving a suitable morphism \(\tilde{g}^{*}\mathbf{1}_{X}\to\tilde{f}^{!}\mathbf{1}_{X^{\prime}}\). In this section we will describe how to produce such a map via cycle maps and fundamental classes.
### Topological reminders
We begin with a leisurely topological reminder of these concepts in the constructible setting. The reader already familiar with this story is invited to skip to §3.3, in which we summarise the key idea of the paper.
Let our category \(\mathscr{C}\) be that of complex algebraic varieties, and our base change formalism that of Example 2.3, i.e., \(X\mapsto D^{b}_{c}(X,k)\). This base change formalism succinctly encodes the topological homology and cohomology of algebraic varieties with coefficients in \(k\). We may express the following cohomology and homology groups of \(X\) in terms of our functors and the terminal map \(t:X\to*\). Letting \(\mathbf{1}\) be \(k\) on the point \(*\), we have
(1) \[H^{i}(X,k)\cong\mathcal{H}\mathrm{om}(\mathbf{1},t_{*}t^{*}\mathbf{1}[i]),\]
(2) \[H^{i}_{!}(X,k)\cong\mathcal{H}\mathrm{om}(\mathbf{1},t_{!}t^{*}\mathbf{1}[i]),\]
(3) \[H_{i}(X,k)\cong\mathcal{H}\mathrm{om}(\mathbf{1},t_{!}t^{!}\mathbf{1}[-i]),\]
(4) \[H^{!}_{i}(X,k)\cong\mathcal{H}\mathrm{om}(\mathbf{1},t_{*}t^{!}\mathbf{1}[-i]).\]
_Remark 3.1_.: The reader will notice that we are not using the standard notation for Borel Moore homology (see e.g. [28]) and compactly supported cohomology. We have opted to use \(!\) instead of \(BM\) or \(c\), as we find this notation more attractive, and it avoids poor notation later when we discuss more general cohomology theories.
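For instance, the first identity is just the \((t^{*},t_{*})\) adjunction together with the standard identification of morphisms out of the constant sheaf with sheaf (equivalently singular) cohomology:

\[\mathcal{H}\mathrm{om}(\mathbf{1},t_{*}t^{*}\mathbf{1}[i])\cong\mathcal{H}\mathrm{om}(t^{*}\mathbf{1},t^{*}\mathbf{1}[i])=\mathcal{H}\mathrm{om}_{D^{b}_{c}(X,k)}(k_{X},k_{X}[i])\cong H^{i}(X,k);\]

the remaining identities unwind similarly.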
In this setting, our categories \(S_{X}\) carry some crucial extra structure. They are monoidal and triangulated, with monoidal unit \(t^{*}\mathbf{1}\), and shift functor [1]. Throughout, we call \(t^{*}\mathbf{1}\) the **constant sheaf** on \(X\), denoted \(\mathbf{1}_{X}\). Similarly, the object \(t^{!}\mathbf{1}\) is the **dualising sheaf** of \(X\), denoted \(\omega_{X}\).
_Remark 3.2_.: For a singular space \(X\), the dualising sheaf \(\omega_{X}\) is not generally concentrated in a single degree in \(D^{b}_{c}(X,k)\), so cannot be interpreted as a sheaf in the usual sense.
**Example 3.3**.: In our running example of the constructible derived category \(X\mapsto D^{b}_{c}(X,k)\) (Example 2.3) \(t^{*}\mathbf{1}\) is the constant sheaf \(k_{X}\). When working in the setting of a general base change formalism, we will use \(\mathbf{1}_{X}:=t^{*}\mathbf{1}\) to denote the unit object, however when dealing with sheaves we will often stick to the more standard \(k_{X}\).
A crucial property of these objects is that for a topological manifold \(M\) of dimension \(n\), the dualising sheaf \(\omega_{M}\) is locally isomorphic to \(\mathbf{1}_{M}[n]\), a shift of the constant sheaf. To see this, consider the standard triangle associated to the inclusion \(j:M\setminus\{x\}\subset M\):
\[j_{!}j^{!}\omega_{M}\longrightarrow\omega_{M}\longrightarrow i_{x*}i_{x}^{*} \omega_{M}\longrightarrow j_{!}j^{!}\omega_{M}[1].\]
In view of (3), applying \(\mathcal{H}\mathrm{om}(\mathbf{1},t_{!}\_)\) gives the long exact sequence:
\[H_{i}(M\setminus\{x\})\longrightarrow H_{i}(M)\longrightarrow\mathcal{H} \mathrm{om}(\mathbf{1},i_{x}^{*}\omega_{M}[-i])\longrightarrow H_{i-1}(M \setminus\{x\})\]
We may therefore identify the \(-i^{th}\) cohomology of the stalk of \(\omega_{M}\) at \(x\) with the local homology group \(H_{i}(M,M\setminus\{x\})\). Since \(M\) is a manifold, it follows by a standard excision argument [21, §3.3] that this sheaf has stalks \(k\) concentrated in degree \(-n\). By the local homogeneity of manifolds, we see that this sheaf is locally constant, and thus locally isomorphic to \(\mathbf{1}_{M}[n]\).
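Spelling out the excision step: for any \(x\in M\),

\[H_{i}(M,M\setminus\{x\};k)\cong H_{i}(\mathbb{R}^{n},\mathbb{R}^{n}\setminus\{0\};k)\cong\tilde{H}_{i-1}(S^{n-1};k)\cong\begin{cases}k&\text{if }i=n,\\ 0&\text{otherwise},\end{cases}\]

which is exactly the stalk computation above.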
This also shows that the manifold \(M\) is \(k\)-orientable in the usual sense (see e.g. [21], Chapter 3) of having compatible local generators of these homology groups if and only if we have an isomorphism \(\mathbf{1}_{M}[n]\cong\omega_{M}\) in \(D^{b}_{c}(M,k)\). In view of the
definition of Borel-Moore homology (4), this is equivalent to a class in this group that restricts to a generator of each local homology group. We call such a class in Borel-Moore homology a **fundamental class** of \(M\).
There is another, more algebro-topological perspective on orientability. This more general notion of orientability is defined for vector bundles over arbitrary spaces. We say that an \(n\)-dimensional real vector bundle \(V\) over \(B\) is orientable with respect to a cohomology theory \(E\) if there exists a Thom class \(u\) in
\[E^{n}_{!}(V)\cong E^{n}(D(V),S(V))\]
that restricts to a generator of \(E^{n}(D(V_{x}),S(V_{x}))\) for all \(x\) in \(B\), where \(D(V)\) and \(S(V)\) denote the associated disk and sphere bundles of \(V\).
Thinking about other cohomology theories, there is an analogous base change formalism for algebraic varieties6 for any cohomology theory represented by an \(A_{\infty}\) ring spectrum \(E\)[47]. This category is the homotopy category of the \(\infty\)-category of constructible sheaves of \(E\)-module spectra on \(X\), and we will denote7 it by \(D^{b}_{c}(X,E-\)Perf). This category encapsulates the \(E\)-(co)homology of \(X\) in exactly the same manner as \(D^{b}_{c}(X,k)\) does for \(k\)-(co)homology. We say a manifold \(M\) is \(E\)-orientable if we may find a fundamental class in
Footnote 6: There are significant point set topological requirements for the existence of the whole formalism, they are satisfied in our case since our spaces are locally compact and conically stratifiable, and all maps are suitably stratifiable. For a fixed stratification, see Lurie [34], and for the functors see Volpe [47]. We aren’t aware of a source for constructibility in this generality, though this should follow same lines as the constructibility proofs in [28].
Footnote 7: We have opted for this suggestive notation to encourage the parallel with the constructible derived category.
\[E^{!}_{n}(M):=\mathcal{H}\mathrm{om}_{D^{b}_{c}(M,E-\mathrm{Perf})}(\mathbf{1} _{M},t^{!}\mathbf{1}[-n])\]
restricting to a generator in all local homology groups \(E_{n}(M,M\setminus\{x\})\). By the previous discussion, we see that \(E\)-orientability for \(M\) is equivalent to the existence of an isomorphism between the \(E\) dualising sheaf \(\omega_{M}\) and \(\mathbf{1}_{M}[n]\). An \(E\)-orientation is then a choice of such an isomorphism \(\mathbf{1}_{M}\to\omega_{M}[-n]\). The set of orientations is in general an \(\mathrm{Aut}_{E_{M}}(\mathbf{1}_{M})\) torsor, so orientations are not unique. For a smooth manifold \(M\), \(E\)-orientability in our sense is equivalent to the Thom class \(E\)-orientability of the stable normal bundle of \(M\)[45, Chapter 5, Theorem 2.4].
### Orientability and fundamental classes for a general base change formalism
With this in mind, let us now work with an arbitrary base change formalism \(S\) on the category of algebraic varieties. In order to emphasise the analogy with sheaves we will refer to objects of \(S_{X}\) as sheaves. In addition, we assume the following conditions.
1. Each \(S_{X}\) is triangulated with shift [1].
2. Over a point, \(S_{*}\) has a distinguished object \(\mathbf{1}\).
3. For any irreducible \(X\) with terminal map \(t:X\to*\), the object \(t^{*}\mathbf{1}\) is indecomposable.
4. Topologically proper (resp. etale) maps are proper (resp. etale) for \(S\) in the sense of Definition 2.1.
As before, we define the **constant sheaf** and **dualising sheaf**:
\[\mathbf{1}_{X} :=t^{*}\mathbf{1}\] \[\omega_{X} :=t^{!}\mathbf{1}\]
These will be the most important objects in what follows.
**Definition 3.4**.: An irreducible variety \(X\) of dimension \(d\) is \(S\)-smooth if there exists an isomorphism in \(S_{X}\):
\[\mathbf{1}_{X}\to\omega_{X}[-2d].\]
_Remark 3.5_.: When \(S_{X}=D_{c}^{b}(X,\mathbb{Q})\), then \(S\)-smoothness is the same as rational smoothness. More generally, when \(S_{X}=D_{c}^{b}(X,k)\), \(S\)-smoothness of a variety is the same thing as \(k\)-smoothness (see e.g. [27, §1] and [19, §8.1]).
_Remark 3.6_.: For a general multiplicative cohomology theory \(E\), \(X\) is \(E\)-smooth if and only if it is \(E\)-orientable. By our earlier discussion (see [45, Chapter 5, Theorem 2.4]), this is equivalent to the existence of a Thom class in the \(E\)-cohomology of the Thom spectrum of the stable normal bundle of \(X\). The Thom spectrum perspective helps make the problem of deciding \(E\)-smoothness more concrete and amenable to computation.
**Definition 3.7**.: A base change formalism \(S\) is **smoothly orientable** if all smooth irreducible varieties are \(S\)-smooth.
**Example 3.8**.: The constructible derived category (i.e. \(X\mapsto D_{c}^{b}(X,k)\)) is smoothly orientable. To see this, note that smooth varieties are topological manifolds of twice their algebraic dimension, so it suffices to check that they are \(k\)-orientable in the usual sense. This can be seen by noting that a smooth manifold is \(\mathbb{Z}\)-orientable if and only if some transition cocycle of its tangent bundle can be taken to have positive determinant. Since the tangent bundle of \(X\) admits an almost complex structure, we can take a presenting cocycle where locally, these transition functions sit inside \(GL_{n}(\mathbb{C})\subset GL_{2n}(\mathbb{R})\). Since \(GL_{n}(\mathbb{C})\) is connected, all of these real matrices have positive determinant, giving the desired orientability.
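Alternatively, the positivity of these determinants can be checked directly: a matrix \(A\in GL_{n}(\mathbb{C})\), viewed as a real \(2n\times 2n\) matrix via the standard inclusion, satisfies

\[\det{}_{\mathbb{R}}(A)=|\det{}_{\mathbb{C}}(A)|^{2}>0.\]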
The following examples show that deciding smooth orientability can be subtle and geometrically meaningful.
**Example 3.9**.: Let \(KU\) denote the spectrum representing the cohomology theory of complex K-theory. Then the base change formalism \(X\mapsto D_{c}^{b}(X,KU-\)Perf\()\) is smoothly orientable. To see this, note that any real vector bundle admitting a complex structure is \(KU\)-orientable, via the explicit construction of a Thom class in [5, III, § 11]. The stable normal bundle of a complex manifold then admits a complex structure, so we see that the stable normal bundle of \(X\) is \(KU\)-orientable, so \(X\) is \(KU\)-smooth.
**Example 3.10**.: Let \(\mathbb{S}\) denote the sphere spectrum. Then a manifold is \(\mathbb{S}\)-orientable if and only if its stable normal bundle admits a framing, that is, is trivialisable. In particular, most complex algebraic varieties are not \(\mathbb{S}\)-smooth. For instance, if \(X\) is a smooth surface, the first Pontryagin class of its tangent bundle is equal to three times its signature, by Hirzebruch's Signature theorem [23]. So if \(X\) has nonzero signature of its intersection form, e.g., \(\mathbb{CP}^{2}\), then its tangent bundle is not stably trivial, so neither is its stable normal bundle. In particular, the base change formalism of sheaves of \(\mathbb{S}\)-modules on all varieties is not smoothly orientable.
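For \(\mathbb{CP}^{2}\) this is a short Chern class computation: writing \(h\) for the hyperplane class, \(c(T\mathbb{CP}^{2})=(1+h)^{3}\), so

\[p_{1}(T\mathbb{CP}^{2})=c_{1}^{2}-2c_{2}=(3h)^{2}-2\cdot 3h^{2}=3h^{2},\qquad\int_{\mathbb{CP}^{2}}p_{1}=3=3\cdot\sigma(\mathbb{CP}^{2}),\]

so neither the tangent bundle nor the stable normal bundle of \(\mathbb{CP}^{2}\) is stably trivial.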
We are interested in invariants of singular varieties, so we want fundamental classes/orientations for singular varieties also. In usual sheaf theoretic fashion, for a Zariski open \(j:U\to X\) in a space \(X\), we refer to the functor \(j^{*}\cong j^{!}\) as restriction to \(U\).
**Definition 3.11**.: An \(S\)-orientation of an irreducible variety \(X\) is a morphism
\[\gamma:\mathbf{1}_{X}\to\omega_{X}[-2d_{X}]\]
which is an isomorphism over the smooth locus of \(X\). We say \(X\) is orientable with respect to \(S\) if an \(S\)-orientation of \(X\) exists.
The following proposition shows that we can use resolution of singularities to orient all irreducible varieties.
**Proposition 3.12**.: _If \(S\) is smoothly orientable, and \(X\) admits a resolution of singularities \(f:\tilde{X}\to X\), then \(X\) is orientable._
Proof.: Let \(f:\tilde{X}\to X\) be a resolution of singularities. Then \(f\) is proper, and \(\tilde{X}\) is nonsingular, with \(f\) an isomorphism over the smooth locus \(U\) of \(X\). Since \(\tilde{X}\) is smooth, we have an orientation:
\[\gamma:\mathbf{1}_{\tilde{X}}\to\omega_{\tilde{X}}[-2d_{X}].\]
Pushing this forward gives:
\[f_{*}\gamma:f_{*}\mathbf{1}_{\tilde{X}}\to f_{*}\omega_{\tilde{X}}[-2d_{X}].\]
Composing with the unit and counits of our adjunctions gives
\[\mathbf{1}_{X}\to f_{*}\mathbf{1}_{\tilde{X}}\to f_{*}\omega_{\tilde{X}}[-2d_{ X}]\to f_{!}\omega_{\tilde{X}}[-2d_{X}]\to\omega_{X}[-2d_{X}].\]
By base change (BC3), the composite \(\gamma_{X}:\mathbf{1}_{X}\to\omega_{X}[-2d_{X}]\) restricts to an isomorphism over \(U\), giving the desired orientation of \(X\).
_Remark 3.13_.: The definitions of this section are based on purely topological realisation of algebraic varieties via their \(\mathbb{C}\)-points, but there are natural extensions of these definitions to other settings. For instance, one could consider real pseudomanifolds or algebraic varieties over fields more general than \(\mathbb{C}\). In these settings, one would need to modify Definition 3.4 to reflect the structure at hand. For example, incorporating weights, an orientation is a morphism \(\mathbf{1}_{X}\to\omega_{X}[-2d_{X}](d_{X})\), where \((n)\) denotes the Tate twist.
_Remark 3.14_.: For the reader who doesn't want to assume resolution of singularities, one can adapt the previous proof to show that if \(X\) admits a degree \(n\) alteration in the sense of de Jong [13], and \(n\) is invertible in the ring \(\mathcal{H}\mathrm{om}_{S_{*}}(\mathbf{1},\mathbf{1})\), then \(X\) is orientable.
The importance of orientations cannot be overstated in our context, since they allow us to produce morphisms between nontrivial objects in our categories \(S_{X}\), via functoriality and the convolution isomorphism.
**Definition 3.15**.: For \(X\) an algebraic variety, we define the \(n^{th}\) compactly supported \(S\)-homology of \(X\) to be
\[S_{n}^{!}(X):=\mathcal{H}\mathrm{om}_{S_{X}}(\mathbf{1}_{X},\omega_{X}[-n]).\]
This functions similarly to Borel-Moore homology in the constructible setting, as the codomain of a cycle class morphism. In particular, any orientation \(\gamma\) of an irreducible variety \(X\) is naturally an element of \(S^{!}_{2d_{X}}(X)\). Like Borel-Moore homology, these groups are covariantly functorial under proper maps \(f:X\to Y\). This is given by the composition
\[\mathcal{H}\mathrm{om}^{*}(\mathbf{1}_{X},\omega_{X})\to\mathcal{H}\mathrm{om} ^{*}(f_{*}\mathbf{1}_{X},f_{*}\omega_{X})\cong\mathcal{H}\mathrm{om}^{*}(f_{*} \mathbf{1}_{X},f_{!}\omega_{X})\to\mathcal{H}\mathrm{om}^{*}(\mathbf{1}_{Y}, \omega_{Y}).\]
### Why do geometric extensions exist?
Our goal is to construct a canonical extension of the constant sheaf on a potentially singular variety \(Y\). We construct this by first pushing forward the constant sheaf from a resolution of singularities \(X\to Y\). We then need a method for comparing these sheaves for different choices of resolution. We will construct a comparison morphism between these pushforwards using the existence of fundamental classes in our base change formalism.
We may summarise the machinery we have so far for a smoothly orientable base change formalism \(S\) as follows.
* An \(S\) internal notion of smoothness (Definition 3.4).
* An \(S\) orientation/fundamental class for any variety (not necessarily smooth) (Definition 3.11).
* A compactly supported \(S\)-homology group to interpret fundamental classes in (Definition 3.15).
We may now interpret our convolution isomorphism in this context. Let \(X\) and \(X^{\prime}\) be smooth (proper) resolutions of \(Y\), with a chosen orientation of \(X^{\prime}\), such that we have a pullback square:
(5)
\[\begin{CD}X\times_{Y}X^{\prime}@>{\tilde{f}}>>X^{\prime}\\ @V{\tilde{g}}VV@VV{g}V\\ X@>{f}>>Y\end{CD}\]
We will use the functoriality of our setup to construct morphisms between \(f_{!}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\), from the geometry of fundamental classes on the fibre product. Specifically, the convolution isomorphism and our choice of orientation of \(X^{\prime}\) yield the following isomorphism:
\[\mathcal{H}\mathrm{om}(f_{!}\mathbf{1}_{X},g_{*}\mathbf{1}_{X^{\prime}})\cong\mathcal{H}\mathrm{om}(\tilde{g}^{*}\mathbf{1}_{X},\tilde{f}^{!}\mathbf{1}_{X^{\prime}})\cong\mathcal{H}\mathrm{om}(\mathbf{1}_{X\times_{Y}X^{\prime}},\tilde{f}^{!}\omega_{X^{\prime}}[-2d_{X}])=S^{!}_{2d_{X}}(X\times_{Y}X^{\prime}).\]
Via this isomorphism, we may translate compactly supported \(S\)-homology classes of \(X\times_{Y}X^{\prime}\) into maps from \(f_{!}\mathbf{1}_{X}\) to \(g_{*}\mathbf{1}_{X^{\prime}}\).
Since \(f,g\) are resolutions of \(Y\), if \(U\) is the smooth locus of \(Y\), we have a canonical diagonal \(\Delta(U)\) inside \(X\times_{Y}X^{\prime}\), with closure \(Z:=\overline{\Delta(U)}\). Choosing an orientation of \(Z\), we may push forward the associated fundamental class (3.15) to get a class in \(S^{!}_{2d}(X\times_{Y}X^{\prime})\). This gives the desired comparison morphism between \(f_{!}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\).
In the next section, we will show that in the presence of finiteness conditions, we may deduce an isomorphism between the "dense" summands of these pushforwards, giving our main theorem.
_Remark 3.16_.: In the special case of the constant sheaf, there is an alternate argument8. Take our base change formalism to be \(X\mapsto D^{b}_{c}(X,E{-}\mathrm{Perf})\), for a suitably finite (see Definition 4.1) smoothly orientable cohomology theory \(E\). One may show (see [45, Chapter 5, Theorem 2.13]) that for a map \(f:X\to Y\) of \(E\)-orientable manifolds, the induced map \(E^{*}(Y)\to E^{*}(X)\) is injective, and upgrade this to the fact that \(\mathbf{1}_{Y}\to f_{*}\mathbf{1}_{X}\) is split injective in \(D^{b}_{c}(X,E{-}\mathrm{Perf})\). Now for singular \(Y\), given two such resolutions \(X_{i}\), we may resolve the diagonal component of their fibre product \(X_{1}\times_{Y}X_{2}\). From this splitting of the constant sheaf for maps of \(E\)-orientable smooth manifolds, we see that the dense summand of \(f_{i*}\mathbf{1}_{X_{i}}\) occurs as a summand of all resolutions. This argument also shows that isomorphism classes of summands of \(f_{*}\mathbf{1}_{X}\) over all resolutions \(f:X\to Y\) form a sort of "lattice": given any two resolutions \(f_{i}:X_{i}\to Y\) for \(i=1,2\), there exists a third resolution \(g:Z\to Y\) such that all summands of \(f_{1*}\mathbf{1}_{X_{1}}\) and \(f_{2*}\mathbf{1}_{X_{2}}\) also occur inside \(g_{*}\mathbf{1}_{Z}\).
Footnote 8: We learnt this argument from Roman Bezrukavnikov.
## 4. Finiteness and Krull-Schmidt categories
In the previous section we constructed a comparison morphism between \(f_{*}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\) using an orientation of the irreducible component of the diagonal within \(X\times_{Y}X^{\prime}\). In any smoothly orientable base change formalism, it follows formally that this comparison morphism is an isomorphism over \(U\). In this section, we will introduce the finiteness conditions needed to show that this isomorphism over \(U\) lifts to an isomorphism on "dense summands" of \(f_{*}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\). The finiteness constraint we need is that the categories of the base change formalism are Krull-Schmidt, which allows the use of the crucial Lemma 4.3.
For completeness, we recall the definition of a Krull-Schmidt category:
**Definition 4.1**.: A category \(\mathscr{C}\) is **Krull-Schmidt** if it is additive with finite sums, and each object is isomorphic to a finite direct sum of indecomposable objects, each with local endomorphism rings. A functor \(F\) between Krull-Schmidt categories is Krull-Schmidt if for each indecomposable object \(A\), \(F\) maps the Jacobson radical of \(\mathrm{End}(A)\) into the Jacobson radical of \(\mathrm{End}(F(A))\).
_Remark 4.2_.: There does not appear to be an agreed upon notion of a Krull-Schmidt functor in the literature. The above definition seems to encapsulate the necessary behaviour of our situation.
This condition is easily checked in some sheaf theoretic contexts since it is implied by the following three conditions:
* The "ring of coefficients" \(R:=\mathrm{End}(\mathbf{1}_{*})\) is a complete local ring.
* For any \(\mathscr{F},\mathscr{G}\) in \(S_{X}\), the group \(\mathcal{H}\mathrm{om}_{S_{X}}(\mathscr{F},\mathscr{G})\) is a finitely-generated \(R\)-module.
* The category \(S_{X}\) has split idempotents.
These conditions imply that the endomorphism ring of any indecomposable object is a local \(R\)-algebra. It also implies that the \(R\)-algebra morphisms \(\mathrm{End}(X)\to\mathrm{End}(F(X))\) are all finite, giving the Krull-Schmidt property on the functors of the base change formalism.
In particular these conditions are satisfied for the constructible base change formalism \(X\mapsto D^{b}_{c}(X,\Lambda)\), when \(\Lambda\) is a field or complete local ring. We will discuss this case in more detail in the appendix.
The primary result about Krull Schmidt categories we will use is the following automorphism lifting property.
**Lemma 4.3**.: _Let \(F:\mathscr{C}\to\mathscr{D}\) be a Krull-Schmidt functor between Krull-Schmidt categories, let \(A\) be an object with \(F(A_{i})\neq 0\) for all nonzero summands \(A_{i}\) of \(A\), and let \(\mu:A\to A\) be an endomorphism of \(A\). If \(F(\mu)=\text{Id}_{F(A)}\), then \(\mu\) is an isomorphism._
Proof.: We induct on the number of indecomposable summands of \(A\), which is finite by our Krull-Schmidt hypothesis. If \(A\) is itself indecomposable, then since \(F\) is a Krull-Schmidt functor, \(\mu\) doesn't lie in the Jacobson radical of \(\text{End}(A)\). Since \(\text{End}(A)\) is a local ring, \(\mu\) is invertible.
Next, let us consider the case that \(A\) is a direct sum of \(n\) copies of a single indecomposable \(A_{0}\). In this case, we see that \(\mu-Id_{A}\) is in the kernel of the algebra morphism \(\text{End}(A)\to\text{End}(F(A))\). This kernel is then contained in the unique maximal two sided ideal \(M_{n\times n}(J(\text{End}(A_{0})))\) of the matrix ring \(M_{n\times n}(\text{End}(A_{0}))\cong\text{End}(A)\). So \(\mu\) is in \(\text{Id}+J(\text{End}(A))\), and is therefore an isomorphism.
Finally, we may assume that \(A\) is not indecomposable, and admits a nontrivial decomposition \(A\cong B\oplus C\) where \(B\) and \(C\) share no isomorphic summands. Then our morphism \(\mu\) decomposes as:
\[\mu=\begin{bmatrix}\mu_{BB}&\mu_{CB}\\ \mu_{BC}&\mu_{CC}\end{bmatrix}= \begin{bmatrix}\mu_{BB}&0\\ 0&\mu_{CC}\end{bmatrix}+\begin{bmatrix}0&\mu_{CB}\\ \mu_{BC}&0\end{bmatrix}\] \[\text{with }\mu_{XY}\in\mathcal{H}\text{om}(X,Y)\text{ for }X,Y\in\{B,C\}.\]
By induction, this diagonal piece is an isomorphism, and since \(B\) and \(C\) share no isomorphism classes of summands in common, the second matrix is in the radical of \(\text{End}(A)\) (see the lines following the proof of [31, Corollary 4.4]), so \(\mu\) is an isomorphism.
**Definition 4.4**.: Let \(F:\mathscr{C}\to\mathscr{D}\) be a Krull-Schmidt functor between Krull-Schmidt categories. Then an object \(A\) of \(\mathscr{C}\) is \(F\)-dense if for all summands \(A_{i}\) of \(A\) we have \(F(A_{i})\neq 0\).
We will only use this notion with respect to \(j^{*}\) for \(j:U\to X\) a Zariski open morphism to irreducible \(X\). In sheaf theoretic contexts, this agrees with the usual notion of having all indecomposable summands of dense support, and we will write this as \(U\)-dense. We say an object \(\mathscr{E}\) of \(S_{X}\) is **dense** in \(S_{X}\) if it is \(U\)-dense for any dense Zariski open \(U\) of \(X\).
**Lemma 4.5**.: _Let \(F:\mathscr{C}\to\mathscr{D}\) be a Krull-Schmidt functor between Krull-Schmidt categories. Then any object \(A\) of \(\mathscr{C}\) has a decomposition \(A\cong A_{F}\oplus A_{0}\), such that \(A_{F}\) is a maximal \(F\)-dense summand of \(A\). This decomposition is unique up to non-unique isomorphism._
Proof.: Choose any decomposition of \(A\) into indecomposable objects \(A_{i}\), and let \(A_{F}\) be the summand of those isomorphism types \(A_{i}\) with \(F(A_{i})\neq 0\). The isomorphism class of \(A_{F}\) is then unique by the Krull-Schmidt property.
**Proposition 4.6**.: _Let \(A\) and \(B\) be objects in a Krull-Schmidt category \(\mathscr{C}\) and \(F:\mathscr{C}\to\mathscr{D}\) a Krull-Schmidt functor. If for two maps \(f:A\to B\), \(g:B\to A\), we have \(F(f)\) and \(F(g)\) are mutually inverse isomorphisms, then \(f\), \(g\) induce isomorphisms
\(f^{\prime}\), \(g^{\prime}\) of \(F\)-dense summands:_
Proof.: First, note that \(F(\pi_{A})\) and \(F(i_{A})\) are both isomorphisms, since \(A_{0}\) is sent to zero under \(F\) by maximality of \(A_{F}\) and similarly for \(B\). Thus, our maps \(f^{\prime}:=\pi_{B}\circ f\circ i_{A}\) and \(g^{\prime}:=\pi_{A}\circ g\circ i_{B}\), induce mutually inverse isomorphisms \(F(f^{\prime})\) and \(F(g^{\prime})\) under \(F\). So Lemma 4.3 applied to the compositions of these gives that \(f^{\prime}\circ g^{\prime}\) and \(g^{\prime}\circ f^{\prime}\) are both isomorphisms. By elementary category theory, this then yields that \(f^{\prime}\) and \(g^{\prime}\) are both isomorphisms, as was to be shown.
We can now state our main theorem.
**Theorem 4.7**.: _Let \(X,X^{\prime}\) be smooth, irreducible varieties with proper, surjective maps \(f:X\to Y\), \(g:X^{\prime}\to Y\), and \(j:U\to Y\) a Zariski open in \(Y\). Assume that the pullbacks \(f_{U},g_{U}:X_{U},X_{U}^{\prime}\to U\) are isomorphic over \(U\):_
_Then for any smoothly orientable, Krull-Schmidt base change formalism \(S\) the \(U\)-dense summands of \(f_{*}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\) in \(S_{Y}\) are isomorphic._
Proof.: Let the isomorphism over \(U\) be \(\alpha:X_{U}\to X_{U}^{\prime}\). Then we choose orientations of the spaces involved such that we have the following commutative diagram:
That we can do this is Proposition 6.4 in the appendix. Transporting the fundamental class of \(\overline{\Delta_{\alpha}}\) across the convolution isomorphism then yields a map
\[f_{*}\mathbf{1}_{X}\to g_{*}\mathbf{1}_{X^{\prime}}.\]
By commutativity, this restricts to \(\alpha_{*}\mathbf{1}\) over \(U\). By symmetry there also exists a morphism back, which gives the two morphisms restricting to mutually inverse isomorphisms over \(U\). We may then use Proposition 4.6 to conclude that these induce isomorphisms on the \(U\)-dense summands.
**Corollary 4.8**.: _In the setting of Theorem 4.7, the dense summands of \(f_{*}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\) are isomorphic._
Proof.: The dense summands of these are the dense summands of the \(U\)-dense summands of \(f_{*}\mathbf{1}_{X}\) and \(g_{*}\mathbf{1}_{X^{\prime}}\), hence are isomorphic.
_Remark 4.9_.: One may note that the use of a _Zariski_ neighbourhood was essential in this proof, to be able to take a closure of the graph of the isomorphism over \(U\). If one uses a simple etale neighbourhood instead, one must push forward this cycle, and the induced map is an isomorphism only if the degree of the etale morphism is invertible in the ring of coefficients.
## 5. Applications
In this final section we will see some applications of Theorem 4.7. This theorem allows one to construct canonical objects in \(S_{X}\) for any smoothly orientable base change formalism that play the role of intersection cohomology sheaves in the \(\mathbb{Q}\) constructible setting, and parity sheaves in the \(\mathbb{F}_{p}\) constructible setting.
Before considering the general case, let us consider the smoothly orientable base change formalism of constructible sheaves with coefficients in a field or complete local ring \(\Lambda\). As an immediate corollary of Theorem 4.7, we obtain the following:
**Theorem 5.1**.: _Let \(Y\) be an irreducible variety. There exists a complex \(\mathscr{E}(Y,\Lambda)\in D^{b}_{c}(Y,\Lambda)\) characterised up to isomorphism by the following:_
1. \(\mathscr{E}(Y,\Lambda)\) _is indecomposable and its support is dense;_
2. \(\mathscr{E}(Y,\Lambda)\) _is a summand inside_ \(f_{*}\Lambda_{X}\)_, for any resolution_ \(f:X\to Y\)_._
We call \(\mathscr{E}(Y,\Lambda)\) the **geometric extension** on \(Y\).
As we explained in §1.4, in the special case of a cellular resolution of singularities, this geometric extension will be a parity sheaf [26]. We may think of this object as a "geometrically motivated" minimal way to extend the constant sheaf on the smooth locus of \(Y\). In particular, since this summand occurs for any resolution of singularities, we obtain the following corollary for \(\mathbb{F}_{p}\) coefficients.
**Corollary 5.2**.: _For any resolution of singularities \(\pi:X\to Y\), for all \(y\in Y\) with fibre \(X_{y}=\pi^{-1}(y)\), we have the inequality_
\[\dim H^{i}(\mathscr{E}_{\mathbb{F}_{p}}(Y)_{y})\leq\dim H^{i}(X_{y},\mathbb{ F}_{p}).\]
Proof.: By definition, we know that \(\mathscr{E}_{\mathbb{F}_{p}}(Y)_{y}\) is a summand of \(i^{*}_{y}\pi_{*}\mathbf{1}_{X}\). The cohomology of \(i^{*}_{y}\pi_{*}\mathbf{1}_{X}\) then computes the cohomology of the fibre by proper base change, giving the result.
We now consider the case of a general smoothly orientable, Krull-Schmidt base change formalism, and higher dimensional local systems. First, we need the definition of a higher dimensional local system in this context.
**Definition 5.3**.: A **geometric local system**\(\mathscr{L}\) on a smooth irreducible variety \(U\) is a smooth, proper, surjective map \(V\xrightarrow{\mathscr{L}}U\). The restriction of \(\mathscr{L}\) to an open \(U^{\prime}\to U\) is the base change of this morphism.
The following proposition lets us interpret compactification of morphisms as a method to "extend" geometric local systems.
**Proposition 5.4**.: _For any geometric local system \(V\xrightarrow{\mathscr{L}}U\) over \(U\) a (smooth Zariski) open in \(Y\), there exists a proper morphism \(X\xrightarrow{\tilde{\mathscr{L}}}Y\) from smooth \(X\) such that we have a pullback square:_
Proof.: The proof may be summarised in the following diagram:
First, we compactify the composition \(j\circ\mathscr{L}\) as \(V\to\tilde{Y}\xrightarrow{g}Y\), where \(V\to\tilde{Y}\) is an open immersion and \(g\) is proper. Choosing a resolution of singularities \(X\to\tilde{Y}\) which is an isomorphism over the smooth locus of \(\tilde{Y}\) (an open set containing the smooth variety \(V\)), the map \(V\to\tilde{Y}\) factors through \(X\). Then composing with \(g\) gives the desired map \(X\xrightarrow{\tilde{\mathscr{L}}}Y\).
From here we will let \(S\) denote a Krull-Schmidt, smoothly orientable base change formalism, satisfying the following condition for geometric local systems:
(D) If \(\mathscr{L}:V\to U\) is a geometric local system, then all summands of \(\mathscr{L}_{*}\mathbf{1}_{V}\) have dense support.
_Remark 5.5_.: This condition holds in all examples we have discussed so far, and we don't know of any situation where it fails to hold. In sheaf theoretic or algebro-topological situations, this follows from homotopy invariance.
With these preliminaries, we have the following general version of Theorem 5.1.
**Theorem 5.6**.: _Let \(Y\) be an irreducible variety, \(S\) a smoothly orientable, Krull-Schmidt base change formalism satisfying condition (D). For any dense \(U\subset Y\) and geometric local system \(V\xrightarrow{\mathscr{L}}U\) there is a unique object \(\mathscr{E}(Y,\mathscr{L})\in S_{Y}\) satisfying:_
1. \(j^{*}\mathscr{E}_{S}(Y,\mathscr{L})\cong\mathscr{L}_{*}\mathbf{1}_{V}\) _where_ \(j:U\hookrightarrow Y\) _denotes the inclusion;_
2. \(\mathscr{E}_{S}(Y,\mathscr{L})\) _is dense, with no summands supported on a proper closed subset of_ \(Y\)_;_
3. _for any proper map with smooth source_ \(f:X\to Y\) _which restricts to_ \(\mathscr{L}\) _over_ \(U\)_,_ \(\mathscr{E}_{S}(Y,\mathscr{L})\) _occurs as a summand of_ \(f_{*}\mathbf{1}_{X}\)_._
**Definition 5.7**.: We define the **geometric extension** of \(\mathscr{L}\) on \(Y\) to be the object \(\mathscr{E}_{S}(Y,\mathscr{L})\). When the local system is the identity, we call this the geometric extension on \(Y\). We call the groups
\[\mathscr{E}_{S}^{i}(Y):=\mathcal{H}\mathrm{om}_{S_{Y}}(\mathbf{1}_{Y},\mathscr{ E}_{S}(Y)[i])\]
the geometric \(S\)-cohomology groups of \(Y\).
_Remark 5.8_.: For a fixed \(S\), one may replace smoothness with \(S\)-smoothness (see Definition 3.4) in the preceding definitions, with slightly more general results.
One can think of these groups \(\mathscr{E}_{S}^{i}(Y)\) concretely as unavoidable summands of the \(S\)-cohomology of any resolution of singularities of \(Y\).
_Warning 5.9_.: In general, the object \(\mathscr{E}_{S}(Y,\mathscr{L})\) depends on the geometry of the map \(V\xrightarrow{\mathscr{L}}U\), not just on the object \(\mathscr{L}_{*}\mathbf{1}_{V}\) in \(S_{U}\). An example with further discussion is given in Example 5.21.
We will now give some properties of these geometric extensions.
**Proposition 5.10**.: _If \(\mathscr{L}\) and \(\mathscr{L}^{\prime}\) on \(U\) and \(V\) agree on \(U\cap V\), then:_
\[\mathscr{E}_{S}(Y,\mathscr{L})\cong\mathscr{E}_{S}(Y,\mathscr{L}^{\prime}).\]
_In particular, for \(f:X\to Y\) a proper map with smooth source, the dense summand of \(f_{*}\mathbf{1}_{X}\) depends only on the generic behaviour of the map \(f\)._
Proof.: The geometric extensions arise as summands of \(f_{*}\mathbf{1}_{X}\), \(g_{*}\mathbf{1}_{X^{\prime}}\) for compactifications \(f,g\) of these geometric local systems. Since these two maps agree on a dense open \(U\cap V\), their dense summands are isomorphic by Theorem 4.7.
Consider a complex \(\mathscr{L}\) isomorphic to \(\bigoplus_{i}\mathscr{L}^{i}[-i]\) for local systems \(\mathscr{L}^{i}\) on a dense subvariety of the smooth locus of \(Y\). Define \(IC(Y,\mathscr{L})\) to be the sum \(\bigoplus_{i}IC(Y,\mathscr{L}^{i})[-i]\) (see Remark 1.8).
**Proposition 5.11**.: _If \(S\) is the constructible derived category of sheaves over \(\mathbb{Q}\), then the geometric extension is the intersection cohomology complex of sheaves:_
\[\mathscr{E}_{S}(Y,\mathscr{L})\cong IC(Y,\mathscr{L}_{*}\mathbb{Q}).\]
Proof.: By the Decomposition Theorem [7], the pushforward \(f_{*}\mathbf{1}_{X}\) is a direct sum of shifts of semisimple perverse sheaves. Over the open set \(U\), this complex restricts to \(\mathscr{L}_{*}\mathbf{1}_{V}\). By the classification of simple perverse sheaves ([7] or [1, Theorem 3.4.5]), we see that the dense summand of \(f_{*}\mathbf{1}_{X}\) is \(IC(Y,\mathscr{L}_{*}\mathbf{1}_{V})\).
In general, like intersection cohomology sheaves, the geometric extension gives a way to interpolate between \(S\)-cohomology and noncompact \(S\)-homology.
**Proposition 5.12**.: _For any resolution of singularities \(f:\tilde{Y}\to Y\), chosen orientation \(\gamma\) of \(\tilde{Y}\), and choice of split inclusion \(\mathscr{E}_{S}(Y)\to f_{*}\mathbf{1}_{\tilde{Y}}\), we obtain a sequence:_
\[\mathbf{1}_{Y}\to\mathscr{E}_{S}(Y)\to\omega_{Y}[-2d_{Y}]\]
_such that the composite is an orientation of \(Y\)._
Proof.: We have the following maps of sheaves on \(Y\):
\[\mathbf{1}_{Y}\to f_{*}\mathbf{1}_{\tilde{Y}}\to\mathscr{E}_{S}(Y)\to f_{*} \mathbf{1}_{\tilde{Y}}\cong f_{!}\mathbf{1}_{\tilde{Y}}\cong f_{!}\omega_{ \tilde{Y}}[-2d]\to\omega_{Y}[-2d]\]
These maps are all isomorphisms over \(U\), so the composite is an orientation of \(Y\).
The following shows that for \(S\)-smooth varieties, the geometric extension is just the constant sheaf.
**Proposition 5.13**.: _If \(S\) is a smoothly orientable base change formalism, and \(Y\) is \(S\)-smooth, then the geometric extension is the constant sheaf on \(Y\):_
\[\mathscr{E}_{S}(Y)\cong\mathbf{1}_{Y}\]
Proof.: For a resolution \(\pi:\tilde{Y}\to Y\) of \(Y\) and a chosen isomorphism \(\gamma:\mathbf{1}_{\tilde{Y}}\to\omega_{\tilde{Y}}[-2d]\) on \(\tilde{Y}\), we claim that the pushforward orientation of \(Y\) is an isomorphism \(\mathbf{1}_{Y}\to\omega_{Y}[-2d]\). This morphism is between indecomposable, isomorphic objects, and is an isomorphism over \(U\), so is an isomorphism by the Krull-Schmidt property. Composing with the inverse isomorphism \(\omega_{Y}[-2d]\to\mathbf{1}_{Y}\) then gives the following commutative diagram:
The result then follows from Proposition 4.6.
The maps of Proposition 5.12 induce the following interpolation morphisms
\[S^{*}(Y)\to\mathscr{E}_{S}^{*}(Y)\to S^{!}_{2d_{Y}-*}(Y).\]
This interpolation perspective also lets us extract a canonical invariant of our singular space, the (co)kernel of the induced map \(S^{*}(Y)\to\mathscr{E}_{S}^{*}(Y)\).
**Definition 5.14**.: Let \(Y\) be irreducible and projective. The **geometrically pure**\(S\)-cohomology of \(Y\) is the quotient
\[S^{*}_{gp}(Y):=\frac{S^{*}(Y)}{\ker(S^{*}(Y)\to\mathscr{E}_{S}^{*}(Y))}\]
Similarly, the **geometrically non-pure**\(S\)-cohomology of \(Y\) is this kernel
\[S^{*}_{gnp}(Y):=\ker(S^{*}(Y)\to\mathscr{E}_{S}^{*}(Y)).\]
That these objects are independent of the choices involved in their construction is the content of the following Lemma.
**Lemma 5.15**.: _Let \(Y\) be irreducible, with two resolutions of singularities_
\[f_{i}:\tilde{Y}_{i}\to Y\quad\text{for $i\in\{1,2\}$}.\]
_Assume for each map we have a chosen a split projection onto the geometric extension of \(Y\):_
\[f_{i*}\mathbf{1}\xrightarrow{\pi_{i}}\mathscr{E}_{S}(Y).\]
_Then there exists an isomorphism \(\beta\) of \(\mathscr{E}_{S}(Y)\) such that the following diagram commutes:_
Proof.: Resolving the diagonal irreducible component of the fibre product \(\tilde{Y}_{1}\times_{Y}\tilde{Y}_{2}\), we may find a third resolution of singularities of \(Y\), dominating \(f_{i}\):
Then for any choice of split projection \(f_{3*}\mathbf{1}\to\mathscr{E}_{S}(Y)\), we have the following commutative diagram for \(i\in\{1,2\}\):
These maps \(\beta_{i}\) defined as the composition are isomorphisms by Theorem 5.1, so their composite \(\beta_{2}^{-1}\circ\beta_{1}\) gives the desired isomorphism.
_Remark 5.16_.: In the case of \(\mathbb{Q}\)-constructible coefficients, the geometrically pure (resp. geometrically non-pure) cohomology is precisely the pure (resp. non-pure) cohomology in the mixed Hodge structure on \(H^{*}(Y,\mathbb{Q})\). To see this, recall that the mixed Hodge structure on a singular, projective variety is given by resolving \(Y\) by a smooth simplicial hypercover, and the pure component is the first quotient of the associated spectral sequence [14].
**Example 5.17**.: Let us consider the geometric extension on the space \(\mathbb{A}_{\mathbb{C}}^{n}/\pm 1\) with constructible coefficients over a field \(k\) of characteristic two. We will show that the geometric extension over \(k\) is exactly \(\pi_{*}\mathbf{1}\) for a resolution \(\pi\) that contracts a divisor over \(0\). We will thus have nonzero cohomology in degree \(2(n-1)\) in the stalk over \(0\). This gives the geometric consequence that any resolution of singularities of this space must contract a divisor, by Corollary 5.2.9
Footnote 9: As explained to us by Burt Totaro, this may also be easily seen algebro-geometrically by the fact that our space \(\mathbb{A}^{n}/\pm 1\) is \(\mathbb{Q}\)-factorial, as follows. Let \(X\stackrel{{\pi}}{{\to}}\mathbb{A}^{n}/\pm 1\) be a resolution of singularities, and \(D\) a chosen very ample Weil divisor on \(X\). As \(\mathbb{A}^{n}/\pm 1\) is \(\mathbb{Q}\)-factorial, a positive multiple \(n\pi_{*}(D)\) of the Weil divisor \(\pi_{*}(D)\) is Cartier. Pulling this back gives the Cartier divisor \(\pi^{*}(n\pi_{*}(D))\) on \(X\). If our exceptional fibre has codimension at least \(2\), then this divisor on \(X\) would be \(nD\), but then \(D\) cannot be very ample, as its sections don't separate points in the exceptional fibre.
Consider the following diagram of blowups and quotients:
The space \(\operatorname{Bl}_{0}(\mathbb{A}_{\mathbb{C}}^{n})\) via its projection map to \(\mathbb{P}_{\mathbb{C}}^{n-1}\) is the total space of the tautological bundle, and this quotient \(\operatorname{Bl}_{0}(\mathbb{A}_{\mathbb{C}}^{n})/\pm 1\) is obtained by taking the quotient under the inversion map on the (vector space) fibres. So we see the maps with \(\simeq\) are homotopy equivalences of topological spaces, and \(\operatorname{Bl}_{0}(\mathbb{A}_{\mathbb{C}}^{n})/\pm 1\) is smooth. So we obtain a resolution of the singular space \(\mathbb{A}_{\mathbb{C}}^{n}/\pm 1\), with domain homotopic to \(\mathbb{P}_{\mathbb{C}}^{n-1}\). On our base, \(0\) is the unique singular point, and the fibre over this singular point in this resolution is \(\mathbb{P}_{\mathbb{C}}^{n-1}\). Its complement is \(\mathbb{A}_{\mathbb{C}}^{n}-\{0\}/\pm 1\), which is naturally homeomorphic to the space \(\mathbb{RP}^{2n-1}\times\mathbb{R}\).
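The identification used at the end of the previous paragraph comes from polar coordinates, \(\mathbb{A}_{\mathbb{C}}^{n}\setminus\{0\}\cong S^{2n-1}\times\mathbb{R}_{>0}\), with \(\pm 1\) acting antipodally on the sphere factor:

\[(\mathbb{A}_{\mathbb{C}}^{n}\setminus\{0\})/\pm 1\;\cong\;(S^{2n-1}/\pm 1)\times\mathbb{R}_{>0}\;\cong\;\mathbb{RP}^{2n-1}\times\mathbb{R}.\]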
We claim that the geometric extension is just the pushforward \(\pi_{*}\mathbf{1}\). To show this, we need to check that this sheaf is indecomposable, which is equivalent to showing that it has no skyscraper summands at the singular point.
To check this, consider the compactly supported cohomology of the open-closed triangle for the inclusion of the singular point:
\[j_{!}j^{!}\pi_{*}\mathbf{1}\to\pi_{*}\mathbf{1}\to i_{*}i^{*}\pi_{*}\mathbf{1} \xrightarrow{+1}\]
By base change, the compactly supported cohomology of \(j_{!}j^{!}\pi_{*}\mathbf{1}\) is the compactly supported cohomology of \(\mathbb{A}_{\mathbb{C}}^{n}-\{0\}/\pm 1\simeq\mathbb{RP}^{2n-1}\times\mathbb{R}\). Similarly, by base change, the sheaf \(i_{*}i^{*}\pi_{*}\mathbf{1}\) computes the cohomology of the fibre, which is \(\mathbb{P}_{\mathbb{C}}^{n-1}\). The middle term computes the compactly supported cohomology of the total space, which is a complex line bundle over \(\mathbb{P}_{\mathbb{C}}^{n-1}\), and so gives the cohomology of \(\mathbb{P}_{\mathbb{C}}^{n-1}\), shifted by \(2\). Applying the compactly supported cohomology functor \(\mathcal{H}\mathrm{om}_{k}^{*}(\mathbf{1},t_{!}\_)\) yields an exact triangle:
Since the characteristic of \(k\) is two, \(H_{!}^{*}(\mathbb{RP}^{2n-1}\times\mathbb{R})\) is nonzero in all degrees between \(1\) and \(2n\) inclusive. So this \(+1\) degree map must be injective by the parity of the cohomology of \(\mathbb{CP}^{n-1}\). Thus, this extension is maximally nonsplit and there can be no skyscraper sheaf summand. So this \(\pi_{*}\mathbf{1}\) is indecomposable in characteristic two, giving the desired nonzero cohomology in the stalk. (This may also be seen using intersection forms (see [26, §§3.2-3.3]). The refined intersection form is identically zero modulo \(2\).)
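For reference, the two groups entering this parity argument are the standard ones:

\[H_{!}^{i}(\mathbb{RP}^{2n-1}\times\mathbb{R};\mathbb{F}_{2})\cong H^{i-1}(\mathbb{RP}^{2n-1};\mathbb{F}_{2})\cong\begin{cases}\mathbb{F}_{2}&1\le i\le 2n,\\ 0&\text{otherwise},\end{cases}\qquad H^{i}(\mathbb{P}_{\mathbb{C}}^{n-1};\mathbb{F}_{2})\cong\begin{cases}\mathbb{F}_{2}&i=0,2,\dots,2n-2,\\ 0&\text{otherwise}.\end{cases}\]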
The previous example shows that geometric extensions for the constructible base change formalism over fields need not be perverse, and by similar ideas we obtain the following geometric consequence.
**Proposition 5.18**.: _If for some field \(k\), the geometric extension of \(Y\) over \(k\) is not perverse up to shift in \(D_{c}^{b}(Y,k)\), then \(Y\) does not admit a semismall resolution._
Proof.: For such a \(Y\), the geometric extension is a summand of \(\pi_{*}\mathbf{1}_{\tilde{Y}}\) for any resolution \(\pi:\tilde{Y}\to Y\). Since the geometric extension is assumed not to be perverse up to shift, no resolution can be semismall.
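This uses the standard characterisation of semismallness (for field coefficients and a proper map \(\pi\) with smooth connected source \(\tilde{Y}\) of dimension \(d\)):

\[\pi\ \text{is semismall}\iff\dim\big(\tilde{Y}\times_{Y}\tilde{Y}\big)\le d\iff\pi_{*}\mathbf{1}_{\tilde{Y}}[d]\ \text{is perverse},\]

so if \(\pi\) were semismall, every summand of \(\pi_{*}\mathbf{1}_{\tilde{Y}}\) would be perverse up to shift.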
The following is an immediate corollary of Theorem 5.6, though we suspect there is a more direct way to see the result.
**Proposition 5.19** (Zariski trivial is cohomologically trivial).: _Let \(f:X\to Y\) be a smooth proper morphism between smooth varieties. If \(f\) is Zariski locally trivial, then \(f_{*}\mathbf{1}_{X}\) is the trivial local system on the fibre for any smoothly orientable base change formalism._
Proof.: Let \(F\) be a fibre of this morphism. If \(f^{-1}(U)\cong F\times U\), then \(f_{*}\mathbf{1}_{X}\) is isomorphic to the geometric extension of the constant \(F\) local system over \(U\). But the trivial family \(F\times Y\to Y\) also gives the geometric extension, giving the result.
_Remark 5.20_.: By a similar argument, if \(f\) is etale locally trivial, then \(f_{*}\mathbf{1}\) is trivial in any base change formalism with coefficients of characteristic zero.
By Proposition 5.11, over \(\mathbb{Q}\), the geometric extension \(\mathscr{E}_{\mathbb{Q}}(Y,\mathscr{L})\) is determined by the \(\mathbb{Q}\) local system \(\mathscr{L}_{*}\mathbf{1}_{V}\) on \(U\) within \(Y\), being isomorphic to \(IC(Y,\mathscr{L}_{*}\mathbf{1}_{V})\). The following example shows that this is exceptional behaviour, and that geometric extensions in general are not determined by the \(S\)-local systems \(\mathscr{L}_{*}\mathbf{1}_{V}\); they require the map \(\mathscr{L}\) itself.
**Example 5.21** (The Legendre family of elliptic curves).: Consider the following projective family \(E_{t}\) of elliptic curves, given by:
\[Y^{2}Z=X(X-Z)(X-tZ).\]
Here \(t\) is the coordinate on \(\mathbb{A}^{1}_{\mathbb{C}}\), and we view this family inside \(\mathbb{A}^{1}_{\mathbb{C}}\times\mathbb{P}^{2}_{\mathbb{C}}\). This family is smooth away from \(t\in\{0,1\}\). The total space of this family has two isolated singular points, the nodes of the nodal cubics in the fibres over \(t=0\) and \(t=1\). These singular points have tangent cone isomorphic to the cone on a smooth conic. Blowing up these two singular points resolves the singularities, giving the resolved family
\[\tilde{X}\xrightarrow{\tilde{\pi}}\mathbb{A}^{1}_{\mathbb{C}}.\]
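For concreteness, in the affine chart \(Z=1\) the singular fibres of the original family are seen directly:

\[t=0:\ y^{2}=x^{2}(x-1)\ \text{(node at }(0,0)\text{)},\qquad t=1:\ y^{2}=x(x-1)^{2}\ \text{(node at }(1,0)\text{)},\]

while for \(t\notin\{0,1\}\) the roots \(0,1,t\) of the cubic are distinct and the fibre is smooth.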
For this new map, the fibres over \(t\in\{0,1\}\) are each the union of two rational curves intersecting in two points transversely. (This is type \(I_{2}\) in Kodaira's classification of elliptic fibres [30]. This is often called the "double banana" configuration.) So for the constructible base change formalism with coefficients in \(k\), the stalks \(i^{*}_{t}\) of \(\tilde{\pi}_{*}\mathbf{1}\) are given by:
\[\begin{array}{l||lll}H^{*}&0&1&2\\ \hline i^{*}_{0}\tilde{\pi}_{*}\mathbf{1}&k&k&k^{\oplus 2}\\ i^{*}_{1}\tilde{\pi}_{*}\mathbf{1}&k&k&k^{\oplus 2}\\ i^{*}_{t}\tilde{\pi}_{*}\mathbf{1}\text{ if }t\neq 0,1&k&k^{\oplus 2}&k\\ \end{array}\]
The monodromy of this family is nontrivial only in the middle degree:
\[H^{1}(E_{t},k)\cong k^{2}.\]
One may then compute (see e.g. [10, Part 1]) that the monodromy of a small loop around either singular fibre over \(\mathbb{Z}\) is similar to
\[\begin{bmatrix}1&2\\ 0&1\\ \end{bmatrix}.\]
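Reducing this matrix modulo \(2\) gives the identity matrix:

\[\begin{bmatrix}1&2\\ 0&1\end{bmatrix}\equiv\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\pmod{2}.\]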
Thus, we observe that this monodromy action is trivial if the characteristic of \(k\) is two, and in this case the associated local system on \(\mathbb{A}^{1}_{\mathbb{C}}\setminus\{0,1\}\) is trivial. We note however that the geometric extension of this local system is always nontrivial, as the trivial family \(E\times\mathbb{A}^{1}_{\mathbb{C}}\) has two copies of \(k[-1]\) in its stalks over \(t=0,1\), rather than the one copy in our family.
This example therefore shows that the geometric extension of a geometric local system cannot be deduced from just the knowledge of \(\tilde{\pi}_{*}\mathbf{1}\) restricted to the open
subset \(\mathbb{A}^{1}_{\mathbb{C}}\setminus\{0,1\}\). This example also lets us observe the failure of the local invariant cycle theorem in characteristic \(p\), as the specialisation map
\[H^{1}(E_{0})\to H^{1}(E_{t})^{\mu}\]
is not surjective.
_Remark 5.22_.: One may construct families of counterexamples as follows. Let \(\pi:X\to\mathbb{A}^{1}\) be proper with smooth source, smooth over \(\mathbb{A}^{1}\setminus\{0\}\), such that the \(n^{th}\) power base change of \(\pi\) has smooth total space \(\tilde{X}_{n}\).
Then the base-changed family \(\pi_{n}:\tilde{X}_{n}\to\mathbb{A}^{1}\) has monodromy the \(n\)th power of the monodromy of \(\pi\). This allows one to trivialise the monodromy whenever its order is finite and divides \(n\).
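For instance, for the Legendre family above, the local monodromy satisfies

\[\begin{bmatrix}1&2\\ 0&1\end{bmatrix}^{n}=\begin{bmatrix}1&2n\\ 0&1\end{bmatrix},\]

so the monodromy of the \(n\)th power base change becomes trivial modulo \(p\) exactly when \(p\mid 2n\).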
Another feature of the decomposition theorem in the \(\mathbb{Q}\) coefficient setting is that the pushforward \(f_{*}\mathbf{1}_{X}\) is semisimple. This fails for more general coefficients, and can already be seen with smooth, projective maps with mod \(p\) coefficients. This result and its proof don't require geometric extensions, but we've decided to include it as we are not aware of any examples of this phenomenon in the literature.
**Example 5.23** (A non-semisimple geometric local system).: Let \(S\) be the \(\mathbb{F}_{2}\) constructible formalism, and let \(\pi:E\to X\) be an algebraic (etale local) \(\mathbb{P}^{1}_{\mathbb{C}}\) bundle over a smooth space \(X\), with nontrivial Brauer class in \(H^{3}(X,\mathbb{Z})\). For instance, one may take the tautological bundle over an algebraic approximation of the classifying space \(BPGL_{2}(\mathbb{C})\) (see e.g. [6]). Then \(\pi_{*}\mathbf{1}_{E}\) is an extension of \(\mathbf{1}_{X}\) by \(\mathbf{1}_{X}[2]\), classified by an element of
\[\operatorname{Ext}^{1}(\mathbf{1}_{X},\mathbf{1}_{X}[2])=H^{3}(X,\mathbb{F}_{2}).\]
This element is the reduction modulo \(2\) of the associated Brauer class in \(H^{3}(X,\mathbb{Z})\), so does not vanish in this quotient, and gives the desired indecomposable local system. One may construct counterexamples more generally using the fact that if \(f:X\to Y\) is any map, and the induced map \(H^{*}(Y,\mathbb{F}_{p})\to H^{*}(X,\mathbb{F}_{p})\) is not injective, then \(\mathbf{1}\to f_{*}\mathbf{1}\) cannot be split injective in \(D^{b}_{c}(Y,\mathbb{F}_{p})\).
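A minimal sketch of this last assertion: if \(\mathbf{1}\to f_{*}\mathbf{1}\) were split injective, then applying \(\mathcal{H}\mathrm{om}^{*}(\mathbf{1}_{Y},-)\) would exhibit the pullback

\[f^{*}:H^{*}(Y,\mathbb{F}_{p})=\mathcal{H}\mathrm{om}^{*}(\mathbf{1}_{Y},\mathbf{1}_{Y})\longrightarrow\mathcal{H}\mathrm{om}^{*}(\mathbf{1}_{Y},f_{*}\mathbf{1}_{X})\cong H^{*}(X,\mathbb{F}_{p})\]

as a split injection, contrary to assumption.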
Thus far in this section we have been considering applications for the constructible derived category with field coefficients, but it is worth emphasising that there are other examples, which have been shown to be relevant to geometric representation theory.\({}^{10}\)
Footnote 10: The most famous examples are Kazhdan and Lusztig’s computation of the equivariant K-theory of the Steinberg variety [29] and Nakajima’s computation of the equivariant K-theory of quiver varieties [40]. Note that both computations can be interpreted as the computation of an endomorphism of a direct image with K-theory coefficients.
In particular, there is now an established theory, with a six functor formalism, for modules over any \(A_{\infty}\) ring spectrum. Let's consider the ring spectrum \(KU_{p}\), \(p\) completed complex K theory. This is the ring spectrum that represents the cohomology theory \(X\mapsto K^{*}(X)\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\) on finite \(CW\) complexes \(X\), where \(K^{0}(X)\) is the usual Grothendieck group of complex vector bundles on \(X\). The formalism of
\(KU_{p}\) modules then gives rise to a smoothly orientable, Krull-Schmidt base change formalism for \(p\) completed complex K theory \(KU_{p}\), see Appendix 6.1.
By Theorem 5.6, we may then define the geometric extension for \(p\) completed K theory.
**Definition 5.24**.: For an irreducible variety \(Y\), the \(KU_{p}\) geometric extension is the sheaf of \(KU_{p}\) modules \(\mathscr{E}_{KU_{p}}(Y)\). The geometric \(K\)-theory groups at \(p\) are defined to be the homotopy groups of this indecomposable sheaf of \(KU_{p}\) modules:
\[\mathscr{E}_{KU_{p}}^{*}(Y):=\pi_{-*}(\mathscr{E}_{KU_{p}}(Y)).\]
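For orientation: if \(Y\) is itself smooth and connected, the identity map is a resolution and \(\mathbf{1}_{Y}\) admits no nontrivial idempotent endomorphisms, so we expect the geometric extension to be the unit,

\[\mathscr{E}_{KU_{p}}(Y)\cong\mathbf{1}_{Y},\]

and the geometric K-groups to carry no information beyond \(p\) completed topological K-theory; the interest lies in singular \(Y\).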
We end with some natural questions regarding these geometric K groups.
_Question 5.25_.: We have natural maps \(K^{*}(Y)\to\mathscr{E}_{KU_{p}}^{*}(Y)\) for all \(p\), and by Lemma 5.15, the kernels of these maps are independent of our choices. We might call elements in this common kernel nonpure classes in (integral) K-theory. Is there a geometric, vector bundle description of these classes? The nontorsion part will be visible as nonpure classes in \(\mathbb{Q}\) cohomology; what about the torsion?
One may also use the rationalised K theory spectrum \(KU_{\mathbb{Q}}\) in the preceding definitions. In this case, the groups we obtain are just ordinary intersection cohomology, since in rational cohomology, geometric extensions are just intersection cohomology, and the Chern character gives an isomorphism of \(E_{\infty}\) ring spectra \(\operatorname{ch}:KU_{\mathbb{Q}}\cong H_{\mathbb{Q}}\). Though the groups are not new, this alternate description of intersection cohomology via K theory leads to the natural question of whether these geometric extensions can be categorified. Our main result gives, for a fixed base space \(Y\), for any resolution \(X\to Y\), an idempotent endomorphism of \(K^{*}(X)\otimes\mathbb{Q}\) which cuts out \(\mathscr{E}_{KU_{\mathbb{Q}}}(Y)\), and this image is independent of the resolution. Now specialise to the case where \(X\) and \(Y\) admit compatible affine pavings, so \(X\times_{Y}X\) also admits such a paving. This occurs for instance in the theory of Schubert varieties with Bott-Samelson resolutions. In this situation, the rationalised Grothendieck group of coherent sheaves on \(X\) is isomorphic to the rationalised topological K group of \(X\), via the Chern character to Chow groups. The fundamental classes of subvarieties of \(X\times X\) naturally act as endomorphisms of \(D^{b}_{coh}(X)\) as kernels of Fourier-Mukai transforms, and this naturally categorifies the action on \(K^{*}(X)\). This leads to the following (imprecise) question.
_Question 5.26_.: Let \(X\) be an affine paved resolution of \(Y\). Does there exist an idempotent endofunctor \(E_{X/Y}\) of \(D^{b}_{coh}(X)\) such that the image of \(E_{X/Y}\) is an invariant of \(Y\), which decategorifies to the idempotent cutting out \(\mathscr{E}_{KU_{\mathbb{Q}}}(Y)\) inside \(K^{*}_{\mathbb{Q}}(X)\)? Furthermore, is this category independent of the resolution \(X\)?
## 6. Appendix
### \(KU_{p}\) modules
Here we will give a short introduction to the smoothly orientable base change formalism of \(KU_{p}\) modules. We will start with ordinary topological K theory. This is a cohomology theory built from the Grothendieck group of complex vector bundles over \(X\):
\[X\mapsto K^{0}(X):=Gr(Vec_{\mathbb{C}}/X).\]
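Explicitly, this Grothendieck group is generated by isomorphism classes of bundles, with direct sum and tensor product giving the addition and multiplication:

\[K^{0}(X)=\mathbb{Z}\big\{[E]:E\in Vec_{\mathbb{C}}/X\big\}\,\big/\,\big([E\oplus F]-[E]-[F]\big),\qquad [E]\cdot[F]:=[E\otimes F].\]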
This is the zeroth of a series of functors \(K^{i}\), which form a cohomology theory in the sense of satisfying the Eilenberg-Steenrod axioms (except the dimension axiom).
Bott's periodicity theorem ([9], also see [22]) implies that this cohomology theory is represented by a two-periodic sequential spectrum \(KU\) with component spaces:
\[KU_{2i}\cong\mathbb{Z}\times BU,\qquad KU_{2i+1}\cong U.\]
Here \(U\) is the infinite unitary group, and \(BU\) is the union of the infinite complex Grassmannians \(BU(n)\). The tensor product on vector bundles gives a homotopy coherent commutative multiplication law on this spectrum, so \(KU\) is naturally an \(E_{\infty}\)-ring.
The (higher) coherence of this multiplication law allows one to define a well-behaved \(\infty\)-category of module spectra. This \(\infty\)-category is stable, so it can be thought of as an enhancement of its triangulated homotopy category.
For any stable \(\infty\)-category \(\mathcal{C}\), we have a notion of sheaves on a space \(X\) valued in \(\mathcal{C}\). We will not define this precisely, but in rough terms it gives an object for each open set, a morphism for each inclusion of open sets, a homotopy between the compositions for each pair of composable inclusions, and so on, such that an analogue of the sheaf condition holds.
The \(\infty\)-category of such \(\mathcal{C}\)-valued sheaves on a space \(X\) is stable. If one restricts to suitable, locally compact spaces with well-behaved maps between them, such as algebraic varieties with algebraic maps, then we obtain the whole six functor formalism for \(\mathcal{C}\)-valued sheaves, see e.g. [47]. Furthermore, one may restrict to constructible \(\mathcal{C}\)-valued sheaves. Constructibility is then preserved under these six functors, due to the good topological properties of algebraic maps. We will not need the inner workings of this construction. The following example shows why such a formal black box can still be useful.
**Example 6.1**.: Consider Example 5.17, interpreted within the K-theoretic framework. This whole example is formal, until we apply compactly supported cohomology to obtain the triangle:
If we instead used K theory, we would obtain the triangle:
Then one may show formally that the vertical arrow is multiplication by the Thom class of the \(KU\) orientable line bundle \(\mathscr{O}(2)\), and that compactly supported K-theory of a compact space times \(\mathbb{R}\) is the ordinary K-theory shifted by one. Since the Thom class is \(1-2[H]\), and we know the K theory of \(\mathbb{CP}^{n-1}\), this lets us easily compute the K theory of \(\mathbb{RP}^{2n-1}\).
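For the record, carrying this out recovers the classical answer (due to Adams):

\[K^{0}(\mathbb{RP}^{2n-1})\cong\mathbb{Z}\oplus\mathbb{Z}/2^{n-1},\qquad K^{1}(\mathbb{RP}^{2n-1})\cong\mathbb{Z}.\]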
### Localisation at \(p\)
We wish to work with Krull-Schmidt categories everywhere, so we need to localise the K theory base change formalism to obtain \(KU_{p}\) modules. This is a formal procedure, essentially given on integral objects by tensoring with the \(p\)-adic integers \(\mathbb{Z}_{p}\) every place one sees a K group. For instance, to build the associated cohomology theory, we simply tensor with \(\mathbb{Z}_{p}\). That this preserves the property of being a cohomology theory is immediate from flatness of \(\mathbb{Z}_{p}\) over \(\mathbb{Z}\), so this functor gives the associated spectrum \(KU_{p}\) representing it.
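Concretely, on coefficients this gives

\[\pi_{*}(KU_{p})\cong\mathbb{Z}_{p}[t,t^{-1}],\]

where \(t\) is the (invertible) Bott class, matching the graded ring of coefficients used in the proof of Proposition 6.2 below.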
We round out this section with a proof that \(KU_{p}\) modules are a base change formalism on algebraic varieties.
**Proposition 6.2**.: _The base change formalism \(Y\mapsto D_{c}^{b}(Y,KU_{p})\) of sheaves of constructible \(KU_{p}\) module spectra is a smoothly orientable Krull-Schmidt base change formalism on complex algebraic varieties, which satisfies condition (D)._
Proof.: First, the fact that this is a base change formalism entails many compatibilities which follow from the construction, and Lurie's proper base change theorem (Chapter 7, §3 of [33]). One may find a streamlined proof of these properties in [47]. For orientability, note that orientability is just an existence statement for elements in \(K_{2d}^{BM}(X)\otimes_{\mathbb{Z}}\mathbb{Z}_{p}\). In particular, this is implied by the orientability of integral K-theory on complex manifolds, see Example 3.9. To check condition (D), note that density holds trivially if the fibre bundle is trivial, and for any two points \(x,y\) we may choose a contractible neighbourhood \(U_{x,y}\) of \(x\) and \(y\). Restricting our geometric local system to \(U_{x,y}\) gives a topologically trivial bundle, giving the density result. It remains to check the Krull-Schmidt property of this base change formalism. We first claim it suffices to check the following conditions.\({}^{11}\)
Footnote 11: This is slightly different to the conditions in §4, though the proof is the same.
* The "ring of coefficients" \(\mathbb{Z}_{p}[t,t^{-1}]:=\operatorname{End}(\mathbf{1}_{*})\) is a graded complete local ring.
* For any \(\mathscr{F},\mathscr{G}\) in \(D_{c}^{b}(X,KU_{p})\), the group \(\mathcal{H}\mathrm{om}^{*}(\mathscr{F},\mathscr{G})\) is a finitely generated graded \(\mathbb{Z}_{p}[t,t^{-1}]\) module.
* The category \(D_{c}^{b}(X,KU_{p})\) has split idempotents.
To see that these suffice, first note that the endomorphism algebra of any indecomposable object is a finite graded \(\mathbb{Z}_{p}[t,t^{-1}]\) module by the second condition. The graded version of the idempotent lifting lemma (Corollary 7.5, [17]), together with the splitting of idempotents, implies that the endomorphism ring of any indecomposable object is local. The Krull-Schmidt property of functors then follows from the fact that for any finite morphism of local rings \(\phi:R\to S\), we have \(\phi(J(R))\subset J(S)\), which is immediate from Nakayama's Lemma.
It remains to check that these conditions hold for sheaves of \(KU_{p}\) modules. The first condition is immediate by Bott Periodicity, as these are the homotopy groups of \(KU_{p}\). For the finiteness of the second condition, since \(\mathbb{Z}_{p}[t,t^{-1}]\) is Noetherian, one may apply open closed decomposition triangles to reduce to the case of morphisms between locally constant \(KU_{p}\) modules on a smooth variety. We then can find a finite good cover of contractible open sets trivialising these \(KU_{p}\) modules, and an induction with the Mayer-Vietoris sequence gives the result. Finally, to check idempotent completeness, we may assume our sheaves of \(KU_{p}\) modules are constructible with respect to a fixed stratification \(\lambda\). For a fixed stratification \(\lambda\), we have the associated exit path \(\infty\)-category \(EP_{\lambda,\infty}(X)\), and we may identify the category of \(\lambda\) constructible sheaves of \(KU_{p}\) modules with the functor category \([EP_{\lambda,\infty}(X),KU_{p}-\)Perf\(]\) (see Theorem A.9.3 [34]). As \(KU_{p}-\)Perf is accessible, this
functor category is accessible (see [33], Proposition 5.4.4.3), and thus, since this functor \(\infty\)-category is small, accessibility is equivalent to idempotent completeness (see [33], Corollary 5.4.3.6).
### Commuting diagrams
In this section we prove the existence of the compatibility diagrams for the convolution isomorphism 2.5.
This can be broken into two distinct parts, the more formally 2-categorical Proposition 6.3, and the orientation compatibility, Proposition 6.4.
Let us first recall the setup. Our space \(Y\) is irreducible, with Zariski open set \(U\), and \(X,X^{\prime}\) are two smooth spaces over \(Y\). The following diagram will be our reference for the maps involved in the convolution isomorphism.
Our first proposition is the following:
**Proposition 6.3**.: _The following diagram commutes, where the horizontal maps are our convolution isomorphisms, and the vertical maps are restriction followed by base change:_
Our orientation compatibility is the following:
**Proposition 6.4**.: _There exist orientations of \(\overline{\Delta_{\alpha}}\) and \(X\times_{Y}X^{\prime}\) such that the following diagram exists and is commutative, where the fundamental class of \(\overline{\Delta_{\alpha}}\) restricted to \(X_{U}\times_{U}X^{\prime}_{U}\) maps to \(\alpha_{*}\mathbf{1}\) under the associated convolution isomorphism._
For notational convenience, in the proof of this proposition we will use \((\_,\_)\) to denote morphism sets, and we will only use \(f,g\) and \(j\), noting that the decorations are uniquely determined by the location within the diagram.
Proof.: We first prove Proposition 6.3. We may expand the diagram in Proposition 6.3 into the following:
Only the commutativity of the middle square is not standard, and its commutativity follows from the commutativity of the following diagram:
The commutativity of the internal faces are all standard compatibilities of base change and naturality.
_Remark 6.5_.: A general statement proving commutativity of diagrams of this type is to be found in the thesis of the first author [24].
It remains to prove Proposition 6.4. This entails proving the commutativity of the diagram, and for compatible choices of orientation, that the restriction of \([\overline{\Delta_{\alpha}}]\) to \(S^{!}_{2d_{X}}(X_{U}\times_{U}X^{\prime}_{U})\) corresponds to \(\alpha_{*}\mathbf{1}\) under the convolution isomorphism.
To show the existence of this diagram, we evaluate on the constant sheaf, and use the chosen orientation \(\mathbf{1}\xrightarrow{\gamma}\omega_{X}[-2d]\) of \(X\) to give the identifications with \(S_{*,2d}(X\times_{Y}X^{\prime})\).
We obtain a commutative diagram whose rows are the chains of identifications

\[\mathcal{H}\mathrm{om}(f_{*}\mathbf{1},g_{*}\mathbf{1})\;\cong\;\mathcal{H}\mathrm{om}(\tilde{g}^{*}\mathbf{1},\tilde{f}^{!}\mathbf{1})\;\xrightarrow{\ \gamma\ }\;\mathcal{H}\mathrm{om}(\mathbf{1},\tilde{f}^{!}\omega[-2d])\;\cong\;S_{*,2d}(X\times_{Y}X^{\prime})\]

\[\mathcal{H}\mathrm{om}(f_{U*}\mathbf{1},g_{U*}\mathbf{1})\;\cong\;\mathcal{H}\mathrm{om}(\tilde{g}_{U}^{*}\mathbf{1},\tilde{f}_{U}^{!}\mathbf{1})\;\xrightarrow{\ \gamma_{U}\ }\;\mathcal{H}\mathrm{om}(\mathbf{1},\tilde{f}_{U}^{!}\omega[-2d])\;\cong\;S_{*,2d}(X_{U}\times_{U}X^{\prime}_{U})\]

and whose vertical maps are restriction followed by base change. Here the commutativity of the second square is Proposition 6.3. Thus it remains to prove the identification of the fundamental class along this isomorphism.
**Proposition 6.6**.: _Given an isomorphism \(\alpha\) over \(U\), such that \(X_{U},X^{\prime}_{U}\) are smooth:_
\[\alpha:X_{U}\xrightarrow{\ \sim\ }X^{\prime}_{U}\quad\text{over }U.\]
_Then for a choice of orientation \(\gamma\) of \(X_{U}\), and induced compatible orientation of \(\Delta_{\alpha}\), the element \([\Delta_{\alpha}]\) in \(H_{2d}(X_{U}\times_{U}X^{\prime}_{U})\) corresponds to \(\alpha_{\ast}\mathbf{1}\) in \(\mathcal{H}\mathrm{om}(f_{\star}\mathbf{1},g_{\star}\mathbf{1})\) via the convolution isomorphism induced by \(\gamma\). We may also assume that this orientation of \(\Delta_{\alpha}\) arises as restriction from an orientation of \(\overline{\Delta_{\alpha}}\)._
Proof.: We first construct the orientations required by choosing an orientation of \(\overline{\Delta_{\alpha}}\) via resolution of singularities. We may then restrict this to \(\Delta_{\alpha}\), and transport structure to \(X_{U}\) to get our desired compatible orientations. From here all convolution morphisms and fundamental classes are with respect to this choice.
First, let's show that this fundamental class morphism \([\Delta_{\alpha}]:\mathbf{1}\to\omega_{X_{U}\times_{U}X_{U}^{\prime}}[-2d]\) is isomorphic (after whiskering) to an evaluation on the constant sheaf of a morphism of functors. That is, there are canonical morphisms of functors
\[\tilde{g}^{*}\to\Delta_{*}\to\Delta_{!}\to\tilde{f}^{!}\]
such that the following diagram commutes, and the composite is the fundamental class in \(H_{2d}(X_{U}\times_{U}X^{\prime}_{U})\):
This reduces the problem to a coherence problem for pseudofunctors, so consider the following diagram.
Going clockwise around the diagram gives the mate of \([\Delta_{\alpha}]\), by the previous discussion, while going anticlockwise yields the mate of the morphism \(f_{!}\xrightarrow{\alpha_{!}}g_{!}\to g_{*}\) which equals the composite \(f_{!}\to f_{*}\xrightarrow{\alpha_{*}}g_{*}\), giving the desired compatibility.
All squares in this diagram commute by naturality, or using that \(S_{!}\) is a pseudofunctor. Only the curved identity morphism is not immediate, this is \(f^{!}g_{!}\) applied to the following diagram:
This then commutes by the definition of the horizontal isomorphisms, and the general unit compatibilities of pseudofunctors.